NASA Technical Reports Server (NTRS)
Tumer, Irem; Mehr, Ali Farhang
2005-01-01
In this paper, a two-level multidisciplinary design approach is described to optimize the effectiveness of Integrated Systems Health Management (ISHM) systems. At the top level, the overall safety of the mission is characterized by system-level variables, parameters, objectives, and constraints that are shared throughout the system and by all subsystems. Each subsystem level then comprises these shared values in addition to subsystem-specific variables, parameters, objectives, and constraints. A hierarchical structure is established to pass shared values up and down between the two levels via system-level and subsystem-level optimization routines.
Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm
NASA Technical Reports Server (NTRS)
Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.
2001-01-01
Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement-based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system-level optimization and multiple subsystem-level optimizations. The system-level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternatively, the system-level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. At the subsystem level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized at the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system-level optimization. In previous research, polynomials were used to approximate the global nodal displacements, with the number of polynomial coefficients exactly matching the number of degrees of freedom of the problem. Here, it was investigated whether it is possible to match only a subset of the degrees of freedom at the system level. This would simplify the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system-level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO.
In previous work, parallel processing was applied to the subsystem level, where the derivative-verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without derivative verification, and the results are compared to those from the previous work. The optimizations were also run both on a network of Sun workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives, and that omitting the verification gives a large increase in the efficiency of the DMSO algorithm.
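The division of labor that DMSO describes can be sketched for a single axially loaded truss element. The numbers and the one-element setting below are invented for illustration (the paper treats beams and a large three-dimensional truss): the subsystem level sizes the element for minimum weight under a stress limit, and the system level checks that the resulting displacement leaves no load unbalance.

```python
# Schematic of the DMSO division of labor for one axially loaded truss
# element (invented data, not the paper's test cases).

E = 70e9             # Young's modulus, Pa (aluminum, assumed)
LENGTH = 2.0         # element length, m
RHO = 2700.0         # density, kg/m^3
SIGMA_ALLOW = 100e6  # allowable stress, Pa
F = 50e3             # applied axial load, N

def min_area_for_stress(force):
    # Subsystem level: lightest cross-section with admissible stress.
    return abs(force) / SIGMA_ALLOW

def load_unbalance(u, area):
    # System level: internal force (E*A/LENGTH)*u minus the applied load.
    return E * area / LENGTH * u - F

area = min_area_for_stress(F)      # 5e-4 m^2
u = F * LENGTH / (E * area)        # displacement that balances the load
weight = RHO * area * LENGTH       # element weight, kg
print(area, weight, abs(load_unbalance(u, area)) < 1e-6)
```

In the full method the two levels iterate, since changing the element sizes changes the displacements; the closed forms here stand in for those optimizations.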
Market-Based and System-Wide Fuel Cycle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Paul Philip Hood; Scopatz, Anthony; Gidden, Matthew
This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.
Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.
1998-01-01
BLISS is a method for optimization of engineering systems by decomposition. It separates the system-level optimization, which has a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best-guess initial design, the method improves that design in iterative cycles, each cycle comprising two steps. In step one, the system-level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system-level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual-level supersonic business jet design and on a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. The modularity of the method is intended to fit the human organization and to map well onto the computing technology of concurrent processing.
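The two-step cycle can be illustrated on a toy quadratic problem. The objective and the closed-form subproblem solutions below are invented for illustration; real BLISS couples the steps through optimum sensitivity derivatives rather than simple alternation.

```python
# Toy BLISS-style alternation: one system variable z shared by two
# subsystems with local variables x1, x2 (hypothetical objective):
#   f(z, x1, x2) = (z - 3)^2 + (x1 - z)^2 + (x2 + z)^2

def subsystem_steps(z):
    # Step 1: z frozen; each local subproblem solved independently
    # (closed form here; real BLISS runs full local optimizations).
    x1 = z       # argmin over x1 of (x1 - z)^2
    x2 = -z      # argmin over x2 of (x2 + z)^2
    return x1, x2

def system_step(x1, x2):
    # Step 2: locals frozen; improve z (stationary point of f in z).
    return (3.0 + x1 - x2) / 3.0

z = 0.0   # best-guess initial design
for _ in range(60):
    x1, x2 = subsystem_steps(z)
    z = system_step(x1, x2)
print(round(z, 6), round(x1, 6), round(x2, 6))  # → 3.0 3.0 -3.0
```

Each cycle improves the design, and the iteration converges to the true optimum (z, x1, x2) = (3, 3, -3) of the toy objective.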
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research, a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision-based design framework for non-deterministic optimization. To date, CO strategies have been developed for application to deterministic systems design problems. In this research, the decision-based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single-level optimization strategy that combines engineering decisions with business decisions in a single optimization. By transforming this framework for use in collaborative optimization, one can decompose the business and engineering decision-making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO), the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
Optimal Control for Quantum Driving of Two-Level Systems
NASA Astrophysics Data System (ADS)
Qi, Xiao-Qiu
2018-01-01
In this paper, the optimal quantum control of two-level systems is studied by the decompositions of SU(2). Using the Pontryagin maximum principle, the minimum time of quantum control is analyzed in detail. The solution scheme of the optimal control function is given in the general case. Finally, two specific cases, which can be applied in many quantum systems, are used to illustrate the scheme, while the corresponding optimal control functions are obtained.
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays a key issue is to reduce the energy consumption of road vehicles, and different strategies of energy optimization can be found in particular solutions. The most popular, though not sophisticated, is so-called eco-driving, which emphasizes particular driver behaviors. In a more sophisticated variant, driver behavior is supported by a control system that measures driving parameters and suggests proper operation to the driver. Another strategy concerns the application of various engineering solutions that aid optimization of the energy-consumption process. Such systems take into consideration different parameters measured in real time and then take proper action according to procedures loaded into the control computer of the vehicle. The third strategy is based on optimizing the designed vehicle, taking into account especially the main sub-systems of the technical means. In this approach, the optimal level of energy consumption by a vehicle is obtained through the synergetic results of individually optimizing particular constructional sub-systems of the vehicle. Three main sub-systems can be distinguished: the structural one, the drive one, and the control one. In the case of the structural sub-system, optimization of the energy-consumption level is related to optimization of the weight parameter and of the aerodynamic parameter; the result is an optimized vehicle body. Regarding the drive sub-system, optimization of the energy-consumption level is related to fuel or power consumption, using previously elaborated physical models. Finally, optimization of the control sub-system consists in determining optimal control parameters.
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
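As a rough illustration of the alarm-design idea (a simple threshold rule, not the paper's optimal alarm system): given a Kalman filter's one-step-ahead predictive mean and variance for a Gaussian process, one can alarm when the predicted probability of crossing the critical level exceeds a chosen threshold. All numbers below are made up.

```python
import math

# Predictive exceedance probability for a Gaussian forecast, and a
# threshold-based alarm rule (illustrative sketch).

def exceedance_prob(pred_mean, pred_var, level):
    # P(y >= level) for y ~ N(pred_mean, pred_var)
    z = (level - pred_mean) / math.sqrt(pred_var)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

def alarm(pred_mean, pred_var, level, threshold=0.2):
    return exceedance_prob(pred_mean, pred_var, level) >= threshold

print(alarm(0.0, 1.0, 3.0))   # prediction far below the level → False
print(alarm(2.8, 1.0, 3.0))   # prediction close to the level → True
```

The optimal alarm system of the paper chooses this trade-off so as to minimize false alarms for a fixed detection probability; the fixed threshold here merely shows the mechanics.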
Analytical and Computational Properties of Distributed Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key words: autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis.
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level, by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with the failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
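A highly simplified sketch of the loss-minimizing restoration idea, with all interdependencies stripped away (one repair crew, independent components; the paper's model also handles failure types, operating rules, and cross-system interactions): total economic loss is the sum over components of loss rate times the time until that component is restored, and with a single crew this weighted-completion-time schedule is solved exactly by a ratio rule.

```python
# Single-crew restoration sequencing by the weighted-shortest-processing-
# time rule (invented component data; a drastic simplification of the
# paper's optimization model).

def restoration_order(components):
    # components: list of (name, repair_time_h, loss_per_hour)
    return sorted(components, key=lambda c: c[1] / c[2])

def total_loss(order):
    t, loss = 0.0, 0.0
    for _, repair_time, rate in order:
        t += repair_time
        loss += rate * t     # this component stays down until time t
    return loss

failed = [("pump", 4.0, 10.0), ("feeder", 2.0, 50.0), ("valve", 1.0, 5.0)]
plan = restoration_order(failed)
print([c[0] for c in plan], total_loss(plan))  # → ['feeder', 'valve', 'pump'] 185.0
```

Once components interact (a pump needs the feeder's power), the simple ratio rule no longer applies, which is why the paper resorts to an explicit optimization model.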
Advancement of Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Emiley, Mark S.; Agte, Jeremy S.; Sandusky, Robert R., Jr.
2000-01-01
Bi-Level Integrated System Synthesis (BLISS) is a method for optimization of an engineering system, e.g., an aerospace vehicle. BLISS consists of optimizations at the subsystem (module) and system levels to divide the overall large optimization task into sets of smaller ones that can be executed concurrently. In the initial version of BLISS that was introduced and documented in previous publications, analysis in the modules was kept at the early conceptual design level. This paper reports on the next step in the BLISS development in which the fidelity of the aerodynamic drag and structural stress and displacement analyses were upgraded while the method's satisfactory convergence rate was retained.
Multi-level systems modeling and optimization for novel aircraft
NASA Astrophysics Data System (ADS)
Subramanian, Shreyas Vathul
This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. 
Further, the design (composition) is guided by a suitable hierarchical complexity metric formulated for managing complexity in both the problem (as part of the generative procedure and the selection of fidelity level) and the product (i.e., is the mission best achieved by a large collection of interacting simple systems, or by relatively few highly capable, complex air vehicles?). The vastly unexplored area of optimization in evolving design spaces is studied and incorporated into the SoS optimization framework. We envision a framework that resembles a multi-level, multi-fidelity, multi-disciplinary assemblage of optimization problems. The challenge is not simply one of scaling up to a new level (the SoS), but of recognizing that the aircraft sub-systems and the integrated vehicle are now intensely cyber-physical, with hardware and software components interacting in complex ways that give rise to new and improved capabilities. The work presented here is a step closer to modeling the information flow that exists in realistic SoS optimization problems between sub-contractors, contractors, and the SoS architect.
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
NASA Astrophysics Data System (ADS)
Wang, Qingze; Chen, Xingying; Ji, Li; Liao, Yingchen; Yu, Kun
2017-05-01
The air-conditioning system of an office building is a large power-consuming terminal load whose unreasonable operation leads to low energy efficiency. Optimizing the air-conditioning system has therefore become one of the important research topics in electric power demand response. In this paper, in order to save electricity cost and improve energy efficiency, a bi-level optimization method for the air-conditioning system based on time-of-use (TOU) pricing is put forward, exploiting the energy-storage characteristics of the office building itself. In the upper level, the operation mode of the air-conditioning system is optimized to minimize the users' electricity cost while ensuring user comfort, based on outdoor-temperature information and the TOU price, and the resulting cooling load is passed to the lower level. In the lower level, the distribution of the cooling load among multiple chillers is optimized to maximize energy efficiency according to the characteristics of each chiller. Finally, experimental results under different modes demonstrate that the strategy can improve the energy efficiency of the chillers and save electricity cost for users.
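The lower-level step alone can be sketched as follows. The chiller data are invented, and each chiller is given a constant coefficient of performance (COP), whereas real chiller efficiency varies with part-load ratio; the paper optimizes both levels.

```python
# Lower-level dispatch sketch: meet a required cooling load by filling
# the most efficient chiller capacity first (hypothetical chiller data).

def dispatch(load, chillers):
    # chillers: list of (name, capacity_kw, cop), constant COP assumed
    plan, remaining = {}, load
    for name, cap, cop in sorted(chillers, key=lambda c: -c[2]):
        take = min(cap, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    return plan

chillers = [("A", 300.0, 5.2), ("B", 300.0, 4.1), ("C", 200.0, 3.0)]
print(dispatch(450.0, chillers))  # → {'A': 300.0, 'B': 150.0}
```

With part-load-dependent COP curves the greedy fill is no longer optimal, which is why the paper formulates the distribution as an explicit optimization.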
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it is fairly well established that humans integrate low-level cues optimally, weighting each in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
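The reliability-weighted rule that "optimal integration" refers to is the standard maximum-likelihood cue combination: each cue is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone. The numbers below are made up for illustration.

```python
# Maximum-likelihood combination of two Gaussian cues (e.g., form and
# motion estimates of the same identity variable; invented values).

def integrate(mu_a, var_a, mu_b, var_b):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)  # fused variance is smaller
    return mu, var

mu, var = integrate(1.0, 4.0, 3.0, 1.0)  # the reliable cue dominates
print(round(mu, 6), round(var, 6))       # → 2.6 0.8
```

The fused variance (0.8) is below both input variances (4.0 and 1.0), which is the signature of optimal integration the study tests for.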
Wind Turbine Optimization with WISDEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykes, Katherine L; Damiani, Rick R; Graf, Peter A
This presentation for the Fourth Wind Energy Systems Engineering Workshop describes the analysis platform and research capability developed under NREL's wind energy systems engineering initiative to capture important system interactions, with the aim of better understanding how to improve system-level performance and achieve system-level cost reductions. Topics include the Wind-Plant Integrated System Design and Engineering Model (WISDEM) and multidisciplinary design analysis and optimization.
Hierarchical optimal control of large-scale nonlinear chemical processes.
Ramezani, Mohammad Hossein; Sadati, Nasser
2009-01-01
In this paper, a new approach is presented for optimal control of large-scale chemical processes. The chemical process is decomposed into smaller sub-systems at the first level and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. Each sub-system in the first level can be solved separately using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear continuous stirred-tank reactor (CSTR), and its solution is compared with the one obtained using the centralized approach. The simulation results show the efficiency and capability of the proposed hierarchical approach in finding the optimal solution, compared with the centralized method.
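A toy version of the two-level coordination loop, in the spirit of the gradient-type coordinator (the quadratic subproblems are invented for illustration, not the paper's CSTR): each first-level subproblem is solved independently for a fixed coordination variable, and the second level updates that variable from the coordination error.

```python
# Two-level hierarchical coordination sketch: subsystems solved in
# closed form for fixed v, coordinator drives the consistency error to
# zero (hypothetical quadratic subproblems).

def solve_subsystems(v):
    x1 = (1.0 + v) / 2.0   # argmin of (x1 - 1)^2 + (x1 - v)^2
    x2 = (3.0 + v) / 2.0   # argmin of (x2 - 3)^2 + (x2 - v)^2
    return x1, x2

v = 0.0
for _ in range(100):
    x1, x2 = solve_subsystems(v)
    error = v - (x1 + x2) / 2.0   # consistency: v should equal the mean
    v -= 0.5 * error              # gradient-type coordinator update
print(round(v, 6))  # → 2.0
```

The coordinator never looks inside the subproblems; it only reads their solutions, which is what makes the first-level solves independent and parallelizable.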
Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert
2002-01-01
The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) method, intended for optimization of engineering systems conducted by distributed specialty groups working concurrently in a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems, in which the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous in their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by Response Surfaces to be accessed by the system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which yields the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as close as possible to a specified set point. The results show that the robust predictive control model provides the optimal strategy, i.e., the optimal product volume to purchase, and that the inventory level followed the given set point.
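A minimal sketch of the receding-horizon idea behind such controllers, reduced to a one-step certainty-equivalence rule (the paper uses a robust predictive controller over a longer horizon; the demand numbers are made up): inventory follows x[t+1] = x[t] + u[t] - d[t], and at each step the order steers the expected level to the set point.

```python
# One-step receding-horizon inventory control sketch: order just enough
# to bring the forecast inventory level back to the set point.

def order_quantity(x, expected_demand, set_point):
    # cannot order a negative quantity
    return max(0.0, set_point - x + expected_demand)

set_point, x = 50.0, 20.0
demands = [12.0, 9.0, 15.0, 11.0]   # realized demand (invented)
history = []
for d in demands:
    u = order_quantity(x, 10.0, set_point)  # demand forecast = 10
    x = x + u - d                           # inventory dynamics
    history.append(x)
print(history)  # → [48.0, 51.0, 45.0, 49.0]
```

The level hovers around the set point, off only by the forecast error in each period; the robust formulation in the paper additionally guards against the randomness of the demand parameter.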
Optimal allocation model of construction land based on two-level system optimization theory
NASA Astrophysics Data System (ADS)
Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong
2007-06-01
The allocation of construction land is an important task in land-use planning, and whether the implementation of planning decisions succeeds usually depends on a reasonable and scientific distribution method. Considering the constitution of the land-use planning system and the planning process in China, the problem is in essence a multi-level, multi-objective decision problem. Planning-quantity decomposition is a two-level system optimization problem: an optimal resource-allocation decision between a decision-maker at the top level and a number of parallel decision-makers at the lower level. According to the characteristics of the decision-making process of a two-level decision system, this paper develops an optimal allocation model of construction land based on two-level linear programming. To verify the rationality and validity of the model, the Baoan district of Shenzhen City was taken as a test case. With the assistance of the allocation model, construction land was allocated to ten townships of Baoan district. The result obtained from the model is compared with that of the traditional method, and the comparison shows that the model is reasonable and usable. Finally, the paper points out the shortcomings of the model and directions for further research.
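A schematic version of the quantity-decomposition step, with three townships, invented demands, and invented benefit weights (the paper solves a two-level linear program over ten townships): the upper level holds a total construction-land quota, and allocation favors townships with higher benefit per hectare up to their planned demand.

```python
# Greedy stand-in for the lower-level allocation of a fixed upper-level
# quota (hypothetical data; the paper uses two-level linear programming).

def allocate(total, townships):
    # townships: list of (name, demand_ha, benefit_per_ha)
    plan, remaining = {}, total
    for name, demand, benefit in sorted(townships, key=lambda t: -t[2]):
        given = min(demand, remaining)
        plan[name] = given
        remaining -= given
    return plan

townships = [("T1", 120.0, 8.0), ("T2", 200.0, 5.0), ("T3", 90.0, 9.5)]
plan = allocate(250.0, townships)
print(plan)  # → {'T3': 90.0, 'T1': 120.0, 'T2': 40.0}
```

The greedy rule is exact only for this single linear objective; once the lower-level decision-makers have their own objectives, as in the paper, the problem genuinely becomes a two-level program.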
Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.
Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric
2018-03-01
Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method, multi-level particle swarm optimization (MLPSO), to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that the proposed algorithm provides a more suitable modeling framework, computational time savings, and better optimization performance than those reported in the literature on this subject.
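For readers unfamiliar with the building block, here is a bare-bones single-level particle swarm optimizer on a toy two-dimensional function. MLPSO nests such swarms across decision levels; the location-routing constraints and the multi-level structure are omitted, and the parameters below are conventional choices rather than those of the paper.

```python
import random

# Minimal particle swarm optimization on the 2-D sphere function
# (illustrative only; inertia 0.7 and acceleration 1.5 are common
# textbook settings, not the paper's).

def pso(f, dim=2, swarm=20, iters=200, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(x) for x in xs]           # personal best positions
    gbest = min(pbest, key=f)               # global best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * r1 * (pbest[i][d] - xs[i][d])
                            + 1.5 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(best)  # near the minimizer [0, 0]
```

In MLPSO, the fitness evaluated by an upper-level swarm would itself invoke a lower-level optimization, which is what couples the levels.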
A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)
2002-01-01
The report describes a new method for optimization of engineering systems such as aerospace vehicles whose design must harmonize a number of subsystems and various physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system's internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables while holding constant a set of the system-level design variables. The subtask results are stored in the form of Response Surfaces (RS) fitted in the space of the system-level variables, to be used as subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and reconcile the system's internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad work front in the organization of an engineering project involving a number of specialty groups that may be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of a supersonic business jet design.
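The response-surface surrogate idea can be sketched in one dimension (one system-level variable z and an invented subtask response; the report fits surfaces over several system-level variables): sample the subtask at a few values of z, fit a quadratic surface, and let the system level optimize the cheap surrogate instead of re-running the subtask.

```python
# Response-surface surrogate sketch: quadratic fit through three subtask
# samples, then a closed-form system-level optimum on the surrogate.

def subtask(z):
    # stand-in for an expensive, concurrently executed subsystem task
    return (z - 1.5) ** 2 + 2.0

def fit_quadratic(z_samples, y_samples):
    # exact quadratic through three points via divided differences
    # (normal-equation least squares in the general case)
    (z0, z1, z2), (y0, y1, y2) = z_samples, y_samples
    c = ((y2 - y0) / (z2 - z0) - (y1 - y0) / (z1 - z0)) / (z2 - z1)
    b = (y1 - y0) / (z1 - z0) - c * (z0 + z1)
    a = y0 - b * z0 - c * z0 ** 2
    return a, b, c

zs = (0.0, 1.0, 2.0)
a, b, c = fit_quadratic(zs, tuple(subtask(z) for z in zs))
z_opt = -b / (2.0 * c)   # system-level optimum on the surrogate
print(z_opt, a + b * z_opt + c * z_opt ** 2)  # → 1.5 2.0
```

Because the toy response is itself quadratic, the surrogate is exact here; in practice the fit is approximate and the surfaces are refreshed as the system-level search moves.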
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
Optimization Program for Drinking Water Systems
The Area-Wide Optimization Program (AWOP) provides tools and approaches for drinking water systems to meet water quality optimization goals and provide an increased – and sustainable – level of public health protection to their consumers.
Connection between optimal control theory and adiabatic-passage techniques in quantum systems
NASA Astrophysics Data System (ADS)
Assémat, E.; Sugny, D.
2012-08-01
This work explores the relationship between optimal control theory and adiabatic passage techniques in quantum systems. The study is based on a geometric analysis of the Hamiltonian dynamics constructed from Pontryagin's maximum principle. In a three-level quantum system, we show that the stimulated Raman adiabatic passage technique can be associated with a peculiar Hamiltonian singularity. One deduces that the adiabatic pulse is the solution of the optimal control problem only for a specific cost functional. This analysis is extended to the case of a four-level quantum system.
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is investigated, using the new flexible approximation models for the violated local constraints.
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent, and the optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, both suffer from premature convergence during optimization and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost and allows the scheduling to avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is always lower than that of the two representative counterparts.
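The two ingredients named in the abstract — a chaotic sequence (here the logistic map) and an adaptive inertia weight — can be sketched in a minimal PSO on a toy 1-D cost function. The cost function and the linear inertia decay are illustrative assumptions, not the paper's scheduling model or its cost-estimate-based weight.

```python
import random

def logistic_map(x):
    """Chaotic sequence generator (logistic map at r = 4)."""
    return 4.0 * x * (1.0 - x)

def cost(x):
    """Hypothetical stand-in for the task-scheduling cost function."""
    return (x - 2.0) ** 2

random.seed(1)
n_particles, n_iters = 20, 60
lo, hi = -5.0, 5.0
pos = [random.uniform(lo, hi) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]
gbest = min(pos, key=cost)
chaos = 0.37                                     # seed of the chaotic sequence

for t in range(n_iters):
    # Adaptive inertia weight: decays over the run (a simple proxy for the
    # cost-estimate-based adaptation described in the abstract).
    w = 0.9 - 0.5 * t / n_iters
    for i in range(n_particles):
        chaos = logistic_map(chaos)              # advance chaotic sequence
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i]
                  + 2.0 * r1 * (pbest[i] - pos[i])
                  + 2.0 * r2 * (gbest - pos[i])
                  + 0.1 * (chaos - 0.5))         # chaotic perturbation
        pos[i] = min(hi, max(lo, pos[i] + vel[i]))
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
        if cost(pos[i]) < cost(gbest):
            gbest = pos[i]
```

The chaotic perturbation keeps injecting small, deterministic-but-irregular moves so the swarm does not collapse prematurely, while the shrinking inertia shifts the balance from global to local exploration.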
Penders, J; Pop, V; Caballero, L; van de Molengraft, J; van Schaijk, R; Vullers, R; Van Hoof, C
2010-01-01
Recent advances in ultra-low-power circuits and energy harvesters are making self-powered body sensor nodes a reality. Power optimization at the system and application level is crucial in achieving ultra-low-power consumption for the entire system. This paper reviews system-level power optimization techniques and illustrates their impact in the case of autonomous wireless EMG monitoring. The resulting prototype, an autonomous wireless EMG sensor powered by PV cells, is presented.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
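The top-level fusion rule in the abstract rests on the linear-minimum-variance principle; for uncorrelated scalar local estimates this reduces to inverse-variance weighting, which a short sketch makes concrete (the numbers are illustrative, and this is not the paper's full unscented-transformation-based fusion).

```python
def fuse(estimates, variances):
    """Linear minimum-variance fusion of uncorrelated local estimates:
    each local filter is weighted by the inverse of its error variance."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    x_fused = sum(w * x for w, x in zip(inv, estimates)) / total
    p_fused = 1.0 / total    # fused variance never exceeds the best local one
    return x_fused, p_fused

# Three local filters report the same state with different confidence.
x, p = fuse([10.2, 9.8, 10.6], [1.0, 4.0, 2.0])
```

The most confident local estimate (variance 1.0) dominates the fused value, and the fused variance 1/(1 + 1/4 + 1/2) ≈ 0.571 is smaller than any individual variance, which is the point of the fusion step.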
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
The Coupled Tank System (CTS) is widely used in industrial applications, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank. The level of liquid in each tank must be controlled, and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for maintaining the desired liquid level of the CTS. Two variants of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard PSO and PSO with a Priority-based Fitness Scheme (PFPSO). Simulation is conducted in the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE), and overshoot (OS). It is demonstrated that PFPSO is a promising technique for controlling the desired liquid level and improves system performance compared with standard PSO.
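The core of any such gain-tuning scheme is the fitness evaluation: simulate the closed loop for candidate PID gains and score the response. A minimal sketch with a linearized single-tank model follows; the tank parameters are assumptions, and a plain random search stands in for the PSO/PFPSO search so the example stays short.

```python
import random

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.05, steps=400):
    """Euler simulation of a linearized single-tank level model
    dh/dt = (u - c*h) / A under PID control; returns the integral of
    absolute error (IAE) as a fitness value (lower is better)."""
    A, c = 1.0, 0.5                  # hypothetical tank area / outflow coeff.
    h, integral, prev_e, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        e = setpoint - h
        integral += e * dt
        derivative = (e - prev_e) / dt
        u = max(0.0, kp * e + ki * integral + kd * derivative)  # pump >= 0
        h += (u - c * h) / A * dt
        iae += abs(e) * dt
        prev_e = e
    return iae

# A plain random search stands in for the PSO gain search in the paper;
# it exercises the same fitness evaluation.
random.seed(0)
best_gains, best_cost = None, float("inf")
for _ in range(200):
    gains = (random.uniform(0, 5), random.uniform(0, 2), random.uniform(0, 1))
    iae = simulate_pid(*gains)
    if iae < best_cost:
        best_gains, best_cost = gains, iae
```

A priority-based fitness scheme would replace the single IAE score with an ordered comparison of settling time, steady-state error, and overshoot; the simulation loop itself is unchanged.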
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually.
The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods produce a unique optimal solution, whereas multiobjective methods produce a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective of the intended approach is to improve the worth of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the ability to study and analyze a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and improve the certainty of reaching a global optimum. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
NASA Astrophysics Data System (ADS)
Yang, Chunhui; Su, Zhixiong; Wang, Xin; Liu, Yang; Qi, Yongwei
2017-03-01
The "new normal" economic situation and the implementation of a new round of electric power system reform place higher requirements on the daily operation of power grid companies. Investment management, an important day-to-day activity of power grid companies, is directly related to improving the companies' operating efficiency and management level. In this context, establishing an investment management optimization system will help power grid companies improve investment management and control, which is of great significance for adapting to a changing market environment as soon as possible and meeting policy requirements. The purpose of this paper is therefore to construct an investment management optimization system for power grid companies, comprising an investment management system, an investment process control system, an investment structure optimization system, an investment project evaluation system, and an investment management information platform support system.
Optimal coherent control of dissipative N -level systems
NASA Astrophysics Data System (ADS)
Jirari, H.; Pötz, W.
2005-07-01
General optimal coherent control of dissipative N -level systems in the Markovian time regime is formulated within Pontryagin's principle and the Lindblad equation. In the present paper, we study feasibility and limitations of steering of dissipative two-, three-, and four-level systems from a given initial pure or mixed state into a desired final state under the influence of an external electric field. The time evolution of the system is computed within the Lindblad equation and a conjugate gradient method is used to identify optimal control fields. The influence of both field-independent population and polarization decay on achieving the objective is investigated in systematic fashion. It is shown that, for realistic dephasing times, optimum control fields can be identified which drive the system into the target state with very high success rate and in economical fashion, even when starting from a poor initial guess. Furthermore, the optimal fields obtained give insight into the system dynamics. However, if decay rates of the system cannot be subjected to electromagnetic control, the dissipative system cannot be maintained in a specific pure or mixed state, in general.
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second problem concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results showed that the HSA could provide very good solutions when compared to those obtained through other approaches.
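The improvisation mechanics of the HSA — harmony memory consideration, pitch adjustment, and random selection — fit in a few lines. The sketch below is a generic continuous 1-D version with an assumed toy objective, not the article's discrete redundancy-allocation formulation.

```python
import random

def harmony_search(f, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=500):
    """Minimal continuous harmony search minimizing f on [lo, hi].
    hms: harmony memory size; hmcr: memory-consideration rate;
    par: pitch-adjustment rate; bw: pitch-adjustment bandwidth."""
    memory = [random.uniform(lo, hi) for _ in range(hms)]
    for _ in range(iters):
        if random.random() < hmcr:
            x = random.choice(memory)            # improvise from memory
            if random.random() < par:
                x += random.uniform(-bw, bw)     # pitch adjustment
                x = min(hi, max(lo, x))
        else:
            x = random.uniform(lo, hi)           # random improvisation
        worst = max(memory, key=f)
        if f(x) < f(worst):
            memory[memory.index(worst)] = x      # replace the worst harmony
    return min(memory, key=f)

random.seed(3)
best = harmony_search(lambda x: (x - 0.7) ** 2, -2.0, 2.0)
```

For the redundancy problem, the decision variables would instead be integer redundancy levels and the pitch adjustment a ±1 step, with infeasible harmonies (violated system-level constraints) rejected or penalized.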
Optimality approaches to describe characteristic fluvial patterns on landscapes
Paik, Kyungrock; Kumar, Praveen
2010-01-01
Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257
Optimal state transfer of a single dissipative two-level system
NASA Astrophysics Data System (ADS)
Jirari, Hamza; Wu, Ning
2016-04-01
Optimal state transfer of a single two-level system (TLS) coupled to an Ohmic boson bath via off-diagonal TLS-bath coupling is studied by using optimal control theory. In the weak system-bath coupling regime where the time-dependent Bloch-Redfield formalism is applicable, we obtain the Bloch equation to probe the evolution of the dissipative TLS in the presence of a time-dependent external control field. By using the automatic differentiation technique to compute the gradient for the cost functional, we calculate the optimal transfer integral profile that can achieve an ideal transfer within a dimer system in the Fenna-Matthews-Olson (FMO) model. The robustness of the control profile against temperature variation is also analyzed.
NASA Astrophysics Data System (ADS)
Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao
2006-11-01
Enhancing grower income and saving energy are the essential goals of optimal control of the intelligent greenhouse environment. Greenhouse environment control systems exhibit uncertainty, imprecision, nonlinearity, strong coupling, large inertia, and multiple time scales, which make optimal control difficult and model-based optimal control especially so. The optimal control problem of the plant environment in an intelligent greenhouse was therefore researched, and a hierarchical greenhouse environment control system was constructed. At the first level, data are measured and actuators are controlled. At the second level, optimal setting points of the controlled climate variables in the greenhouse are calculated and chosen. At the third level, market analysis and planning are completed. The problem of optimal setting-point selection is discussed in this paper. First, a model of plant canopy photosynthesis response and a greenhouse climate model were constructed. Then, drawing on the experience of planting experts, the optimization goals were set according to the maximal-photosynthesis-rate principle during the daytime and, subject to good plant growth conditions, by the energy-saving principle at night. The environment optimal control setting points were then computed by a genetic algorithm (GA). Comparing the optimized results with data recorded from the real system shows that the method is reasonable and can achieve energy savings and a maximal photosynthesis rate in an intelligent greenhouse.
Access to specialist care: Optimizing the geographic configuration of trauma systems
Jansen, Jan O.; Morrison, Jonathan J.; Wang, Handing; He, Shan; Lawrenson, Robin; Hutchison, James D.; Campbell, Marion K.
2015-01-01
BACKGROUND The optimal geographic configuration of health care systems is key to maximizing accessibility while promoting the efficient use of resources. This article reports the use of a novel approach to inform the optimal configuration of a national trauma system. METHODS This is a prospective cohort study of all trauma patients, 15 years and older, attended to by the Scottish Ambulance Service, between July 1, 2013, and June 30, 2014. Patients underwent notional triage to one of three levels of care (major trauma center [MTC], trauma unit, or local emergency hospital). We used geographic information systems software to calculate access times, by road and air, from all incident locations to all candidate hospitals. We then modeled the performance of all mathematically possible network configurations and used multiobjective optimization to determine geospatially optimized configurations. RESULTS A total of 80,391 casualties were included. A network with only high- or moderate-volume MTCs (admitting at least 650 or 400 severely injured patients per year, respectively) would be optimally configured with a single MTC. A network accepting lower-volume MTCs (at least 240 severely injured patients per year) would be optimally configured with two MTCs. Both configurations would necessitate an increase in the number of helicopter retrievals. CONCLUSION This study has shown that a novel combination of notional triage, network analysis, and mathematical optimization can be used to inform the planning of a national clinical network. Scotland’s trauma system could be optimized with one or two MTCs. LEVEL OF EVIDENCE Care management study, level IV. PMID:26335775
Optimal Quasi-steady Plasma Thruster system characteristics.
NASA Technical Reports Server (NTRS)
Ludwig, D. E.; Kelly, A. J.
1972-01-01
The overall characteristics of a generalized Quasi-steady Plasma Thruster (QPT) system consisting of a thruster head, power conditioning network, and propellant supply subsystem are studied. Energy balance equations for the system are coupled with component mass relationships in order to determine overall system mass and performance. Power supply power levels varying from 100 to 10,000 W, with thruster power levels ranging from 300 kW to 30 MW employing argon as the propellant, are considered. The manner in which overall system mass, average thrust, and burn time vary as a function of power supply power level, quasi-steady power level, and pulse time is studied. Results indicate the existence of optimum pulse times when system mass is employed as an optimization criterion.
Locational Marginal Pricing in the Campus Power System at the Power Distribution Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Gu, Yi; Zhang, Yingchen
2016-11-14
In the development of the smart grid at the distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at the distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distribution nodal prices. Both Direct Current Optimal Power Flow and Alternating Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
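The defining property of a locational marginal price — the cost of serving one more unit of load at a given bus — can be shown on a hypothetical 2-bus example with a congested line (all numbers are illustrative assumptions, not the campus system's data):

```python
def dispatch_cost(load1, load2):
    """Least-cost dispatch for a hypothetical 2-bus network: a cheap
    generator at bus 1, an expensive one at bus 2, one line rated 80 MW.
    Valid only for the lightly loaded cases exercised below."""
    c1, cap1 = 10.0, 100.0            # $/MWh, MW (illustrative numbers)
    c2, cap2 = 30.0, 100.0
    line_limit = 80.0
    # The cheap unit covers its own bus, then exports up to the line limit.
    g1 = min(cap1, load1 + min(line_limit, load2))
    g2 = load2 - (g1 - load1)         # expensive unit makes up the rest
    assert 0.0 <= g1 <= cap1 and 0.0 <= g2 <= cap2
    return c1 * g1 + c2 * g2

# LMP at a bus = marginal cost of serving one more unit of load there,
# approximated here by a finite difference on the optimal dispatch cost.
eps = 1e-3
base = dispatch_cost(0.0, 120.0)
lmp_bus1 = (dispatch_cost(eps, 120.0) - base) / eps
lmp_bus2 = (dispatch_cost(0.0, 120.0 + eps) - base) / eps
```

With the 80 MW line fully loaded, extra demand at bus 2 must be met by the local expensive unit, so the two buses see different prices (10 vs. 30 $/MWh); in a full DC-OPF these prices emerge as the duals of the nodal balance constraints.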
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.
1984-01-01
This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
GA-optimization for rapid prototype system demonstration
NASA Technical Reports Server (NTRS)
Kim, Jinwoo; Zeigler, Bernard P.
1994-01-01
An application of the Genetic Algorithm (GA) is discussed. A novel Hierarchical GA scheme was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters that are most sensitive to the system performance. Low-level GAs search in more detail, employing a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and computing resources are used more efficiently.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Heru Tjahjana, R.
2017-01-01
In this paper, we propose a mathematical model in the form of a dynamic/multi-stage optimization to solve an integrated supplier selection and inventory tracking control problem for a single-product inventory system with product discounts. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve the proposed optimization, determining the optimal supplier and the optimal product volume to purchase from that supplier in each time period, so that the inventory level tracks a reference trajectory given by the decision maker with minimal total cost. We give a numerical experiment to evaluate the proposed model. In the result, the optimal supplier is determined for each time period and the inventory level follows the given reference well.
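The multi-stage structure described above lends itself to a compact dynamic-programming sketch: at each period, choose a supplier and an order quantity; a piecewise-linear discount lowers the unit price above a volume threshold; deviation from the reference inventory is penalized. All data below (demands, prices, thresholds, penalty weight) are illustrative assumptions, not the paper's instance.

```python
from functools import lru_cache

demand = (4, 6, 5)         # units withdrawn each period
ref = (10, 8, 6)           # reference inventory trajectory to track
suppliers = {
    "S1": (5.0, 4.0, 6),   # (base price, discounted price, discount threshold)
    "S2": (4.5, 4.2, 8),
}
PENALTY = 3.0              # weight on |inventory - reference| tracking error

def purchase_cost(name, q):
    base, disc, thresh = suppliers[name]
    return (disc if q >= thresh else base) * q   # piecewise-linear discount

@lru_cache(maxsize=None)
def solve(inv, t):
    """Minimal-cost plan from period t onward, given current inventory."""
    if t == len(demand):
        return 0.0, ()
    best_cost, best_plan = float("inf"), ()
    for name in suppliers:
        for q in range(0, 13):             # discretized order quantities
            nxt = inv + q - demand[t]
            if nxt < 0:                    # no backorders in this sketch
                continue
            step = purchase_cost(name, q) + PENALTY * abs(nxt - ref[t])
            tail_cost, tail_plan = solve(nxt, t + 1)
            if step + tail_cost < best_cost:
                best_cost = step + tail_cost
                best_plan = ((name, q),) + tail_plan
    return best_cost, best_plan

total_cost, plan = solve(8, 0)   # start with 8 units on hand
```

The memoized recursion is the Bellman backup over (inventory, period) states; the returned plan lists one (supplier, quantity) decision per period.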
NASA Astrophysics Data System (ADS)
Şoimoşan, Teodora M.; Danku, Gelu; Felseghi, Raluca A.
2017-12-01
Within the thermo-energy optimization process of an existing heating system, the goals are to increase the system's energy efficiency and to speed up the transition to green energy use. The concept of a multi-energy district heating system, with high levels of renewable energy source (RES) harnessing for heat production, is expected to be the key element in the future urban energy infrastructure, due to the important role it can play in strategies for optimizing and decarbonizing existing district heating systems. The issues that arise are related to the efficient integration of different technologies for harnessing renewable energy sources into the energy mix and to increasing the participation levels of RES, respectively. For the holistic modeling of the district heating system, the concept of the energy hub was used, where the synergy of different primary forms of input energy provides the system a high degree of energy security and flexibility in operation. The optimization of energy flows within the energy hub allows the optimization of the thermo-energy district system in order to approach the dual concept of smart city & smart energy.
NASA Astrophysics Data System (ADS)
Roy, Satadru
Traditional approaches to design and optimize a new system often use a system-centric objective and do not take into consideration how the operator will use this new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that could simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization.
The approach is then demonstrated on an 11-route airline network problem comprising 94 decision variables: 33 integer and 61 continuous. This application problem is representative of an interacting group of systems and poses key challenges to the optimization framework, reflected in the moderate numbers of integer and continuous design variables and the expensive analysis tools. The results indicate that simultaneously solving the subspaces can lead to significant improvement in the fleet-level objective of the airline when compared with the previously developed sequential subspace decomposition approach. In developing the approach to the MINLP/MDNLP challenge problem, several test problems provided the ability to explore the performance of the framework. In solving these test problems, the framework showed that it can solve other MDNLP problems, including those with categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem alone.
Hybrid PV/diesel solar power system design using multi-level factor analysis optimization
NASA Astrophysics Data System (ADS)
Drake, Joshua P.
Solar power systems are of interest to a broad spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. The solar power design literature was surveyed for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflows. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy-audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of the solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was concluded that several improvements could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.
Joint optimization of maintenance, buffers and machines in manufacturing lines
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Nourelfath, Mustapha
2018-01-01
This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
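The nonlinear threshold accepting metaheuristic that performed best in this study admits a compact generic form. The sketch below is an illustrative stand-in, not the authors' implementation: the toy cost function, the neighbourhood move and the threshold schedule are all invented for the example.

```python
import random

def threshold_accepting(init, neighbor, cost, thresholds, iters_per_level=200, seed=0):
    """Generic threshold accepting search: unlike simulated annealing,
    a worse neighbour is accepted whenever the cost increase stays
    below the current (deterministic) threshold."""
    rng = random.Random(seed)
    x, best = init, init
    for T in thresholds:                      # decreasing threshold schedule
        for _ in range(iters_per_level):
            y = neighbor(x, rng)
            if cost(y) - cost(x) < T:         # accept small deteriorations
                x = y
                if cost(x) < cost(best):
                    best = x
    return best

def demo():
    # Toy stand-in for a buffer-allocation-style problem (illustrative only):
    # choose nonnegative integer buffer sizes hitting a target total cheaply.
    target = 12
    def cost(b):
        return (sum(b) - target) ** 2 + 0.01 * sum(v * v for v in b)
    def neighbor(b, rng):
        i = rng.randrange(len(b))             # perturb one buffer by +/-1
        nb = list(b)
        nb[i] = max(0, nb[i] + rng.choice([-1, 1]))
        return tuple(nb)
    return threshold_accepting((0, 0, 0), neighbor, cost, thresholds=[4.0, 1.0, 0.25])
```

The key difference from simulated annealing is visible in the acceptance test: a deterministic threshold replaces the probabilistic Metropolis criterion.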
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
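The inner-level subproblem (supplier selection and order allocation for a given replenishment quantity) can be illustrated with a small exact solver. The paper's polynomial algorithm is not reproduced here; the sketch below instead enumerates supplier subsets, which is exact but exponential, and relies on the fact that, for a fixed subset with linear unit costs and capacities, filling the cheapest supplier first is optimal. The cost structure (fixed ordering cost, unit cost, capacity per supplier) is an assumption for illustration.

```python
from itertools import combinations

def inner_allocation(Q, suppliers):
    """Exact inner-level solve by subset enumeration (illustrative only).

    suppliers: list of (fixed_cost, unit_cost, capacity) tuples.
    Returns (best_cost, {supplier_index: quantity}) or (None, None) if infeasible.
    """
    best = (None, None)
    n = len(suppliers)
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            if sum(suppliers[i][2] for i in subset) < Q:
                continue                      # this subset cannot cover Q
            remaining, total, alloc = Q, 0.0, {}
            # within a fixed subset, filling cheapest unit cost first is optimal
            for i in sorted(subset, key=lambda i: suppliers[i][1]):
                f, c, cap = suppliers[i]
                q = min(cap, remaining)
                total += f + c * q
                alloc[i] = q
                remaining -= q
                if remaining == 0:
                    break
            # charge fixed costs of any selected-but-unused suppliers
            for i in subset:
                if i not in alloc:
                    total += suppliers[i][0]
            if best[0] is None or total < best[0]:
                best = (total, alloc)
    return best
```

An outer-level heuristic would call this routine once per candidate replenishment quantity Q.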
Optimal control of population and coherence in three-level Λ systems
NASA Astrophysics Data System (ADS)
Kumar, Praveen; Malinovskaya, Svetlana A.; Malinovsky, Vladimir S.
2011-08-01
Optimal control theory (OCT) implementations for efficient population transfer and the creation of maximum coherence in a three-level system are considered. We demonstrate that the half-stimulated Raman adiabatic passage scheme for creation of the maximum Raman coherence is the optimal solution according to OCT. We also present a comparative study of several implementations of OCT applied to complete population transfer and creation of maximum coherence. The performance of the conjugate gradient method, the Zhu-Rabitz method and the Krotov method is analysed.
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.
1984-01-01
A method for systematic analysis and optimization of large engineering systems, by decomposition of a large task into a set of smaller subtasks that are solved concurrently, is described. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.
Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas
2017-01-01
As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and to compose them into larger, more meaningful ones. This trend is attractive for analytical problems in the manufacturing system design and performance improvement domain because (1) finding a global optimum for the system is a complex problem, and (2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems with independent services can result in a sub-optimal solution at the system level. This paper investigates Analytical Target Cascading (ATC) as a technique to coordinate the optimization of loosely coupled sub-problems, each of which may be formulated modularly by different departments and solved by modular analytical services. The results demonstrate that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed, modular execution while allowing easier management of the problem formulation.
Optimal management of non-Markovian biological populations
Williams, B.K.
2007-01-01
Wildlife populations typically are described by Markovian models, with population dynamics influenced at each point in time by current but not previous population levels. Considerable work has been done on identifying optimal management strategies under the Markovian assumption. In this paper we generalize this work to non-Markovian systems, for which population responses to management are influenced by lagged as well as current status and/or controls. We use the maximum principle of optimal control theory to derive conditions for the optimal management of such a system, and illustrate the effects of lags on the structure of optimal habitat strategies for a predator-prey system.
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of the various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing the technology options and redundancy levels of the major subsystems. The complexity of spacecraft configurations makes the number of system design variables that must be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems into a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state of the practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, these tools are nearly of the same order of magnitude as a standard single-objective deterministic Genetic Algorithm.
The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessarily high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
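A Genetic Algorithm over purely discrete design variables, as used in this research, can be sketched as follows. The subsystem options and the fitness function below are invented placeholders, and the operators (tournament selection, uniform crossover, random-reset mutation) are one common choice, not necessarily the author's.

```python
import random

def genetic_search(options, fitness, pop_size=30, gens=40, p_mut=0.1, seed=1):
    """Minimal GA over discrete design variables; illustrative sketch only."""
    rng = random.Random(seed)
    def rand_ind():
        return tuple(rng.choice(o) for o in options)
    pop = [rand_ind() for _ in range(pop_size)]
    best_seen = max(pop, key=fitness)
    for _ in range(gens):
        def tourney():                        # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tourney(), tourney()
            # uniform crossover, then random-reset mutation per gene
            child = tuple(g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2))
            child = tuple(rng.choice(o) if rng.random() < p_mut else g
                          for g, o in zip(child, options))
            nxt.append(child)
        pop = nxt
        gen_best = max(pop, key=fitness)      # elitist bookkeeping
        if fitness(gen_best) > fitness(best_seen):
            best_seen = gen_best
    return best_seen

# Hypothetical spacecraft trade: pick a technology/redundancy level per
# subsystem to maximize a made-up score (performance minus a mass penalty).
options = [(0, 1, 2), (0, 1, 2, 3), (1, 2)]   # discrete choices per subsystem
def fitness(ind):
    return 3 * ind[0] + 2 * ind[1] + 4 * ind[2] - 0.5 * sum(ind)
best = genetic_search(options, fitness)
```

Because every variable is an index into a finite option list, no encoding tricks are needed; the chromosome is simply the tuple of chosen options.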
Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
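The ingredients described here (a predicted future process value, its growing predictive variance, and a fixed critical threshold) can be sketched for a scalar linear-Gaussian process. The per-step independence approximation below is a simplification; a true optimal alarm evaluates the joint level-crossing probability over the window. All model parameters are hypothetical.

```python
import math

def gauss_tail(mean, var, thresh):
    # P(X > thresh) for X ~ N(mean, var)
    return 0.5 * math.erfc((thresh - mean) / math.sqrt(2.0 * var))

def alarm_probability(x_hat, p_hat, a, q, thresh, window):
    """Approximate probability that x[k+1] = a*x[k] + w, w ~ N(0, q),
    starting from the filtered estimate N(x_hat, p_hat), exceeds `thresh`
    at least once in the next `window` steps (per-step independence
    approximation, not the exact joint level-crossing probability)."""
    p_no_cross, mean, var = 1.0, x_hat, p_hat
    for _ in range(window):
        mean = a * mean                 # predictive mean, one more step ahead
        var = a * a * var + q           # predictive variance grows with horizon
        p_no_cross *= 1.0 - gauss_tail(mean, var, thresh)
    return 1.0 - p_no_cross

def should_alarm(x_hat, p_hat, a, q, thresh, window, p_crit=0.5):
    # alarm when the predicted crossing probability exceeds a design level
    return alarm_probability(x_hat, p_hat, a, q, thresh, window) >= p_crit
```

Tuning `p_crit` against the desired detection probability is what trades false alarms for missed detections in this scheme.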
Polyhedral Interpolation for Optimal Reaction Control System Jet Selection
NASA Technical Reports Server (NTRS)
Gefert, Leon P.; Wright, Theodore
2014-01-01
An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.
Access to specialist care: Optimizing the geographic configuration of trauma systems.
Jansen, Jan O; Morrison, Jonathan J; Wang, Handing; He, Shan; Lawrenson, Robin; Hutchison, James D; Campbell, Marion K
2015-11-01
The optimal geographic configuration of health care systems is key to maximizing accessibility while promoting the efficient use of resources. This article reports the use of a novel approach to inform the optimal configuration of a national trauma system. This is a prospective cohort study of all trauma patients, 15 years and older, attended to by the Scottish Ambulance Service, between July 1, 2013, and June 30, 2014. Patients underwent notional triage to one of three levels of care (major trauma center [MTC], trauma unit, or local emergency hospital). We used geographic information systems software to calculate access times, by road and air, from all incident locations to all candidate hospitals. We then modeled the performance of all mathematically possible network configurations and used multiobjective optimization to determine geospatially optimized configurations. A total of 80,391 casualties were included. A network with only high- or moderate-volume MTCs (admitting at least 650 or 400 severely injured patients per year, respectively) would be optimally configured with a single MTC. A network accepting lower-volume MTCs (at least 240 severely injured patients per year) would be optimally configured with two MTCs. Both configurations would necessitate an increase in the number of helicopter retrievals. This study has shown that a novel combination of notional triage, network analysis, and mathematical optimization can be used to inform the planning of a national clinical network. Scotland's trauma system could be optimized with one or two MTCs. Care management study, level IV.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. Its advantages are: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions under different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor, compared with the credibility level, for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.
A Language for Specifying Compiler Optimizations for Generic Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcock, Jeremiah J.
2007-01-01
Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization: they are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.
Efficient search, mapping, and optimization of multi-protein genetic systems in diverse bacteria
Farasat, Iman; Kushwaha, Manish; Collens, Jason; Easterbrook, Michael; Guido, Matthew; Salis, Howard M
2014-01-01
Developing predictive models of multi-protein genetic systems to understand and optimize their behavior remains a combinatorial challenge, particularly when measurement throughput is limited. We developed a computational approach to build predictive models and identify optimal sequences and expression levels, while circumventing combinatorial explosion. Maximally informative genetic system variants were first designed by the RBS Library Calculator, an algorithm to design sequences for efficiently searching a multi-protein expression space across a > 10,000-fold range with tailored search parameters and well-predicted translation rates. We validated the algorithm's predictions by characterizing 646 genetic system variants, encoded in plasmids and genomes, expressed in six gram-positive and gram-negative bacterial hosts. We then combined the search algorithm with system-level kinetic modeling, requiring the construction and characterization of 73 variants to build a sequence-expression-activity map (SEAMAP) for a biosynthesis pathway. Using model predictions, we designed and characterized 47 additional pathway variants to navigate its activity space, find optimal expression regions with desired activity response curves, and relieve rate-limiting steps in metabolism. Creating sequence-expression-activity maps accelerates the optimization of many protein systems and allows previous measurements to quantitatively inform future designs. PMID:24952589
Optimality of affine control system of several species in competition on a sequential batch reactor
NASA Astrophysics Data System (ADS)
Rodríguez, J. C.; Ramírez, H.; Gajardo, P.; Rapaport, A.
2014-09-01
In this paper, we analyse the optimality of an affine control system of several species in competition for a single substrate in a sequential batch reactor, the objective being to reach a given (low) level of the substrate. We allow controls to be bounded measurable functions of time plus possible impulses. A suitable modification of the dynamics leads to a slightly different optimal control problem, without impulsive controls, to which we apply different optimality conditions derived from the Pontryagin principle and the Hamilton-Jacobi-Bellman equation. We thus characterise the singular trajectories of our problem as the extremal trajectories keeping the substrate at a constant level. We also establish conditions under which an immediate one-impulse (IOI) strategy is optimal. Some numerical experiments are included to illustrate our study and to show that those conditions are also necessary to ensure the optimality of the IOI strategy.
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we describe in summary form the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems, through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space; (ii) type of method (static or dynamic); (iii) modification types of execution plans (re-optimization or re-scheduling); (iv) level of modification (intra-operator and/or inter-operator); (v) type of event (estimation errors, delay, user preferences); and (vi) nature of decision-making (centralized or decentralized control).
Optimizing Sensor and Actuator Arrays for ASAC Noise Control
NASA Technical Reports Server (NTRS)
Palumbo, Dan; Cabell, Ran
2000-01-01
This paper summarizes the development of an approach to optimizing the locations for arrays of sensors and actuators in active noise control systems. A type of directed combinatorial search, called Tabu Search, is used to select an optimal configuration from a much larger set of candidate locations. The benefit of using an optimized set is demonstrated. The importance of limiting actuator forces to realistic levels when evaluating the cost function is discussed. Results of flight testing an optimized system are presented. Although the technique has been applied primarily to Active Structural Acoustic Control systems, it can be adapted for use in other active noise control implementations.
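The directed combinatorial search idea behind Tabu Search can be sketched for this placement setting: choose k locations from a larger candidate set, always take the best non-tabu swap (even when it worsens the cost), and keep recently swapped locations tabu for a few iterations. The cost function in the usage example is a trivial stand-in for a real noise-control cost evaluation.

```python
import random
from collections import deque

def tabu_search(candidates, k, cost, iters=100, tabu_len=7, seed=0):
    """Illustrative Tabu Search over size-k subsets of candidate locations.

    Each iteration takes the best swap whose elements are not tabu; the
    swapped-out and swapped-in locations then become tabu for `tabu_len`
    moves, which lets the search escape local minima without cycling.
    """
    rng = random.Random(seed)
    current = set(rng.sample(candidates, k))
    best, best_cost = set(current), cost(current)
    tabu = deque(maxlen=tabu_len)
    for _ in range(iters):
        moves = []
        for out in current:
            if out in tabu:
                continue
            for into in candidates:
                if into in current or into in tabu:
                    continue
                trial = (current - {out}) | {into}
                moves.append((cost(trial), out, into, trial))
        if not moves:
            break
        c, out, into, trial = min(moves, key=lambda m: m[0])
        current = trial                   # move even if the cost got worse
        tabu.extend([out, into])
        if c < best_cost:                 # but remember the best subset seen
            best, best_cost = set(trial), c
    return best, best_cost
```

In a real ASAC application `cost` would evaluate the control performance of a sensor/actuator configuration, which is exactly where limiting actuator forces to realistic levels matters.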
Autonomous Energy Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey
With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs): systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and can self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.
Structural optimization by generalized, multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B. B.; Riley, M. F.
1985-01-01
The developments toward a general multilevel optimization capability and results for a three-level structural optimization are described. The method partitions a structure into a number of substructuring levels where each substructure corresponds to a subsystem in the general case of an engineering system. The method is illustrated by a portal framework that decomposes into individual beams. Each beam is a box that can be further decomposed into stiffened plates. Substructuring for this example spans three different levels: (1) the bottom level of finite elements representing the plates; (2) an intermediate level of beams treated as substructures; and (3) the top level for the assembled structure. The three-level case is now considered to be qualitatively complete.
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails combining components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. However, there is no system-level preliminary design process for space propulsion solid rocket motors capable of synthesizing customer requirements into a high-utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is extremely challenging when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of design parameters to be traded off unreasonably large when manual techniques are used. Existing multi-disciplinary optimization approaches generally rely on estimated ratios and correlations rather than mathematical models. The developed system-level model utilizes the Genetic Algorithm to perform the necessary population searches, efficiently replacing the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, are achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low-cost access to space has inspired many recent developments in the methodology of trajectory optimization. Ref.1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach to optimization is used in Ref.2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref.3, where an initial analytical vacuum solution is taken and atmospheric effects are introduced gradually. Ref.4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve with direct and indirect shooting methods. The problem becomes more complex when different phases of the trajectory have different optimization objectives and different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables, and the process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in the framework of an augmented Lagrangian.
Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref.5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner-loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile Using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.
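For orientation, the augmented-Lagrangian setting in which such multiplier updates live can be written in its standard first-order form. The Han-Tapia update used in the paper is a second-order variant of this idea; the expression below is the textbook first-order version, shown only to fix notation, not the paper's scheme:

```latex
L_c(x,\lambda) = f(x) + \lambda^{\top} g(x) + \tfrac{c}{2}\,\lVert g(x)\rVert^{2},
\qquad
\lambda^{+} = \lambda + c\, g(x),
```

where f is the objective, g collects the (equality) constraints, and c > 0 is the penalty parameter; a diagonalised method interleaves a single state update on L_c with a single multiplier update rather than solving an inner minimization to convergence.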
Beal, Jacob; Lu, Ting; Weiss, Ron
2011-01-01
Background The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. Methodology/Principal Findings To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~50%) and latency of the optimized engineered gene networks. Conclusions/Significance Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems. PMID:21850228
An efficient multilevel optimization method for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.
1988-01-01
An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.
Discrete Event Supervisory Control Applied to Propulsion Systems
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Shah, Neerav
2005-01-01
The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory control, which is implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower-level controllers. This structure can be expanded to control large, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers--each representing a module of the overall propulsion system--to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
Optimal maintenance policy incorporating system level and unit level for mechanical systems
NASA Astrophysics Data System (ADS)
Duan, Chaoqun; Deng, Chao; Wang, Bingran
2018-04-01
This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit undergoes a two-level maintenance policy: maintenance is initiated when a unit exceeds its preventive maintenance (PM) threshold, and is also performed opportunistically whenever any other unit is undergoing maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, which arises because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of the approach.
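The idea of optimising a PM threshold against long-run average cost can be illustrated with a minimal Monte Carlo sketch. All numbers (degradation law, costs, failure level) are invented for illustration, and a simple simulation stands in for the paper's semi-Markov decision process formulation:

```python
import random

def avg_cost(pm_threshold, fail_level=10.0, pm_cost=1.0, cm_cost=5.0,
             n_cycles=20000, seed=0):
    """Long-run average cost per unit time of a condition-based PM rule:
    inspect each period, do cheap preventive maintenance once degradation
    exceeds pm_threshold, costly corrective maintenance on failure."""
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        level, t = 0.0, 0
        while True:
            level += rng.expovariate(1.0)   # random degradation increment
            t += 1
            if level >= fail_level:         # hard failure -> corrective repair
                total_cost += cm_cost
                break
            if level >= pm_threshold:       # threshold crossed -> preventive
                total_cost += pm_cost
                break
        total_time += t
    return total_cost / total_time

# Sweep candidate unit-level thresholds: too low wastes PM actions, too high
# risks failures, so an interior threshold wins.
costs = {th: avg_cost(th) for th in (2.0, 6.0, 9.5)}
best = min(costs, key=costs.get)
print(best)
```

The interior optimum mirrors the paper's trade-off: the unit-level PM threshold balances the cost of frequent preventive actions against the cost of hard failures.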
NASA Astrophysics Data System (ADS)
Rainarli, E.; E Dewi, K.
2017-04-01
Research by Fister & Panetta presented an optimal control model of bone marrow cells under cell-cycle-specific chemotherapy drugs. The model used was a bilinear system model. Fister & Panetta proved the existence, uniqueness, and characteristics of the optimal control (the chemotherapy effect). However, with this model, the amount of bone marrow at the final time could fall below 50 percent of the amount before treatment. This could harm patients, because a lack of bone marrow cells reduces the number of leukocytes and patients will experience leukopenia. This research examines the optimal control of a bilinear system with a fixed final state, used to determine the optimal duration of chemotherapy while keeping bone marrow cells at the allowed level. Before the simulation is conducted, this paper shows that the system is controllable using the theory of Lie algebras, and then derives the characteristics of the optimal control. The simulation indicates that a strong chemotherapy drug given over a short time frame is the optimal condition for keeping bone marrow cells at the allowed level while still delivering an effective treatment, and it informs the choice of treatment weights for preserving bone marrow cells. The resulting chemotherapy effect (u) does not reach its maximum value; in other words, the drug dosage must be adjusted to satisfy the final treatment condition, i.e. the number of bone marrow cells should remain at the allowed level.
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used at the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed at the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general design tools and then used in the design of the LST control system. The methods are entirely computerized, so that they can accommodate higher-order LST models, with both conceptual and numerical advantages over standard straightforward design techniques.
Optimization of cell seeding in a 2D bio-scaffold system using computational models.
Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong
2017-05-01
The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) at an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models: the first model represents cell growth patterns, whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo methods, and concepts from the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results show that the input configurations recommended by the proposed optimization method are indeed optimal, illustrating the method's effectiveness. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hierarchical fuzzy control of low-energy building systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Zhen; Dexter, Arthur
2010-04-15
A hierarchical fuzzy supervisory controller is described that is capable of optimizing the operation of a low-energy building, which uses solar energy to heat and cool its interior spaces. The highest-level fuzzy rules choose the most appropriate set of lower-level rules according to the weather and occupancy information; the second-level fuzzy rules determine an optimal energy profile and the overall modes of operation of the heating, ventilating and air-conditioning (HVAC) system; the third-level fuzzy rules select the mode of operation of specific equipment and assign schedules to the local controllers so that the optimal energy profile can be achieved in the most efficient way. Computer simulation is used to compare the hierarchical fuzzy control scheme with a supervisory control scheme based on expert rules. The performance is evaluated by comparing the energy consumption and thermal comfort.
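The three-level dispatch structure described above can be sketched in a few lines. This is a deliberately crisp lookup table standing in for the fuzzy inference of the paper, and every rule, mode, and schedule name below is invented for illustration:

```python
# Level 1: weather/occupancy -> rule set; Level 2: rule set -> HVAC mode;
# Level 3: mode -> equipment schedules for the local controllers.
LEVEL1 = {("sunny", "occupied"): "solar_priority",
          ("cloudy", "occupied"): "auxiliary_priority",
          ("sunny", "empty"): "charge_storage"}
LEVEL2 = {"solar_priority": "passive_cooling",
          "auxiliary_priority": "mechanical_cooling",
          "charge_storage": "store_heat"}
LEVEL3 = {"passive_cooling": {"chiller": "off", "vents": "open"},
          "mechanical_cooling": {"chiller": "on", "vents": "closed"},
          "store_heat": {"chiller": "off", "vents": "closed"}}

def supervise(weather, occupancy):
    rule_set = LEVEL1[(weather, occupancy)]   # level 1: choose the rule set
    mode = LEVEL2[rule_set]                   # level 2: overall HVAC mode
    return LEVEL3[mode]                       # level 3: local schedules

print(supervise("sunny", "occupied"))  # → {'chiller': 'off', 'vents': 'open'}
```

In the actual controller each lookup would be a fuzzy rule base with membership functions and defuzzification rather than a crisp dictionary, but the top-down flow of decisions is the same.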
SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER
NASA Technical Reports Server (NTRS)
Scotti, S. J.
1994-01-01
SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. 
The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices, such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
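The advantage of stating an optimization problem at the language level rather than through subroutine calls can be illustrated with a small analogue. The dictionary below is not SOL syntax; it is an invented Python stand-in showing the same idea: objective, variables, and constraints are declared together, and a generic "solve" routine wires them to an optimizer (here SciPy's SLSQP):

```python
from scipy.optimize import minimize

# Declarative problem description in the spirit of SOL's language-level
# optimization block (illustrative analogue, not actual SOL syntax).
problem = {
    "objective":   lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2,
    "variables":   [(-5.0, 5.0), (-5.0, 5.0)],        # bounds per variable
    "constraints": [lambda x: x[0] + x[1] - 1.0],     # g(x) >= 0
}

def solve(p):
    """The 'compiler': translate the declaration into optimizer calls."""
    cons = [{"type": "ineq", "fun": g} for g in p["constraints"]]
    x0 = [sum(b) / 2.0 for b in p["variables"]]       # midpoint start
    return minimize(p["objective"], x0, bounds=p["variables"],
                    constraints=cons, method="SLSQP")

res = solve(problem)
print([round(v, 3) for v in res.x])  # the unconstrained optimum (1, 2) is feasible
```

As in SOL, the user never touches the optimizer's calling convention; the interface between the model and the optimization engine is generated from the declaration.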
Combining analysis with optimization at Langley Research Center. An evolutionary process
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1982-01-01
The evolutionary process of combining analysis and optimization codes is traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It is traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. This goal requires the development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program. Incorporating a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program facilitates this effort. The resulting software system is controlled with a special-purpose language, communicates with a data management system, and is easily modified to add new programs and capabilities. A 337-degree-of-freedom finite element model is used to verify the accuracy of the system.
Displacement based multilevel structural optimization
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1995-01-01
Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized. Constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. 
This partitioning will also allow for the use of parallel computing, first, by sending the system and subsystems level computations to two different processors, ultimately, by performing all subsystems level optimizations in a massively parallel manner on separate processors. It is expected that the subsystems level optimizations can be further improved through the use of controlled growth, a method which reduces an optimization to a more efficient analysis with only a slight degradation in accuracy. The efficiency of all proposed techniques is being evaluated relative to the performance of the standard single level optimization approach where the complete structure is weight minimized under the action of all given constraints by one processor and to the performance of simultaneous analysis and design which combines analysis and optimization into a single step. It is expected that the present approach can be expanded to include additional structural constraints (buckling, free and forced vibration, etc.) or other disciplines (passive and active controls, aerodynamics, etc.) for true MDO.
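The two-level division of labor described above (system level minimizes load unbalance over displacements; subsystem level sizes elements under stress constraints) can be sketched on a toy structure. The two-spring series model and all material numbers below are invented for illustration; the real DMSO work uses statically indeterminate trusses and beams:

```python
import numpy as np

# Two-level DMSO sketch on a 2-spring series structure fixed at one end.
E, L, P, sigma_allow, rho = 70e9, 1.0, 1e4, 100e6, 2700.0  # SI units

def system_level(A, f):
    # System level: minimize the load unbalance ||K(A) u - f||^2 over the
    # displacements u; for this tiny model the minimum is K u = f exactly.
    k = E * A / L
    K = np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])
    return np.linalg.solve(K, f)

def subsystem_level(A, u):
    # Subsystem level: recover member forces from the displacements, then
    # resize each area to the smallest value meeting the stress constraint.
    elong = np.array([u[0], u[1] - u[0]])
    force = E * A / L * elong
    return np.abs(force) / sigma_allow

f = np.array([0.0, P])               # axial load at the free end
A = np.array([1e-3, 1e-3])           # initial oversized areas
for _ in range(5):                   # alternate the two levels
    u = system_level(A, f)
    A = subsystem_level(A, u)
weight = rho * L * A.sum()
print(np.round(A * 1e6, 2), round(weight, 3))  # areas in mm^2, weight in kg
```

Each subsystem resize depends only on its own member force, which is what makes the subsystem optimizations independent and, for large structures, trivially parallelizable.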
NASA Astrophysics Data System (ADS)
Mangaud, E.; Puthumpally-Joseph, R.; Sugny, D.; Meier, C.; Atabek, O.; Desouter-Lecomte, M.
2018-04-01
Optimal control theory is implemented with fully converged hierarchical equations of motion (HEOM) describing the time evolution of an open system density matrix strongly coupled to the bath in a spin-boson model. The populations of the two-level sub-system are taken as control objectives; namely, their revivals or exchange when switching off the field. We, in parallel, analyze how the optimal electric field consequently modifies the information back flow from the environment through different non-Markovian witnesses. Although the control field has a dipole interaction with the central sub-system only, its indirect influence on the bath collective mode dynamics is probed through HEOM auxiliary matrices, revealing a strong correlation between control and dissipation during a non-Markovian process. A heterojunction is taken as an illustrative example for modeling in a realistic way the two-level sub-system parameters and its spectral density function leading to a non-perturbative strong coupling regime with the bath. Although, due to strong system-bath couplings, control performances remain rather modest, the most important result is a noticeable increase of the non-Markovian bath response induced by the optimally driven processes.
Dynamic modeling and optimization for space logistics using time-expanded networks
NASA Astrophysics Data System (ADS)
Ho, Koki; de Weck, Olivier L.; Hoffman, Jeffrey A.; Shishko, Robert
2014-12-01
This research develops a dynamic logistics network formulation for lifecycle optimization of mission sequences as a system-level integrated method to find an optimal combination of technologies to be used at each stage of a campaign. This formulation can find the optimal transportation architecture while considering its technology trades over time. The proposed methodologies are inspired by ground logistics analysis techniques based on linear-programming network optimization. In particular, the time-expanded network and its extension are developed for dynamic space logistics network optimization, trading solution quality against computational load. In this paper, the methodologies are applied to a human Mars exploration architecture design problem. The results reveal multiple dynamic system-level trades over time and yield a recommendation for the optimal strategy for the human Mars exploration architecture. The considered trades include those between In-Situ Resource Utilization (ISRU) and propulsion technologies, as well as orbit and depot location selections over time. This research serves as a precursor for eventual permanent settlement and colonization of other planets and for humanity becoming a multi-planet species.
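The core idea of a time-expanded network is that each node is a (location, time) pair, "wait" arcs connect a location to itself at the next epoch, and transport arcs between locations can have different costs at different epochs, so technology trades over time fall out of an ordinary shortest-path or min-cost-flow solve. The sketch below uses invented costs and a cheaper late transfer arc standing in for a technology (such as ISRU propellant production) coming online:

```python
import heapq

# Build a tiny time-expanded network: nodes are (location, epoch) pairs.
arcs = {}
for t in range(4):
    arcs.setdefault(("Earth", t), []).append((("Earth", t + 1), 1.0))  # wait
    arcs.setdefault(("Mars", t), []).append((("Mars", t + 1), 0.5))    # wait
arcs[("Earth", 0)].append((("Mars", 1), 10.0))  # early, expensive transfer
arcs[("Earth", 2)].append((("Mars", 3), 4.0))   # cheaper "ISRU-era" transfer

def min_cost(src, dst):
    """Dijkstra over the time-expanded graph = cheapest feasible campaign."""
    pq, seen = [(0.0, src)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node == dst:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in arcs.get(node, []):
            heapq.heappush(pq, (cost + c, nxt))
    return float("inf")

print(min_cost(("Earth", 0), ("Mars", 3)))  # → 6.0: waiting beats early launch
```

The full formulation in the paper is a linear-programming network optimization over commodities and vehicles rather than a single shortest path, but the time-expansion trick, and the way it exposes "launch now vs. wait for better technology" trades, is the same.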
Power-constrained supercomputing
NASA Astrophysics Data System (ADS)
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. 
Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. 
We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
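The LP formulation at the heart of the dissertation's upper-bound analysis can be illustrated in miniature. The numbers below are hypothetical, and the model is simplified to a single phase: choose time fractions over a few DVFS operating points to maximize work rate subject to an average power cap:

```python
from scipy.optimize import linprog

# Hypothetical operating points for one node: power draw and relative speed.
power = [60.0, 90.0, 120.0]   # watts at three DVFS states
speed = [1.0, 1.6, 1.9]       # relative work rate at each state
cap = 100.0                   # node-level average power bound

# Maximize sum(speed_i * x_i) over time fractions x, subject to the power
# cap and the fractions summing to 1 (linprog minimizes, so negate speed).
res = linprog(c=[-s for s in speed],
              A_ub=[power], b_ub=[cap],
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * 3)
print(round(-res.fun, 3))  # best achievable work rate under the cap
```

The optimal schedule mixes the two fastest states so the power constraint binds exactly, the same mechanism by which the dissertation's per-section DVFS/thread schedules extract maximum performance from a fixed power budget.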
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
2018-01-28
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach to a strategy using simultaneous price signals for energy, power, and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the prices of energy and ramping are both significant. Optimal control functions are obtained in both the time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North American Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
An algorithm for solving the system-level problem in multilevel optimization
NASA Technical Reports Server (NTRS)
Balling, R. J.; Sobieszczanski-Sobieski, J.
1994-01-01
A multilevel optimization approach which is applicable to nonhierarchic coupled systems is presented. The approach includes a general treatment of design (or behavior) constraints and coupling constraints at the discipline level through the use of norms. Three different types of norms are examined: the max norm, the Kreisselmeier-Steinhauser (KS) norm, and the l_p norm. The max norm is recommended. The approach is demonstrated on a class of hub frame structures which simulate multidisciplinary systems. The max norm is shown to produce system-level constraint functions which are non-smooth. A cutting-plane algorithm is presented which adequately deals with the resulting corners in the constraint functions. The algorithm is tested on hub frames with an increasing number of members (which simulate disciplines), and the results are summarized.
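The three constraint-aggregation norms compared above are easy to state concretely. A minimal sketch (constraint values are invented; the KS shift by max(g) is a standard numerical-stability device, not part of the definition):

```python
import numpy as np

def max_norm(g):
    # Exact but non-smooth: the corners it creates are what the paper's
    # cutting-plane algorithm must handle.
    return np.max(g)

def ks_norm(g, rho=50.0):
    # Kreisselmeier-Steinhauser: (1/rho) ln(sum exp(rho * g_i)).
    # Smooth and always >= max(g), so it is a conservative envelope.
    m = np.max(g)
    return m + np.log(np.sum(np.exp(rho * (g - m)))) / rho

def p_norm(g, p=6):
    # l_p norm of the violated (positive) parts; approaches the max as p grows.
    return np.sum(np.maximum(g, 0.0) ** p) ** (1.0 / p)

g = np.array([-0.3, 0.1, 0.25])   # constraint values, g_i <= 0 is feasible
print(round(max_norm(g), 4), round(ks_norm(g), 4), round(p_norm(g), 4))
```

All three collapse a vector of discipline-level constraints into one system-level scalar; the choice trades exactness (max) against smoothness (KS, l_p), which is precisely the trade the paper resolves in favor of the max norm plus a cutting-plane method.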
Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game-theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality and to maintain a stable buffer level.
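The Nash Bargaining Solution behind the macroblock-level allocation has a closed form in the simplest setting. The sketch below uses invented utilities: each macroblock i gets utility u_i = R_i - d_i above its disagreement point d_i (its minimum acceptable bits), and maximizing the product of the u_i under a total-bit budget splits the surplus equally. The paper's actual game weights players by coding complexity and perceptual importance, which this toy omits:

```python
def nash_allocation(d, r_total):
    """Unweighted Nash Bargaining Solution for linear utilities u_i = R_i - d_i
    under sum(R_i) = r_total: maximize prod u_i => equal surplus shares."""
    surplus = r_total - sum(d)
    assert surplus > 0, "budget must exceed the disagreement points"
    share = surplus / len(d)
    return [di + share for di in d]

d = [100, 40, 60]                       # per-macroblock minimum bits (invented)
alloc = nash_allocation(d, r_total=320)
print(alloc)  # → [140.0, 80.0, 100.0]
```

The equal-surplus outcome is what "fairness" means here: no macroblock can gain without another losing at least as much, and every macroblock does at least as well as its disagreement point.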
Barnes, Priscilla A; Curtis, Amy B; Hall-Downey, Laura; Moonesinghe, Ramal
2012-01-01
This study examines whether partnership-related measures in the second version of the National Public Health Performance Standards (NPHPS) are useful in evaluating the level of activity as well as identifying latent constructs that exist among local public health systems (LPHSs). In a sample of 110 LPHSs, descriptive analysis was conducted to determine the frequency and percentage of 18 partnership-related NPHPS measures. Principal components factor analysis was conducted to identify unobserved characteristics that promote effective partnerships among LPHSs. Results revealed that 13 of the 18 measures were most frequently reported at the minimal-moderate level (conducted 1%-49% of the time). Coordination of personal health and social services to optimize access (74.6%) was the most frequently reported measure at minimal-moderate levels. Optimal levels (conducted >75% of the time) were reported most frequently in 2 activities: participation in emergency preparedness coalitions and local health departments ensuring service provision by working with state health departments (67% and 61% of respondents, respectively). The least frequently reported optimal-level activity was reviewing partnership effectiveness (4% of respondents). Factor analysis revealed categories of partnership-related measures in 4 domains: resources and activities contributing to relationship building, evaluating community leadership activities, research, and state and local linkages to support public health activities. System-oriented public health assessments may have questions that serve as proxy measures to examine levels of interorganizational partnerships. Several measures from the NPHPS were useful in establishing a national baseline of minimal and optimal activity levels as well as identifying factors to enhance the delivery of the 10 essential public health services among organizations and individuals in public health systems.
A linear decomposition method for large optimization problems. Blueprint for development
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1982-01-01
A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.
An integrated radar model solution for mission level performance and cost trades
NASA Astrophysics Data System (ADS)
Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia
2017-05-01
A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR and D) funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models together using commercial off-the-shelf software, ModelCenter, for an Active Electronically Scanned Array (AESA) radar system. Each represented model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's own mission modeling framework (AFSIM). The team first had to identify the necessary models, and with the aid of subject matter experts (SMEs) understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and the validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer, while reducing design cycle time through risk mitigation and early validation of design decisions.
An Optimization Framework for Dynamic, Distributed Real-Time Systems
NASA Technical Reports Server (NTRS)
Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara
2003-01-01
This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.
A system-level cost-of-energy wind farm layout optimization with landowner modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Le; MacDonald, Erin
This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangle land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability. (C) 2013 Elsevier Ltd. All rights reserved.
MEDICARE PAYMENTS AND SYSTEM-LEVEL HEALTH-CARE USE
ROBBINS, JACOB A.
2015-01-01
The rapid growth of Medicare managed care over the past decade has the potential to increase the efficiency of health-care delivery. Improvements in care management for some may improve efficiency system-wide, with implications for optimal payment policy in public insurance programs. These system-level effects may depend on local health-care market structure and vary based on patient characteristics. We use exogenous variation in the Medicare payment schedule to isolate the effects of market-level managed care enrollment on the quantity and quality of care delivered. We find that in areas with greater enrollment of Medicare beneficiaries in managed care, the non–managed care beneficiaries have fewer days in the hospital but more outpatient visits, consistent with a substitution of less expensive outpatient care for more expensive inpatient care, particularly at high levels of managed care. We find no evidence that care is of lower quality. Optimal payment policies for Medicare managed care enrollees that account for system-level spillovers may thus be higher than those that do not. PMID:27042687
Contrast-enhanced spectral mammography with a photon-counting detector.
Fredenberg, Erik; Hemmendorff, Magnus; Cederström, Björn; Aslund, Magnus; Danielsson, Mats
2010-05-01
Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. The authors have investigated a photon-counting spectral imaging system with two energy bins for contrast-enhanced mammography. System optimization and the potential benefit compared to conventional non-energy-resolved absorption imaging was studied. A framework for system characterization was set up that included quantum and anatomical noise and a theoretical model of the system was benchmarked to phantom measurements. Optimal combination of the energy-resolved images corresponded approximately to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved close to 50% compared to absorption imaging in the phantom study. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, yielded only a minute improvement. In a simulation of a clinically more realistic case, spectral imaging was predicted to perform approximately 30% better than absorption imaging for an average glandularity breast with an average level of anatomical noise. For dense breast tissue and a high level of anatomical noise, however, a rise in detectability by a factor of 6 was predicted. Another approximately 70%-90% improvement was found to be within reach for an optimized system. Contrast-enhanced spectral mammography is feasible and beneficial with the current system, and there is room for additional improvements. Inclusion of anatomical noise is essential for optimizing spectral imaging systems.
Thomas, Bex George; Elasser, Ahmed; Bollapragada, Srinivas; Galbraith, Anthony William; Agamy, Mohammed; Garifullin, Maxim Valeryevich
2016-03-29
A system and method of using one or more DC-DC/DC-AC converters and/or alternative devices allows strings of multiple module technologies to coexist within the same PV power plant. A computing (optimization) framework estimates the percentage allocation of PV power plant capacity to selected PV module technologies. The framework and its supporting components considers irradiation, temperature, spectral profiles, cost and other practical constraints to achieve the lowest levelized cost of electricity, maximum output and minimum system cost. The system and method can function using any device enabling distributed maximum power point tracking at the module, string or combiner level.
Numerical aerodynamic simulation facility. Preliminary study extension
NASA Technical Reports Server (NTRS)
1978-01-01
The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished through the following tasks: (1) to further develop, optimize, and describe the functional description of the custom hardware; (2) to delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate system's performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications, including a user-level high-level programming language and its correspondence with the instruction set, and to outline the operating system requirements.
Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C
2017-06-01
The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009 to 2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18% over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.
Qiao, Wei; Venayagamoorthy, Ganesh K; Harley, Ronald G
2008-01-01
Wide-area coordinating control is becoming an important issue and a challenging problem in the power industry. This paper proposes a novel optimal wide-area coordinating neurocontrol (WACNC), based on wide-area measurements, for a power system with power system stabilizers, a large wind farm and multiple flexible ac transmission system (FACTS) devices. An optimal wide-area monitor (OWAM), which is a radial basis function neural network (RBFNN), is designed to identify the input-output dynamics of the nonlinear power system. Its parameters are optimized through particle swarm optimization (PSO). Based on the OWAM, the WACNC is then designed by using the dual heuristic programming (DHP) method and RBFNNs, while considering the effect of signal transmission delays. The WACNC operates at a global level to coordinate the actions of local power system controllers. Each local controller communicates with the WACNC, receives remote control signals from the WACNC to enhance its dynamic performance and therefore helps improve system-wide dynamic and transient performance. The proposed control is verified by simulation studies on a multimachine power system.
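Particle swarm optimization, used above to tune the OWAM's RBFNN parameters, can be sketched generically. The sphere function below is a stand-in objective, not the paper's network identification error, and all parameter values are conventional defaults rather than the authors' settings:

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]            # each particle's best position
    pval = [f(xi) for xi in x]
    gbest = pbest[min(range(n_particles), key=lambda i: pval[i])][:]
    gval = min(pval)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
                if fx < gval:
                    gval, gbest = fx, x[i][:]
    return gbest, gval

# stand-in for a network-parameter fitness function
sphere = lambda p: sum(t * t for t in p)
best, val = pso(sphere, dim=3)
print(val)  # close to 0
```

In the paper's setting the decision vector would hold RBFNN parameters and the fitness would be the identification error on measured power-system data.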
Multi-Objective Mission Route Planning Using Particle Swarm Optimization
2002-03-01
solutions to complex problems using particles that interact with each other. Both Particle Swarm Optimization (PSO) and the Ant System (AS) have been...
Maximal violation of Clauser-Horne-Shimony-Holt inequality for four-level systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu Libin; Max-Planck-Institute for the Physics of Complex Systems, Noethnitzer Strasse 38, 01187 Dresden; Chen Jingling
2004-03-01
Clauser-Horne-Shimony-Holt inequality for bipartite systems of four dimensions is studied in detail by employing unbiased eight-port beam-splitter measurements. The uniform formulas for the maximum and minimum values of this inequality for such measurements are obtained. Based on these formulas, we show that an optimal nonmaximally entangled state is about 6% more resistant to noise than the maximally entangled one. We also give the optimal state and the optimal angles, which are important for experimental realization.
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for a manned environmental control system (ECS) to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
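The core of any multi-objective optimization like the one described above is non-dominated (Pareto) filtering: a strategy survives only if no other strategy is at least as good in both objectives. A minimal sketch with hypothetical (lifetime, power) pairs rather than ECS data:

```python
def pareto_front(points):
    """Return the non-dominated subset of (lifetime, power) pairs,
    where lifetime is maximized and power consumption is minimized.
    Assumes the points are distinct."""
    front = []
    for p in points:
        # p is dominated if some other point has lifetime >= and power <=
        dominated = any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# hypothetical emergency strategies: (lifetime in days, power in kW)
strategies = [(10, 5.0), (12, 6.0), (8, 4.0), (12, 7.0), (9, 6.5)]
print(pareto_front(strategies))  # [(10, 5.0), (12, 6.0), (8, 4.0)]
```

NSGA-II builds on this test, adding fast non-dominated sorting over a whole population plus crowding-distance selection; the sketch shows only the dominance relation that defines the Pareto frontiers reported in the paper.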
Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems
NASA Astrophysics Data System (ADS)
Kreuder, John J.
Thermoelectrics are solid state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, there is a need for industries to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing only on module design rather than an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB®. A modular, shell-based architecture is developed to carry out concept generation, system simulation, and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's Method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints.
During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton Cycle. A basic package of components and interconnections are verified and provided as well.
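The Newton-based steady-state solution that TEPSS relies on can be illustrated generically. The sketch below (pure Python, finite-difference Jacobian, dense Gaussian elimination) is a stand-in, and the two-equation "balance" system is a toy, not a TEPSS component model:

```python
def newton_steady_state(residual, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve residual(x) = 0 by Newton's method with a
    finite-difference Jacobian, for a square system of equations."""
    x = list(x0)
    n = len(x)
    for _ in range(max_iter):
        r = residual(x)
        if max(abs(ri) for ri in r) < tol:
            return x
        # finite-difference Jacobian J[i][j] = d r_i / d x_j
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = x[:]
            xp[j] += h
            rp = residual(xp)
            for i in range(n):
                J[i][j] = (rp[i] - r[i]) / h
        # solve J * dx = -r by Gaussian elimination with partial pivoting
        A = [J[i][:] + [-r[i]] for i in range(n)]
        for c in range(n):
            piv = max(range(c, n), key=lambda i: abs(A[i][c]))
            A[c], A[piv] = A[piv], A[c]
            for i in range(c + 1, n):
                f = A[i][c] / A[c][c]
                for k in range(c, n + 1):
                    A[i][k] -= f * A[c][k]
        dx = [0.0] * n
        for i in range(n - 1, -1, -1):
            dx[i] = (A[i][n] - sum(A[i][k] * dx[k] for k in range(i + 1, n))) / A[i][i]
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# toy two-equation "balance" system with a root at (3, 2)
def balance(x):
    t1, t2 = x
    return [t1 ** 2 + t2 - 11.0, t1 + t2 ** 2 - 7.0]

print(newton_steady_state(balance, [1.0, 1.0]))  # converges near (3, 2)
```

Keeping the residual system square and independent, as the dissertation emphasizes, is exactly what makes this linear solve inside each Newton step well-posed.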
NASA Astrophysics Data System (ADS)
Yang, Sam
The dissertation presents the mathematical formulation, experimental validation, and application of a volume element model (VEM) devised for modeling, simulation, and optimization of energy systems in their early design stages. The proposed model combines existing modeling techniques and experimental adjustment to formulate a reduced-order model, while retaining sufficient accuracy to serve as a practical system-level design analysis and optimization tool. In the VEM, the physical domain under consideration is discretized in space using lumped hexahedral elements (i.e., volume elements), and the governing equations for the variable of interest are applied to each element to quantify diverse types of flows that cross it. Subsequently, a system of algebraic and ordinary differential equations is solved with respect to time and scalar (e.g., temperature, relative humidity, etc.) fields are obtained in both spatial and temporal domains. The VEM is capable of capturing and predicting dynamic physical behaviors in the entire system domain (i.e., at system level), including mutual interactions among system constituents, as well as with their respective surroundings and cooling systems, if any. The VEM is also generalizable; that is, the model can be easily adapted to simulate and optimize diverse systems of different scales and complexity and attain numerical convergence with sufficient accuracy. Both the capability and generalizability of the VEM are demonstrated in the dissertation via thermal modeling and simulation of an Off-Grid Zero Emissions Building, an all-electric ship, and a vapor compression refrigeration (VCR) system. 
Furthermore, the potential of the VEM as an optimization tool is presented through the integrative thermodynamic optimization of a VCR system, whose results are used to evaluate the trade-offs between various objective functions, namely, coefficient of performance, second law efficiency, pull-down time, and refrigerated space temperature, in both transient and steady-state operations.
Optimal Resource Allocation under Fair QoS in Multi-tier Server Systems
NASA Astrophysics Data System (ADS)
Akai, Hirokazu; Ushio, Toshimitsu; Hayashi, Naoki
Recent developments in network technology have made possible multi-tier server systems, where several tiers perform functionally different processing requested by clients. It is an important issue to allocate the systems' resources to clients dynamically based on their current requests. Meanwhile, Q-RAM has been proposed for resource allocation in real-time systems. In server systems, it is important that the execution results of all applications requested by clients achieve the same QoS (quality of service) level. In this paper, we extend Q-RAM to multi-tier server systems and propose a method for optimal resource allocation with fairness of the QoS levels of clients' requests. We also consider an assignment problem of physical machines to be put to sleep in each tier so that the energy consumption is minimized.
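One simple way to realize the fairness goal described above, equalizing clients' QoS under a resource budget, is a greedy max-min allocation in the spirit of Q-RAM's marginal allocation. This is an illustrative sketch, not the paper's algorithm, and the utility table is hypothetical:

```python
def fair_qos_allocation(utils, budget):
    """Greedy max-min fair allocation of discrete resource units.

    `utils[i][k]` is the QoS of client i at resource level k
    (non-decreasing in k). Repeatedly give one more unit to the
    client whose current QoS is lowest, until the budget is spent
    or no client can improve further.
    """
    alloc = [0] * len(utils)
    spent = 0
    while spent < budget:
        # clients that can still move up a resource level
        cand = [i for i in range(len(utils)) if alloc[i] + 1 < len(utils[i])]
        if not cand:
            break
        i = min(cand, key=lambda i: utils[i][alloc[i]])  # lowest current QoS
        alloc[i] += 1
        spent += 1
    return alloc

# hypothetical QoS-per-level tables for three clients
utils = [[1, 3, 5, 6], [2, 4, 5, 7], [0, 2, 6, 8]]
print(fair_qos_allocation(utils, budget=5))  # [2, 1, 2]
```

The multi-tier extension in the paper additionally couples the levels across tiers, since a request's end-to-end QoS is limited by its weakest tier.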
NASA Astrophysics Data System (ADS)
Kuenzig, Thomas; Dehé, Alfons; Krumbein, Ulrich; Schrag, Gabriele
2018-05-01
Maxing out the technological limits in order to satisfy customers' demands and obtain the best performance of micro-devices and -systems is a challenge for today's manufacturers. Dedicated system simulation is key to investigating the potential of device and system concepts in order to identify the best design w.r.t. the given requirements. We present a tailored, physics-based system-level modeling approach combining lumped with distributed models that provides detailed insight into device and system operation at low computational expense. The resulting transparent, scalable (i.e. reusable) and modularly composed models explicitly contain the physical dependency on all relevant parameters, thus being well suited for dedicated investigation and optimization of MEMS devices and systems. This is demonstrated for an industrial capacitive silicon microphone. The performance of such microphones is determined by distributed effects like viscous damping and inhomogeneous capacitance variation across the membrane as well as by system-level phenomena like package-induced acoustic effects and the impact of the electronic circuitry for biasing and read-out. The model presented here covers all relevant figures of merit and thus enables evaluation of the optimization potential of silicon microphones for high-fidelity applications. This work was carried out at the Technical University of Munich, Chair for Physics of Electrotechnology. Thomas Kuenzig is now with Infineon Technologies AG, Neubiberg.
Optimized Design of the SGA-WZ Strapdown Airborne Gravimeter Temperature Control System
Cao, Juliang; Wang, Minghao; Cai, Shaokun; Zhang, Kaidong; Cong, Danni; Wu, Meiping
2015-01-01
The temperature control system is one of the most important subsystems of the strapdown airborne gravimeter. The quartz flexible accelerometer, based on springy support technology, is the core sensor of the strapdown airborne gravimeter, and the magnet steel in its electromagnetic force equilibrium circuits is strongly affected by temperature; to guarantee the temperature control precision and minimize the effect of temperature on the gravimeter, the SGA-WZ temperature control system adopts a three-level control method. Based on the design experience of the SGA-WZ-01, the SGA-WZ-02 temperature control system features a further optimized design. In the 1st-level temperature control, a thermoelectric cooler is used to counteract temperature changes caused by hot weather. The experiments show that the optimized stability of the 1st-level temperature control is about 0.1 °C and the maximum cool-down capability is about 10 °C. The temperature field of the 2nd- and 3rd-level temperature control is analyzed using the finite element analysis software ANSYS, and the 2nd- and 3rd-level temperature control optimization scheme is based on this heat analysis. The experimental results show that the static accuracy of the SGA-WZ-02 reaches 0.21 mGal/24 h, with internal accuracy of 0.743 mGal/4.8 km and external accuracy of 0.37 mGal/4.8 km compared with the result of the GT-2A, whose internal precision is better than 1 mGal/4.8 km; all of these results improve on the SGA-WZ-01. PMID:26633407
A water management decision support system contributing to sustainability
NASA Astrophysics Data System (ADS)
Horváth, Klaudia; van Esch, Bart; Baayen, Jorn; Pothof, Ivo; Talsma, Jan; van Heeringen, Klaas-Jan
2017-04-01
Deltares and Eindhoven University of Technology are developing a new decision support system (DSS) for regional water authorities. In order to maintain water levels in the Dutch polder system, water must be drained and pumped out from the polders to the sea. The timing and amount of pumping depend on the current sea level, the water level in the polder, the weather forecast, the electricity price forecast, and possibly local renewable power production. This is a multivariable optimisation problem, where the goal is to keep the water level in the polder within certain bounds. By optimizing the operation of the pumps, energy usage and costs can be reduced, so the operation of the regional water authorities can be more sustainable while also anticipating an increasing share of renewables in the energy mix in a cost-effective way. The decision support system, based on Delft-FEWS as operational data-integration platform, runs an optimization model built in RTC-Tools 2, which performs real-time optimization to calculate the pumping strategy, taking present and future circumstances into account. As the core of the real-time decision support system, RTC-Tools 2 fulfils the key requirements of a DSS: it is fast, robust, and always finds the optimal solution. These properties are associated with convex optimization, in which the global optimum can always be found. The challenge in the development is to maintain a convex formulation of all the non-linear components in the system, i.e. open channels, hydraulic structures, and pumps. The system is being introduced through 4 pilot projects, one of which is with the Dutch Water Authority Rivierenland. This is a typical Dutch polder system: several polders are drained to the main water system, the Linge. The water from the Linge can be released to the main rivers, which are subject to tidal fluctuations. In case of low tide, water can be released via the gates.
In case of high tide, water must be pumped. The goal of the pilot is to make the operation of the regional water authority more sustainable and cost-efficient. Sustainability can be achieved by minimizing CO2 production through minimizing the energy used for pumping. This work demonstrates the functionality of the new decision support system, using RTC-Tools 2, through the example of a pilot project.
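Ignoring within-horizon level bounds, the cost side of the pumping decision reduces to choosing the cheapest hours in which to move the required volume. This deliberately simplified sketch (not RTC-Tools 2, and not a convex program; prices and volumes are made up) shows that core trade-off:

```python
def schedule_pumping(prices, total_volume, capacity):
    """Pick the cheapest hours in which to pump `total_volume`,
    at most `capacity` per hour. A relaxation that ignores the
    hour-by-hour water-level bounds a real DSS must respect."""
    plan = [0.0] * len(prices)
    remaining = total_volume
    for t in sorted(range(len(prices)), key=lambda t: prices[t]):
        if remaining <= 0:
            break
        q = min(capacity, remaining)
        plan[t] = q
        remaining -= q
    if remaining > 0:
        raise ValueError("horizon capacity cannot absorb the inflow")
    return plan

prices = [30.0, 12.0, 45.0, 10.0, 25.0, 50.0]  # illustrative price forecast per hour
print(schedule_pumping(prices, total_volume=250.0, capacity=100.0))
```

Reintroducing the level bounds couples the hours together, which is why the operational system formulates the problem as a convex optimization over the whole horizon instead of a greedy choice.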
Information's role in the estimation of chaotic signals
NASA Astrophysics Data System (ADS)
Drake, Daniel Fred
1998-11-01
Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: Is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise.
Finally, a slight modification of the linear description leads to a method of synthesizing on limited precision platforms ``pseudo-chaotic'' sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.
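The qualitative collapse described above can be reproduced with a small numerical experiment. The grid-based MMSE estimator below is a brute-force Bayesian stand-in for the dissertation's closed-form estimator; the tent map, trajectory length, and noise levels are illustrative choices:

```python
import numpy as np

def tent_trajectory(x0, n):
    """Iterate the tent map x -> 1 - 2|x - 0.5| for n steps."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(1.0 - 2.0 * abs(xs[-1] - 0.5))
    return np.array(xs)

def mmse_estimate(y, sigma, grid_size=4001):
    """MMSE (posterior-mean) estimate given observations y = x + noise.

    The posterior over the initial condition is computed on a dense grid;
    each grid point indexes a full candidate trajectory."""
    n = len(y)
    x0s = np.linspace(0.0, 1.0, grid_size)
    trajs = np.array([tent_trajectory(x0, n) for x0 in x0s])   # (grid, n)
    loglik = -0.5 * np.sum((trajs - y) ** 2, axis=1) / sigma**2
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    return w @ trajs                       # posterior-mean trajectory

rng = np.random.default_rng(0)
x = tent_trajectory(0.3123, 8)
errs = {}
for sigma in (0.01, 1.0):
    y = x + sigma * rng.normal(size=x.size)
    xhat = mmse_estimate(y, sigma)
    errs[sigma] = float(np.sqrt(np.mean((xhat - x) ** 2)))
    print(f"sigma={sigma}: rms estimation error {errs[sigma]:.3f}")
```

At low noise the posterior concentrates on the true initial condition and the trajectory is recovered almost exactly; at high noise the posterior spreads toward the prior and even this optimal estimator degenerates into a guess based on a priori statistics, mirroring the abstract's argument.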
Raghavendra, U; Gudigar, Anjan; Maithri, M; Gertych, Arkadiusz; Meiburger, Kristen M; Yeong, Chai Hong; Madla, Chakri; Kongmebhol, Pailin; Molinari, Filippo; Ng, Kwan Hoong; Acharya, U Rajendra
2018-04-01
Ultrasound imaging is one of the most common visualizing tools used by radiologists to identify the location of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intra-observer variabilities. Thus, a computer-aided diagnosis (CAD) system can be helpful to cross-verify the severity of nodules. This paper proposes a new CAD system to characterize thyroid nodules using optimized multi-level elongated quinary patterns. In this study, higher order spectral (HOS) entropy features extracted from these patterns appropriately distinguished benign and malignant nodules under particle swarm optimization (PSO) and support vector machine (SVM) frameworks. Our CAD algorithm achieved a maximum accuracy of 97.71% and 97.01% on the private and public datasets, respectively. The evaluation of this CAD system on both private and public datasets confirmed its effectiveness as a secondary tool in assisting radiological findings.
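The quinary-pattern texture coding underlying this CAD system can be sketched generically. The code below implements a plain local quinary pattern on a square 3x3 neighbourhood with invented thresholds; it is not the authors' elongated variant, and the thresholds are not their PSO-tuned values:

```python
import numpy as np

def quinary_code(patch, t1=2, t2=5):
    """Quantize neighbour-centre differences of a 3x3 patch into 5 levels
    {-2,-1,0,1,2} using two thresholds t1 < t2."""
    c = int(patch[1, 1])
    # 8 neighbours, clockwise from the top-left corner
    nbrs = patch[[0, 0, 0, 1, 2, 2, 2, 1],
                 [0, 1, 2, 2, 2, 1, 0, 0]].astype(int) - c
    return np.select(
        [nbrs >= t2, nbrs >= t1, nbrs > -t1, nbrs > -t2],
        [2, 1, 0, -1], default=-2)

def lqp_histograms(img, t1=2, t2=5):
    """Split each quinary map into 'upper' (+2) and 'lower' (-2) binary
    patterns and histogram them, a common quinary-descriptor recipe."""
    h, w = img.shape
    upper = np.zeros((h - 2) * (w - 2), dtype=int)
    lower = np.zeros_like(upper)
    k = 0
    for i in range(h - 2):
        for j in range(w - 2):
            q = quinary_code(img[i:i + 3, j:j + 3], t1, t2)
            upper[k] = int("".join("1" if v == 2 else "0" for v in q), 2)
            lower[k] = int("".join("1" if v == -2 else "0" for v in q), 2)
            k += 1
    return (np.bincount(upper, minlength=256),
            np.bincount(lower, minlength=256))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16)).astype(np.uint8)
hu, hl = lqp_histograms(img)
print(hu.sum(), hl.sum())   # each histogram counts all 14x14 patches
```

In a full pipeline such histograms (or, here, HOS entropy features derived from the pattern maps) would feed the SVM classifier, with PSO searching over parameters such as the thresholds.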
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, D; Li, K; Lubner, M G
Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: select kV and mAs to minimize dose under the constraint of maintaining the detection performance as clinically prescribed. To experimentally solve this optimization problem, exhaustive measurements of detectability index (d') for a hepatic lesion detection task were performed at 15 different mA levels and 4 kV levels using an anthropomorphic phantom. The measured d' values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that, for a prescribed detectability level d', the optimal kV-mA can be determined from the crossing between the d' contour and the dose contour that corresponds to the minimum dose. Results: Taking d'=16 as an example: the kV-mAs combinations on the measured iso-d' line of MBIR are 80–150 (3.8), 100–140 (6.6), 120–150 (11.3), and 140–160 (17.2), where values in the parentheses are measured dose values in mGy. As a result, the optimal kV was 80 and the optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G. Lubner: Nothing to disclose. P. J. Pickhardt: Co-founder, VirtuoCTC, LLC; Stockholder, Cellectar Biosciences, Inc. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.
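The selection rule described in the Methods amounts to a constrained minimization over a measured grid. The sketch below substitutes synthetic d' and dose surfaces for the phantom measurements; the power-law models, grid spacing, and coefficients are all invented for illustration:

```python
import numpy as np

kv  = np.array([80, 100, 120, 140])
mas = np.linspace(50, 750, 15)                 # 15 mAs levels, 4 kV levels
KV, MAS = np.meshgrid(kv, mas, indexing="ij")

# toy surrogates: d' grows with both kV and mAs; dose grows faster with kV
dprime = 0.9 * np.sqrt(MAS) * (KV / 100.0) ** 0.8
dose   = 1e-4 * MAS * (KV / 100.0) ** 3.2      # "mGy", illustrative only

def optimal_setting(d_target):
    """Minimize dose subject to d' >= d_target over the measured grid:
    the crossing of the iso-d' contour with the lowest iso-dose contour."""
    feasible = dprime >= d_target
    if not feasible.any():
        return None
    masked_dose = np.where(feasible, dose, np.inf)
    i, j = np.unravel_index(np.argmin(masked_dose), dose.shape)
    return kv[i], mas[j], dose[i, j]

best = optimal_setting(16.0)
print("optimal kV, mAs, dose:", best)
```

With these toy surfaces the minimum-dose feasible point lands at the lowest kV, echoing the paper's finding that 80 kV was optimal for the MBIR case.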
The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan
2016-05-01
Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
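The controlled node-based SIRS dynamics can be sketched with a mean-field simulation. The network, rates, and the static immunization rate below are illustrative; the paper derives a time-varying optimal control, whereas this sketch only compares a constant control against no control:

```python
import numpy as np

def simulate(A, beta, gamma, delta, u, T, dt=0.01):
    """Euler integration of mean-field node-based SIRS dynamics.

    S_i + I_i + R_i = 1 per node; u_i is the per-node immunization rate
    moving susceptibles directly into the recovered (immune) class."""
    n = A.shape[0]
    I = np.full(n, 0.1)
    R = np.zeros(n)
    for _ in range(int(T / dt)):
        S = 1.0 - I - R
        force = beta * (A @ I)               # infection pressure on each node
        dI = S * force - gamma * I
        dR = gamma * I + u * S - delta * R   # delta: loss of immunity (SIRS)
        I, R = I + dt * dI, R + dt * dR
    return I

# star network: hub node 0 connected to 9 leaves
n = 10
A = np.zeros((n, n))
A[0, 1:] = 1.0
A[1:, 0] = 1.0

I_no  = simulate(A, beta=0.4, gamma=0.3, delta=0.1, u=np.zeros(n), T=50.0)
I_ctl = simulate(A, beta=0.4, gamma=0.3, delta=0.1, u=np.full(n, 0.2), T=50.0)
print("mean infection, no immunization:", round(float(I_no.mean()), 3))
print("mean infection, immunized      :", round(float(I_ctl.mean()), 3))
```

Even this crude constant control lowers the endemic infection level; the paper's optimality system instead trades that reduction off against the immunization cost over time, and shows how the topology (here, the hub-leaf asymmetry) shapes the optimal schedule.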
Pan-Zhou, Xin-Ru; Mayes, Benjamin A; Rashidzadeh, Hassan; Gasparac, Rahela; Smith, Steven; Bhadresa, Sanjeev; Gupta, Kusum; Cohen, Marita Larsson; Bu, Charlie; Good, Steven S; Moussa, Adel; Rush, Roger
2016-10-01
IDX184 is a phosphoramidate prodrug of 2'-methylguanosine-5'-monophosphate, developed to treat patients infected with hepatitis C virus. A mass balance study of radiolabeled IDX184 and pharmacokinetic studies of IDX184 in portal vein-cannulated monkeys revealed relatively low IDX184 absorption but higher exposure of IDX184 in the portal vein than in the systemic circulation, indicating >90 % of the absorbed dose was subject to hepatic extraction. Systemic exposures to the main metabolite, 2'-methylguanosine (2'-MeG), were used as a surrogate for liver levels of the pharmacologically active entity 2'-MeG triphosphate, and accordingly, systemic levels of 2'-MeG in the monkey were used to optimize formulations for further clinical development of IDX184. Capsule formulations of IDX184 delivered acceptable levels of 2'-MeG in humans; however, the encapsulation process introduced low levels of the genotoxic impurity ethylene sulfide (ES), which necessitated formulation optimization. Animal pharmacokinetic data guided the development of a tablet with trace levels of ES and pharmacokinetic performance equal to that of the clinical capsule in the monkey. Under fed conditions in humans, the new tablet formulation showed similar exposure to the capsule used in prior clinical trials.
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchase from the day-ahead market via stochastic optimization. Historical data of day-ahead hourly electric power consumption are used to provide forecast results with a forecasting error, which is represented by a chance constraint and reformulated into a deterministic form using a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the proposed method.
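The first step's chance constraint can be illustrated with a single-Gaussian simplification (the paper uses a Gaussian mixture, which adds a mixture-weighted quantile but follows the same pattern). The forecast figures below are invented:

```python
import math

def z_quantile(p):
    """Standard normal quantile, via bisection on the erf-based CDF.
    Adequate for a sketch; a real implementation would use a library."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def required_purchase(forecast_mwh, mu, sigma, eps):
    """Chance constraint P(purchase >= forecast + error) >= 1 - eps,
    with Gaussian error, becomes the deterministic bound below."""
    return forecast_mwh + mu + z_quantile(1.0 - eps) * sigma

# tolerate a 5% shortfall risk given a forecast error with 10 MWh std
p = required_purchase(forecast_mwh=500.0, mu=0.0, sigma=10.0, eps=0.05)
print(f"day-ahead purchase: {p:.1f} MWh")
```

The deterministic reformulation simply shifts the purchase by the (1 - eps) quantile of the forecast-error distribution, which is what makes the stochastic first-step problem tractable.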
Robot-Arm Dynamic Control by Computer
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.
1987-01-01
Feedforward and feedback schemes linearize responses to control inputs. Method for control of robot arm based on computed nonlinear feedback and state transformations to linearize system and decouple robot end-effector motions along each of Cartesian axes, augmented with optimal scheme for correction of errors in workspace. Major new feature of control method: optimal error-correction loop operates directly on task level, not on joint-servocontrol level.
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
Determining the optimal requirements for and design variable values of new systems, which operate along with existing systems to provide a set of overarching capabilities, as a single task is challenging due to the highly interconnected effects that setting requirements on a new system's design can have on how an operator uses this newly designed system. This task of determining the requirements and the design variable values becomes even more difficult because of the presence of uncertainties in the new system design and in the operational environment. This research proposed and investigated aspects of a framework that generates optimum design requirements of new, yet-to-be-designed systems that, when operating alongside other systems, will optimize fleet-level objectives while considering the effects of various uncertainties. Specifically, this research effort addresses the issues of uncertainty in the design of the new system through reliability-based design optimization methods, and uncertainty in the operations of the fleet through descriptive sampling methods and robust optimization formulations. In this context, fleet-level performance metrics result from using the new system alongside other systems to accomplish an overarching objective or mission. This approach treats the design requirements of a new system as decision variables in an optimization problem formulation that a user in the position of making an acquisition decision could solve. This solution would indicate the best new system requirements, and an associated description of the best possible design variable values for that new system, to optimize the fleet-level performance metric(s).
Using a problem motivated by recorded operations of the United States Air Force Air Mobility Command for illustration, the approach is demonstrated first for a simplified problem that only considers demand uncertainties in the service network and the proposed methodology is used to identify the optimal design requirements and optimal aircraft sizing variables of new, yet-to-be-introduced aircraft. With this new aircraft serving alongside other existing aircraft, the fleet of aircraft satisfy the desired demand for cargo transportation, while maximizing fleet productivity and minimizing fuel consumption via a multi-objective problem formulation. The approach is then extended to handle uncertainties in both the design of the new system and in the operations of the fleet. The propagation of uncertainties associated with the conceptual design of the new aircraft to the uncertainties associated with the subsequent operations of the new and existing aircraft in the fleet presents some unique challenges. A computationally tractable hybrid robust counterpart formulation efficiently handles the confluence of the two types of domain-specific uncertainties. This hybrid formulation is tested on a larger route network problem to demonstrate the scalability of the approach. Following the presentation of the results obtained, a summary discussion indicates how decision-makers might use these results to set requirements for new aircraft that meet operational needs while balancing the environmental impact of the fleet with fleet-level performance. Comparing the solutions from the uncertainty-based and deterministic formulations via a posteriori analysis demonstrates the efficacy of the robust and reliability-based optimization formulations in addressing the different domain-specific uncertainties. 
Results suggest that the aircraft design requirements and design description determined through the hybrid robust counterpart formulation approach differ from solutions obtained from the simplistic deterministic approach and lead to greater fleet-level fuel savings when subjected to real-world uncertain scenarios (i.e., they are more robust to uncertainty). The research, though applied to a specific air cargo application, is technically agnostic in nature and can be applied to other facets of policy and acquisition management to explore capability trade spaces for different vehicle systems, mitigate risks, define policy, and potentially generate better returns on investment. Other domains relevant to policy and acquisition decisions could utilize the problem formulation and solution approach proposed in this dissertation provided that the problem can be split into a non-linear programming problem to describe the new system sizing and the fleet operations problem can be posed as a linear/integer programming problem.
NASA Astrophysics Data System (ADS)
Mortazavi-Naeini, Mohammad; Kuczera, George; Cui, Lijie
2014-06-01
Significant population increase in urban areas is likely to result in a deterioration of drought security and level of service provided by urban water resource systems. One way to cope with this is to optimally schedule the expansion of system resources. However, the high capital costs and environmental impacts associated with expanding or building major water infrastructure warrant the investigation of scheduling system operational options such as reservoir operating rules, demand reduction policies, and drought contingency plans, as a way of delaying or avoiding the expansion of water supply infrastructure. Traditionally, minimizing cost has been considered the primary objective in scheduling capacity expansion problems. In this paper, we consider some of the drawbacks of this approach. It is shown that there is no guarantee that the social burden of coping with drought emergencies is shared equitably across planning stages. In addition, it is shown that previous approaches do not adequately exploit the benefits of joint optimization of operational and infrastructure options and do not adequately address the need for the high level of drought security expected for urban systems. To address these shortcomings, a new multiobjective optimization approach to scheduling capacity expansion in an urban water resource system is presented and illustrated in a case study involving the bulk water supply system for Canberra. The results show that the multiobjective approach can address the temporal equity issue of sharing the burden of drought emergencies and that joint optimization of operational and infrastructure options can provide solutions superior to those just involving infrastructure options.
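The joint-optimization idea above can be sketched by brute-force enumeration of combined infrastructure/operational schedules followed by extraction of the Pareto set. All costs, burden figures, and discount factors below are invented, not Canberra data; the two objectives are discounted cost and the worst-stage restriction burden, a stand-in for the temporal-equity concern:

```python
import itertools

STAGES = range(4)                     # four planning stages
BUILD = range(4)                      # stage at which the new source comes online
CUTS = [0.0, 0.02, 0.04]              # operational demand-reduction level

def evaluate(build_stage, cut):
    """Return (total discounted cost, worst-stage restriction burden)."""
    capital = 100.0 / (1.1 ** (10 * build_stage))   # discounted capital cost
    op_cost = 400.0 * cut * build_stage             # policy cost until build
    # restriction frequency per stage: high before the new source exists
    # (reduced by demand cuts), low afterwards
    burden = [0.12 - cut if s < build_stage else 0.04 for s in STAGES]
    return capital + op_cost, max(burden)

points = {(b, c): evaluate(b, c) for b, c in itertools.product(BUILD, CUTS)}
pareto = sorted(k for k, v in points.items()
                if not any(w[0] <= v[0] and w[1] <= v[1] and w != v
                           for w in points.values()))
print("Pareto-optimal (build stage, demand cut):", pareto)
```

In this toy front, the cheapest schedule achieving a worst-stage burden of 0.08 builds at stage 2 combined with demand cuts, a solution that only appears when operational and infrastructure options are optimized jointly, which is the abstract's central claim.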
NASA Astrophysics Data System (ADS)
Hasan, T.; Kang, Y.-S.; Kim, Y.-J.; Park, S.-J.; Jang, S.-Y.; Hu, K.-Y.; Koop, E. J.; Hinnen, P. C.; Voncken, M. M. A. J.
2016-03-01
Advancement of the next-generation technology nodes and emerging memory devices demands tighter lithographic focus control. Although the leveling performance of the latest-generation scanners is state of the art, challenges remain at the wafer edge due to large process variations. There are several customer-configurable leveling control options available in ASML scanners, some of which are application specific in their scope of leveling improvement. In this paper, we assess the usability of leveling non-correctable error models to identify yield-limiting edge dies. We introduce a novel dies-in-spec based holistic methodology for leveling optimization to guide tool users in selecting an optimal configuration of leveling options. Significant focus gain, and consequently yield gain, can be achieved with this integrated approach. The Samsung site in Hwaseong observed improved edge focus performance in the production of a mid-end memory product layer running on an ASML NXT 1960 system. A 50% improvement in focus and a 1.5%p gain in edge yield were measured with the optimized configurations.
Li, Jing; He, Li; Fan, Xing; Chen, Yizhong; Lu, Hongwei
2017-08-01
This study presents a synergic optimization of greenhouse gas (GHG) emission control and system cost in integrated municipal solid waste (MSW) management on the basis of bi-level programming. The bi-level program is formulated by integrating minimization of GHG emissions at the leader level and of system cost at the follower level into a general MSW framework. Different from traditional single- or multi-objective approaches, the proposed bi-level programming is capable of not only addressing the tradeoffs but also dealing with the leader-follower relationship between different decision makers, who have dissimilar perspectives and interests. Placing GHG emission control at the leader level emphasizes the significant environmental concern in MSW management. A bi-level decision-making process based on satisfactory degree is then employed, which is suitable for solving highly nonlinear problems in a computationally effective manner. The capabilities and effectiveness of the proposed bi-level programming are illustrated by an application to an MSW management problem in Canada. Results show that the obtained optimal management strategy can bring considerable revenues, approximately 76 to 97 million dollars. To control GHG emissions, the strategy gives priority to the development of the recycling facility throughout the whole period, especially in later periods. In terms of capacity, the existing landfill is sufficient for the next 30 years without development of new landfills, while expansion of the composting and recycling facilities should receive more attention.
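The leader-follower structure can be sketched by enumerating the leader's decision and solving the follower's cost minimization for each value. Unit costs, emission factors, and capacities below are invented, and the follower is a greedy stand-in (optimal for this simple capacitated routing) rather than the paper's nonlinear model:

```python
# Leader caps the landfill share to curb GHG; follower then routes waste
# at minimum cost subject to that cap.  The leader picks the cap whose
# follower best-response has the lowest emissions.
COST = {"landfill": 30.0, "compost": 55.0, "recycle": 70.0}    # $/tonne
GHG  = {"landfill": 1.0, "compost": 0.3, "recycle": 0.1}       # tCO2e/tonne
WASTE = 100.0                                                   # tonnes/day
CAPACITY = {"landfill": 100.0, "compost": 40.0, "recycle": 50.0}

def follower(cap_landfill):
    """Follower (cost minimizer): fill cheapest facilities first under the cap."""
    caps = dict(CAPACITY, landfill=min(CAPACITY["landfill"], cap_landfill))
    flows, left = {}, WASTE
    for fac in sorted(COST, key=COST.get):
        flows[fac] = min(caps[fac], left)
        left -= flows[fac]
    return None if left > 1e-9 else flows

best = None
for cap in range(0, 101, 5):                    # leader's decision grid
    flows = follower(float(cap))
    if flows is None:
        continue                                # cap leaves waste unrouted
    ghg = sum(GHG[f] * q for f, q in flows.items())
    cost = sum(COST[f] * q for f, q in flows.items())
    if best is None or ghg < best[1]:
        best = (cap, ghg, cost)
print("leader's landfill cap: %d t/day, GHG %.1f tCO2e/day, cost $%.0f/day" % best)
```

The leader pushes the cap to the lowest feasible value, and the follower reacts with its own cost-minimal routing, illustrating why the bi-level outcome differs from a single decision maker optimizing a weighted sum of both objectives.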
Implementation and Performance Issues in Collaborative Optimization
NASA Technical Reports Server (NTRS)
Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian
1996-01-01
Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures and examines the details of the formulation and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system-level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.
Automatic multi-organ segmentation using learning-based segmentation and level set optimization.
Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin
2011-01-01
We present a novel generic segmentation system for the fully automatic multi-organ segmentation from CT medical images. Thereby we combine the advantages of learning-based approaches on point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. Thereby the level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.
Review of optimization techniques of polygeneration systems for building applications
NASA Astrophysics Data System (ADS)
Rong, A.; Su, Y.; Lahdelma, R.
2016-08-01
Polygeneration means simultaneous production of two or more energy products in a single integrated process. Polygeneration is an energy-efficient technology and plays an important role in the transition to future low-carbon energy systems. It can find wide applications in utilities and in different types of industrial and building sectors. This paper mainly focuses on polygeneration applications in the building sector. The scales of polygeneration systems in the building sector range from the micro level for a single home to the large level for residential districts. The development of polygeneration microgrids is also related to building applications. The paper aims at giving a comprehensive review of optimization techniques for designing, synthesizing, and operating different types of polygeneration systems for building applications.
Advanced Intelligent System Application to Load Forecasting and Control for Hybrid Electric Bus
NASA Technical Reports Server (NTRS)
Momoh, James; Chattopadhyay, Deb; Elfayoumy, Mahmoud
1996-01-01
The primary motivation for this research emanates from providing a decision support system to the electric bus operators in municipal and urban localities which will guide the operators to maintain an optimal compromise among the noise level, pollution level, fuel usage, etc. This study is backed up by our previous studies of battery characteristics, permanent magnet DC motors, and electric traction motor sizing completed in the first year. The operator of the hybrid electric bus must determine an optimal power management schedule to meet a given load demand for different weather and road conditions. The decision support system for the bus operator comprises three sub-tasks: forecasting the electrical load for the route to be traversed, divided into specified time periods (a few minutes); deriving an optimal 'plan' or 'pre-schedule' based on the load forecast for the entire time horizon (i.e., for all time periods) ahead of time; and finally employing corrective control action to monitor and modify the optimal plan in real time. A fully connected artificial neural network (ANN) model is developed for forecasting the kW requirement for the hybrid electric bus based on inputs like climatic conditions, passenger load, road inclination, etc. The ANN model is trained using the back-propagation algorithm employing improved optimization techniques like the projected Lagrangian technique. The pre-scheduler is based on a Goal-Programming (GP) optimization model with noise, pollution, and fuel usage as the three objectives. GP has the capability of analyzing the trade-off among the conflicting objectives and arriving at the optimal activity levels, e.g., throttle settings. The corrective control action, or the third sub-task, is formulated as an optimal control model with inputs from the real-time database as well as the GP model to minimize the error (or deviation) from the optimal plan.
These three activities are linked, with the ANN forecaster providing the output to the GP model, which in turn produces the pre-schedule for the optimal control model. Some preliminary results based on a hypothetical test case will be presented for the load forecasting module. The computer codes for the three modules will be made available for adoption by bus operating agencies. Sample results will be provided using these models. The software will be a useful tool for supporting the control systems for the Electric Bus project of NASA.
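The GP pre-scheduler's trade-off logic can be sketched for a single time period using one-sided (overshoot-only) deviations. The linear response models, goals, weights, and demand figure below are invented for illustration; the paper's GP model spans the full time horizon:

```python
GOALS   = {"noise": 70.0, "pollution": 40.0, "fuel": 8.0}
WEIGHTS = {"noise": 1.0, "pollution": 2.0, "fuel": 1.5}
DEMAND_KW = 45.0

def responses(throttle):
    """Toy linear responses for a throttle setting in [0, 1]."""
    return {"noise": 60.0 + 25.0 * throttle,      # dB
            "pollution": 20.0 + 50.0 * throttle,  # emission index
            "fuel": 4.0 + 9.0 * throttle}         # L/h

def gp_score(throttle):
    """Weighted sum of one-sided deviations: only overshoot is penalized."""
    r = responses(throttle)
    return sum(WEIGHTS[k] * max(0.0, r[k] - GOALS[k]) for k in GOALS)

# feasible settings are those delivering at least the demanded power
feasible = [i / 100.0 for i in range(101) if 100.0 * i / 100.0 >= DEMAND_KW]
best = min(feasible, key=gp_score)
print("optimal throttle %.2f, weighted deviation %.3f" % (best, gp_score(best)))
```

With all three objectives worsening as throttle rises, the GP solution sits at the lowest throttle that still meets the load demand; richer models would make the trade-off between the three goals genuinely conflicting.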
Economic Evaluation of Dual-Level-Residence Solar-Energy System
NASA Technical Reports Server (NTRS)
1982-01-01
105-page report is one in a series of economic evaluations of different solar-energy installations. Using study results, an optimal collector area is chosen that minimizes life-cycle costs. From this optimal size, thermal and economic performance are evaluated.
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The paper describes a computer programming system designed to be used for methodology research as well as applications in structural optimization. The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities existing in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: (1) variability of structural layout and overall shape geometry, (2) static strength and stiffness constraints, (3) local buckling failure, and (4) vibration constraints. The paper concludes with a review of the further development trends of this programing system.
NASA Astrophysics Data System (ADS)
Hao, Qichen; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Huang, Linxian
2018-05-01
An optimization approach is used for the operation of groundwater artificial recharge systems in an alluvial fan in Beijing, China. The optimization model incorporates a transient groundwater flow model, which allows for simulation of the groundwater response to artificial recharge. The facilities' operation with regard to recharge rates is formulated as a nonlinear programming problem to maximize the volume of surface water recharged into the aquifers under specific constraints. This optimization problem is solved by the parallel genetic algorithm (PGA) based on OpenMP, which could substantially reduce the computation time. To solve the PGA with constraints, the multiplicative penalty method is applied. In addition, the facilities' locations are implicitly determined on the basis of the results of the recharge-rate optimizations. Two scenarios are optimized and the optimal results indicate that the amount of water recharged into the aquifers will increase without exceeding the upper limits of the groundwater levels. Optimal operation of this artificial recharge system can also contribute to the more effective recovery of the groundwater storage capacity.
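The solver pattern, a genetic algorithm with a multiplicative penalty, can be sketched with a linear head-response stand-in for the transient flow simulation. Capacities, response coefficients, and GA settings below are invented, and the sketch is serial rather than the paper's OpenMP-parallel version:

```python
import random

random.seed(7)
N = 4                                   # recharge facilities
QMAX = [30.0, 20.0, 25.0, 15.0]         # facility capacities (1e4 m3/d)
RESP = [0.05, 0.08, 0.03, 0.10]         # head rise per unit recharge (m)
HEAD_LIMIT = 3.0                        # allowed head rise at the critical well

def fitness(q):
    """Total recharge, scaled down multiplicatively if the head limit is broken."""
    head = sum(r * qi for r, qi in zip(RESP, q))
    penalty = 1.0 if head <= HEAD_LIMIT else HEAD_LIMIT / head
    return sum(q) * penalty

def ga(pop_size=60, gens=200):
    pop = [[random.uniform(0, qm) for qm in QMAX] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            i = random.randrange(N)                       # single-gene mutation
            child[i] = min(QMAX[i], max(0.0, child[i] + random.gauss(0, 2)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print("recharge rates:", [round(q, 1) for q in best],
      "head rise: %.2f m" % sum(r * q for r, q in zip(RESP, best)))
```

The multiplicative penalty degrades, rather than discards, infeasible individuals, so the search can approach the constraint boundary from both sides; the paper parallelizes exactly this population evaluation loop.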
DOE Office of Scientific and Technical Information (OSTI.GOV)
DuPont, Bryony; Cagan, Jonathan; Moriarty, Patrick
This paper presents a system of modeling advances that can be applied in the computational optimization of wind plants. These modeling advances include accurate cost and power modeling, partial wake interaction, and the effects of varying atmospheric stability. To validate the use of this advanced modeling system, it is employed within an Extended Pattern Search (EPS)-Multi-Agent System (MAS) optimization approach for multiple wind scenarios. The wind farm layout optimization problem involves optimizing the position and size of wind turbines such that the aerodynamic effects of upstream turbines are reduced, which increases the effective wind speed and resultant power at eachmore » turbine. The EPS-MAS optimization algorithm employs a profit objective, and an overarching search determines individual turbine positions, with a concurrent EPS-MAS determining the optimal hub height and rotor diameter for each turbine. Two wind cases are considered: (1) constant, unidirectional wind, and (2) three discrete wind speeds and varying wind directions, each of which have a probability of occurrence. Results show the advantages of applying the series of advanced models compared to previous application of an EPS with less advanced models to wind farm layout optimization, and imply best practices for computational optimization of wind farms with improved accuracy.« less
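The wake bookkeeping at the heart of such layout optimization can be sketched with the classic Jensen top-hat model plus a crude partial-overlap factor. The decay constant, thrust coefficient, and rotor size below are illustrative, and the paper's partial-wake and atmospheric-stability models are considerably more detailed:

```python
import math

def waked_speed(u0, upstream, turbine, D=80.0, k=0.075, ct=0.8):
    """Wind along +x.  Speed at `turbine` given one upstream rotor at `upstream`."""
    dx = turbine[0] - upstream[0]
    if dx <= 0:
        return u0                              # not downstream: unaffected
    r_wake = D / 2 + k * dx                    # linearly expanding wake radius
    dy = abs(turbine[1] - upstream[1])
    if dy >= r_wake + D / 2:
        return u0                              # rotor fully outside the wake
    # Jensen velocity deficit at distance dx
    deficit = (1 - math.sqrt(1 - ct)) * (D / (D + 2 * k * dx)) ** 2
    # crude partial-wake factor: fraction of the rotor span inside the wake
    overlap = min(1.0, max(0.0, (r_wake + D / 2 - dy) / D))
    return u0 * (1 - deficit * overlap)

u0 = 10.0
print("aligned, 5D downstream :", round(waked_speed(u0, (0, 0), (400, 0)), 2))
print("offset, partially waked:", round(waked_speed(u0, (0, 0), (400, 90)), 2))
```

A layout optimizer such as the EPS-MAS approach repeatedly evaluates expressions like this (summed over all upstream rotors) inside its profit objective, which is why turbine positions, hub heights, and rotor diameters all interact.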
Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.
Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin
2012-06-10
The digital pixel driving scheme makes the organic light-emitting diode (OLED) microdisplays more immune to the pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in full digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grow dramatically. This paper will discuss the digital driving ability to achieve kilogray-levels for megapixel displays. The optimal scan strategy is proposed for creating ultra high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay, with 4096 gray levels, is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in the 0.35 μm 3.3 V-6 V dual voltage one polysilicon layer, four metal layers (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show the gray level linearity of the correction schemes for the optimal scan strategy is acceptable by the human eye.
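The digital (time-ratio) gray-scale principle behind such driving schemes can be sketched with binary-weighted sub-frames, with the most significant bits split into shorter sub-fields, one common way to smooth the scan load. The bit depth matches the paper's 4096 gray levels, but the splitting rule below is illustrative and is not the authors' optimal scan strategy:

```python
BITS = 12                                # 4096 gray levels

def on_time_slots(gray):
    """Total on-time (in unit slots) under binary-weighted sub-frames.
    On-time is exactly proportional to the gray code: ideal linearity."""
    assert 0 <= gray < 2 ** BITS
    return sum(1 << b for b in range(BITS) if gray & (1 << b))

def subfield_weights(bits=BITS, split_msbs=3):
    """Binary weights with the top `split_msbs` bits each split into two
    equal sub-fields; the weight sum (total frame time) is preserved."""
    w = []
    for b in range(bits):
        weight = 1 << b
        if b >= bits - split_msbs:
            w += [weight // 2, weight // 2]
        else:
            w.append(weight)
    return w

w = subfield_weights()
print("sub-fields:", len(w), "sum of weights:", sum(w))
```

Because perceived luminance tracks total on-time, any schedule whose weights still sum to 4095 preserves the gray scale; the "optimal scan" and the correction schemes in the paper then address the nonidealities (data bandwidth, OLED response) that this idealized sketch ignores.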
Flight-Test Validation and Flying Qualities Evaluation of a Rotorcraft UAV Flight Control System
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Tuschler, Mark B.; Kanade, Takeo
2000-01-01
This paper presents a process of design, flight-test validation, and flying qualities evaluation of a flight control system for a rotorcraft-based unmanned aerial vehicle (RUAV). The keystone of this process is an accurate flight-dynamic model of the aircraft, derived by using system identification modeling. The model captures the most relevant dynamic features of our unmanned rotorcraft, and explicitly accounts for the presence of a stabilizer bar. Using the identified model, we were able to determine the performance margins of our original control system and identify limiting factors. The performance limitations were addressed and the attitude control system was optimized for three different performance levels: slow, medium, and fast. The optimized control laws will be implemented in our RUAV. We will first determine the validity of our control design approach by flight-testing our optimized controllers. Subsequently, we will fly a series of maneuvers with the three optimized controllers to determine the level of flying qualities that can be attained. The outcome will enable us to draw important conclusions on the flying qualities requirements for small-scale RUAVs.
Artificial blood circulation: stabilization, physiological control, and optimization.
Lerner, A Y
1990-04-01
The requirements for creating an efficient Artificial Blood Circulation System (ABCS) have been determined. A hierarchical three-level adaptive control system is suggested for ABCS to solve the following problems: stabilization of the circulation conditions, left and right pump coordination, physiological control for maintaining a proper relation between the cardiac output and the level of gas exchange required for metabolism, and optimization of the system behavior. The adaptations to varying load and body parameters will be accomplished using the signals which characterize the real-time computer-processed values of correlations between the changes in hydraulic resistance of blood vessels, or the changes in aortic pressure, and the oxygen (or carbon dioxide) concentration.
The Avoidance of Saturation Limits in Magnetic Bearing Systems During Transient Excitation
NASA Technical Reports Server (NTRS)
Rutland, Neil K.; Keogh, Patrick S.; Burrows, Clifford R.
1996-01-01
When a transient event, such as mass loss, occurs in a rotor/magnetic bearing system, optimal vibration control forces may exceed bearing capabilities. This will be inevitable when the mass loss is sufficiently large, and a conditionally unstable dynamic system could result if the bearing characteristics become nonlinear. This paper provides a controller design procedure to suppress, where possible, bearing force demands below saturation levels while maintaining vibration control. It utilizes H∞ optimization with appropriate input and output weightings. Simulation of transient behavior following mass loss from a flexible rotor is used to demonstrate the avoidance of conditional instability. A compromise between transient control force and vibration levels was achieved.
NASA Astrophysics Data System (ADS)
Shorikov, A. F.
2017-10-01
In this paper we study the problem of optimizing the guaranteed result for program control of the final state of a regional social and economic system in the presence of risks. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final state of this process under incomplete information. To solve this problem we construct a general algorithm in the form of a recurrent procedure that solves linear programming and finite optimization problems.
System Risk Assessment and Allocation in Conceptual Design
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)
2003-01-01
As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools, which synthesize multidisciplinary integration, probabilistic analysis, and optimization, are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Tumer, Irem; Barszcz, Eric
2005-01-01
Integrated System Health Management (ISHM) systems are used to detect, assess, and isolate functional failures in order to improve the safety of space systems such as Orbital Space Planes (OSPs). An ISHM system, as a whole, consists of several subsystems that monitor different components of an OSP, including: Spacecraft, Launch Vehicle, Ground Control, and the International Space Station. In this research, therefore, we propose a new methodology to design and optimize ISHM as a distributed system with multiple disciplines (that correspond to different subsystems of OSP safety). Considerable attention has been given in the literature to the multidisciplinary design optimization of problems with such an architecture (as will be reviewed in the full paper).
NASA Astrophysics Data System (ADS)
Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda
2014-05-01
Cyber-physical systems (CPS) have recently emerged as a new technology that can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices that have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber-physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO methods, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization by 5% compared with the traditional PSO method.
A Systems View of Health, Wellness, and Gender: Implications for Mental Health Counseling.
ERIC Educational Resources Information Center
Nicholas, Donald R.; And Others
1992-01-01
Introduces systems view of optimal health and wellness that is consistent with mental health counseling's concern for holism and well-being of clients. Systems view suggests women may be more attentive to and receptive of self-regulatory feedback, whether temperature regulation at tissue level, marital distress at family level, or sex…
FY04 Advanced Life Support Architecture and Technology Studies: Mid-Year Presentation
NASA Technical Reports Server (NTRS)
Lange, Kevin; Anderson, Molly; Duffield, Bruce; Hanford, Tony; Jeng, Frank
2004-01-01
Long-Term Objective: Identify optimal advanced life support system designs that meet existing and projected requirements for future human spaceflight missions. a) Include failure-tolerance, reliability, and safe-haven requirements. b) Compare designs based on multiple criteria including equivalent system mass (ESM), technology readiness level (TRL), simplicity, commonality, etc. c) Develop and evaluate new, more optimal, architecture concepts and technology applications.
NASA Astrophysics Data System (ADS)
Cheng, Jilin; Zhang, Lihua; Zhang, Rentian; Gong, Yi; Zhu, Honggeng; Deng, Dongsheng; Feng, Xuesong; Qiu, Jinxian
2010-06-01
A dynamic planning model for optimizing the operation of a variable speed pumping system, aiming at minimum power consumption, was proposed to achieve economic operation. The No. 4 Jiangdu Pumping Station, a source pumping station in China's Eastern Route of the South-to-North Water Diversion Project, is taken as a study case. Since the sump water level of Jiangdu Pumping Station is affected by the tide of the Yangtze River, the daily-average heads of the pumping system vary over the year from 3.8 m to 7.8 m, with a tide-level difference of up to 1.2 m within one day. Comparisons of operating electricity cost between optimized variable speed and fixed speed operation of the pumping system were made. When the full load operation mode is adopted, whether or not peak-valley electricity prices are considered, the benefits of variable speed operation cannot compensate for the energy consumption of the variable-frequency drive (VFD). When the pumping system operates at part load and peak-valley electricity prices are considered, the pumping system should cease operation or lower its rotational speed in peak load hours, since electricity prices are then much higher, and conversely should raise its rotational speed in valley load hours to pump more water. The computed results show that if the pumping system operates at 80% or 60% load, the energy cost of pumping a specified volume of water is reduced by 14.01% and 26.69% on average, respectively, by optimal variable speed operation, and the investment in the VFD will be paid back in 2 or 3 years. However, if the pumping system operates at 80% or 60% load and the energy cost is calculated at a flat (non-peak-valley) electricity price, the payback period lengthens to up to 18 years. In China's South-to-North Water Diversion Project, when market operation and peak-valley electricity prices take effect for supplying water and regulating water levels in regulation reservoirs such as Hongzehu Lake and Luomahu Lake, the economic operation of water-diversion pumping stations will be vital, and the adoption of VFDs to achieve optimal operation may be a good choice.
The effect of decentralized behavioral decision making on system-level risk.
Kaivanto, Kim
2014-12-01
Certain classes of system-level risk depend partly on decentralized lay decision making. For instance, an organization's network security risk depends partly on its employees' responses to phishing attacks. On a larger scale, the risk within a financial system depends partly on households' responses to mortgage sales pitches. Behavioral economics shows that lay decisionmakers typically depart in systematic ways from the normative rationality of expected utility (EU), and instead display heuristics and biases as captured in the more descriptively accurate prospect theory (PT). In turn, psychological studies show that successful deception ploys eschew direct logical argumentation and instead employ peripheral-route persuasion, manipulation of visceral emotions, urgency, and familiar contextual cues. The detection of phishing emails and inappropriate mortgage contracts may be framed as a binary classification task. Signal detection theory (SDT) offers the standard normative solution, formulated as an optimal cutoff threshold, for distinguishing between good/bad emails or mortgages. In this article, we extend SDT behaviorally by rederiving the optimal cutoff threshold under PT. Furthermore, we incorporate the psychology of deception into determination of SDT's discriminability parameter. With the neo-additive probability weighting function, the optimal cutoff threshold under PT is rendered unique under well-behaved sampling distributions, tractable in computation, and transparent in interpretation. The PT-based cutoff threshold is (i) independent of loss aversion and (ii) more conservative than the classical SDT cutoff threshold. Independently of any possible misalignment between individual-level and system-level misclassification costs, decentralized behavioral decisionmakers are biased toward underdetection, and system-level risk is consequently greater than in analyses predicated upon normative rationality. © 2014 Society for Risk Analysis.
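The classical SDT cutoff that the article rederives under prospect theory can be sketched for the equal-variance Gaussian case. The priors and misclassification costs below are illustrative, and the PT-based rederivation itself is not reproduced, only the normative baseline it starts from:

```python
import math

def sdt_optimal_cutoff(d_prime, p_signal, cost_fa=1.0, cost_miss=1.0):
    """Classical SDT cutoff for equal-variance Gaussians:
    noise ~ N(0, 1), signal ~ N(d', 1). The optimal criterion places
    the cutoff where the likelihood ratio equals beta, which works out
    to x* = d'/2 + ln(beta)/d'."""
    beta = ((1.0 - p_signal) * cost_fa) / (p_signal * cost_miss)
    return d_prime / 2.0 + math.log(beta) / d_prime

# With equal priors and symmetric costs, the cutoff sits midway
# between the two distribution means.
print(sdt_optimal_cutoff(2.0, 0.5))  # 1.0
```

Lowering the prior probability of a "bad" email or mortgage raises beta and pushes the cutoff upward; the article's finding is that PT-based decision makers sit even further toward underdetection than this normative threshold.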
Systematic Propulsion Optimization Tools (SPOT)
NASA Technical Reports Server (NTRS)
Bower, Mark; Celestian, John
1992-01-01
This paper describes a computer program written by senior-level Mechanical Engineering students at the University of Alabama in Huntsville which is capable of optimizing user-defined delivery systems for carrying payloads into orbit. The custom propulsion system is designed by the user through the input of configuration, payload, and orbital parameters. The primary advantages of the software, called Systematic Propulsion Optimization Tools (SPOT), are a user-friendly interface and a modular FORTRAN 77 code designed for ease of modification. The optimization of variables in an orbital delivery system is of critical concern in the propulsion environment. The mass of the overall system must be minimized within the maximum stress, force, and pressure constraints. SPOT utilizes the Design Optimization Tools (DOT) program for the optimization techniques. The SPOT program is divided into a main program and five modules: aerodynamic losses, orbital parameters, liquid engines, solid engines, and nozzles. The program is designed to be upgraded easily and expanded to meet specific user needs. A user's manual and a programmer's manual are currently being developed to facilitate implementation and modification.
Measuring healthcare productivity - from unit to system level.
Kämäräinen, Vesa Johannes; Peltokorpi, Antti; Torkki, Paulus; Tallbacka, Kaj
2016-04-18
Purpose - Healthcare productivity is a growing issue in most Western countries where healthcare expenditure is rapidly increasing. Therefore, accurate productivity metrics are essential to avoid sub-optimization within a healthcare system. The purpose of this paper is to focus on healthcare production system productivity measurement. Design/methodology/approach - Traditionally, healthcare productivity has been studied and measured independently at the unit, organization and system level. Suggesting that productivity measurement should be done at different levels, while simultaneously linking productivity measurement to incentives, this study presents the challenges of productivity measurement at the different levels. The study introduces different methods to measure productivity in healthcare. In addition, it provides background information on the methods used to measure productivity and the parameters used in these methods. A pilot investigation of productivity measurement is used to illustrate the challenges of measurement, to test the developed measures and to provide practical information for managers. Findings - The study introduces different approaches and methods to measure productivity in healthcare. Practical implications - A pilot investigation of productivity measurement is used to illustrate the challenges of measurement, to test the developed measures and to demonstrate the practical benefits for managers. Originality/value - The authors focus on the measurement of the whole healthcare production system and try to avoid sub-optimization. In addition to considering an individual patient approach, productivity measurement is examined at the unit level, the organizational level and the system level.
Bates, Timothy C.
2015-01-01
Optimism and pessimism are associated with important outcomes including health and depression. Yet it is unclear if these apparent polar opposites form a single dimension or reflect two distinct systems. The extent to which personality accounts for differences in optimism/pessimism is also controversial. Here, we addressed these questions in a genetically informative sample of 852 pairs of twins. Distinct genetic influences on optimism and pessimism were found. Significant family-level environment effects also emerged, accounting for much of the negative relationship between optimism and pessimism, as well as a link to neuroticism. A general positive genetics factor exerted significant links among both personality and life-orientation traits. Both optimism bias and pessimism also showed genetic variance distinct from all effects of personality, and from each other. PMID:26561494
Data centers as dispatchable loads to harness stranded power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kibaek; Yang, Fan; Zavala, Victor M.
2016-07-20
Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.
Organizational attributes that assure optimal utilization of public health nurses.
Meagher-Stewart, Donna; Underwood, Jane; MacDonald, Mary; Schoenfeld, Bonnie; Blythe, Jennifer; Knibbs, Kristin; Munroe, Val; Lavoie-Tremblay, Mélanie; Ehrlich, Anne; Ganann, Rebecca; Crea, Mary
2010-01-01
Optimal utilization of public health nurses (PHNs) is important for strengthening public health capacity and sustaining interest in public health nursing in the face of a global nursing shortage. To gain an insight into the organizational attributes that support PHNs to work effectively, 23 focus groups were held with PHNs, managers, and policymakers in diverse regions and urban and rural/remote settings across Canada. Participants identified attributes at all levels of the public health system: government and system-level action, local organizational culture of their employers, and supportive management practices. Effective leadership emerged as a strong message throughout all levels. Other organizational attributes included valuing and promoting public health nursing; having a shared vision, goals, and planning; building partnerships and collaboration; demonstrating flexibility and creativity; and supporting ongoing learning and knowledge sharing. The results of this study highlight opportunities for fostering organizational development and leadership in public health, influencing policies and programs to optimize public health nursing services and resources, and supporting PHNs to realize the full scope of their competencies.
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
Design of a 0.13-μm CMOS cascade expandable ΣΔ modulator for multi-standard RF telecom systems
NASA Astrophysics Data System (ADS)
Morgado, Alonso; del Río, Rocío; de la Rosa, José M.
2007-05-01
This paper reports a 130-nm CMOS programmable cascade ΣΔ modulator for multi-standard wireless terminals, capable of operating on three standards: GSM, Bluetooth and UMTS. The modulator is reconfigured at both the architecture and circuit levels in order to adapt its performance to the different standards' specifications with optimized power consumption. The design of the building blocks is based upon a top-down CAD methodology that combines simulation and statistical optimization at different levels of the system hierarchy. Transistor-level simulations show correct operation for all standards, featuring 13-bit, 11.3-bit and 9-bit effective resolution within 200-kHz, 1-MHz and 4-MHz bandwidths, respectively.
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. The coupling equality constraints are handled directly, by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.
NASA Astrophysics Data System (ADS)
Tofighi, Elham; Mahdizadeh, Amin
2016-09-01
This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which an upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparisons verify the improved performance of the controller for weights computed with the PSO-based technique.
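The two-level structure described above can be sketched with a minimal particle swarm as the upper level. The quadratic surrogate below is a hypothetical stand-in for the closed-loop NMPC cost, whose real evaluation would require a turbine simulation; the swarm coefficients are illustrative, not the paper's:

```python
import random

def pso(cost, dim, n=20, iters=60, lo=0.0, hi=10.0, seed=0):
    """Minimal particle swarm optimization: each particle tracks its
    personal best and is pulled toward the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Hypothetical surrogate for "closed-loop cost as a function of two
# NMPC weights", with an assumed optimum at weights (3, 7).
surrogate = lambda w: (w[0] - 3.0) ** 2 + (w[1] - 7.0) ** 2
best, _ = pso(surrogate, dim=2)
```

In the paper's setting, each call to `cost` would run the lower-level NMPC controller against the turbine model and return a closed-loop performance measure, which is what makes the upper level expensive and a derivative-free method like PSO attractive.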
NASA Astrophysics Data System (ADS)
Yang, Feiling; Hu, Jinming; Wu, Ruidong
2016-08-01
Suitable surrogates are critical for identifying optimal priority conservation areas (PCAs) to protect regional biodiversity. This study explored the efficiency of using endangered plants and animals as surrogates for identifying PCAs at the county level in Yunnan, southwest China. We ran the Dobson algorithm under three surrogate scenarios at 75% and 100% conservation levels and identified four types of PCAs. Assessment of the protection efficiencies of the four types of PCAs showed that endangered plants had higher surrogacy values than endangered animals, but that the two were not substitutable; coupling endangered plants and animals as surrogates yielded a higher surrogacy value than endangered plants or animals alone; and the plant-animal priority areas (PAPAs) were optimal among the four types of PCAs for conserving both endangered plants and animals in Yunnan. PAPAs represent overall species diversity distribution patterns well and overlap with critical biogeographical regions in Yunnan. Fourteen priority units in PAPAs should be urgently considered for optimizing Yunnan's protected area system. The spatial pattern of PAPAs at the 100% conservation level can be conceptualized as three connected conservation belts, providing a valuable reference for optimizing the layout of the in situ protected area system in Yunnan.
Optimal service using Matlab - simulink controlled Queuing system at call centers
NASA Astrophysics Data System (ADS)
Balaji, N.; Siva, E. P.; Chandrasekaran, A. D.; Tamilazhagan, V.
2018-04-01
This paper presents graphical integrated-model-based research on telephone call centres. It introduces the important features of impatient customers and abandonments in the queuing system. The modern call centre is a complex socio-technical system, and queuing theory has now become a suitable tool in the telecom industry for providing better online services. The Matlab-Simulink multi-queue structured models developed here provide better solutions in complex situations at call centres. Service performance measures are analyzed at the optimal level through the Simulink queuing model.
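A standard analytic counterpart to such a simulated call-centre model is the Erlang C formula for an M/M/c queue; this sketch ignores the abandonment feature the paper adds, and the traffic numbers are illustrative:

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving call must wait in an M/M/c queue
    (Erlang C). arrival_rate and service_rate share a time unit."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    if a >= servers:
        return 1.0                           # unstable: every call waits
    base = sum(a**k / math.factorial(k) for k in range(servers))
    top = a**servers / math.factorial(servers) * servers / (servers - a)
    return top / (base + top)

# Illustrative sizing question: 10 calls/min, ~1 min mean handling
# time, 12 agents -- what fraction of callers queue at all?
p_wait = erlang_c(10.0, 1.0, 12)
```

Staffing decisions then reduce to increasing `servers` until `p_wait` (or a derived waiting-time percentile) meets the service-level target; abandonment, as in the paper, would require an Erlang A-style extension or simulation.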
Game Theory and Risk-Based Levee System Design
NASA Astrophysics Data System (ADS)
Hui, R.; Lund, J. R.; Madani, K.
2014-12-01
Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, land owners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the land owners on each river bank develop their design strategies using risk-based economic optimization. For each land owner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
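The single-owner risk-based objective described above, annualized construction cost plus expected annual damage, can be sketched in one dimension. The exponential overtopping model and the cost constants below are hypothetical, chosen only to make the trade-off visible; the game-theoretic two-bank interaction is not modeled here:

```python
import math

def expected_total_cost(height, build_cost_per_m=1.0, damage=100.0):
    """Annualized construction cost plus expected annual damage for a
    single levee, under an assumed exponential overtopping model:
    P(flood exceeds levee) = exp(-height)."""
    return build_cost_per_m * height + damage * math.exp(-height)

# Risk-based design: scan candidate heights and keep the minimizer.
heights = [h / 100 for h in range(0, 1001)]
best = min(heights, key=expected_total_cost)
```

For this convex model the analytic optimum is height = ln(damage / build_cost_per_m), so the scan recovers roughly ln(100) ≈ 4.6; in the paper's game, each bank's choice also shifts flood risk to the opposite bank, which is what breaks this independent optimization.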
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
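A minimal illustration of optimizing expression levels in few design-build-test rounds is one-gene-at-a-time coordinate search on a simulated landscape. The smooth landscape and the five-level grid below are hypothetical, not the paper's; rugged landscapes, as the abstract notes, would demand different design parameters:

```python
def simulated_titer(levels):
    """Hypothetical smooth expression landscape: titer peaks when every
    gene sits at expression level 3."""
    return -sum((x - 3) ** 2 for x in levels)

def coordinate_ascent(n_genes, levels=range(1, 6), rounds=3):
    """One-gene-at-a-time search: each round builds and tests only
    len(levels) strains per gene, mimicking few design-build-test
    iterations."""
    current = [min(levels)] * n_genes
    for _ in range(rounds):
        for g in range(n_genes):
            current[g] = max(
                levels,
                key=lambda v: simulated_titer(current[:g] + [v] + current[g + 1:]))
    return current

print(coordinate_ascent(4))  # [3, 3, 3, 3] on this separable landscape
```

On a separable landscape one round suffices; epistatic (rugged) landscapes make the outcome order-dependent, which is the regime where the paper compares algorithms.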
Li, Ke; Gomez-Cardona, Daniel; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong
2015-01-01
Purpose: For a given imaging task and patient size, the optimal selection of x-ray tube potential (kV) and tube current-rotation time product (mAs) is pivotal in achieving the maximal radiation dose reduction while maintaining the needed diagnostic performance. Although contrast-to-noise (CNR)-based strategies can be used to optimize kV/mAs for computed tomography (CT) imaging systems employing the linear filtered backprojection (FBP) reconstruction method, a more general framework needs to be developed for systems using the nonlinear statistical model-based iterative reconstruction (MBIR) method. The purpose of this paper is to present such a unified framework for the optimization of kV/mAs selection for both FBP- and MBIR-based CT systems. Methods: The optimal selection of kV and mAs was formulated as a constrained optimization problem to minimize the objective function, Dose(kV,mAs), under the constraint that the achievable detectability index d′(kV,mAs) is not lower than the prescribed value of d℞′ for a given imaging task. Since it is difficult to analytically model the dependence of d′ on kV and mAs for the highly nonlinear MBIR method, this constrained optimization problem is solved with comprehensive measurements of Dose(kV,mAs) and d′(kV,mAs) at a variety of kV–mAs combinations, after which the overlay of the dose contours and d′ contours is used to graphically determine the optimal kV–mAs combination to achieve the lowest dose while maintaining the needed detectability for the given imaging task. As an example, d′ for a 17 mm hypoattenuating liver lesion detection task was experimentally measured with an anthropomorphic abdominal phantom at four tube potentials (80, 100, 120, and 140 kV) and fifteen mA levels (25 and 50–700) with a sampling interval of 50 mA at a fixed rotation time of 0.5 s, which corresponded to a dose (CTDIvol) range of [0.6, 70] mGy. 
Using the proposed method, the optimal kV and mA that minimized dose for the prescribed detectability level of d′Rx = 16 were determined. As another example, the optimal kV and mA for an 8 mm hyperattenuating liver lesion detection task were also measured using the developed framework. An in vivo animal study and a human subject study were used to demonstrate how the developed framework can be applied to the clinical workflow. Results: For the first task, the optimal kV and mAs were measured to be 100 and 500, respectively, for FBP, which corresponded to a dose level of 24 mGy. In comparison, the optimal kV and mAs for MBIR were 80 and 150, respectively, which corresponded to a dose level of 4 mGy. The topographies of the iso-d′ map and the iso-CNR map were the same for FBP; thus, the use of d′- and CNR-based optimization methods generated the same results for FBP. However, the topographies of the iso-d′ and iso-CNR maps were significantly different for MBIR; the CNR-based method overestimated the performance of MBIR, predicting an overly aggressive dose reduction factor. For the second task, the developed framework generated the following optimization results: for FBP, kV = 140, mA = 350, dose = 37.5 mGy; for MBIR, kV = 120, mA = 250, dose = 18.8 mGy. Again, the CNR-based method overestimated the performance of MBIR. Results of the preliminary in vivo studies were consistent with those of the phantom experiments. Conclusions: A unified and task-driven kV/mAs optimization framework has been developed in this work. The framework is applicable to both linear and nonlinear CT systems, such as those using the MBIR method. As expected, the developed framework reduces to the conventional CNR-based kV/mAs optimization frameworks if the system is linear. For MBIR-based nonlinear CT systems, however, the developed task-based kV/mAs optimization framework is needed to achieve the maximal dose reduction while maintaining the desired diagnostic performance. PMID:26328971
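The constrained selection the abstract describes (minimize Dose(kV,mAs) subject to d′ meeting the prescribed level) can be sketched as a simple search over a measured table. This is a hypothetical illustration, not the authors' implementation; the function name and table values are invented.

```python
# Minimal sketch of task-driven technique selection: among all (kV, mAs)
# settings whose measured detectability d' meets the prescribed level,
# pick the one with the lowest dose. The measurement table is invented
# for illustration, not taken from the paper.

def optimal_technique(measurements, d_required):
    """measurements: iterable of (kv, mas, dose_mGy, d_prime) tuples."""
    feasible = [m for m in measurements if m[3] >= d_required]
    if not feasible:
        return None  # no setting meets the detectability prescription
    return min(feasible, key=lambda m: m[2])  # lowest dose among feasible

table = [
    (80, 150, 4.0, 16.2),
    (100, 500, 24.0, 16.1),
    (120, 400, 30.0, 18.5),
    (80, 100, 2.7, 12.0),   # cheap but fails the d' constraint
]
best = optimal_technique(table, d_required=16.0)
```

With a denser grid of measured settings, the same search reproduces the paper's graphical contour-overlay procedure numerically.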
A framework for modeling and optimizing dynamic systems under uncertainty
Nicholson, Bethany; Siirola, John
2017-11-11
Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
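The stochastic-programming half of such a model reduces, in its simplest form, to choosing a single here-and-now decision that minimizes expected cost over uncertain scenarios. The sketch below illustrates that idea in plain Python under an invented cost model; Pyomo would express the same structure declaratively rather than by brute-force search.

```python
# Two-stage stochastic decision sketch: pick one here-and-now decision x
# (e.g., a nominal feed rate) that minimizes expected cost over scenarios.
# The scenario set and cost model are invented for illustration.

scenarios = [      # (probability, realized demand)
    (0.3, 8.0),
    (0.5, 10.0),
    (0.2, 13.0),
]

def expected_cost(x):
    # unit production cost plus a steeper penalty for unmet demand
    return sum(p * (1.0 * x + 6.0 * max(0.0, d - x)) for p, d in scenarios)

candidates = [i * 0.5 for i in range(31)]  # x in {0.0, 0.5, ..., 15.0}
best_x = min(candidates, key=expected_cost)
```

Because the shortfall penalty outweighs the production cost here, the best decision covers even the highest-demand scenario; changing the penalty shifts the trade-off.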
Saner, Dominik; Vadenbo, Carl; Steubing, Bernhard; Hellweg, Stefanie
2014-07-01
This paper presents a regionalized LCA-based multiobjective optimization model of building energy demand and supply for the case of a Swiss municipality, with the objectives of minimizing greenhouse gas emissions and particulate matter formation. The results show that the environmental improvement potential is very large: in the optimal case, greenhouse gas emissions from energy supply could be reduced by more than 75% and particulate emissions by over 50% in the municipality. This scenario supposes a drastic shift of heat supply systems from a fossil-fuel-dominated portfolio to a portfolio consisting mainly of heat pump and woodchip incineration systems. In addition to a change in heat supply technologies, roofs, windows, and walls would need to be refurbished in more than 65% of the municipality's buildings. The full potential of the environmental impact reductions will hardly be achieved in reality, particularly in the short term, for example because of financial constraints and social acceptance, which were not taken into account in this study. Nevertheless, the results of the optimization model can help policy makers identify the most effective measures for improvement at the decision-making level, for example at the building level for refurbishment and selection of heating systems, or at the municipal level for designing district heating networks. Therefore, this work represents a starting point for designing effective incentives to reduce the environmental impact of buildings. While the results of the optimization model are specific to the municipality studied, the model could readily be adapted to other regions.
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data are used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E
2008-09-01
In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.
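The core of such an applet is the window/level transform itself, which maps a chosen intensity band onto the display range. Below is a minimal sketch; the function name and the example values are illustrative, not taken from the paper.

```python
# Window/level mapping: stretch pixel values inside the band
# [level - window/2, level + window/2] to the 0-255 display range
# and clip values outside it.

def apply_window_level(pixels, window, level):
    lo = level - window / 2.0
    out = []
    for p in pixels:
        v = (p - lo) / window * 255.0
        out.append(int(round(min(255.0, max(0.0, v)))))
    return out

# CT-style example: a soft-tissue window (window = 400, level = 40 HU)
displayed = apply_window_level([-1000, -160, 40, 240, 3000], 400, 40)
```

In the applet described above, the window and level parameters would be the user-defined values passed to the server-side servlet.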
Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems
NASA Astrophysics Data System (ADS)
Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo
2017-07-01
In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate the multi-level cache resources to many cores and greatly improve the efficiency of the cache hierarchy, resulting in low energy consumption. The BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the cache allocation bases, so that we can optimally configure the cache hierarchy to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly when hardware overhead is taken into account.
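The paper's BACH mechanism is not reproduced here, but a common baseline for behavior-aware cache allocation is greedy marginal-gain partitioning of shared cache ways, sketched below with invented per-core gain curves standing in for profiled application behavior.

```python
# Illustrative sketch (not the BACH algorithm itself) of demand-driven
# cache partitioning: give each core one way of a shared cache, then
# hand each remaining way to whichever core gains most from it,
# according to per-core miss-reduction curves.

def allocate_ways(total_ways, gain_curves):
    """gain_curves[c][w] = misses avoided if core c holds w+1 ways."""
    ncores = len(gain_curves)
    alloc = [1] * ncores                  # one way each to start
    for _ in range(total_ways - ncores):  # greedy marginal-gain steps
        best = max(range(ncores),
                   key=lambda c: gain_curves[c][alloc[c]] - gain_curves[c][alloc[c] - 1])
        alloc[best] += 1
    return alloc

curves = [
    [10, 18, 24, 28, 30, 31, 31, 31],  # core 0: large early gains
    [4, 7, 9, 10, 11, 11, 11, 11],     # core 1: modest demand
]
alloc = allocate_ways(8, curves)
```

The greedy loop gives the demanding core the lion's share while guaranteeing every core a minimum allocation, which is the general shape of demand-aware partitioning schemes.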
HERCULES: A Pattern Driven Code Transformation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing
2012-01-01
New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts, and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation, and an initial evaluation of HERCULES.
Flexible operation strategy for environment control system in abnormal supply power condition
NASA Astrophysics Data System (ADS)
Liping, Pang; Guoxiang, Li; Hongquan, Qu; Yufeng, Fang
2017-04-01
This paper establishes an optimization method that can be applied to the flexible operation of the environment control system in an abnormal supply power condition. A proposed concept of lifespan is used to evaluate the depletion time of each non-regenerative substance. The optimization objective is to maximize these lifespans, and the optimization variables are the allocated powers of the subsystems. The improved Non-dominated Sorting Genetic Algorithm is adopted to obtain the Pareto optimization frontier under the constraints of the cabin environmental parameters and the adjustable operating parameters of the subsystems. Assuming equal importance of the objective functions, the preferred power allocation of the subsystems can be optimized, and the corresponding running parameters of the subsystems can then be determined to ensure the maximum lifespans. A long-duration space station with three astronauts is used to show the implementation of the proposed optimization method. Three different CO2 partial pressure levels are taken into consideration in this study. The optimization results show that the proposed method can obtain the preferred power allocation for the subsystems when the supply power is at a less-than-nominal value. The method can be applied to the autonomous control for the emergency response of the environment control system.
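A building block of any NSGA-style optimizer is extracting the non-dominated (Pareto) set from a population. A minimal sketch for a two-objective maximization follows; the lifespan values are illustrative, not from the study.

```python
# Pareto-front extraction for a two-objective maximization problem:
# a point survives if no other point is at least as good in both
# objectives and strictly better in one. Example objectives: lifespans
# (in days) of two non-regenerative substances under candidate power
# allocations (invented values).

def dominates(a, b):
    """a dominates b: >= in every objective, > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

candidates = [(40, 10), (35, 20), (20, 30), (30, 15), (10, 35)]
front = pareto_front(candidates)
```

Only (30, 15) is eliminated, since (35, 20) is better in both objectives; the remaining points form the trade-off frontier the decision maker chooses from.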
System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO
NASA Technical Reports Server (NTRS)
Olds, John R.
1994-01-01
This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.
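SSA's raw ingredient is sensitivity derivatives of system outputs with respect to the system-level design variables. A generic finite-difference sketch illustrates the idea; the two design variables and the weight model are made-up stand-ins for the coupled propulsion/performance/sizing analysis.

```python
# Generic finite-difference sensitivity sketch: estimate
# d(dry weight)/d(design variable) by perturbing one variable at a
# time. The weight model below is invented for illustration.

def sensitivities(f, x, h=1e-6):
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - f(x)) / h)  # forward difference
    return grad

# Toy "converged vehicle analysis": dry weight as a function of two
# hypothetical system-level design variables.
dry_weight = lambda x: 100.0 + (x[0] - 3.0) ** 2 + 0.5 * (x[1] - 80.0) ** 2

g = sensitivities(dry_weight, [2.0, 90.0])
```

In SSA proper, the derivatives come from solving the Global Sensitivity Equations rather than from repeated full-system analyses, which is precisely its efficiency advantage over a brute-force scheme like this one.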
NASA Astrophysics Data System (ADS)
Wang, L.; Koike, T.
2010-12-01
The climate change-induced variability in hydrological cycles directly affects regional water resources management. For improved multi-objective operation of multiple reservoirs, an integrated modeling system has been developed by incorporating a global optimization system (SCE-UA) into a distributed biosphere hydrological model (WEB-DHM) coupled with the reservoir routing module. The reservoir storage change is estimated from the difference between the simulated inflows and outflows, while the reservoir water level can be determined from the updated reservoir storage by using the H-V curve. According to the reservoir water level, the new operation rule can be decided. For optimization: (1) WEB-DHM is calibrated for each dam's inflows separately; (2) the calibrated WEB-DHM is then used to simulate inflows and outflows by assuming outflow proportional to inflow; and (3) the proportion coefficients are optimized with the Shuffled Complex Evolution method (SCE-UA), to fulfill an objective function targeting minimum flood risk downstream and maximum reservoir water storage for future use. The GSMaP product offers hourly global precipitation maps in near real-time (about four hours after observation). Aiming at near real-time reservoir operation in large river basins, the integrated modeling system takes the inputs from both an operational global quantitative precipitation forecast (JMA-GPV; to achieve an optimal operation rule in the assumed lead time period) and the GSMaP product (to perform current operation with the obtained optimal rule, after correction by gauge rainfall). The newly-developed system was then applied to the Red River Basin, with an area of 160,000 km2, to test its performance for near real-time dam operation. In Vietnam, three reservoirs are located upstream of Hanoi city, with Hoa Binh the largest (69% of total volume).
After calibration with the gauge rainfall, the inflows to the three reservoirs are well simulated; the discharge and water level at Hanoi city are also well reproduced with the actual dam releases. With the corrected GSMaP rainfall (by using gauge rainfall), the inflows to the reservoirs and the water level at Hanoi city can also be reasonably reproduced. The study aims at achieving an optimal operation rule in the lead-time period (with the quantitative precipitation forecast) and then using it to perform current operation (with the corrected GSMaP rainfall). At Hanoi, there were relatively low flows in July but high floods in August 2005. Results show that with the actual operation, a dangerous water level at Hanoi was observed, while with the lead-time operation, the water level at Hanoi can be substantially reduced, and maximum water storage is also achieved for the Hoa Binh reservoir at the end of the flood season.
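The rule-optimization loop described above (outflow assumed proportional to inflow, with the coefficient tuned against flood risk and end-of-season storage) can be sketched with a toy mass-balance model. The inflow series, weights, and brute-force search below are invented stand-ins for WEB-DHM and SCE-UA, which are far more elaborate.

```python
# Toy reservoir routing with a proportional release rule out = c * in.
# Storage follows mass balance; water above capacity counts as spill.
# The coefficient c is chosen by brute force to trade off the downstream
# release peak and spill against end-of-season storage.

def simulate(c, inflows, s0=100.0, s_max=500.0):
    s, peak_release, spill = s0, 0.0, 0.0
    for q_in in inflows:
        q_out = c * q_in
        s = s + q_in - q_out
        if s > s_max:            # water above capacity must be spilled
            spill += s - s_max
            s = s_max
        peak_release = max(peak_release, q_out)
    return s, peak_release, spill

def objective(c, inflows):
    s_final, peak, spill = simulate(c, inflows)
    return peak + spill - 0.1 * s_final   # flood terms vs. stored water

inflows = [20, 50, 120, 200, 90, 40]      # invented flood-season inflows
best_c = min((i / 10.0 for i in range(1, 11)), key=lambda c: objective(c, inflows))
```

Too small a coefficient fills the reservoir and forces spill; too large a one passes the flood peak downstream. The search lands in between, which is the qualitative behavior the abstract's optimization seeks.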
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano
Past works that focused on addressing power-quality and reliability concerns related to renewable energy resources (RESs) operating with business-as-usual practices have looked at the design of Volt/VAr and Volt/Watt strategies to regulate real or reactive powers based on local voltage measurements, so that terminal voltages are within acceptable levels. These control strategies have the potential of operating at the same time scale as distribution-system dynamics, and can therefore mitigate disturbances precipitated by fast time-varying loads and ambient conditions; however, they do not necessarily guarantee system-level optimality, and stability claims are mainly based on empirical evidence. On a different time scale, centralized and distributed optimal power flow (OPF) algorithms have been proposed to compute optimal steady-state inverter setpoints, so that power losses and voltage deviations are minimized and economic benefits to end-users providing ancillary services are maximized. However, traditional OPF schemes may offer decision-making capabilities that do not match the dynamics of distribution systems. Particularly, during the time required to collect data from all the nodes of the network (e.g., loads), solve the OPF, and subsequently dispatch setpoints, the underlying load, ambient, and network conditions may have already changed; in this case, the DER output powers would be consistently regulated around outdated setpoints, leading to suboptimal system operation and violation of relevant electrical limits. The present work focuses on the synthesis of distributed RES-inverter controllers that leverage the opportunities for fast feedback offered by power-electronics-interfaced RESs. The overarching objective is to bridge the temporal gap between long-term system optimization and real-time control, to enable seamless RES integration at large scale with stability and efficiency guarantees, while congruently pursuing system-level optimization objectives.
The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. The proposed controllers enable an update of the power outputs at a time scale that is compatible with the underlying dynamics of loads and ambient conditions, and continuously drive the system operation towards OPF-based solutions.
Optimization studies on compression coated floating-pulsatile drug delivery of bisoprolol.
Jagdale, Swati C; Bari, Nilesh A; Kuchekar, Bhanudas S; Chabukswar, Aniruddha R
2013-01-01
The purpose of the present work was to design and optimize compression coated floating pulsatile drug delivery systems of bisoprolol. The floating pulsatile concept was applied to increase the gastric residence of the dosage form, giving a lag phase followed by a burst release. The prepared system consisted of two parts: a core tablet containing the active ingredient and an erodible outer shell with a gas generating agent. The rapid release core tablet (RRCT) was prepared by using superdisintegrants with the active ingredient. Press coating of the optimized RRCT was done with polymer. A 3² full factorial design was used for optimization. The amounts of Polyox WSR205 and Polyox WSR N12K were selected as independent variables. Lag period, drug release, and swelling index were selected as dependent variables. Floating pulsatile release formulation (FPRT) F13, at level 0 (55 mg) for Polyox WSR205 and level +1 (65 mg) for Polyox WSR N12K, showed a lag time of 4 h with >90% drug release. The data were statistically analyzed using ANOVA, with P < 0.05 considered statistically significant. Release kinetics of the optimized formulation best fitted the zero-order model. The in vivo study confirmed the burst effect at 4 h, indicating successful optimization of the dosage form.
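Checking the zero-order release claim amounts to fitting Q(t) = k·t to the cumulative release data after the lag phase and examining the goodness of fit. A sketch with invented data points (not the study's measurements):

```python
# Zero-order release fit: cumulative release Q(t) = k*t, with the slope
# estimated by least squares through the origin, plus R^2 as a
# goodness-of-fit measure. The data points are invented.

def fit_zero_order(times, release):
    k = sum(t * q for t, q in zip(times, release)) / sum(t * t for t in times)
    mean_q = sum(release) / len(release)
    ss_res = sum((q - k * t) ** 2 for t, q in zip(times, release))
    ss_tot = sum((q - mean_q) ** 2 for q in release)
    return k, 1.0 - ss_res / ss_tot

times = [1, 2, 3, 4, 5, 6]          # hours after the lag phase
release = [15, 31, 44, 62, 74, 91]  # cumulative % drug released
k, r2 = fit_zero_order(times, release)
```

In practice, competing models (first-order, Higuchi, Korsmeyer-Peppas) would be fitted the same way and compared on R², with the best fit identifying the release mechanism.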
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management.
Sakai, Kenichi; Obata, Kouki; Yoshikawa, Mayumi; Takano, Ryusuke; Shibata, Masaki; Maeda, Hiroyuki; Mizutani, Akihiko; Terada, Katsuhide
2012-10-01
To design a high drug-loading formulation of a self-microemulsifying/micelle system. A poorly soluble model drug (CH5137291), 8 hydrophilic surfactants (HS), 10 lipophilic surfactants (LS), 5 oils, and PEG400 were used. A high-loading formulation was designed by the following stepwise approach using a high-throughput formulation screening (HTFS) system: (1) an oil/solvent was selected by solubility of the drug; (2) a suitable HS for high loading was selected by screenings of emulsion/micelle size and phase stability in binary systems (HS, oil/solvent) with increasing loading levels; (3) a LS that formed a broad SMEDDS/micelle area on a phase diagram containing the HS and oil/solvent was selected by the same screenings; (4) an optimized formulation was selected by evaluating the loading capacity of the crystalline drug. Aqueous solubility behavior and oral absorption (Beagle dog) of the optimized formulation were compared with conventional formulations (jet-milled, PEG400). As the optimized formulation, d-α-tocopheryl polyoxyethylene 1000 succinic ester:PEG400 = 8:2 was selected, and it achieved the target loading level (200 mg/mL). The formulation formed fine emulsions/micelles (49.1 nm), and generated and maintained a supersaturated state at a higher level compared with the conventional formulations. In the oral absorption test, the area under the plasma concentration-time curve of the optimized formulation was 16.5-fold higher than that of the jet-milled formulation. The high-loading formulation designed by the stepwise approach using the HTFS system improved the oral absorption of the poorly soluble model drug.
An intelligent remote control system for ECEI on EAST
NASA Astrophysics Data System (ADS)
Chen, Dongxu; Zhu, Yilun; Zhao, Zhenling; Qu, Chengming; Liao, Wang; Xie, Jinlin; Liu, Wandong
2017-08-01
An intelligent remote control system based on a power distribution unit (PDU) and Arduino has been designed for the electron cyclotron emission imaging (ECEI) system on the Experimental Advanced Superconducting Tokamak (EAST). This intelligent system has three major functions: ECEI system reboot, measurement region adjustment, and signal amplitude optimization. The observation region of ECEI can be modified for different physics proposals by remotely tuning the optical and electronics systems. Via remote adjustment of the attenuation level, the ECEI intermediate-frequency signal amplitude can be efficiently optimized. The remote control system provides a feasible and reliable solution for improving the signal quality and the efficiency of the ECEI diagnostic system, and is also valuable for other diagnostic systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahowald, Natalie
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxide, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite their being more potent greenhouse gases than carbon dioxide, complicate empirical studies that aim to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, are subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model's Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process-level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed.
In this proposal we will develop and apply surrogate algorithms that (a) were especially developed for computationally expensive simulations like the CLM-ME/N model; (b) were demonstrated (in the earlier surrogate optimization method Stochastic RBF) to perform very well on computationally expensive complex partial differential equations in earth science with limited numbers of simulations; and (c) will be, as part of the proposed research, significantly improved by adding asynchronous parallelism, early truncation of unsuccessful simulations, and better serial and parallel performance through the use of derivative and sensitivity information from global and local surrogate approximations S(x). The algorithm development and testing will be focused on the CLM-ME/N model application, but the methods are general and are expected to also perform well on optimization for parameter estimation of other climate models and other classes of continuous multimodal optimization problems arising from complex simulation models. In addition, this proposal will compile available datasets of emissions of methane, nitrous oxide, and reactive nitrogen species and develop protocols for site-level comparisons with the CLM-ME/N. Once the model parameters are optimized against site-level data, the model will be simulated at the global level and compared to atmospheric concentration measurements for the current climate, and future emissions will be estimated using climate change as simulated by the CESM. This proposal combines experts in earth system modeling, optimization, computer science, and process-level understanding of soil gas emissions in an interdisciplinary team in order to improve the modeling of methane and nitrogen gas emissions. This proposal thus meets the requirements of the SciDAC RFP by integrating state-of-the-art computer science and earth system science to build an improved earth system model.
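The surrogate idea behind Stochastic RBF can be illustrated in miniature: interpolate a few expensive evaluations with radial basis functions, then propose the next simulation where the surrogate S(x) is lowest. Everything below (the 1-D stand-in objective, the Gaussian kernel, the candidate grid) is illustrative, not the proposal's algorithm.

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def rbf_surrogate(xs, ys, gamma=1.0):
    """Fit an interpolating Gaussian-RBF surrogate S(x) to points (xs, ys)."""
    phi = lambda r: math.exp(-gamma * r * r)
    a = [[phi(xi - xj) for xj in xs] for xi in xs]
    w = solve(a, ys)
    return lambda x: sum(wi * phi(x - xi) for wi, xi in zip(w, xs))

expensive = lambda x: (x - 2.0) ** 2   # stand-in for a costly model misfit
xs = [0.0, 1.0, 3.0, 4.0]              # points already simulated
s = rbf_surrogate(xs, [expensive(x) for x in xs])
grid = [i * 0.1 for i in range(41)]    # candidate points in [0, 4]
next_x = min(grid, key=s)              # propose the surrogate minimizer
```

In Stochastic RBF proper, candidate points are generated by random perturbations of the best point so far and the surrogate is re-fit after each new simulation, so the cheap surrogate, rather than the expensive model, absorbs most of the search effort.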
A Large-Telescope Natural Guide Star AO System
NASA Technical Reports Server (NTRS)
Redding, David; Milman, Mark; Needels, Laura
1994-01-01
No abstract given. From the overview and conclusions: Keck Telescope case study. Objectives: low cost, good sky coverage. Approach: natural guide star at 0.8 um, correcting at 2.2 um. Conclusions: good performance is possible for Keck with a natural guide star AO system (SR > 0.2 to mag 17+). An AO-optimized CCD should be very effective. Optimizing t_d is very effective. Spatial coadding is not effective except perhaps at extreme low light levels.
2008-02-01
is called EFS-POM. EFS-POM is forced by surface atmospheric forcing (wind, heating/cooling, sea level pressure) and by boundary forcing derived from...Peter Olsson, University of Alaska Anchorage. Heating and cooling is given by the climatological monthly heat flux from COADS (Comprehensive Ocean...Environmental Information Products for Search and Rescue Optimal Planning System (SAROPS) - Version for Public Release FINAL REPORT February
Super-optimal CO2 reduces seed yield but not vegetative growth in wheat
NASA Technical Reports Server (NTRS)
Grotenhuis, T. P.; Bugbee, B.
1997-01-01
Although terrestrial atmospheric CO2 levels will not reach 1000 micromoles mol-1 (0.1%) for decades, CO2 levels in growth chambers and greenhouses routinely exceed that concentration. CO2 levels in life support systems in space can exceed 10000 micromoles mol-1 (1%). Numerous studies have examined CO2 effects up to 1000 micromoles mol-1, but biochemical measurements indicate that the beneficial effects of CO2 can continue beyond this concentration. We studied the effects of near-optimal (approximately 1200 micromoles mol-1) and super-optimal CO2 levels (2400 micromoles mol-1) on yield of two cultivars of hydroponically grown wheat (Triticum aestivum L.) in 12 trials in growth chambers. Increasing CO2 from sub-optimal to near-optimal (350-1200 micromoles mol-1) increased vegetative growth by 25% and seed yield by 15% in both cultivars. Yield increases were primarily the result of an increased number of heads per square meter. Further elevation of CO2 to 2500 micromoles mol-1 reduced seed yield by 22% (P < 0.001) in cv. Veery-10 and by 15% (P < 0.001) in cv. USU-Apogee. Super-optimal CO2 did not decrease the number of heads per square meter, but reduced seeds per head by 10% and mass per seed by 11%. The toxic effect of CO2 was similar over a range of light levels from half to full sunlight. Subsequent trials revealed that super-optimal CO2 during the interval between 2 wk before and after anthesis mimicked the effect of constant super-optimal CO2. Furthermore, near-optimal CO2 during the same interval mimicked the effect of constant near-optimal CO2. Nutrient concentration of leaves and heads was not affected by CO2. These results suggest that super-optimal CO2 inhibits some process that occurs near the time of seed set, resulting in decreased seed set, seed mass, and yield.
Using the Gurobi Solvers on the Peregrine System | High-Performance
Peregrine System. Gurobi Optimizer is a suite of solvers for mathematical programming, licensed for use on the Peregrine system. In MATLAB the solver directory is added to the path (e.g., grb = getenv('GRB_MATLAB_PATH'); path(path, grb)). Gurobi and GAMS: GAMS is a high-level modeling system for mathematical programming.
Modeling level change in Lake Urmia using hybrid artificial intelligence approaches
NASA Astrophysics Data System (ADS)
Esbati, M.; Ahmadieh Khanesar, M.; Shahzadi, Ali
2017-06-01
The investigation of water level fluctuations in lakes, given the importance of these water bodies at national and regional scales, has attracted growing attention in recent years. Predicting the water level balance of Lake Urmia is particularly important because of the several-meter fluctuations of the last decade, and such predictions can help prevent possible future losses. For this purpose, this paper studies the performance of an adaptive neuro-fuzzy inference system (ANFIS) for predicting the lake water level balance. For training the ANFIS, particle swarm optimization (PSO) and a hybrid backpropagation-recursive least squares algorithm have been used. Moreover, a hybrid training algorithm based on particle swarm optimization and recursive least squares (PSO-RLS) is introduced for the ANFIS structure. For a fairer comparison, hybrid particle swarm optimization with gradient descent is also applied. The models have been trained, tested, and validated on lake level data between 1991 and 2014. For performance evaluation, a comparison is made between these methods. Numerical results show that the proposed methods, with reasonable error, perform well in water level balance prediction. It is also clear that, if the current trend continues, Lake Urmia will experience a further drop in water level in the upcoming years.
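The paper's PSO-RLS and PSO-gradient-descent trainers are specific to the ANFIS structure, but the outer particle swarm loop they share is generic. Below is a minimal PSO sketch (the inertia and acceleration coefficients are conventional defaults, not the authors' settings) that minimizes an arbitrary loss such as a prediction error:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimization: velocity update with inertia,
    cognitive and social terms; returns the best position and its value."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

In the paper's setting, `f` would be the ANFIS training error as a function of the premise/consequent parameters; here any callable works.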
NASA Astrophysics Data System (ADS)
Roshanian, Jafar; Jodei, Jahangir; Mirshams, Mehran; Ebrahimi, Reza; Mirzaee, Masood
A new automated multi-level-of-fidelity Multi-Disciplinary Design Optimization (MDO) methodology has been developed at the MDO Laboratory of K.N. Toosi University of Technology. This paper explains a new design approach through the formulation of the developed disciplinary modules. A conceptual design for a small, solid-propellant launch vehicle was considered with a two-level fidelity structure. Low- and medium-level-of-fidelity (LoF) disciplinary codes were developed and linked. Appropriate design and analysis codes were defined according to their effect on the conceptual design process. Simultaneous optimization of the launch vehicle was performed at the discipline level and the system level, using propulsion, aerodynamics, structure and trajectory disciplinary codes. To reach the minimum launch weight, the low-LoF code first searches the whole design space to achieve the mission requirements; the medium-LoF code then receives the output of the low-LoF code and gives a value near the optimum launch weight with more detail and higher fidelity.
The art and science of missile defense sensor design
NASA Astrophysics Data System (ADS)
McComas, Brian K.
2014-06-01
A missile defense sensor is a complex optical system that sits idle for long periods of time, must work with little or no on-board calibration, must find and discriminate targets, and must guide the kinetic warhead to the target within minutes of launch. A short overview of the missile defense problem is given here, along with the top-level performance drivers, such as Noise Equivalent Irradiance (NEI), acquisition range, and dynamic range. These top-level parameters influence the choice of optical system, mechanical system, focal plane array (FPA), Read Out Integrated Circuit (ROIC), and cryogenic system. This paper discusses not only the physics behind the performance of the sensor but also the "art" of optimizing the performance of the sensor given the top-level performance parameters. Balancing the sensor sub-systems is key to the sensor's performance in these highly stressful missions. Top-level performance requirements impact the choice of lower-level hardware and requirements. The flow-down of requirements to the lower-level hardware is discussed; this flow-down directly impacts the FPA, where careful selection of the detector is required, and also influences the ROIC and cooling requirements. The key physics behind the detector and cryogenic system interactions is discussed, along with the balancing of subsystem performance. Finally, the overall system balance and optimization are discussed in the context of missile defense sensors and the expected performance of the overall kinetic warhead.
Water supply pipe dimensioning using hydraulic power dissipation
NASA Astrophysics Data System (ADS)
Sreemathy, J. R.; Rashmi, G.; Suribabu, C. R.
2017-07-01
Proper sizing of the pipes of a water distribution network plays an important role in the overall design of any water supply system. Several approaches have been applied to design networks economically. Traditional optimization techniques and population-based stochastic algorithms are widely used to optimize networks, but their use is mostly limited to the research level because they are difficult for practicing engineers, design engineers, and consulting firms to apply. Moreover, the lack of commercial software for optimal water distribution system design forces practicing engineers to adopt trial-and-error or experience-based designs. This paper presents a simple approach that uses the power dissipated in each pipeline as a parameter to design the network economically, though not to the level of the global minimum cost.
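The abstract does not give the dissipation criterion in closed form; as a hedged sketch, the hydraulic power lost in a single pipe can be written as rho * g * Q * h_f, with the friction head loss h_f taken here (an assumption) from the Hazen-Williams formula:

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravitational acceleration (m/s^2)

def headloss_hazen_williams(q, length, diameter, c=130.0):
    """Hazen-Williams friction head loss (m) for flow q (m^3/s) through a
    pipe of given length and diameter (m); c is the roughness coefficient."""
    return 10.67 * length * q**1.852 / (c**1.852 * diameter**4.87)

def power_dissipation(q, length, diameter, c=130.0):
    """Hydraulic power dissipated in the pipe (W): rho * g * Q * h_f."""
    return RHO * G * q * headloss_hazen_williams(q, length, diameter, c)
```

A dimensioning heuristic in the spirit of the paper would then enlarge the diameters of the pipes with the highest dissipation until a target level is met; the numbers above (density, c = 130) are illustrative defaults.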
Intelligent Control Systems Research
NASA Technical Reports Server (NTRS)
Loparo, Kenneth A.
1994-01-01
Results of a three phase research program into intelligent control systems are presented. The first phase looked at implementing the lowest or direct level of a hierarchical control scheme using a reinforcement learning approach assuming no a priori information about the system under control. The second phase involved the design of an adaptive/optimizing level of the hierarchy and its interaction with the direct control level. The third and final phase of the research was aimed at combining the results of the previous phases with some a priori information about the controlled system.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
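The spreadsheet-based decision analysis described above amounts to scoring candidate crops against weighted criteria. A minimal sketch follows; the crop names, attribute ratings, and weights are hypothetical illustrations, not CELSS data:

```python
def rank_crops(crops, weights):
    """Weighted-sum decision analysis: score each crop on normalized
    (0-1, higher is better) attributes and return them sorted best-first."""
    score = lambda attrs: sum(w * attrs[k] for k, w in weights.items())
    return sorted(crops, key=lambda c: score(c["attrs"]), reverse=True)

# Hypothetical normalized ratings: air/water regeneration ability versus
# volume and power economy (1.0 = best).
crops = [
    {"name": "wheat",   "attrs": {"regen": 0.9, "volume": 0.5, "power": 0.4}},
    {"name": "lettuce", "attrs": {"regen": 0.4, "volume": 0.9, "power": 0.8}},
    {"name": "potato",  "attrs": {"regen": 0.8, "volume": 0.6, "power": 0.6}},
]
weights = {"regen": 0.5, "volume": 0.25, "power": 0.25}
ranking = [c["name"] for c in rank_crops(crops, weights)]
```

Changing the weights corresponds to selecting the level of life support the plants must supply, which is the flexibility the abstract emphasizes.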
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1992-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Finite Energy and Bounded Actuator Attacks on Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M
As control system networks are being connected to enterprise-level networks for remote monitoring, operation, and system-wide performance optimization, these same connections are providing vulnerabilities that can be exploited by malicious actors for attack, financial gain, and theft of intellectual property. Much effort in cyber-physical system (CPS) protection has focused on protecting the borders of the system through traditional information security techniques. Less effort has been applied to the protection of cyber-physical systems from intelligent attacks launched after an attacker has defeated the information security protections to gain access to the control system. In this paper, attacks on actuator signals are analyzed from a system-theoretic context. The threat surface is classified into finite-energy and bounded attacks; these two broad classes encompass a large range of potential attacks. The effects of these attacks on linear quadratic (LQ) control are analyzed, and the optimal actuator attacks for both finite- and infinite-horizon LQ control are derived, thereby obtaining the worst-case attack signals. The closed-loop system under the optimal attack signals is given, and a numerical example illustrating the effect of an optimal bounded attack is provided.
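The optimal-attack derivations are the paper's contribution and are not reproduced here, but the LQ baseline they perturb is standard. A scalar finite-horizon sketch of the backward Riccati recursion, with illustrative numbers rather than the paper's plant:

```python
def lqr_gains(a, b, q, r, horizon):
    """Finite-horizon discrete LQ regulator for x[k+1] = a*x[k] + b*u[k]
    with stage cost q*x^2 + r*u^2. Backward Riccati recursion; returns
    the time-ordered feedback gains for u[k] = -K[k]*x[k]."""
    P = q  # terminal cost weight (assumed equal to q here)
    gains = []
    for _ in range(horizon):
        K = a * b * P / (r + b * b * P)
        gains.append(K)
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    gains.reverse()
    return gains

def simulate(a, b, gains, x0):
    """Closed-loop rollout under the LQ feedback law (no attack signal)."""
    x = x0
    for K in gains:
        x = a * x + b * (-K * x)
    return x
```

An attack study in the spirit of the paper would add a disturbance term to the actuator channel of `simulate` and search for the worst admissible signal; that derivation is left to the paper itself.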
Static and Dynamic Aeroelastic Tailoring With Variable Camber Control
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2016-01-01
This paper examines the use of a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for aeroservoelastic optimization of a transport wingbox. The quasisteady and unsteady motions of the flap system are utilized as design variables, along with patch-level structural variables, towards minimizing wingbox weight via maneuver load alleviation and active flutter suppression. The resulting system is, in general, very successful at removing structural weight in a feasible manner. Limitations to this success are imposed by including load cases where the VCCTEF system is not active (open-loop) in the optimization process, and also by including actuator operating cost constraints.
Seasonal-Scale Optimization of Conventional Hydropower Operations in the Upper Colorado System
NASA Astrophysics Data System (ADS)
Bier, A.; Villa, D.; Sun, A.; Lowry, T. S.; Barco, J.
2011-12-01
Sandia National Laboratories is developing the Hydropower Seasonal Concurrent Optimization for Power and the Environment (Hydro-SCOPE) tool to examine basin-wide conventional hydropower operations at seasonal time scales. This tool is part of an integrated, multi-laboratory project designed to explore different aspects of optimizing conventional hydropower operations. The Hydro-SCOPE tool couples a one-dimensional reservoir model with a river routing model to simulate hydrology and water quality. An optimization engine wraps around this model framework to solve for long-term operational strategies that best meet the specific objectives of the hydrologic system while honoring operational and environmental constraints. The optimization routines are provided by Sandia's open source DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) software. Hydro-SCOPE allows for multi-objective optimization, which can be used to gain insight into the trade-offs that must be made between objectives. The Hydro-SCOPE tool is being applied to the Upper Colorado Basin hydrologic system. This system contains six reservoirs, each with its own set of objectives (such as maximizing revenue, optimizing environmental indicators, meeting water use needs, or other objectives) and constraints. This leads to a large optimization problem with strong connectedness between objectives. The systems-level approach used by the Hydro-SCOPE tool allows simultaneous analysis of these objectives, as well as understanding of potential trade-offs related to different objectives and operating strategies. The seasonal-scale tool will be tightly integrated with the other components of this project, which examine day-ahead and real-time planning, environmental performance, hydrologic forecasting, and plant efficiency.
Optimal reservoir operation policies using novel nested algorithms
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri
2015-04-01
Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", denoting an exponential growth of the computational complexity with the dimension of the state-decision space. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation with multiple objectives related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels, and 3) hydropower production, which is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with the classical methods (DP and SDP) requires a reasonably fine discretization of the reservoir storage volume, which, combined with the release discretization required to meet the demands of downstream users, leads to computationally expensive formulations and triggers the curse of dimensionality. We present a novel approach, named "nested", implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed: nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). Each nested algorithm is composed of two parts: 1) DP, SDP or RL, and 2) a nested optimization algorithm. Depending on how the objective function related to allocation deficits is formulated in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic knapsack method for nonlinear problems.
The novel idea is to include the nested optimization algorithm in the state transition, which lowers the dimension of the starting problem and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing the complexity and the computational expense. They can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested on the multipurpose Knezevo reservoir of the Zletovica hydro-system, located in the Republic of Macedonia, with eight objectives including urban water supply, agriculture, ensuring ecological flow, and hydropower generation. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The resulting policies were then tested on 10 years (1995-2005) of historical data and compared with the nDP optimal reservoir operation over the same period. The nested algorithms and the optimal reservoir operation results are analysed and explained.
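As a toy illustration of the nesting idea (not the authors' Java nDP; the two-user demands, the coarse storage grid, and the brute-force inner search are all assumptions for the sketch), the inner allocation optimization can be embedded in each state transition of an outer storage DP:

```python
def inner_alloc(release, demands, step=0.1):
    """Nested step: brute-force split of a total release between two users
    minimizing the sum of squared deficits. Returns (cost, a, b)."""
    best = None
    n = int(release / step) + 1
    for i in range(n + 1):
        a = min(i * step, release, demands[0])
        b = min(release - min(i * step, release), demands[1])
        cost = (demands[0] - a) ** 2 + (demands[1] - b) ** 2
        if best is None or cost < best[0]:
            best = (cost, a, b)
    return best

def nested_dp(horizon, inflow, demands, s_max, grid=11):
    """Outer DP over discretized storage; each stage cost comes from the
    nested allocation optimization, so per-user releases never enlarge
    the state space. Assumes some release is feasible in every state."""
    levels = [s_max * i / (grid - 1) for i in range(grid)]
    V = [0.0] * grid          # terminal value
    policy = []
    for _ in range(horizon):  # backward in time
        newV, pol = [], []
        for s in levels:
            best = None
            for r in levels:  # candidate total releases
                s_next = s + inflow - r
                if not (0.0 <= s_next <= s_max):
                    continue
                j = min(range(grid), key=lambda k: abs(levels[k] - s_next))
                c = inner_alloc(r, demands)[0] + V[j]
                if best is None or c < best[0]:
                    best = (c, r)
            newV.append(best[0])
            pol.append(best[1])
        V, policy = newV, [pol] + policy
    return V, policy
```

Under scarcity (total demand above inflow) the inner step spreads the deficit across users, which is exactly the work the nested algorithm removes from the outer state-decision space.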
Integrative energy-systems design: System structure from thermodynamic optimization
NASA Astrophysics Data System (ADS)
Ordonez, Juan Carlos
This thesis deals with the application of thermodynamic optimization to find optimal structure and operation conditions of energy systems. Chapter 1 outlines the thermodynamic optimization of a combined power and refrigeration system subject to constraints. It is shown that the thermodynamic optimum is reached by distributing optimally the heat exchanger inventory. Chapter 2 considers the maximization of power extraction from a hot stream in the presence of phase change. It shows that when the receiving (cold) stream boils in a counterflow heat exchanger, the thermodynamic optimization consists of locating the optimal capacity rate of the cold stream. Chapter 3 shows that the main architectural features of a counterflow heat exchanger can be determined based on thermodynamic optimization subject to volume constraint. Chapter 4 addresses two basic issues in the thermodynamic optimization of environmental control systems (ECS) for aircraft: realistic limits for the minimal power requirement, and design features that facilitate operation at minimal power consumption. Several models of the ECS-Cabin interaction are considered and it is shown that in all the models the temperature of the air stream that the ECS delivers to the cabin can be optimized for operation at minimal power. In chapter 5 it is shown that the sizes (weights) of heat and fluid flow systems that function on board vehicles such as aircraft can be derived from the maximization of overall (system level) performance. Chapter 6 develops analytically the optimal sizes (hydraulic diameters) of parallel channels that penetrate and cool a volume with uniformly distributed internal heat generation and Chapter 7 shows analytically and numerically how an originally uniform flow structure transforms itself into a nonuniform one when the objective is to minimize global flow losses. 
It is shown that flow maldistribution and the abandonment of symmetry are necessary for the development of flow structures with minimal resistance. In the second part of the chapter, the flow medium is continuous and permeated by Darcy flow. As flow systems become smaller and more compact, the flow systems themselves become "designed porous media".
Mathematical Methods of System Analysis in Construction Materials
NASA Astrophysics Data System (ADS)
Garkina, Irina; Danilov, Alexander
2017-10-01
The system attributes of construction materials are defined: complexity of the object, integrity of the set of elements, the existence of essential, stable relations between elements that define the integrative properties of the system, the existence of structure, etc. On the basis of cognitive modelling (intensive and extensive properties; operating parameters) of materials treated as complex systems, and of the resulting cognitive map, a hierarchical modular structure of quality criteria is constructed. This structure effectively serves as the basis for the specification for developing a material (the required organization and properties). Proceeding from the modern paradigm (a model for posing problems and solving them) of materials development, the levels and modules of the material's structure are specified. Using the principles of system analysis, this allows the technological process to be considered as a complex system consisting of elements at a distinguished level of specification, from the atomic level to individual processes. Each element of the system, depending on the objective, is considered as a separate system with more detailed levels of decomposition. Semantic and qualitative analyses of the object are carried out (the research objective, decomposition levels, individual elements, and the relations between them are identified). The available knowledge is then formalized in the form of mathematical models (structural identification), and the relations between input and output parameters are determined (parametric identification). Hierarchical structures of quality criteria are constructed for each selected level, and from them the corresponding hierarchical structures of the system (the material). The regularities of structure formation and property formation are considered at levels from micro- to macrostructure.
The mathematical model of the material is represented as a set of models corresponding to particular criteria, by which individual modules and their levels (the mathematical description, a solution algorithm) are defined. Adequacy is established as the agreement of modelling results with experimental data; it is determined by the level of knowledge of the process and the validity of the accepted assumptions. The global quality criterion of the material is considered as a set of particular criteria (properties). Synthesis of the material is carried out on the basis of single-criterion optimization for each of the chosen particular criteria; the results of single-criterion optimization are then used in multicriteria optimization. Methods for developing materials as single-purpose or multi-purpose (including contradictory) systems are indicated. A scheme for the synthesis of composite materials as complex systems is developed. The described system approach was used effectively in the synthesis of composite materials with special properties.
Grey fuzzy optimization model for water quality management of a river system
NASA Astrophysics Data System (ADS)
Karmakar, Subhankar; Mujumdar, P. P.
2006-07-01
A grey fuzzy optimization model is developed for water quality management of a river system to address the uncertainty involved in fixing the membership functions for the different goals of the Pollution Control Agency (PCA) and the dischargers. The present model, the Grey Fuzzy Waste Load Allocation Model (GFWLAM), has the capability to incorporate the conflicting goals of the PCA and the dischargers in a deterministic framework. The imprecision associated with specifying the water quality criteria and fractional removal levels is modeled in a fuzzy mathematical framework. To address the imprecision in fixing the lower and upper bounds of the membership functions, the membership functions themselves are treated as fuzzy in the model, and the membership parameters are expressed as interval grey numbers: closed and bounded intervals with known lower and upper bounds but unknown distribution information. The model provides flexibility for the PCA and the dischargers to specify their aspirations independently, as the membership parameters for the different membership functions, specified for different imprecise goals, are interval grey numbers in place of deterministic real numbers. In the final solution, the optimal fractional removal levels of the pollutants are obtained in the form of interval grey numbers. This enhances flexibility and applicability in decision-making, as the decision-maker obtains a range of optimal solutions for fixing the final decision scheme considering the technical and economic feasibility of the pollutant treatment levels. Application of the GFWLAM is illustrated with a case study of the Tunga-Bhadra river system in India.
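Interval grey numbers are the model's central device. The GFWLAM optimization itself is not reproduced here; the sketch below only shows the standard interval arithmetic such numbers obey:

```python
class Grey:
    """Interval grey number [lo, hi]: bounds known, distribution unknown."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Grey(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst-case bounds: subtract the opposite ends.
        return Grey(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Bounds come from the extreme endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Grey(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

In the model's terms, an optimal fractional removal level reported as, say, [0.6, 0.75] gives the decision-maker the whole feasible band rather than a single number.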
Automated control of hierarchical systems using value-driven methods
NASA Technical Reports Server (NTRS)
Pugh, George E.; Burke, Thomas E.
1990-01-01
An introduction is given to the value-driven methodology, which has been successfully applied to solve a variety of difficult decision, control, and optimization problems. Many real-world decision processes (e.g., those encountered in scheduling, allocation, and command and control) involve a hierarchy of complex planning considerations. For such problems it is virtually impossible to define a fixed set of rules that will operate satisfactorily over the full range of probable contingencies. Decision Science Applications' value-driven methodology offers a systematic way of automating the intuitive, common-sense approach used by human planners. The inherent responsiveness of value-driven systems to user-controlled priorities makes them particularly suitable for semi-automated applications in which the user must remain in command of the system's operation. Three examples of the practical application of the approach in the automation of hierarchical decision processes are discussed: the TAC Brawler air-to-air combat simulation is a four-level computerized hierarchy; the autonomous underwater vehicle mission planning system is a three-level control system; and the Space Station Freedom electrical power control and scheduling system is designed as a two-level hierarchy. The methodology is compared with rule-based systems and with other more widely known optimization techniques.
Haneda, Kiyofumi; Umeda, Tokuo; Koyama, Tadashi; Harauchi, Hajime; Inamura, Kiyonari
2002-01-01
The target of our study is to establish a methodology for analyzing the level of security requirements, for searching for suitable security measures, and for optimizing the distribution of security over every portion of medical practice. Quantitative expression is introduced wherever possible, for easy follow-up of security procedures and easy evaluation of security outcomes. Results of system analysis by fault tree analysis (FTA) clarified that finely subdivided system elements contribute to much more accurate analysis. Such subdivided composition factors depended strongly on the behavior of staff, interactive terminal devices, kinds of service, and network routes. In conclusion, we found methods to analyze the levels of security requirements for each medical information system employing FTA, basic events for each composition factor, and combinations of basic events. Methods for searching for suitable security measures were found: namely, risk factors for each basic event, the number of elements for each composition factor, and candidate security measure elements. A method to optimize the security measures for each medical information system was proposed: namely, the optimum distribution of risk factors in terms of basic events was figured out, and comparison between medical information systems became possible.
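The FTA machinery behind this analysis can be sketched with the standard gate formulas for independent basic events (the actual medical-system trees and probabilities are not in the abstract; the mini-tree below is hypothetical):

```python
from functools import reduce

def and_gate(*probs):
    """Top event occurs only if ALL independent basic events occur."""
    return reduce(lambda x, p: x * p, probs, 1.0)

def or_gate(*probs):
    """Top event occurs if ANY independent basic event occurs."""
    return 1.0 - reduce(lambda x, p: x * (1.0 - p), probs, 1.0)

# Hypothetical mini-tree: a security breach if (staff error AND weak
# terminal controls) occur together, OR a network route is compromised.
breach = or_gate(and_gate(0.05, 0.2), 0.01)
```

Subdividing a composition factor into finer basic events, as the study recommends, simply deepens this tree and refines the top-event estimate.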
Consistent integration of experimental and ab initio data into molecular and coarse-grained models
NASA Astrophysics Data System (ADS)
Vlcek, Lukas
As computer simulations are increasingly used to complement or replace experiments, highly accurate descriptions of physical systems at different time and length scales are required to achieve realistic predictions. The questions of how to objectively measure model quality in relation to reference experimental or ab initio data, and how to transition seamlessly between different levels of resolution are therefore of prime interest. To address these issues, we use the concept of statistical distance to define a measure of similarity between statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the systems' measurable properties. Through systematic coarse-graining, we arrive at appropriate expressions for optimization loss functions consistently incorporating microscopic ab initio data as well as macroscopic experimental data. The design of coarse-grained and multiscale models is then based on factoring the model system partition function into terms describing the system at different resolution levels. The optimization algorithm takes advantage of thermodynamic perturbation expressions for fast exploration of the model parameter space, enabling us to scan millions of parameter combinations per hour on a single CPU. The robustness and generality of the new model optimization framework and its efficient implementation are illustrated on selected examples including aqueous solutions, magnetic systems, and metal alloys.
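The statistical-distance measure can be illustrated in its simplest discrete form, the Bhattacharyya angle, which is one common realization of the concept (the paper's loss functions combining ab initio and experimental data are more elaborate):

```python
import math

def statistical_distance(p, q):
    """Bhattacharyya-angle distance between two discrete distributions:
    arccos of the Bhattacharyya coefficient sum(sqrt(p_i * q_i))."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return math.acos(min(1.0, bc))  # clamp guards against rounding > 1

def fit_parameter(target, model, grid):
    """Model optimization as distance minimization: pick the parameter on
    a grid whose model distribution is closest to the target."""
    return min(grid, key=lambda th: statistical_distance(target, model(th)))
```

The grid search stands in for the perturbation-based parameter scan described in the abstract; the distance being minimized is the same kind of similarity measure.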
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
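The scale-dependent temperature schedule can be sketched on a one-dimensional toy energy. The deformable-model energies and MR data are not reproduced; the functions and constants below are illustrative assumptions only:

```python
import math
import random

def anneal(f, x0, t0, iters=2000, step=0.5, seed=0):
    """Plain simulated annealing on a 1-D energy with linear cooling;
    returns the best point found and its energy."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for k in range(iters):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        # Accept downhill moves always, uphill with Metropolis probability.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t = t0 * (1 - k / iters)
    return best, fbest

def coarse_to_fine(energies, x0, t_high=2.0):
    """Track the solution from coarse to fine scale, restarting SA at each
    finer level with a lower starting temperature, so the search becomes
    nearly deterministic at the finest scale."""
    x = x0
    for level, f in enumerate(energies):
        x, _ = anneal(f, x, t_high / (2 ** level))
    return x
```

Here the coarse energy plays the role of the heavily smoothed image and the fine energy the full-detail one, mirroring the paper's high-temperature-at-coarse-scale, low-temperature-at-fine-scale schedule.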
Intelligent control for PMSM based on online PSO considering parameters change
NASA Astrophysics Data System (ADS)
Song, Zhengqiang; Yang, Huiling
2018-03-01
A novel online particle swarm optimization (PSO) method is proposed to design the speed and current controllers of vector-controlled interior permanent magnet synchronous motor drives, considering stator resistance variation. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter due to the dead-time, threshold, and voltage drop of the switching devices is also taken into account in order to simulate the system under practical conditions. The speed and PI current controller gains are optimized online with PSO, and the fitness function is changed according to the system's dynamic and steady states. The proposed optimization algorithm is compared with the conventional PI control method under step speed changes and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics than a conventionally designed PI controller.
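The PSO-based gain tuning loop can be sketched as follows. Everything here is an illustrative stand-in: a simple first-order speed loop replaces the PMSM drive model, and the fitness is a plain integral-of-absolute-error rather than the paper's state-dependent fitness function:

```python
import random

def fitness(kp, ki, steps=200, dt=0.01):
    # Illustrative stand-in plant: a first-order speed loop tracking a unit
    # step under PI control; fitness is the integral of absolute error (IAE).
    y, integ, tau, iae = 0.0, 0.0, 0.05, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ      # PI control law
        y += dt * (-y + u) / tau
        iae += abs(e) * dt
    return iae

def pso(n=20, iters=40, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [fitness(*p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # Standard PSO update: inertia + cognitive + social pulls.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(10.0, max(0.0, pos[i][d] + vel[i][d]))
            c = fitness(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

(kp, ki), cost = pso()
print(round(kp, 2), round(ki, 2), round(cost, 4))
```

The tuned gains reliably beat a naive hand-picked pair such as (Kp, Ki) = (1, 1) on this toy plant; in the online scheme the same loop would rerun whenever the fitness function or plant parameters (e.g., stator resistance) change.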
Yang, Feiling; Hu, Jinming; Wu, Ruidong
2016-01-01
Suitable surrogates are critical for identifying optimal priority conservation areas (PCAs) to protect regional biodiversity. This study explored the efficiency of using endangered plants and animals as surrogates for identifying PCAs at the county level in Yunnan, southwest China. We ran the Dobson algorithm under three surrogate scenarios at 75% and 100% conservation levels and identified four types of PCAs. Assessment of the protection efficiencies of the four types of PCAs showed that endangered plants had higher surrogacy values than endangered animals but that the two were not substitutable; coupled endangered plants and animals as surrogates yielded a higher surrogacy value than endangered plants or animals as surrogates; the plant-animal priority areas (PAPAs) was the optimal among the four types of PCAs for conserving both endangered plants and animals in Yunnan. PAPAs could well represent overall species diversity distribution patterns and overlap with critical biogeographical regions in Yunnan. Fourteen priority units in PAPAs should be urgently considered as optimizing Yunnan’s protected area system. The spatial pattern of PAPAs at the 100% conservation level could be conceptualized into three connected conservation belts, providing a valuable reference for optimizing the layout of the in situ protected area system in Yunnan. PMID:27538537
NASA Astrophysics Data System (ADS)
Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven
2018-02-01
Energy system models serve as a basis for long-term system planning. Joint optimization of electricity generating technologies, storage systems, and the electricity grid leads to lower total system cost compared to an approach in which grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good tradeoff between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load flow model to evaluate, in terms of system reliability, the validity of using a simple but faster transport model within the system optimization model. The main findings of this paper are that a higher regional resolution of a system leads to better results than an approach in which regions are clustered, as more overloads can be detected. An aggregation of lines between two model regions, compared to a line-sharp representation, has little influence on grid expansion within a system optimizer. In a DC load flow model, overloads can be detected in the line-sharp case, which is therefore preferred. Overall, the regions in which the grid needs reinforcement are identified within the system optimizer. Finally, the paper recommends using a load-flow model to test the validity of the model results.
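The difference between the two grid representations can be seen on a toy 3-bus network (all data below are invented for illustration). A transport model only enforces flow conservation, so it could route an entire transfer over the direct line; a DC load flow instead splits flows according to Kirchhoff's laws, which is exactly why it detects overloads the transport model misses:

```python
# Toy 3-bus DC load flow (pure Python, no solver library).
lines = {(0, 1): 0.1, (1, 2): 0.1, (0, 2): 0.1}  # line reactances x (p.u.)
injections = {0: 1.0, 1: 0.0, 2: -1.0}           # generation minus load (p.u.)

# Reduced susceptance matrix B (slack bus 0 removed); solve B @ theta = P.
b01, b12, b02 = 1 / lines[(0, 1)], 1 / lines[(1, 2)], 1 / lines[(0, 2)]
B = [[b01 + b12, -b12],
     [-b12, b12 + b02]]
P = [injections[1], injections[2]]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
theta = {0: 0.0,                                  # slack bus angle reference
         1: (P[0] * B[1][1] - B[0][1] * P[1]) / det,
         2: (B[0][0] * P[1] - P[0] * B[1][0]) / det}

# Line flow follows the angle difference: f_ij = (theta_i - theta_j) / x_ij.
flows = {(i, j): (theta[i] - theta[j]) / x for (i, j), x in lines.items()}
print({k: round(v, 3) for k, v in flows.items()})
```

Here the direct line 0-2 carries 2/3 of the 1.0 p.u. transfer and the detour via bus 1 only 1/3, so line 0-2 could be overloaded even though a transport model would happily report a feasible 50/50 split.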
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing the Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the comparative base. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Displacement Based Multilevel Structural Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Striz, A. G.
1996-01-01
In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.
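The system-level step, choosing polynomial coefficients to minimize the load unbalance, can be illustrated on a toy problem. The 3-DOF spring chain, the unit tip load, and the quadratic displacement basis below are invented for illustration; they are not data from the paper:

```python
# Toy DMSO system level: approximate the nodal displacements with assumed
# polynomial shape functions, u = Phi @ c, and pick the coefficients c that
# minimize the load unbalance || K u - F ||.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

K = [[2.0, -1.0, 0.0],     # stiffness of a fixed-free chain of unit springs
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
F = [[0.0], [0.0], [1.0]]  # unit load at the free end

# Basis u(x) = c1*x + c2*x^2 evaluated at nodes x = 1, 2, 3:
# two coefficients stand in for three displacement DOFs.
Phi = [[1.0, 1.0], [2.0, 4.0], [3.0, 9.0]]

# Normal equations for min_c ||(K Phi) c - F||^2, solved as a 2x2 system.
A = matmul(K, Phi)
At = [list(r) for r in zip(*A)]
AtA, AtF = matmul(At, A), matmul(At, F)
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c1 = (AtF[0][0] * AtA[1][1] - AtA[0][1] * AtF[1][0]) / det
c2 = (AtA[0][0] * AtF[1][0] - AtF[0][0] * AtA[1][0]) / det
u = [c1 * x + c2 * x * x for x in (1.0, 2.0, 3.0)]
print([round(v, 3) for v in u])  # → [1.0, 2.0, 3.0]
```

Because this particular basis happens to contain the exact solution u(x) = x, the minimized unbalance vanishes and the calculated loads match the applied loads exactly, which is the property the system-level optimization is designed to enforce; with a richer structure the residual would merely be driven as small as the basis allows.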
NASA Technical Reports Server (NTRS)
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Peters, B.; Cohen, H.;
2015-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). The goal of this project was to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
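Boole's inequality enters because Pr[any constraint violated] ≤ Σᵢ Pr[constraint i violated], so splitting a joint risk budget ε across n constraints (εᵢ = ε/n) guarantees the joint chance constraint, at the price of conservatism. A sketch with two Gaussian bus voltages follows; the limits, standard deviation, and the Gaussian model itself are all illustrative assumptions, not the paper's setup:

```python
import random
from statistics import NormalDist

eps = 0.05                           # allowed joint violation probability
eps_i = eps / 2                      # Boole's inequality: split the risk budget
z = NormalDist().inv_cdf(1 - eps_i)  # per-constraint safety factor (~1.96)

v_max, sigma = 1.05, 0.01            # p.u. voltage limit, forecast std (invented)
v_limit = v_max - z * sigma          # backed-off forecast voltage per bus

# Monte Carlo check of the joint violation rate with both bus forecasts at
# the backed-off limit. Boole guarantees the rate is at most eps; with two
# independent constraints it lands only slightly under eps, i.e. the bound
# is nearly tight here, which is what motivates the paper's sharper version.
rng = random.Random(0)
trials = 100_000
violations = sum(
    1 for _ in range(trials)
    if rng.gauss(v_limit, sigma) > v_max or rng.gauss(v_limit, sigma) > v_max
)
print(round(violations / trials, 4))
```

With strongly correlated uncertainties the same back-off would leave the joint violation rate far below ε, which is the conservatism a tighter bound recovers.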
A hybrid inventory management system responding to regular demand and surge demand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammad S. Roni; Mingzhou Jin; Sandra D. Eksioglu
2014-06-01
This paper proposes a hybrid policy for a stochastic inventory system facing regular demand and surge demand. The combination of two different demand patterns can be observed in many areas, such as healthcare inventory and humanitarian supply chain management. The surge demand has a lower arrival rate but higher demand volume per arrival. The solution approach proposed in this paper incorporates the level crossing method and mixed integer programming technique to optimize the hybrid inventory policy with both regular orders and emergency orders. The level crossing method is applied to obtain the equilibrium distributions of inventory levels under a given policy. The model is further transformed into a mixed integer program to identify an optimal hybrid policy. A sensitivity analysis is conducted to investigate the impact of parameters on the optimal inventory policy and minimum cost. Numerical results clearly show the benefit of using the proposed hybrid inventory model. The model and solution approach could help healthcare providers or humanitarian logistics providers in managing their emergency supplies in responding to surge demands.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chotiyarnwong, Pojchong; Medical Molecular Biology Unit, Faculty of Medicine, Siriraj Hospital, Mahidol University; Stewart-Jones, Guillaume B.
Crystals of an MHC class I molecule bound to naturally occurring peptide variants from the dengue virus NS3 protein contained high levels of solvent and required optimization of cryoprotectant and dehydration protocols for each complex to yield well ordered diffraction, a process facilitated by the use of a free-mounting system. T-cell recognition of the antigenic peptides presented by MHC class I molecules normally triggers protective immune responses, but can result in immune enhancement of disease. Cross-reactive T-cell responses may underlie immunopathology in dengue haemorrhagic fever. To analyze these effects at the molecular level, the functional MHC class I molecule HLA-A*1101 was crystallized bound to six naturally occurring peptide variants from the dengue virus NS3 protein. The crystals contained high levels of solvent and required optimization of the cryoprotectant and dehydration protocols for each complex to yield well ordered diffraction, a process that was facilitated by the use of a free-mounting system.
On the decentralized control of large-scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chong, C.
1973-01-01
The decentralized control of stochastic large-scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case, in which each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled, and certain constraints are then required to be satisfied, either in an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained, in which the lower-level problems are all uncoupled. For on-line coordination, a distinction is made between open-loop feedback optimal coordination and closed-loop optimal coordination.
NASA Astrophysics Data System (ADS)
Sun, Hui-Chen; Liu, Yu-xi; Ian, Hou; You, J. Q.; Il'ichev, E.; Nori, Franco
2014-06-01
We study the microwave absorption of a driven three-level quantum system, which is realized by a superconducting flux quantum circuit (SFQC), with a magnetic driving field applied to the two upper levels. The interaction between the three-level system and its environment is studied within the Born-Markov approximation, and we take into account the effects of the driving field on the damping rates of the three-level system. We study the linear response of the driven three-level SFQC to a weak probe field. The linear magnetic susceptibility of the SFQC can be changed by both the driving field and the bias magnetic flux. When the bias magnetic flux is at the optimal point, the transition from the ground state to the second-excited state is forbidden and the three-level SFQC has a ladder-type transition. Thus, the SFQC responds to the probe field like natural atoms with ladder-type transitions. However, when the bias magnetic flux deviates from the optimal point, the three-level SFQC has a cyclic transition, thus it responds to the probe field like a combination of natural atoms with ladder-type transitions and natural atoms with Λ-type transitions. In particular, we provide detailed discussions on the conditions for realizing electromagnetically induced transparency and Autler-Townes splitting in three-level SFQCs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bales, Benjamin B; Barrett, Richard F
In almost all modern scientific applications, developers achieve the greatest performance gains by tuning algorithms, communication systems, and memory access patterns, while leaving low level instruction optimizations to the compiler. Given the increasingly varied and complicated x86 architectures, the value of these optimizations is unclear, and, due to time and complexity constraints, it is difficult for many programmers to experiment with them. In this report we explore the potential gains of these 'last mile' optimization efforts on an AMD Barcelona processor, providing readers with relevant information so that they can decide whether investment in the presented optimizations is worthwhile.
An optimizing start-up strategy for a bio-methanator.
Sbarciog, Mihaela; Loccufier, Mia; Vande Wouwer, Alain
2012-05-01
This paper presents an optimizing start-up strategy for a bio-methanator. The goal of the control strategy is to maximize the outflow rate of methane in anaerobic digestion processes, which can be described by a two-population model. The methodology relies on a thorough analysis of the system dynamics and involves the solution of two optimization problems: steady-state optimization for determining the optimal operating point, and transient optimization. The latter is a classical optimal control problem, which can be solved using the maximum principle of Pontryagin. The proposed control law is of the bang-bang type. The process is driven from an initial state to a small neighborhood of the optimal steady state by switching the manipulated variable (dilution rate) from the minimum to the maximum value at a certain time instant. Then the dilution rate is set to the optimal value and the system settles down in the optimal steady state. This control law ensures the convergence of the system to the optimal steady state and substantially increases its stability region: the region of attraction of the steady state corresponding to maximum production of methane is considerably enlarged. In some cases, related to the possibility of selecting the minimum dilution rate below a certain level, the stability region of the optimal steady state equals the interior of the state space. Aside from its efficiency, which is evaluated not only in terms of biogas production but also from the perspective of treatment of the organic load, the strategy is also characterized by simplicity, being thus appropriate for implementation in real-life systems. Another important advantage is its generality: this technique may be applied to any anaerobic digestion process for which the acidogenesis and methanogenesis are characterized by Monod and Haldane kinetics, respectively.
The Noise Level Optimization for Induction Magnetometer of SEP System
NASA Astrophysics Data System (ADS)
Zhu, W.; Fang, G.
2011-12-01
The Surface Electromagnetic Penetration (SEP) System, subsidized by the SinoProbe Plan in China, is designed for 3D conductivity imaging in geophysical mineral exploration, underground water distribution exploration, and oil and gas reservoir exploration. Both the Controlled Source Audio Magnetotellurics (CSAMT) method and the Magnetotellurics (MT) method can be surveyed by the SEP system. In this article, an optimization design is introduced which minimizes the noise level of the induction magnetometer used for the SEP system's magnetic field acquisition. The induction magnetometer converts the rate of change of the magnetic field into a voltage signal through an induction coil and amplifies it with a low-noise amplifier. The noise contributions to the magnetometer are the coil's thermal noise and the equivalent input voltage and current noise of the pre-amplifier. The coil's thermal noise is determined by the coil's DC resistance, while the equivalent input voltage and current noise of the pre-amplifier depend on the amplifier type and its DC operating conditions. The design presented here optimizes the DC operating point of the pre-amplifier by adjusting the DC current source, thereby minimizing the total noise level of the magnetometer. The calculation and test results show that the total noise is about 1 pT/√Hz, the thermal noise of the coils is 1.7 nV/√Hz, the pre-amplifier equivalent input voltage and current noise are 3 nV/√Hz and 0.1 pA/√Hz, respectively, and the weight of the magnetometer is 4.5 kg, meeting the requirements of the SEP system.
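The quoted noise figures can be combined in quadrature, since the sources are uncorrelated. In the sketch below the coil resistance is back-computed from its thermal (Johnson) noise at room temperature, and the implied field-to-voltage sensitivity is deduced from the quoted 1 pT/√Hz total; both are inferences for illustration, not values stated in the abstract:

```python
import math

k_B, T = 1.380649e-23, 290.0        # Boltzmann constant; room temperature (K)
e_coil = 1.7e-9                     # coil thermal noise, V/rtHz (quoted)
e_amp = 3.0e-9                      # preamp voltage noise, V/rtHz (quoted)
i_amp = 0.1e-12                     # preamp current noise, A/rtHz (quoted)

R_coil = e_coil ** 2 / (4 * k_B * T)   # Johnson noise: e^2 = 4*kB*T*R
e_current = i_amp * R_coil             # current noise develops across the coil
e_total = math.sqrt(e_coil ** 2 + e_amp ** 2 + e_current ** 2)

print(round(R_coil), "ohm")                # implied coil resistance (~180 ohm)
print(round(e_total * 1e9, 2), "nV/rtHz")  # total voltage noise (~3.45)
# Sensitivity needed so this corresponds to 1 pT/rtHz of field noise:
print(round(e_total / 1e-12), "V/T")
```

The current-noise term (about 0.02 nV/√Hz here) is negligible next to the amplifier voltage noise, which is consistent with the design effort going into the amplifier's DC operating point.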
System design document for the INFLO prototype.
DOT National Transportation Integrated Search
2014-03-01
This report documents the high level System Design Document (SDD) for the prototype development and demonstration of the Intelligent Network Flow Optimization (INFLO) application bundle, with a focus on the Speed Harmonization (SPD-HARM) and Queue Wa...
NASA Astrophysics Data System (ADS)
Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming
2018-06-01
This paper presents a new integrated planning framework for effectively accommodating electric vehicles in smart distribution systems (SDS). The proposed method collectively incorporates the various investment options available to the utility, including distributed generation (DG), capacitors, and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by subsequently simulating the reliability level of the planning scheme, and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Flores, Robert Joseph
Growing concerns over greenhouse gas and pollutant emissions have increased the pressure to shift energy conversion paradigms from current forms to more sustainable methods, such as through the use of distributed energy resources (DER) at industrial and commercial buildings. This dissertation is concerned with the optimal design and dispatch of a DER system installed at an industrial or commercial building. An optimization model that accurately captures typical utility costs and the physical constraints of a combined cooling, heating, and power (CCHP) system is designed to size and operate a DER system at a building. The optimization model is then used with cooperative game theory to evaluate the financial performance of a CCHP investment. The CCHP model is then modified to include energy storage, solar powered generators, alternative fuel sources, carbon emission limits, and building interactions with public and fleet PEVs. Then, a separate plugin electric vehicle (PEV) refueling model is developed to determine the cost to operate a public Level 3 fast charging station. The CCHP design and dispatch results show the size of the building load and consistency of the thermal loads are critical to positive financial performance. While using the CCHP system to produce cooling can provide savings, heat production drives positive financial performance. When designing the DER system to reduce carbon emissions, the use of renewable fuels can allow for a gas turbine system with heat recovery to reduce carbon emissions for a large university by 67%. Further reductions require large photovoltaic installations coupled with energy storage or the ability to export electricity back to the grid if costs are to remain relatively low. When considering Level 3 fast charging equipment, demand charges at low PEV travel levels are sufficiently high to discourage adoption. 
Integration of the equipment can reduce demand charge costs only if the building maximum demand does not coincide with PEV refueling. Electric vehicle refueling does not typically affect DER design at low PEV travel levels, but can as electric vehicle travel increases. However, as PEV travel increases, the stochastic nature of PEV refueling disappears, and the optimization problem may become deterministic.
Hierarchical Control Using Networks Trained with Higher-Level Forward Models
Wayne, Greg; Abbott, L.F.
2015-01-01
We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-04-01
The work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each modal virtual sensing error was determined by the modal observability levels for both the structural sensing and the target acoustic virtual sensing; and (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.
NASA Astrophysics Data System (ADS)
Braun, Robert Joseph
The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life-cycle costs. The models are used to evaluate optimal cell-stack power output and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing the life-cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in the process design of the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis) and 85% cogenerative, and simple economic paybacks of 5--8 years, are feasible for 1--2 kW SOFC systems in residential-scale applications.
Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.
Optimal Campaign Strategies in Fractional-Order Smoking Dynamics
NASA Astrophysics Data System (ADS)
Zeb, Anwar; Zaman, Gul; Jung, Il Hyo; Khan, Madad
2014-06-01
This paper deals with the optimal control problem in a giving up smoking model of fractional order. For the eradication of smoking in a community, we introduce three control variables, in the form of an education campaign, anti-smoking gum, and anti-nicotine drugs/medicine, into the proposed fractional-order model. We discuss the necessary conditions for the optimality of a general fractional optimal control problem whose fractional derivative is described in the Caputo sense. In doing so, we minimize the number of potential and occasional smokers and maximize the number of ex-smokers. We use Pontryagin's maximum principle to characterize the optimal levels of the three controls. The resulting optimality system is solved numerically by MATLAB.
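The model above relies on Caputo fractional derivatives. A common numerical route, shown here as a sketch, is the Grünwald-Letnikov approximation, which coincides with the Caputo derivative for orders 0 < α < 1 when f(0) = 0; the test function f(t) = t² is chosen only because its Caputo derivative has a simple closed form, 2t^(2-α)/Γ(3-α):

```python
import math

def gl_frac_derivative(f, t, alpha, n=20000):
    # Grunwald-Letnikov sum: D^a f(t) ~ h^(-a) * sum_k w_k * f(t - k*h),
    # where w_0 = 1 and the weights obey w_{k+1} = w_k * (k - a) / (k + 1),
    # i.e. w_k = (-1)^k * binomial(a, k).
    h = t / n
    coef, total = 1.0, 0.0
    for k in range(n + 1):
        total += coef * f(t - k * h)
        coef *= (k - alpha) / (k + 1)
    return total / h ** alpha

alpha, t = 0.5, 1.0
approx = gl_frac_derivative(lambda x: x * x, t, alpha)
exact = 2.0 * t ** (2.0 - alpha) / math.gamma(3.0 - alpha)
print(round(approx, 3), round(exact, 3))
```

The approximation is first-order in the step size h, so halving h roughly halves the error; solvers for fractional optimality systems build on discretizations of exactly this kind.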
Simulating changes to emergency care resources to compare system effectiveness.
Branas, Charles C; Wolff, Catherine S; Williams, Justin; Margolis, Gregg; Carr, Brendan G
2013-08-01
To apply systems optimization methods to simulate and compare the most effective locations for emergency care resources as measured by access to care. This study was an optimization analysis of the locations of trauma centers (TCs), helicopter depots (HDs), and severely injured patients in need of time-critical care in select US states. Access was defined as the percentage of injured patients who could reach a level I/II TC within 45 or 60 minutes. Optimal locations were determined by a search algorithm that considered all candidate sites within a set of existing hospitals and airports in finding the best solutions that maximized access. Across a dozen states, existing access to TCs within 60 minutes ranged from 31.1% to 95.6%, with a mean of 71.5%. Access increased from 0.8% to 35.0% after optimal addition of one or two TCs. Access increased from 1.0% to 15.3% after optimal addition of one or two HDs. Relocation of TCs and HDs (optimal removal followed by optimal addition) produced similar results. Optimal changes to TCs produced greater increases in access to care than optimal changes to HDs although these results varied across states. Systems optimization methods can be used to compare the impacts of different resource configurations and their possible effects on access to care. These methods to determine optimal resource allocation can be applied to many domains, including comparative effectiveness and patient-centered outcomes research. Copyright © 2013 Elsevier Inc. All rights reserved.
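The flavor of the location search can be illustrated with a greedy maximum-coverage sketch. The study's own algorithm considered all candidate sites more exhaustively; greedy addition is just one standard heuristic, and every travel time and patient count below is invented:

```python
# Greedy sketch: add trauma centers one at a time, each time picking the
# candidate site that covers the most not-yet-covered patients within the
# access-time threshold.

patients = {"p1": 30, "p2": 10, "p3": 25, "p4": 5}   # injured patients per area
minutes = {                      # travel time (min) from candidate site to area
    "A": {"p1": 30, "p2": 80, "p3": 70, "p4": 90},
    "B": {"p1": 75, "p2": 40, "p3": 50, "p4": 85},
    "C": {"p1": 90, "p2": 70, "p3": 35, "p4": 55},
}
THRESHOLD = 60                   # minutes, as in the 60-minute access measure

def coverage(sites):
    return sum(w for p, w in patients.items()
               if any(minutes[s][p] <= THRESHOLD for s in sites))

chosen = []
for _ in range(2):               # budget: add two trauma centers
    best = max((s for s in minutes if s not in chosen),
               key=lambda s: coverage(chosen + [s]))
    chosen.append(best)

total = sum(patients.values())
print(chosen, round(100 * coverage(chosen) / total, 1))  # → ['B', 'A'] 92.9
```

Note the greedy trap the full search avoids: site B wins the first round on its own, but the final pair (B, A) is only optimal here by luck; an exhaustive search over pairs is what guarantees the best two-site configuration.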
NASA Astrophysics Data System (ADS)
Zhiying, Chen; Ping, Zhou
2017-11-01
To balance computational precision and efficiency in the robust optimization of complex mechanical assembly relationships, such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate a system-level approximation model relating the overall parameters to the blade-tip clearance, and a set of samples of the design parameters and the objective response mean and/or standard deviation is then generated using this system approximation model together with design-of-experiments methods. Finally, a new response surface approximation model is constructed from those samples, and this approximation model is used in the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to perform robust optimization design of turbine blade-tip radial running clearance.
All-Optical Implementation of the Ant Colony Optimization Algorithm
Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare
2016-01-01
We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
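The pheromone dynamics the authors emulate optically can be sketched in a few lines of conventional code. The graph, evaporation rate, and deposit rule below are illustrative choices, not the paper's photonic implementation:

```python
import random

edges = {"S": {"A": 1.0, "B": 4.0},   # node -> {neighbor: edge length}
         "A": {"T": 1.0},
         "B": {"T": 1.0}}
tau = {(u, v): 1.0 for u, nbrs in edges.items() for v in nbrs}  # pheromone

rng = random.Random(42)
for _ in range(200):
    # One ant walks S -> T, picking edges with probability ~ pheromone/length.
    node, path, length = "S", [], 0.0
    while node != "T":
        nbrs = list(edges[node])
        weights = [tau[(node, v)] / edges[node][v] for v in nbrs]
        nxt = rng.choices(nbrs, weights=weights)[0]
        path.append((node, nxt))
        length += edges[node][nxt]
        node = nxt
    for e in tau:
        tau[e] *= 0.95               # evaporation: pheromones are volatile
    for e in path:
        tau[e] += 1.0 / length       # shorter route -> stronger deposit

# Read out the colony's preferred route by following the strongest pheromone.
node, route = "S", ["S"]
while node != "T":
    node = max(edges[node], key=lambda v: tau[(node, v)])
    route.append(node)
print(route)  # converges on the shortest path S-A-T (length 2 vs 5)
```

In the optical version, transient waveguide nonlinearity plays the role of the pheromone update: photons modify the medium they traverse, biasing later photons toward the shortest route, just as the deposit term biases later ants here.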
System, apparatus and methods to implement high-speed network analyzers
Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E
2015-11-10
Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such a dispatcher facility can also be used as a cache of policies, wherein if a policy is found, the packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
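A DFA compiled from a signature is ultimately a state-transition table driven by input bytes. A minimal hand-built sketch for one hypothetical signature, "ab+c", illustrates the matching loop; a real analyzer would compile many signatures into a single table under the optimization criteria described above:

```python
# States: 0 = start, 1 = saw 'a', 2 = saw 'ab'+, 3 = accept.
DEAD = -1
TABLE = {
    (0, 'a'): 1,   # start -> saw 'a'
    (1, 'b'): 2,   # saw 'a' -> saw 'ab'
    (2, 'b'): 2,   # absorb further 'b's
    (2, 'c'): 3,   # accept on trailing 'c'
}

def matches(payload):
    state = 0
    for ch in payload:
        state = TABLE.get((state, ch), DEAD)
        if state == DEAD:
            return False   # early reject: no backtracking, one lookup per byte
    return state == 3
```

The one-lookup-per-byte property is what makes DFA-based matching amenable to the fast-path dispatching the patent describes.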
NASA Astrophysics Data System (ADS)
Vitório, Paulo Cezar; Leonel, Edson Denner
2017-12-01
The structural design must ensure suitable working conditions by satisfying safety and economy criteria. However, the optimal solution is not easily available, because these conditions depend on the bodies' dimensions, material strength and the structural system configuration. In this regard, topology optimization aims at achieving the optimal structural geometry, i.e. the shape that leads to the minimum requirement of material while respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing: because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, which is based on the direct coupling of these approaches, introduces internal cavities automatically during the optimization process, according to the intensity of the von Mises stress. The developed optimization model was applied to two benchmarks available in the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.
The preliminary SOL (Sizing and Optimization Language) reference manual
NASA Technical Reports Server (NTRS)
Lucas, Stephen H.; Scotti, Stephen J.
1989-01-01
The Sizing and Optimization Language, SOL, a high-level special-purpose computer language, has been developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. This document is a reference manual for those wishing to write SOL programs. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler and runtime library routines. An overview of SOL appears in NASA TM 100565.
An Energy-Aware Trajectory Optimization Layer for sUAS
NASA Astrophysics Data System (ADS)
Silva, William A.
The focus of this work is the implementation of an energy-aware trajectory optimization algorithm that enables small unmanned aircraft systems (sUAS) to operate in unknown, dynamic severe weather environments. The software is designed as a component of an Energy-Aware Dynamic Data Driven Application System (EA-DDDAS) for sUAS. This work addresses the challenges of integrating and executing an online trajectory optimization algorithm during mission operations in the field. Using simplified aircraft kinematics, the energy-aware algorithm enables extraction of kinetic energy from measured winds to optimize thrust use and endurance during flight. The optimization layer, based upon a nonlinear program formulation, extracts energy by exploiting strong wind velocity gradients in the wind field, a process known as dynamic soaring. The trajectory optimization layer extends the energy-aware path planner developed by Wenceslao Shaw-Cortez (Shaw-Cortez, 2013) to include additional mission configurations, simulations with a 6-DOF model, and validation of the system with flight testing in June 2015 in Lubbock, Texas. The trajectory optimization layer interfaces with several components within the EA-DDDAS to provide an sUAS with optimal flight trajectories in real-time during severe weather. As a result, execution timing, data transfer, and scalability are considered in the design of the software. Severe weather also poses a measure of unpredictability to the system with respect to communication between systems and available data resources during mission operations. A heuristic mission tree with different cost functions and constraints is implemented to provide a level of adaptability to the optimization layer. Simulations and flight experiments are performed to assess the efficacy of the trajectory optimization layer.
The results are used to assess the feasibility of flying dynamic soaring trajectories with existing controllers as well as to verify the interconnections between EA-DDDAS components. Results also demonstrate the usage of the trajectory optimization layer in conjunction with a lattice-based path planner as a method of guiding the optimization layer and stitching together subsequent trajectories.
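The nonlinear-program flavor of such a trajectory layer can be conveyed with a drastically simplified 1-D problem: minimize thrust energy subject to covering a required distance with an assumed tailwind. All numbers and the kinematics are hypothetical stand-ins for the 6-DOF EA-DDDAS formulation:

```python
import numpy as np
from scipy.optimize import minimize

N, dt = 20, 0.5
wind = 0.05 * np.ones(N)        # assumed tailwind contribution per step

def thrust_energy(u):
    return np.sum(u ** 2)       # proxy for propulsive energy expenditure

def distance_gap(u):
    # Toy kinematics: ground distance per step = (thrust + wind) * dt,
    # and the vehicle must cover 10 distance units in total.
    return np.sum((u + wind) * dt) - 10.0

res = minimize(thrust_energy, x0=np.ones(N),
               bounds=[(0.0, 5.0)] * N,
               constraints=[{'type': 'eq', 'fun': distance_gap}])
u_opt = res.x
```

Because the cost is quadratic and the constraint linear, the optimum spreads thrust evenly (here about 0.95 per step); in the real problem, spatially varying wind gradients make uneven, soaring-style profiles optimal instead.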
NASA Astrophysics Data System (ADS)
Setiawan, R.
2018-03-01
In this paper, the Economic Order Quantity (EOQ) of a probabilistic two-level supply-chain system for items with imperfect quality is analyzed under a service level constraint. A firm applies an active service level constraint to avoid unpredictable shortage terms in the objective function. A mathematical analysis of the optimal result is delivered using two equilibrium schemes from game theory: Stackelberg equilibrium for the cooperative strategy and Stackelberg equilibrium for the noncooperative strategy. This is a new game-theoretic result for inventory systems in which a service level constraint is applied by a firm in its moves.
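For orientation, the deterministic EOQ that underlies such models is a one-line formula; the defect-rate adjustment below is a crude simplification for illustration, not the paper's probabilistic two-level Stackelberg model:

```python
import math

def eoq(demand, order_cost, hold_cost, defect_rate=0.0):
    # Classic deterministic EOQ, Q* = sqrt(2*D*K/h), crudely inflated to
    # cover an expected fraction of defective items (a simplification).
    q = math.sqrt(2.0 * demand * order_cost / hold_cost)
    return q / (1.0 - defect_rate)

# Illustrative numbers: annual demand 1200, 50 per order, 6 per unit-year held.
q_perfect = eoq(1200, 50, 6)           # about 141.4 units
q_imperfect = eoq(1200, 50, 6, 0.02)   # slightly larger lot to absorb defects
```

The game-theoretic versions replace this single-firm cost with coupled buyer and vendor objectives and solve for the leader-follower equilibrium.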
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-Gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
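The continuation idea is generic: deform an easy problem into the hard one while tracking the solution path. A scalar sketch (a toy cubic, not the optimal projection equations) shows the predictor-corrector structure:

```python
import numpy as np

def newton(f, df, x, tol=1e-12):
    # Corrector: a few Newton steps back onto the solution path.
    for _ in range(50):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hard target equation f(x) = 0 and an easy start problem g(x) = 0.
f, df = lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2
g, dg = lambda x: x - 2.0,        lambda x: 1.0

x = 2.0                                  # known root of g
for t in np.linspace(0.0, 1.0, 21):      # march the homotopy parameter
    H  = lambda x, t=t: (1 - t) * g(x) + t * f(x)
    dH = lambda x, t=t: (1 - t) * dg(x) + t * df(x)
    x = newton(H, dH, x)                 # stay on the path H(x, t) = 0
root = x
```

In the controller-synthesis setting, `x` is the set of reduced-order controller parameters and `t` sweeps from a low-authority problem with a known solution to the full-authority design.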
Ghanta, Ravi K; Rangaraj, Aravind; Umakanthan, Ramanan; Lee, Lawrence; Laurence, Rita G; Fox, John A; Bolman, R Morton; Cohn, Lawrence H; Chen, Frederick Y
2007-03-13
Ventricular restraint is a nontransplantation surgical treatment for heart failure. The effect of varying the restraint level on left ventricular (LV) mechanics and remodeling is not known. We hypothesized that restraint level may affect therapy efficacy. We studied the immediate effect of varying restraint levels in an ovine heart failure model. We then studied the long-term effect of restraint applied over a 2-month period. Restraint level was quantified by use of fluid-filled epicardial balloons placed around the ventricles and measurement of balloon luminal pressure at end diastole. At 4 different restraint levels (0, 3, 5, and 8 mm Hg), transmural myocardial pressure (P(tm)) and indices of myocardial oxygen consumption (MVO2) were determined in control (n=5) and ovine heart failure (n=5) animals. Ventricular restraint therapy decreased P(tm) and MVO2 and improved mechanical efficiency. An optimal physiological restraint level of 3 mm Hg was identified to maximize improvement without an adverse effect on systemic hemodynamics. At this optimal level, end-diastolic P(tm) and MVO2 indices decreased by 27% and 20%, respectively. The serial longitudinal effects of optimized ventricular restraint were then evaluated in ovine heart failure with (n=3) and without (n=3) restraint over 2 months. Optimized ventricular restraint prevented and reversed pathological LV dilatation (130+/-22 mL to 91+/-18 mL) and improved LV ejection fraction (27+/-3% to 43+/-5%). The measured restraint level decreased over time as the LV became smaller, and reverse remodeling slowed. Ventricular restraint level affects the degree of decrease in P(tm), the degree of decrease in MVO2, and the rate of LV reverse remodeling. Periodic physiological adjustments of the restraint level may be required for optimal restraint therapy efficacy.
Rainio, Anna-Kaisa; Ohinmaa, Arto E
2005-07-01
RAFAELA is a new Finnish patient classification system (PCS) which is used in several university hospitals and central hospitals and has aroused considerable interest in hospitals in Europe. The aim of the research is, firstly, to assess the feasibility of the RAFAELA PCS in nursing staff management and, secondly, to assess whether nursing resources are transferred between wards according to the information received from nursing care intensity classification. The material was received from the Central Hospital's 12 general wards between 2000 and 2001. The RAFAELA PCS consists of three different measures: a system measuring patient care intensity, a system recording daily nursing resources, and a system measuring the optimal nursing care intensity/nurse situation. The data were analysed in proportion to the labour costs of nursing work, and from that we calculated the employer's loss (a situation below the optimal level) and savings (a situation above the optimal level) per ward as both costs and the number of nurses. In 2000 the wards had on average 77 days below the optimal level and 106 days above it. In 2001 the wards had on average 71 days below the optimal level and 129 above it. Converting all these days to monetary and personnel resources, the employer lost 307,745 or 9.84 nurses and saved 369,080 or 11.80 nurses in total in 2000. In 2001 the employer lost in total 242,143 or 7.58 nurses and saved 457,615 or 14.32 nurses. During the research period, nursing resources seemed not to have been transferred between wards. The RAFAELA PCS is applicable to the allocation of nursing resources, but its possibilities have not been fully used in the researched hospital. The management of nursing work should actively use the information received in nursing care intensity classification and plan and implement the transfer of nursing resources in order to ensure the quality of patient care.
Planning the staff resources of the whole hospital requires information on which units resources should be allocated to. More resources alone do not solve the managerial problem of allocating resources correctly: if resources are placed wrongly, the problems of daily staff management and cost control continue.
Small Spacecraft System-Level Design and Optimization for Interplanetary Trajectories
NASA Technical Reports Server (NTRS)
Spangelo, Sara; Dalle, Derek; Longmier, Ben
2014-01-01
The feasibility of an interplanetary mission for a CubeSat, a type of miniaturized spacecraft, that uses an emerging technology, the CubeSat Ambipolar Thruster (CAT), is investigated. CAT is a large delta-V propulsion system that uses a high-density plasma source miniaturized for small spacecraft applications. An initial feasibility assessment demonstrating escape from Low Earth Orbit (LEO) and Earth-escape trajectories with a 3U CubeSat and this thruster technology was performed in previous work. We examine a mission architecture with a trajectory that begins in Earth orbits such as LEO and Geostationary Earth Orbit (GEO), escapes Earth orbit, and travels to Mars, Jupiter, or Saturn. The goal was to minimize travel time to the destinations while considering trade-offs between spacecraft dry mass, fuel mass, and solar power array size. Sensitivities to spacecraft dry mass and available power are considered. CubeSats are extremely size-, mass-, and power-constrained, and their subsystems are tightly coupled, limiting their performance potential. System-level modeling, simulation, and optimization approaches are necessary to find feasible and optimal operational solutions and to ensure system-level interactions are modeled. Thus, propulsion, power/energy, attitude, and orbit transfer models are integrated to enable system-level analysis and trades. The CAT technology broadens the possible missions achievable with small satellites. In particular, it enables more sophisticated maneuvers by small spacecraft such as polar orbit insertion from an equatorial orbit, LEO to GEO transfers, Earth-escape trajectories, and transfers to other interplanetary bodies. This work lays the groundwork for upcoming CubeSat launch opportunities and supports future development of interplanetary and constellation CubeSat and small satellite mission concepts.
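A first-order feel for the dry-mass/fuel trade comes from the Tsiolkovsky rocket equation; the masses and specific impulse below are placeholders, not the CAT thruster's published figures:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, dry_kg, fuel_kg):
    # Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf).
    m0 = dry_kg + fuel_kg
    return isp_s * G0 * math.log(m0 / dry_kg)

# Placeholder 3U-class numbers (assumed, for illustration only).
dv = delta_v(isp_s=1000.0, dry_kg=3.0, fuel_kg=1.0)
```

With these assumed values the budget comes out near 2.8 km/s; the logarithm makes delta-V far more sensitive to the dry-mass fraction than to raw fuel load, which is why the dry mass and power sensitivities dominate the system-level trades.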
NASA Astrophysics Data System (ADS)
Göll, S.; Samsun, R. C.; Peters, R.
Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization, and its algorithm is fully described for two-level optimization of structures assembled from finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.
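The coordination structure can be illustrated with a toy consensus-style two-level decomposition; this is an analogue of the subproblem/coordination split, not the paper's actual formulation for finite-element substructures:

```python
# Split  minimize (a - 3)^2 + (b - 1)^2 + coupling  into two subsystem
# problems linked through a shared coordination target s set by the
# system level. Each subsystem solves its local quadratic in closed form.

def sub_a(s):
    return (3.0 + s) / 2.0     # argmin_a (a - 3)^2 + (a - s)^2

def sub_b(s):
    return (1.0 + s) / 2.0     # argmin_b (b - 1)^2 + (b - s)^2

s = 0.0
for _ in range(100):           # coordination problem at the system level
    a, b = sub_a(s), sub_b(s)  # subsystems solved independently (parallelizable)
    s += 0.5 * ((a + b) / 2.0 - s)   # relax s toward the subsystem consensus

a, b = sub_a(s), sub_b(s)
```

The subsystem solves are independent at each outer iteration, which is the property that lets separate groups (or processors) work concurrently, as the abstract notes. Here the loop converges to s = 2, a = 2.5, b = 1.5.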
Patel, B N; Thomas, J V; Lockhart, M E; Berland, L L; Morgan, D E
2013-02-01
To evaluate lesion contrast in pancreatic adenocarcinoma patients using spectral multidetector computed tomography (MDCT) analysis. The present institutional review board-approved, Health Insurance Portability and Accountability Act of 1996 (HIPAA)-compliant retrospective study evaluated 64 consecutive adults with pancreatic adenocarcinoma examined using a standardized, multiphasic protocol on a single-source, dual-energy MDCT system. Pancreatic phase images (35 s) were acquired in dual-energy mode; unenhanced and portal venous phases used standard MDCT. Lesion contrast was evaluated on an independent workstation using dual-energy analysis software, comparing tumour to non-tumoural pancreas attenuation (HU) differences and tumour diameter at three energy levels: 70 keV; individual subject-optimized viewing energy level (based on the maximum contrast-to-noise ratio, CNR); and 45 keV. The image noise was measured for the same three energies. Differences in lesion contrast, diameter, and noise between the different energy levels were analysed using analysis of variance (ANOVA). Quantitative differences in contrast gain between 70 keV and CNR-optimized viewing energies, and between CNR-optimized and 45 keV were compared using the paired t-test. Thirty-four women and 30 men (mean age 68 years) had a mean tumour diameter of 3.6 cm. The median optimized energy level was 50 keV (range 40-77). The mean ± SD lesion contrast values (non-tumoural pancreas - tumour attenuation) were: 57 ± 29, 115 ± 70, and 146 ± 74 HU (p = 0.0005); the lengths of the tumours were: 3.6, 3.3, and 3.1 cm, respectively (p = 0.026); and the contrast to noise ratios were: 24 ± 7, 39 ± 12, and 59 ± 17 (p = 0.0005) for 70 keV, the optimized energy level, and 45 keV, respectively. For individuals, the mean ± SD contrast gain from 70 keV to the optimized energy level was 59 ± 45 HU; and the mean ± SD contrast gain from the optimized energy level to 45 keV was 31 ± 25 HU (p = 0.007). 
Significantly increased pancreatic lesion contrast was noted at lower viewing energies using spectral MDCT. Individual patient CNR-optimized energy level images have the potential to improve lesion conspicuity. Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
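Selecting the subject-optimized viewing energy reduces to an argmax of CNR over the candidate monoenergetic levels; the contrast and noise values below are invented for illustration, not patient data from the study:

```python
# Monoenergetic viewing-energy selection by maximum contrast-to-noise ratio.
energies = [40, 45, 50, 55, 60, 65, 70]      # keV
contrast = [180, 160, 140, 110, 85, 70, 57]  # lesion contrast, HU (made up)
noise    = [90, 62, 44, 36, 31, 29, 28]      # image noise, HU (made up)

cnr = [c / n for c, n in zip(contrast, noise)]
best_kev = energies[max(range(len(cnr)), key=cnr.__getitem__)]
```

With these toy numbers the optimum lands at 50 keV: contrast keeps rising as energy drops, but below the optimum the noise grows faster, which mirrors why the per-subject optimized level sits between 45 and 70 keV rather than at either extreme.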
[Design of medical devices management system supporting full life-cycle process management].
Su, Peng; Zhong, Jianping
2014-03-01
Based on an analysis of the present status of medical devices management, this paper optimizes the management process and develops a Web-based medical devices management system. Information technology is used to dynamically track the state of medical devices across their entire life cycle. Through closed-loop management with pre-event budgeting, mid-event control, and after-event analysis, the system improves the fine-grained management of medical devices, optimizes asset allocation, and promotes the effective operation of devices.
Advanced air revitalization for optimized crew and plant environments
NASA Technical Reports Server (NTRS)
Lee, M. G.; Grigger, David J.; Brown, Mariann F.
1991-01-01
The Hybrid Air Revitalization System (HARS) closed ecosystem concept presented encompasses electrochemical CO2 and O2 separators, in conjunction with a mechanical condenser/separator for maintaining CO2, O2, and humidity levels in crew and plant habitats at optimal conditions. HARS requires no expendables, and allows flexible process control on the bases of electrochemical cell current, temperature, and airflow rate variations. HARS capacity can be easily increased through the incorporation of additional chemical cells. Detailed system flowcharts are provided.
Shuttle filter study. Volume 1: Characterization and optimization of filtration devices
NASA Technical Reports Server (NTRS)
1974-01-01
A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plan for space shuttle fluid systems.
Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu
2015-05-01
A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant and plays a key role in maintaining the safety, stability and economical operation of hydro-electrical installations. At present, the conventional PID controller is widely applied in HTRS systems for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e. how to determine the PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm named adaptive grid particle swarm optimization (AGPSO) is applied to solve the PID gain tuning problem of the HTRS system. This AGPSO-based method, which differs from traditional single-objective optimization methods, is designed to handle settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e. a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of parameter tuning for the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, as compared with two other prominent multi-objective algorithms, Non-dominated Sorting Genetic Algorithm II (NSGAII) and Strength Pareto Evolutionary Algorithm II (SPEAII), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO approach outperforms the compared methods, with higher efficiency and better quality, whether the HTRS system works under unload or load conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
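A bare-bones, scalarized PSO tuning of a PI controller on a first-order plant conveys the mechanics; AGPSO additionally maintains an adaptive-grid Pareto archive and treats overshoot and settling time as separate objectives, and the plant, bounds, and weights here are assumptions, not the HTRS model:

```python
import random

def step_response_cost(kp, ki, dt=0.01, horizon=10.0):
    # Scalarized cost: integrated squared error plus an overshoot penalty.
    y = ie = t = overshoot = ise = 0.0
    while t < horizon:
        e = 1.0 - y                   # unit-step setpoint
        ie += e * dt
        u = kp * e + ki * ie          # PI law (D term omitted for brevity)
        y += dt * (-y + u)            # first-order plant, tau = 1 s (assumed)
        overshoot = max(overshoot, y - 1.0)
        ise += e * e * dt
        t += dt
    return ise + 5.0 * overshoot

random.seed(7)
pts = [[random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in pts]
pbest = [p[:] for p in pts]
gbest = min(pbest, key=lambda p: step_response_cost(*p))
for _ in range(30):
    for i, p in enumerate(pts):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]                              # inertia
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] = min(10.0, max(0.0, p[d] + vel[i][d]))              # bounds
        if step_response_cost(*p) < step_response_cost(*pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=lambda p: step_response_cost(*p))
kp_opt, ki_opt = gbest
```

The multi-objective variant keeps `ise` and `overshoot` as a vector, replaces the single `gbest` with a non-dominated archive partitioned on an adaptive grid, and defers the scalar choice to the fuzzy membership step the abstract describes.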
Flight Test of an Adaptive Configuration Optimization System for Transport Aircraft
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Georgie, Jennifer; Barnicki, Joseph S.
1999-01-01
A NASA Dryden Flight Research Center program explores the practical application of real-time adaptive configuration optimization for enhanced transport performance on an L-1011 aircraft. This approach is based on calculation of incremental drag from forced-response, symmetric, outboard aileron maneuvers. In real-time operation, the symmetric outboard aileron deflection is directly optimized, and the horizontal stabilator and angle of attack are indirectly optimized. A flight experiment has been conducted from an onboard research engineering test station, and flight research results are presented herein. The optimization system has demonstrated the capability of determining the minimum drag configuration of the aircraft in real time. The drag-minimization algorithm is capable of identifying drag to approximately a one-drag-count level. Optimizing the symmetric outboard aileron position realizes a drag reduction of 2-3 drag counts (approximately 1 percent). Algorithm analysis of maneuvers indicates that two-sided raised-cosine maneuvers improve definition of the symmetric outboard aileron drag effect, thereby improving analysis results and consistency. Ramp maneuvers provide a more even distribution of data collection as a function of excitation deflection than raised-cosine maneuvers do. A commercial operational system would require airdata calculations and normal output of current inertial navigation systems; engine pressure ratio measurements would be optional.
Mitigating Provider Uncertainty in Service Provision Contracts
NASA Astrophysics Data System (ADS)
Smith, Chris; van Moorsel, Aad
Uncertainty is an inherent property of open, distributed and multiparty systems. The viability of the mutually beneficial relationships which motivate these systems relies on rational decision-making by each constituent party under uncertainty. Service provision in distributed systems is one such relationship. Uncertainty is experienced by the service provider in its ability to deliver a service with selected quality level guarantees, due to inherent non-determinism such as load fluctuations and hardware failures. Statistical estimators utilized to model this non-determinism introduce additional uncertainty through sampling error. Inability of the provider to accurately model and analyze uncertainty in the quality level guarantees can result in the formation of sub-optimal service provision contracts. Emblematic consequences include loss of revenue, inefficient resource utilization and erosion of reputation and consumer trust. We propose a utility model for contract-based service provision to provide a systematic approach to optimal service provision contract formation under uncertainty. Performance prediction methods to enable the derivation of statistical estimators for quality level are introduced, with analysis of their resultant accuracy and cost.
NASA Astrophysics Data System (ADS)
Haneda, Kiyofumi; Kajima, Toshio; Koyama, Tadashi; Muranaka, Hiroyuki; Dojo, Hirofumi; Aratani, Yasuhiko
2002-05-01
The target of our study is to analyze the level of necessary security requirements, to search for suitable security measures and to optimize the distribution of security across every portion of the medical practice. Quantitative expression is introduced into the study where possible, to enable simplified follow-up of security procedures and easy evaluation of security outcomes or results. Using fault tree analysis (FTA), system analysis showed that subdividing system elements into groups by detail results in a much more accurate analysis. Such subdivided composition factors greatly depend on the behavior of staff, interactive terminal devices, the kinds of services provided, and network routes. Security measures were then implemented based on the analysis results. In conclusion, we identified the methods needed to determine the required level of security and proposed security measures for each medical information system, along with the basic events and combinations of events that comprise the threat composition factors. Methods for identifying suitable security measures were found and implemented. Risk factors for each basic event, the number of elements for each composition factor, and potential security measures were identified. Methods to optimize the security measures for each medical information system were proposed, developing the most efficient distribution of risk factors for basic events.
NASA Astrophysics Data System (ADS)
McCurdy, David R.; Krivanek, Thomas M.; Roche, Joseph M.; Zinolabedini, Reza
2006-01-01
The concept of a human rated transport vehicle for various near earth missions is evaluated using a liquid hydrogen fueled Bimodal Nuclear Thermal Propulsion (BNTP) approach. In an effort to determine the preliminary sizing and optimal propulsion system configuration, as well as the key operating design points, an initial investigation into the main system level parameters was conducted. This assessment considered not only the performance variables but also the more subjective reliability, operability, and maintainability attributes. The SIZER preliminary sizing tool was used to facilitate rapid modeling of the trade studies, which included tank materials, propulsive versus an aero-capture trajectory, use of artificial gravity, reactor chamber operating pressure and temperature, fuel element scaling, engine thrust rating, engine thrust augmentation by adding oxygen to the flow in the nozzle for supersonic combustion, and the baseline turbopump configuration to address mission redundancy and safety requirements. A high level system perspective was maintained to avoid focusing solely on individual component optimization at the expense of system level performance, operability, and development cost.
Optimization of Water Management of Cranberry Fields under Current and Future Climate Conditions
NASA Astrophysics Data System (ADS)
Létourneau, G.; Gumiere, S.; Mailhot, E.; Rousseau, A. N.
2016-12-01
In North America, cranberry production is on the rise. Since 2005, the land area dedicated to cranberry production has doubled, principally in Canada. Recent studies have shown that sub-irrigation could improve yield, water use efficiency and pumping energy requirements compared to conventional sprinkler irrigation. However, the experimental determination of the optimal water table level for each production site may be expensive and time-consuming. The primary objective of this study is to optimize the water table level as a function of typical soil properties and the climatic conditions observed in major production areas, using a numerical modeling approach. The second objective is to evaluate the impacts of projected climatic conditions on the water management of cranberry fields. To that end, cranberry-specific management operations such as harvest flooding, rapid drainage following heavy rainfall, and hydric stress management during dry weather conditions were simulated with the HYDRUS 2D software. Results have shown that maintaining the water table at a depth of approximately 60 cm provides optimal results for most of the studied soils. However, under certain extreme climatic conditions, the drainage system design may not allow optimal hydric conditions for cranberry growth to be maintained. In the long term, this study has the potential to advance the design of drainage/sub-irrigation systems.
Larson, David B; Malarik, Remo J; Hall, Seth M; Podberesky, Daniel J
2013-10-01
To evaluate the effect of an automated computed tomography (CT) radiation dose optimization and process control system on the consistency of estimated image noise and size-specific dose estimates (SSDEs) of radiation in CT examinations of the chest, abdomen, and pelvis. This quality improvement project was determined not to constitute human subject research. An automated system was developed to analyze each examination immediately after completion, and to report individual axial-image-level and study-level summary data for patient size, image noise, and SSDE. The system acquired data for 4 months beginning October 1, 2011. Protocol changes were made by using parameters recommended by the prediction application, and 3 months of additional data were acquired. Preimplementation and postimplementation mean image noise and SSDE were compared by using unpaired t tests and F tests. Common-cause variation was differentiated from special-cause variation by using a statistical process control individual chart. A total of 817 CT examinations, 490 acquired before and 327 acquired after the initial protocol changes, were included in the study. Mean patient age and water-equivalent diameter were 12.0 years and 23.0 cm, respectively. The difference between actual and target noise increased from -1.4 to 0.3 HU (P < .01) and the standard deviation decreased from 3.9 to 1.6 HU (P < .01). Mean SSDE decreased from 11.9 to 7.5 mGy, a 37% reduction (P < .01). The process control chart identified several special causes of variation. Implementation of an automated CT radiation dose optimization system led to verifiable simultaneous decrease in image noise variation and SSDE. The automated nature of the system provides the opportunity for consistent CT radiation dose optimization on a broad scale. © RSNA, 2013.
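The individuals control chart used to separate common-cause from special-cause variation has simple limits: the mean plus or minus 2.66 times the average moving range. The data below are made-up illustrative values, not measurements from the study:

```python
# Individuals (I) chart: 2.66 = 3/d2 with d2 = 1.128 for moving ranges of span 2.
data = [11.2, 12.1, 11.8, 12.5, 11.6, 12.0, 11.4, 12.2, 11.9, 16.3]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
ucl = mean + 2.66 * mr_bar     # upper control limit
lcl = mean - 2.66 * mr_bar     # lower control limit
special_causes = [x for x in data if not lcl <= x <= ucl]
```

Points outside the limits (here the final value, 16.3) are flagged as special-cause variation to be investigated individually, while scatter inside the limits is treated as common-cause noise of the process.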
NASA Astrophysics Data System (ADS)
Ozbulut, O. E.; Silwal, B.
2014-04-01
This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and a performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring-force capability to the isolation system, together with additional damping characteristics. A three-story building is modeled with the S-FBI isolation system. A multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) in order to optimize the S-FBI system. Nonlinear time history analyses of the building with the S-FBI system are performed. A set of 20 near-field ground motion records is used in the numerical simulations. Results show that the S-FBI system successfully controls the response of the buildings against near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.
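The weighted-sum scalarization of such a two-objective GA search can be sketched in miniature. Everything below is a toy illustration, not the authors' S-FBI model: the two objective functions are hypothetical stand-ins for isolation-level displacement and superstructure response, with `x[0]` and `x[1]` as normalized design parameters.

```python
import random

def evaluate(x):
    # Hypothetical stand-ins for the two competing objectives; neither
    # function comes from the paper.
    isolation_disp = x[0] ** 2
    superstructure = (1.0 - x[0]) ** 2 + (1.0 - x[1]) ** 2
    return isolation_disp, superstructure

def fitness(x, w=0.5):
    d, s = evaluate(x)
    return w * d + (1.0 - w) * s  # weighted-sum scalarization

def ga(pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p, q = rng.sample(elite, 2)
            child = [(a + b) / 2 + rng.gauss(0.0, 0.05)  # blend + mutate
                     for a, b in zip(p, q)]
            children.append([min(1.0, max(0.0, c)) for c in child])
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

A true multi-objective GA (as in the paper) would instead keep a Pareto front rather than collapsing the objectives with a fixed weight; the scalarized version above is only the simplest variant.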
A CPS Based Optimal Operational Control System for Fused Magnesium Furnace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Tian-you; Wu, Zhi-wei; Wang, Hong
Fused magnesia smelting in a fused magnesium furnace (FMF) is an energy-intensive, high-temperature process with comprehensive complexities. Its operational index, energy consumption per ton (ECPT), is defined as the electrical energy consumed per ton of acceptable-quality product and is difficult to measure online. Moreover, the dynamics of ECPT cannot be precisely modeled mathematically. The model parameters of the three-phase electrode currents, such as the molten pool level, its variation rate, and the resistance, are uncertain, nonlinear functions of changes in both the smelting process and the raw-material composition. In this paper, the proposed integrated optimal operational control algorithm is composed of a current set-point control, a current switching control, and a self-optimized tuning mechanism. The tight conjoining of and coordination between the computational resources, including the integrated optimal operational control, embedded software, industrial cloud, and wireless communication, and the physical resources of the FMF constitutes a cyber-physical system (CPS) based embedded optimal operational control system. This system has been successfully applied to a production line with ten fused magnesium furnaces in a factory in China, leading to a significantly reduced ECPT.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
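The agent-settling idea above can be sketched with assumed toy dynamics: a plant with output y = x1 + x2 and cost x1² + 2x2². Each agent descends the cost while holding its output (by projecting the cost gradient onto the null space of the output gradient), new agents are spawned along the output-gradient direction, and the settled points are interpolated into an inverse function. Piecewise-linear interpolation stands in for the paper's spline.

```python
def output(x):
    return x[0] + x[1]            # toy plant output

def cost(x):
    return x[0]**2 + 2*x[1]**2    # toy cost to minimize

def cost_grad(x):
    return [2*x[0], 4*x[1]]

OUT_GRAD = [1.0, 1.0]             # output gradient (constant for this plant)

def settle(x, steps=2000, lr=0.01):
    # Descend the cost while holding the current output: project the cost
    # gradient onto the null space of the output gradient.
    for _ in range(steps):
        g = cost_grad(x)
        scale = (sum(gi * oi for gi, oi in zip(g, OUT_GRAD))
                 / sum(oi * oi for oi in OUT_GRAD))
        x = [xi - lr * (gi - scale * oi)
             for xi, gi, oi in zip(x, g, OUT_GRAD)]
    return x

# Generate agents: settle, then step along the output gradient and repeat.
table, x = [], [0.0, 0.0]
for _ in range(6):
    x = settle(x)
    table.append((output(x), list(x)))
    x = [xi + 0.5 * oi for xi, oi in zip(x, OUT_GRAD)]

def inverse(y_des):
    # Piecewise-linear interpolation (standing in for the paper's spline)
    # from a desired output to a locally optimal input.
    pts = sorted(table)
    for (y0, x0), (y1, x1) in zip(pts, pts[1:]):
        if y0 <= y_des <= y1:
            t = (y_des - y0) / (y1 - y0)
            return [a + t * (b - a) for a, b in zip(x0, x1)]
    return pts[-1][1]
```

For this quadratic cost the optimal input at output y is [2y/3, y/3], which the settled agents recover; the operator would then dial `y_des` directly instead of managing both inputs.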
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission over multiple transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
Optical systems integrated modeling
NASA Technical Reports Server (NTRS)
Shannon, Robert R.; Laskin, Robert A.; Brewer, SI; Burrows, Chris; Epps, Harlan; Illingworth, Garth; Korsch, Dietrich; Levine, B. Martin; Mahajan, Vini; Rimmer, Chuck
1992-01-01
An integrated modeling capability that provides the tools by which entire optical systems and instruments can be simulated and optimized is a key technology development, applicable to all mission classes, especially astrophysics. Many of the future missions require optical systems that are physically much larger than anything flown before and yet must retain the characteristic sub-micron diffraction limited wavefront accuracy of their smaller precursors. It is no longer feasible to follow the path of 'cut and test' development; the sheer scale of these systems precludes many of the older techniques that rely upon ground evaluation of full size engineering units. The ability to accurately model (by computer) and optimize the entire flight system's integrated structural, thermal, and dynamic characteristics is essential. Two distinct integrated modeling capabilities are required. These are an initial design capability and a detailed design and optimization system. The content of an initial design package is shown. It would be a modular, workstation based code which allows preliminary integrated system analysis and trade studies to be carried out quickly by a single engineer or a small design team. A simple concept for a detailed design and optimization system is shown. This is a linkage of interface architecture that allows efficient interchange of information between existing large specialized optical, control, thermal, and structural design codes. The computing environment would be a network of large mainframe machines and its users would be project level design teams. More advanced concepts for detailed design systems would support interaction between modules and automated optimization of the entire system. Technology assessment and development plans for integrated package for initial design, interface development for detailed optimization, validation, and modeling research are presented.
MATLAB/Simulink Pulse-Echo Ultrasound System Simulator Based on Experimentally Validated Models.
Kim, Taehoon; Shin, Sangmin; Lee, Hyongmin; Lee, Hyunsook; Kim, Heewon; Shin, Eunhee; Kim, Suhwan
2016-02-01
A flexible clinical ultrasound system must operate with different transducers, which have characteristic impulse responses and widely varying impedances. The impulse response determines the shape of the high-voltage pulse that is transmitted and the specifications of the front-end electronics that receive the echo; the impedance determines the specification of the matching network through which the transducer is connected. System-level optimization of these subsystems requires accurate modeling of pulse-echo (two-way) response, which in turn demands a unified simulation of the ultrasonics and electronics. In this paper, this is realized by combining MATLAB/Simulink models of the high-voltage transmitter, the transmission interface, the acoustic subsystem which includes wave propagation and reflection, the receiving interface, and the front-end receiver. To demonstrate the effectiveness of our simulator, the models are experimentally validated by comparing the simulation results with the measured data from a commercial ultrasound system. This simulator could be used to quickly provide system-level feedback for an optimized tuning of electronic design parameters.
Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.
Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M
2018-09-01
The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) were selected and optimized. Cubic models described the cooking loss, color, and TPA parameters best, with the exception of springiness. Emulsions with greater levels of βG and starch had less cooking loss (<1%), intermediate L* (>54 and <62), and greater hardness, cohesiveness, and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could cover the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater (60,224 ± 1025 N) than expected. The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimization techniques using MODFLOW-GWM
Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.
2015-01-01
An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
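The core formulation behind codes like MODFLOW-GWM can be illustrated with a tiny response-matrix linear program: maximize total withdrawal subject to drawdown and capacity constraints. The two-well response matrix and limits below are invented for illustration, and brute-force vertex enumeration stands in for GWM's actual solvers.

```python
import itertools

# Hypothetical 2-well response matrix: drawdown at two constraint sites
# per unit pumping rate (numbers invented for illustration).
R = [[0.8, 0.3],
     [0.2, 0.9]]
d_max = [5.0, 6.0]   # allowable drawdowns at the constraint sites
q_max = [8.0, 8.0]   # well capacities

def feasible(q):
    if any(qi < -1e-9 or qi > qm + 1e-9 for qi, qm in zip(q, q_max)):
        return False
    return all(sum(r * qi for r, qi in zip(row, q)) <= dm + 1e-9
               for row, dm in zip(R, d_max))

# An LP optimum lies at a vertex: intersect every pair of constraint
# lines (a1*q1 + a2*q2 = b) and keep the best feasible point.
lines = [(row[0], row[1], dm) for row, dm in zip(R, d_max)]
lines += [(1, 0, 0), (0, 1, 0), (1, 0, q_max[0]), (0, 1, q_max[1])]

best, best_q = -1.0, None
for (a1, a2, b1), (c1, c2, b2) in itertools.combinations(lines, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel constraints, no vertex
    q = [(b1 * c2 - b2 * a2) / det, (a1 * b2 - c1 * b1) / det]
    if feasible(q) and sum(q) > best:
        best, best_q = sum(q), q
```

The abstract's point about non-uniqueness shows up even here: changing which constraints are active (e.g. tightening one drawdown limit) moves the optimum to a different vertex entirely.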
NASA Astrophysics Data System (ADS)
Premasiri, Amaranath; Happawana, Gemunu; Rosen, Arye
2007-02-01
Photodynamic therapy (PDT) is an approved treatment modality for Barrett's esophagus and invasive esophageal carcinoma. The proper combination of a photosensitizing agent, oxygen, and a specific wavelength of light to activate the agent is necessary for the cytotoxic destruction of cancerous cells by PDT. Expensive solid-state lasers are currently used as the light source for treatment. Inexpensive semiconductor lasers have been suggested for the light delivery system; however, packaging semiconductor lasers for optimal optical power output is challenging. In this paper, we present multidirectional direct water cooling of semiconductor lasers, which provides better efficiency than conventional unidirectional cooling. AlGaAsP lasers were tested under de-ionized (DI) water, and it is shown that their optical power output under DI water is much higher than with unidirectional cooling. We also discuss how direct DI water cooling can optimize the power output of semiconductor lasers. Thereafter, an optimal design of the semiconductor laser package with the DI water-cooling system is shown. Further, a microwave antenna is designed to be imprinted onto a balloon catheter in order to provide local heating of the esophagus, increasing local oxygenation of the tumor so as to generate an effective level of singlet oxygen for cellular death. Finally, the optimal level of light energy required to achieve the expected level of singlet oxygen is modeled to design an efficient PDT protocol.
ERIC Educational Resources Information Center
Rosmann, Michael R.
A family therapy model, based on a conceptualization of the family as a behavioral system whose members interact adaptively so that an optimal level of functioning is maintained within the system, is described. The divergent roots of this conceptualization are discussed briefly, as are the treatment approaches based on it. The author's model,…
USDA-ARS?s Scientific Manuscript database
Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals. PMID:25003136
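For reference, the classical MVDR weight vector that DM-AIS enhances is w = R⁻¹a / (aᴴR⁻¹a), where R is the array covariance matrix and a the steering vector toward the desired signal. A self-contained sketch for a 4-element half-wavelength linear array with one strong interferer follows; the array geometry, interferer angle, and power levels are assumptions for illustration, not the paper's scenario.

```python
import cmath
import math

def steering(theta_deg, n=4):
    # Half-wavelength uniform linear array (geometry assumed).
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * math.pi * k * s) for k in range(n)]

def solve(A, b):
    # Gaussian elimination with partial pivoting, complex-valued.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

N = 4
a_sig = steering(0.0, N)    # look direction
a_int = steering(40.0, N)   # interferer direction (assumed)
# Covariance: unit noise plus a strong (10x power) interferer.
R = [[10.0 * a_int[i] * a_int[j].conjugate() + (1.0 if i == j else 0.0)
      for j in range(N)] for i in range(N)]
w_raw = solve(R, a_sig)                                   # R^{-1} a
norm = sum(ai.conjugate() * wi for ai, wi in zip(a_sig, w_raw))
w = [wi / norm for wi in w_raw]                           # MVDR weights

def gain(w, theta_deg):
    a = steering(theta_deg, N)
    return abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a)))
```

The distortionless constraint gives unit gain at 0° while the interferer at 40° is strongly attenuated; the abstract's complaint is that this null depth is still "unsatisfactory", which is what the DM-AIS weight search aims to deepen.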
A Hybrid Interval-Robust Optimization Model for Water Quality Management.
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-05-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.
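The interval-linear-programming half of such a hybrid can be seen in a one-variable textbook example: an optimistic submodel pairs the favorable bounds of each interval coefficient and a pessimistic submodel the unfavorable ones, bracketing the objective. The coefficients below are invented, and this two-step scheme illustrates plain interval LP rather than the full HIRO coupling.

```python
def max_single(c, a, b):
    # maximize c*x subject to a*x <= b and x >= 0, with a, b > 0
    return c * b / a

# Interval coefficients (invented): a is e.g. COD discharge per unit of
# industrial activity, b the allowable COD load, c the net benefit.
a_lo, a_hi = 1.0, 2.0
b_lo, b_hi = 4.0, 6.0
c = 3.0

f_best = max_single(c, a_lo, b_hi)    # optimistic submodel
f_worst = max_single(c, a_hi, b_lo)   # pessimistic submodel
# The interval [f_worst, f_best] brackets the achievable system benefit,
# which is the "spectrum of potential inexact solutions" handed to planners.
```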
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all the parameters of the model with lower order to be identified and by definition, provides the model with the required steady-state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.
NASA Technical Reports Server (NTRS)
Hanks, J. H.; Dhople, A. M.
1975-01-01
Stability and optimal concentrations of reagents were studied in bioluminescence assay of ATP levels. Luciferase enzyme was prepared and purified using Sephadex G-100. Interdependencies between enzyme and luciferin concentrations in presence of optimal Mg are illustrated. Optimal ionic strength was confirmed to be 0.05 M for the four buffers tested. Adapted features of the R- and H-systems are summarized, as well as the percentages of ATP pools released from representative microbes by heat and chloroform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, P.F.
1986-03-01
The Monsanto Chocolate Bayou plant has had an aggressive and successful energy conservation program. The combined efforts have resulted in an 80% reduction in unit energy consumption compared to 1972. An approach of using system audits to optimize fluid systems was developed. Since most of the fluid movers are centrifugal, the name Centrifugal Savings Task Force was adopted. Three tools are particularly valuable in optimizing fluid systems. First, a working-level understanding of the Affinity Laws is a must. In addition, the performance curves for the fluid movers are needed. The last need is accurate system field data. Systems effectively managed at the Chocolate Bayou plant were process air improvement, feed-water pressure reduction, combustion air blower turbine speed control, and cooling tower pressure reduction. Optimization of centrifugal systems is an often-overlooked opportunity for energy savings. The basic guidelines are to move only the fluid needed, and to move it at as low a pressure as possible.
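The Affinity Laws mentioned above are the linear/square/cubic scalings of flow, head, and power with shaft speed, which is why modest speed reductions yield large power savings. A sketch, with hypothetical blower figures:

```python
def affinity(flow, head, power, speed_ratio):
    # Pump/fan Affinity Laws: flow ~ N, head ~ N^2, power ~ N^3
    return (flow * speed_ratio,
            head * speed_ratio ** 2,
            power * speed_ratio ** 3)

# Hypothetical blower: 1000 m3/h, 50 m head, 20 kW, slowed to 80% speed.
q, h, p = affinity(1000.0, 50.0, 20.0, 0.8)
```

Slowing the machine to 80% speed cuts the power draw to roughly half (0.8³ ≈ 0.51), which is exactly the lever a speed-control retrofit exploits.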
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.
Kazuta, Yasuaki; Matsuura, Tomoaki; Ichihashi, Norikazu; Yomo, Tetsuya
2014-11-01
In this study, the amount of protein synthesized using an in vitro protein synthesis system composed of only highly purified components (the PURE system) was optimized. By varying the concentrations of each system component, we determined the component concentrations that result in the synthesis of 0.38 mg/mL green fluorescent protein (GFP) in batch mode and 3.8 mg/mL GFP in dialysis mode. In dialysis mode, protein concentrations of 4.3 and 4.4 mg/mL were synthesized for dihydrofolate reductase and β-galactosidase, respectively. Using the optimized system, the synthesized protein represented 30% (w/w) of the total protein, which is comparable to the level of overexpressed protein in Escherichia coli cells. This optimized reconstituted in vitro protein synthesis system may potentially be useful for various applications, including in vitro directed evolution of proteins, artificial cell assembly, and protein structural studies. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
1988-08-01
…be possible for management at all levels to review the S&T Program in order to optimize the investment in two ways: (1) over time and (2) by priority… easier at the higher management levels. a. Optimizing Over Time. One concern in investing in new technology is to balance the near-term and far-term… managed and integrated into systems that meet the perceived threat on a timely basis. To this end, the Core Group formed working groups to find…
Optimal Wind Power Uncertainty Intervals for Electricity Market Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ying; Zhou, Zhi; Botterud, Audun
It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize energy prices in electricity market operation.
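The trade-off being optimized can be mimicked with a toy calculation: choose the coverage level of a central forecast-error interval that minimizes reserve cost plus an expected imbalance penalty. The Gaussian error sample and cost coefficients are invented, and the paper's MILP co-optimizes the interval with unit commitment rather than solving this in isolation.

```python
import random

random.seed(0)
# Hypothetical forecast errors (p.u. of capacity); a real model would use
# the empirical error distribution of the wind forecast.
errors = sorted(random.gauss(0.0, 1.0) for _ in range(2000))

def quantile(xs, q):
    i = min(len(xs) - 1, max(0, int(q * (len(xs) - 1))))
    return xs[i]

C_RESERVE, C_SHORT = 1.0, 10.0   # reserve cost vs. imbalance penalty

def expected_cost(alpha):
    # Central interval covering a fraction `alpha` of the errors.
    lo = quantile(errors, (1 - alpha) / 2)
    hi = quantile(errors, (1 + alpha) / 2)
    reserve = C_RESERVE * (hi - lo)            # cost of hedging the interval
    miss = sum(max(0.0, e - hi) + max(0.0, lo - e)
               for e in errors) / len(errors)  # expected uncovered error
    return reserve + C_SHORT * miss

best_alpha = min((a / 100 for a in range(50, 100)), key=expected_cost)
```

A fixed 95% interval is optimal only for one particular penalty ratio; letting `best_alpha` float with the costs is the gap the paper addresses.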
Performance of discrete heat engines and heat pumps in finite time
Feldmann; Kosloff
2000-05-01
The performance in finite time of a discrete heat engine with internal friction is analyzed. The working fluid of the engine is composed of an ensemble of noninteracting two-level systems. External work is applied by changing the external field and thus the internal energy levels. The friction induces a minimal cycle time. The power output of the engine is optimized with respect to time allocation between the contact time with the hot and cold baths as well as the adiabats. The engine's performance is also optimized with respect to the external fields. By reversing the cycle of operation a heat pump is constructed. The performance of the engine as a heat pump is also optimized. By varying the time allocation between the adiabats and the contact time with the reservoir a universal behavior can be identified. The optimal performance of the engine when the cold bath is approaching absolute zero is studied. It is found that the optimal cooling rate converges linearly to zero when the temperature approaches absolute zero.
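A toy version of the time-allocation optimization can be set up as follows: exponential equilibration against each bath, plus a friction loss inversely proportional to the adiabat time so that faster adiabats cost more work. The functional forms and constants are assumptions for illustration, not the paper's quantum model.

```python
import math

def power(t_h, t_c, t_a, k=0.05):
    # Toy finite-time cycle (forms and constants assumed):
    q_hot = 1.0 * (1.0 - math.exp(-t_h))   # heat absorbed from hot bath
    q_cold = 0.5 * (1.0 - math.exp(-t_c))  # heat rejected to cold bath
    work = q_hot - q_cold - k / t_a        # friction ~ 1/(adiabat time)
    return work / (t_h + t_c + 2.0 * t_a)  # power = work / cycle time

# Brute-force the time allocation on a coarse grid.
best = max(((th / 10.0, tc / 10.0, ta / 10.0)
            for th in range(1, 40)
            for tc in range(1, 40)
            for ta in range(1, 40)),
           key=lambda t: power(*t))
```

Even in this crude model the friction term forbids arbitrarily fast adiabats, reproducing the qualitative feature that friction induces a minimal cycle time.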
A comparison of automated dispensing cabinet optimization methods.
O'Neil, Daniel P; Miller, Adam; Cronin, Daniel; Hatfield, Chad J
2016-07-01
Results of a study comparing two methods of optimizing automated dispensing cabinets (ADCs) are reported. Eight nonprofiled ADCs were optimized over six months. Optimization of each cabinet involved three steps: (1) removal of medications that had not been dispensed for at least 180 days, (2) movement of ADC stock to better suit end-user needs and available space, and (3) adjustment of par levels (desired on-hand inventory levels). The par levels of four ADCs (the Day Supply group) were adjusted according to average daily usage; the par levels of the other four ADCs (the Formula group) were adjusted using a standard inventory formula. The primary outcome was the vend:fill ratio, while secondary outcomes included total inventory, inventory cost, quantity of expired medications, and ADC stockout percentage. The total number of medications stocked in the eight machines was reduced from 1,273 in a designated two-month preoptimization period to 1,182 in a designated two-month postoptimization period, yielding a carrying cost savings of $44,981. The mean vend:fill ratios before and after optimization were 4.43 and 4.46, respectively. The vend:fill ratio for ADCs in the Formula group increased from 4.33 before optimization to 5.2 after optimization; in the Day Supply group, the ratio declined (from 4.52 to 3.90). The postoptimization interaction difference between the Formula and Day Supply groups was found to be significant (p = 0.0477). ADC optimization via a standard inventory formula had a positive impact on inventory costs, refills, vend:fill ratios, and stockout percentages. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
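The two par-level policies compared in the study can be sketched side by side. The article does not state the exact "standard inventory formula"; the reorder-point expression below (lead-time demand plus z-score safety stock) is a common textbook choice used here as an assumption, and the usage history is hypothetical.

```python
import math
import statistics

def par_day_supply(daily_usage, days=3):
    # Day Supply method: stock a fixed number of days of average usage.
    return math.ceil(statistics.mean(daily_usage) * days)

def par_formula(daily_usage, lead_time_days=1, z=1.65):
    # Assumed reorder-point formula (not necessarily the article's):
    # lead-time demand plus z-score safety stock.
    mu = statistics.mean(daily_usage)
    sd = statistics.stdev(daily_usage)
    return math.ceil(mu * lead_time_days + z * sd * math.sqrt(lead_time_days))

usage = [12, 9, 15, 11, 8, 14, 10]  # hypothetical vends/day for one drug
```

On this hypothetical history the formula-based par (16) is far leaner than a three-day supply (34), which mirrors the direction of the inventory reduction the study reports for the Formula group.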
Analysis and Optimization of Building Energy Consumption
NASA Astrophysics Data System (ADS)
Chuah, Jun Wei
Energy is one of the most important resources required by modern human society. In 2010, energy expenditures represented 10% of global gross domestic product (GDP). By 2035, global energy consumption is expected to increase by more than 50% from current levels. The increased pace of global energy consumption leads to significant environmental and socioeconomic issues: (i) carbon emissions, from the burning of fossil fuels for energy, contribute to global warming, and (ii) increased energy expenditures lead to reduced standard of living. Efficient use of energy, through energy conservation measures, is an important step toward mitigating these effects. Residential and commercial buildings represent a prime target for energy conservation, comprising 21% of global energy consumption and 40% of the total energy consumption in the United States. This thesis describes techniques for the analysis and optimization of building energy consumption. The thesis focuses on building retrofits and building energy simulation as key areas in building energy optimization and analysis. The thesis first discusses and evaluates building-level renewable energy generation as a solution toward building energy optimization. The thesis next describes a novel heating system, called localized heating. Under localized heating, building occupants are heated individually by directed radiant heaters, resulting in a considerably reduced heated space and significant heating energy savings. To support localized heating, a minimally-intrusive indoor occupant positioning system is described. The thesis then discusses occupant-level sensing (OLS) as the next frontier in building energy optimization. OLS captures the exact environmental conditions faced by each building occupant, using sensors that are carried by all building occupants. The information provided by OLS enables fine-grained optimization for unprecedented levels of energy efficiency and occupant comfort. 
The thesis also describes a retrofit-oriented building energy simulator, ROBESim, that natively supports building retrofits. ROBESim extends existing building energy simulators by providing a platform for the analysis of novel retrofits, in addition to simulating existing retrofits. Using ROBESim, retrofits can be automatically applied to buildings, obviating the need for users to manually update building characteristics for comparisons between different building retrofits. ROBESim also includes several ease-of-use enhancements to support users of all experience levels.
Economopoulou, M A; Economopoulou, A A; Economopoulos, A P
2013-11-01
The paper describes a software system capable of formulating alternative optimal Municipal Solid Waste (MSW) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and the annual operating cost of all transportation, treatment, and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements, and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study-area, municipality, and installation levels, constituting in effect a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan.
The formulated plan was able to: (a) serve 113 Municipalities and Communities that generate nearly 2 million t/y of commingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in-use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while reducing at the same time the present levels of traffic, fuel consumption, and air emissions in the congested Athens basin. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Eimori, Takahisa; Anami, Kenji; Yoshimatsu, Norifumi; Hasebe, Tetsuya; Murakami, Kazuaki
2014-01-01
A comprehensive design optimization methodology using intuitive nondimensional parameters of inversion-level and saturation-level is proposed, especially for ultralow-power, low-voltage, and high-performance analog circuits with mixed strong, moderate, and weak inversion metal-oxide-semiconductor transistor (MOST) operations. This methodology is based on the synthesized charge-based MOST model composed of Enz-Krummenacher-Vittoz (EKV) basic concepts and advanced-compact-model (ACM) physics-based equations. The key concept of this methodology is that all circuit and system characteristics are described as some multivariate functions of inversion-level parameters, where the inversion level is used as an independent variable representative of each MOST. The analog circuit design starts from the first step of inversion-level design using universal characteristics expressed by circuit currents and inversion-level parameters without process-dependent parameters, followed by the second step of foundry-process-dependent design and the last step of verification using saturation-level criteria. This methodology also paves the way to an intuitive and comprehensive design approach for many kinds of analog circuit specifications by optimization using inversion-level log-scale diagrams and saturation-level criteria. In this paper, we introduce an example of our design methodology for a two-stage Miller amplifier.
Model-Based Design of Tree WSNs for Decentralized Detection.
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-08-20
The classical decentralized detection problem of finding the optimal decision rules at the sensors and fusion center, as well as variants that introduce physical channel impairments, has been studied extensively in the literature. The deployment of wireless sensor networks (WSNs) in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model of the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches.
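The fusion step of classical decentralized detection can be illustrated with a minimal sketch. This is not the authors' tree model or co-design algorithm; it assumes a simple parallel topology in which each sensor's binary decision crosses a binary symmetric channel and the fusion center applies a majority rule, and it enumerates the exact fusion error probability.

```python
from itertools import product

def majority_fusion_error(n_sensors, p_sensor_err, p_channel_err):
    """Probability that a majority-rule fusion center decides wrongly.

    Each sensor errs independently with p_sensor_err; the channel flips
    each transmitted bit independently with p_channel_err (a simple
    physical channel impairment). Exhaustive enumeration, toy scale only.
    """
    # A received bit is wrong iff exactly one of (sensor error, channel flip) occurs.
    p_bit_err = p_sensor_err * (1 - p_channel_err) + (1 - p_sensor_err) * p_channel_err
    err = 0.0
    for bits in product([0, 1], repeat=n_sensors):  # 1 = received bit is wrong
        prob = 1.0
        for b in bits:
            prob *= p_bit_err if b else (1 - p_bit_err)
        if sum(bits) > n_sensors / 2:  # majority of received bits are wrong
            err += prob
    return err
```

With a perfect channel and three sensors erring at 10%, the fused decision errs at only 2.8%, which is the usual motivation for fusing multiple cheap sensors.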
NASA Astrophysics Data System (ADS)
Mohammed Anzar, Sharafudeen Thaha; Sathidevi, Puthumangalathu Savithri
2014-12-01
In this paper, we have considered the utility of multi-normalization and ancillary measures for the optimal score-level fusion of fingerprint and voice biometrics. An efficient matching-score preprocessing technique based on multi-normalization is employed to improve the performance of the multimodal system under various noise conditions. Ancillary measures derived from the feature space and the score space are used in addition to the matching score vectors to weight the modalities based on their relative degradation. Reliability (dispersion) and separability (inter-/intra-class distance and d-prime statistics) measures under various noise conditions are estimated from the individual modalities during the training/validation stage. The `best integration weights' are then computed by algebraically combining these measures using the weighted sum rule. The computed integration weights are then optimized against the recognition accuracy using techniques such as grid search, genetic algorithms and particle swarm optimization. The experimental results show that the proposed biometric solution leads to considerable improvement in the recognition performance even under low signal-to-noise ratio (SNR) conditions and reduces the false acceptance rate (FAR) and false rejection rate (FRR), making the system useful for security as well as forensic applications.
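The weighted-sum fusion rule can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function names are invented, and the weights here come from a single separability measure (e.g. d-prime) per modality rather than the full algebraic combination of ancillary measures described in the abstract.

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matching score into [0, 1] (one common normalization
    step within a multi-normalization scheme)."""
    return (score - lo) / (hi - lo)

def fuse_scores(score_fp, score_voice, sep_fp, sep_voice):
    """Weighted-sum fusion of two normalized matching scores.

    Weights are proportional to each modality's separability measure,
    so the less-degraded modality dominates the fused score.
    """
    w_fp = sep_fp / (sep_fp + sep_voice)
    w_voice = 1.0 - w_fp
    return w_fp * score_fp + w_voice * score_voice
```

In the paper, the weights themselves are then further tuned against recognition accuracy (grid search, genetic algorithm, or particle swarm optimization).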
Studies of mineralization in tissue culture: optimal conditions for cartilage calcification
NASA Technical Reports Server (NTRS)
Boskey, A. L.; Stiner, D.; Doty, S. B.; Binderman, I.; Leboy, P.
1992-01-01
The optimal conditions for obtaining a calcified cartilage matrix approximating that which exists in situ were established in a differentiating chick limb bud mesenchymal cell culture system. Using cells from stage 21-24 embryos in a micro-mass culture, at an optimal density of 0.5 million cells/20 microliters spot, the deposition of small crystals of hydroxyapatite on a collagenous matrix and matrix vesicles was detected by day 21 using X-ray diffraction, FT-IR microscopy, and electron microscopy. Optimal media, containing 1.1 mM Ca, 4 mM P, 25 micrograms/ml vitamin C, 0.3 mg/ml glutamine, no Hepes buffer, and 10% fetal bovine serum, produced matrix resembling the calcifying cartilage matrix of fetal chick long bones. Interestingly, higher concentrations of fetal bovine serum had an inhibitory effect on calcification. The cartilage phenotype was confirmed based on the cellular expression of cartilage collagen and proteoglycan mRNAs, the presence of type II and type X collagen, and cartilage type proteoglycan at the light microscopic level, and the presence of chondrocytes and matrix vesicles at the EM level. The system is proposed as a model for evaluating the events in cell mediated cartilage calcification.
Optimization of power systems with voltage security constraints
NASA Astrophysics Data System (ADS)
Rosehart, William Daniel
As open access market principles are applied to power systems, significant changes in their operation and control are occurring. In the new marketplace, power systems are operating under higher loading conditions as market influences demand greater attention to operating cost versus stability margins. Since stability continues to be a basic requirement in the operation of any power system, new tools are being considered to analyze the effect of stability on the operating cost of the system, so that system stability can be incorporated into the costs of operating the system. In this thesis, new optimal power flow (OPF) formulations are proposed based on multi-objective methodologies to optimize active and reactive power dispatch while maximizing voltage security in power systems. The effects of minimizing operating costs, minimizing reactive power generation and/or maximizing voltage stability margins are analyzed. Results obtained using the proposed Voltage Stability Constrained OPF formulations are compared and analyzed to suggest possible ways of costing voltage security in power systems. When considering voltage stability margins, the importance of system modeling becomes critical, since it has been demonstrated, based on bifurcation analysis, that modeling can have a significant effect on the behavior of power systems, especially at high loading levels. Therefore, this thesis also examines the effects of detailed generator models and several exponential load models. Furthermore, because of its influence on voltage stability, a Static Var Compensator model is also incorporated into the optimization problems.
Stochastic resonance in attention control
NASA Astrophysics Data System (ADS)
Kitajo, K.; Yamanaka, K.; Ward, L. M.; Yamamoto, Y.
2006-12-01
We investigated the beneficial role of noise in a human higher brain function, namely visual attention control. We asked subjects to detect a weak gray-level target inside a marker box either in the left or the right visual field. Signal detection performance was optimized by presenting a low level of randomly flickering gray-level noise between and outside the two possible target locations. Further, we found that an increase in eye movement (saccade) rate helped to compensate for the usual deterioration in detection performance at higher noise levels. To our knowledge, this is the first experimental evidence that noise can optimize a higher brain function which involves distinct brain regions above the level of primary sensory systems -- switching behavior between multi-stable attention states -- via the mechanism of stochastic resonance.
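The stochastic-resonance effect described above can be reproduced with a toy threshold detector. This is an illustrative sketch, not the authors' psychophysical model: a subthreshold signal plus Gaussian noise is compared to a fixed threshold, and detection performance (hit rate minus false-alarm rate) peaks at an intermediate noise level.

```python
import math

def q(x):
    """Gaussian tail probability P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def detection_gain(signal, threshold, sigma):
    """Hit rate minus false-alarm rate for a simple threshold detector.

    A subthreshold signal (signal < threshold) is never detected without
    noise; noise of the right amplitude lifts the signal over threshold
    more often than it lifts the noise-only input, which is the
    stochastic-resonance mechanism.
    """
    hit = q((threshold - signal) / sigma)  # P(signal + noise > threshold)
    fa = q(threshold / sigma)              # P(noise alone > threshold)
    return hit - fa

# Sweep noise levels: performance peaks at an intermediate sigma.
sweep = {s: detection_gain(0.8, 1.0, s) for s in (0.05, 0.4, 5.0)}
```

Too little noise leaves the signal subthreshold; too much noise swamps it; the optimum lies in between, mirroring the inverted-U curve characteristic of stochastic resonance.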
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Dezhi; Zhan, Qingwen; Chen, Yuche
2016-03-14
This study proposes an optimization model that simultaneously incorporates the selection of logistics infrastructure investments and subsidies for green transport modes to achieve specific CO2 emission targets in a regional logistics network. The proposed model is formulated as a bi-level formulation, in which the upper level determines the optimal selection of logistics infrastructure investments and subsidies for green transport modes such that the benefit-cost ratio of the entire logistics system is maximized. The lower level describes the selected service routes of logistics users. A genetic and Frank-Wolfe hybrid algorithm is introduced to solve the proposed model. The proposed model is applied to the regional logistics network of Changsha City, China. Findings show that using the joint scheme of the selection of logistics infrastructure investments and green subsidies is more effective than using them solely. In conclusion, carbon emission reduction targets can significantly affect logistics infrastructure investments and subsidy levels.
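The bi-level structure can be illustrated at toy scale. This sketch is an assumption-laden simplification, not the paper's genetic/Frank-Wolfe model: the upper level enumerates candidate subsidy levels, the lower level lets each logistics user pick the cheaper mode (the subsidy lowers the green mode's user-facing price), and plans violating the CO2 cap are discarded.

```python
def lower_level_mode_choice(road_cost, rail_cost, subsidy):
    """Each user picks the cheaper mode; the subsidy reduces the user
    cost of the green (rail) mode."""
    return 'rail' if rail_cost - subsidy <= road_cost else 'road'

def best_subsidy(shippers, candidates, co2_cap, road_em, rail_em):
    """Upper level: enumerate candidate per-tonne subsidy levels; for
    each, route users via the lower level, keep plans meeting the CO2
    cap, and return (subsidy, system cost) with the lowest cost.

    shippers: list of (tonnes, road_cost_per_t, rail_cost_per_t).
    """
    best = None
    for s in candidates:
        cost = emissions = 0.0
        for t, road_c, rail_c in shippers:
            if lower_level_mode_choice(road_c, rail_c, s) == 'rail':
                cost += t * rail_c      # true resource cost (subsidy is a transfer)
                emissions += t * rail_em
            else:
                cost += t * road_c
                emissions += t * road_em
        if emissions <= co2_cap and (best is None or cost < best[1]):
            best = (s, cost)
    return best
```

The toy case below shows the paper's qualitative finding: tightening the emission cap forces a positive subsidy, and the smallest subsidy that achieves the cap is preferred.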
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly-growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level and qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi
2011-12-01
Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks, aiming to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, the air pollution factor was most important, and, compared with a single objective factor, the weighted analysis of multi-objective factors could provide an optimized spatial location selection for new urban green spaces. The combination of GIS technology with the LA model would be a new approach for the spatial optimization of urban green spaces.
NASA Astrophysics Data System (ADS)
Cao, Yang; Liu, Chun; Huang, Yuehui; Wang, Tieqiang; Sun, Chenjun; Yuan, Yue; Zhang, Xinsong; Wu, Shuyun
2017-02-01
With the development of roof photovoltaic power (PV) generation technology and the increasingly urgent need to improve supply reliability levels in remote areas, islanded microgrids with photovoltaic and energy storage systems (IMPE) are developing rapidly. The high costs of photovoltaic panel material and energy storage battery material have become the primary factors that hinder the development of IMPE. The advantages and disadvantages of different types of photovoltaic panel materials and energy storage battery materials are analyzed in this paper, and guidance is provided on material selection for IMPE planners. The time sequential simulation method is applied to optimize material demands of the IMPE. The model is solved by parallel algorithms that are provided by a commercial solver named CPLEX. Finally, to verify the model, an actual IMPE is selected as a case system. Simulation results on the case system indicate that the optimization model and the corresponding algorithm are feasible. Guidance for material selection and quantity demand for IMPEs in remote areas is provided by this method.
Khanna, Sankalp; Boyle, Justin; Good, Norm; Lind, James
2012-10-01
To investigate the effect of hospital occupancy levels on inpatient and ED patient flow parameters, and to simulate the impact of shifting discharge timing on occupancy levels. Retrospective analysis of hospital inpatient data and ED data from 23 reporting public hospitals in Queensland, Australia, across 30 months. Relationships between outcome measures were explored through the aggregation of the historic data into 21 912 hourly intervals. Main outcome measures included admission and discharge rates, occupancy levels, length of stay for admitted and emergency patients, and the occurrence of access block. The impact of shifting discharge timing on occupancy levels was quantified using observed and simulated data. The study identified three stages of system performance decline, or choke points, as hospital occupancy increased. These choke points were found to be dependent on hospital size, and reflect a system change from 'business-as-usual' to 'crisis'. Effecting early discharge of patients was also found to significantly (P < 0.001) impact overcrowding levels and improve patient flow. Modern hospital systems have the ability to operate efficiently above an often-prescribed 85% occupancy level, with optimal levels varying across hospitals of different size. Operating over these optimal levels leads to performance deterioration defined around occupancy choke points. Understanding these choke points and designing strategies around alleviating these flow bottlenecks would improve capacity management, reduce access block and improve patient outcomes. Effecting early discharge also helps alleviate overcrowding and related stress on the system. © 2012 CSIRO. EMA © 2012 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
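The effect of shifting discharge timing on occupancy can be illustrated with a toy hourly simulation. This is not the study's model; the numbers below are invented, and the sketch only shows the mechanism the paper quantifies: discharging the same number of patients earlier in the day lowers the occupancy peak that morning admissions would otherwise stack on top of.

```python
def peak_occupancy(initial, admissions, discharges):
    """Track hourly ward occupancy over one day and return its peak.

    admissions/discharges: patient counts for each hour 0..23.
    """
    occ, peak = initial, initial
    for a, d in zip(admissions, discharges):
        occ += a - d
        peak = max(peak, occ)
    return peak

# Illustrative day: admissions ramp through the morning shift; the same
# 40 discharges occur either at 14:00 (late) or shifted to 10:00 (early).
admissions = [0] * 8 + [5] * 8 + [0] * 8   # 40 admissions, 08:00-16:00
late = [0] * 14 + [40] + [0] * 9           # all discharges at 14:00
early = [0] * 10 + [40] + [0] * 13         # discharges shifted to 10:00
```

Starting the day at 100 occupied beds, the late-discharge pattern peaks at 130 while the early pattern peaks at 110, keeping the ward further from its occupancy choke points.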
Liu, Cunbao; Yang, Xu; Yao, Yufeng; Huang, Weiwei; Sun, Wenjia; Ma, Yanbing
2014-05-01
Two versions of an optimized gene encoding human papillomavirus type 16 major protein L1 were designed according to the codon usage frequency of Pichia pastoris. Y16 was highly expressed in both P. pastoris and Hansenula polymorpha. M16 expression was as efficient as that of Y16 in P. pastoris, but merely detectable in H. polymorpha, even though the transcription levels of M16 and Y16 were similar. H. polymorpha has a unique codon usage frequency that contains many more rare codons than Saccharomyces cerevisiae or P. pastoris. These findings indicate that even codon-optimized genes that are expressed well in S. cerevisiae and P. pastoris may be inefficiently expressed in H. polymorpha; thus, rare codons must be avoided when universal optimized gene versions are designed to facilitate expression in a variety of yeast expression systems, especially when H. polymorpha is involved.
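The codon-optimization principle here can be sketched as a simple substitution pass. The tables below are illustrative placeholders, not the actual P. pastoris or H. polymorpha codon usage tables: each codon is mapped to its amino acid and then replaced by the target host's preferred synonymous codon, so no host-rare codon survives in the optimized gene.

```python
# Illustrative (invented) tables: the target host's preferred synonymous
# codon per amino acid, and a codon -> amino-acid map for the input gene.
PREFERRED = {'K': 'AAG', 'L': 'TTG', 'S': 'TCT'}
AMINO_ACID = {'AAA': 'K', 'AAG': 'K', 'CTA': 'L', 'TTG': 'L', 'TCG': 'S', 'TCT': 'S'}

def codon_optimize(seq):
    """Replace every codon with the host's preferred synonym.

    The protein sequence is unchanged (synonymous substitutions only);
    only the codon usage is adapted to the expression system.
    """
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return ''.join(PREFERRED[AMINO_ACID[c]] for c in codons)
```

The paper's point is that the "preferred" table is host-specific: a gene optimized against the P. pastoris table may still contain codons that are rare in H. polymorpha.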
The use of an integrated variable fuzzy sets in water resources management
NASA Astrophysics Data System (ADS)
Qiu, Qingtai; Liu, Jia; Li, Chuanzhe; Yu, Xinzhe; Wang, Yang
2018-06-01
Based on the evaluation of the present situation of water resources and the development of water conservancy projects and the social economy, optimal allocation of regional water resources is an increasing need in water resources management, and it is also the most effective way to promote a harmonious relationship between humans and water. In view of the limitations of traditional evaluations, which typically use a single-index model for the optimal allocation of regional water resources, an integrated variable fuzzy sets model (IVFS), built on the theory of variable fuzzy sets (VFS) and system dynamics (SD), is proposed in this paper to address dynamically complex problems in regional water resources management. The model is applied to evaluate the level of the optimal allocation of regional water resources of Zoucheng in China. Results show that the levels of the water resources allocation schemes range from 2.5 to 3.5, generally trending toward lower levels. To achieve optimal regional management of water resources, the model uses the eigenvector of level H, which markedly improves the reliability of water resources management assessment.
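The graded evaluation value reported above (levels between 2.5 and 3.5) comes from a level characteristic of the membership-degree vector. A minimal sketch of that step, under the assumption of the standard variable-fuzzy-sets formula H = Σ h·u(h) / Σ u(h) over levels h = 1..m:

```python
def level_eigenvector(membership):
    """Graded level H of an alternative from its relative membership
    degrees u(h) to evaluation levels h = 1..m (weighted mean of levels).
    """
    total = sum(membership)
    return sum(h * u for h, u in enumerate(membership, start=1)) / total
```

An alternative split evenly between levels 2 and 3 evaluates to H = 2.5, i.e. the low end of the range the study reports; a non-integer H is exactly what distinguishes this graded assessment from a single-index classification.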
NASA Astrophysics Data System (ADS)
Pradana, G. W.; Fanida, E. H.; Niswah, F.
2018-01-01
The demand for good governance is directed towards the realization of efficient, effective, and clean government. This is pursued at the national and regional levels through the development and implementation of electronic government concepts. Through the development of electronic government, management systems and work processes in the government environment are restructured by optimizing the utilization of information technology. One real form of electronic government (e-Gov) implementation at the local level is the Intranet Sub-District program in Sukodono Sub-District, Sidoarjo. Intranet Sub-District is an innovation whose purpose is to make available information on the management, distribution, and storage of official documents, and to optimize the delivery of information and communication in the guidance and supervision of local administration. This paper is descriptive in type, with a qualitative approach, and focuses on the implementation of the Intranet Sub-District program in Sukodono Sub-District, Sidoarjo. The findings of the study are the limited number of human resources who have mastered ICT, an uneven network, the adequacy of institutional needs, the existence of budget support from the authorized institution, and an information system that has not yet accommodated all service needs.
Optimization of Compressor Mounting Bracket of a Passenger Car
NASA Astrophysics Data System (ADS)
Kalsi, Sachin; Singh, Daljeet; Saini, J. S.
2018-05-01
In the present work, CAE tools are used for the optimization of the compressor mounting bracket used in an automobile. Both static and dynamic analyses are performed for the bracket. With the objective of minimizing mass and increasing the stiffness of the bracket, the new design is optimized using shape and topology optimization techniques. The optimized design given by the CAE tool is then validated experimentally. The new design results in lower vibration levels and contributes to lower mass and cost, which benefits both the air conditioning system and the efficiency of the vehicle. The results given by the CAE tool correlated very well with the experimental results.
Implementation and optimization of automated dispensing cabinet technology.
McCarthy, Bryan C; Ferker, Michael
2016-10-01
A multifaceted automated dispensing cabinet (ADC) optimization initiative at a large hospital is described. The ADC optimization project, which was launched approximately six weeks after activation of ADCs in 30 patient care unit medication rooms of a newly established adult hospital, included (1) adjustment of par inventory levels (desired on-hand quantities of medications) and par reorder quantities to reduce the risk of ADC supply exhaustion and improve restocking efficiency, (2) expansion of ADC "common stock" (medications assigned to ADC inventories) to increase medication availability at the point of care, and (3) removal of some infrequently prescribed medications from ADCs to reduce the likelihood of product expiration. The purpose of the project was to address organizational concerns regarding widespread ADC medication stockouts, growing reliance on cart-fill medication delivery systems, and suboptimal medication order turnaround times. Leveraging of the ADC technology platform's reporting functionalities for enhanced inventory control yielded a number of benefits, including cost savings resulting from reduced pharmacy technician labor requirements (estimated at $2,728 annually), a substantial reduction in the overall weekly stockout percentage (from 3.2% before optimization to 0.5% eight months after optimization), an improvement in the average medication turnaround time, and estimated cost avoidance of $19,660 attributed to the reduced potential for product expiration. Efforts to optimize ADCs through par level optimization, expansion of common stock, and removal of infrequently used medications reduced pharmacy technician labor, decreased stockout percentages, generated opportunities for cost avoidance, and improved medication turnaround times. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
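The par-level mechanism at the core of the ADC optimization can be illustrated with a toy replenishment simulation. This is not the hospital's system or the ADC platform's logic; the policy, numbers, and lag are invented to show why raising par levels and reorder quantities reduces stockout percentage when restocking is not instantaneous.

```python
def stockout_pct(on_hand, par, reorder_qty, daily_demand, days, restock_lag=1):
    """Percentage of days a medication pocket cannot cover demand under a
    simple par policy: when on-hand falls below par and no restock is in
    flight, reorder_qty is ordered and arrives after restock_lag days.
    """
    stockouts = 0
    pending = []  # list of (arrival_day, qty) restocks in flight
    for day in range(days):
        for arrival, qty in [p for p in pending if p[0] == day]:
            on_hand += qty                      # restock arrives
        pending = [p for p in pending if p[0] > day]
        if on_hand < daily_demand:              # cannot cover today's demand
            stockouts += 1
        on_hand = max(0, on_hand - daily_demand)
        if on_hand < par and not pending:       # trigger a reorder
            pending.append((day + restock_lag, reorder_qty))
    return 100.0 * stockouts / days
```

With a two-day restock lag, a par of 5 and reorder of 5 against a demand of 5/day stocks out 40% of days, while doubling both eliminates stockouts, mirroring the direction (though not the magnitude) of the 3.2% to 0.5% improvement reported above.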
Operations research applications in nuclear energy
NASA Astrophysics Data System (ADS)
Johnson, Benjamin Lloyd
This dissertation consists of three papers; the first is published in Annals of Operations Research, the second is nearing submission to INFORMS Journal on Computing, and the third is the predecessor of a paper nearing submission to Progress in Nuclear Energy. We apply operations research techniques to nuclear waste disposal and nuclear safeguards. Although these fields are different, they allow us to showcase some benefits of using operations research techniques to enhance nuclear energy applications. The first paper, "Optimizing High-Level Nuclear Waste Disposal within a Deep Geologic Repository," presents a mixed-integer programming model that determines where to place high-level nuclear waste packages in a deep geologic repository to minimize heat load concentration. We develop a heuristic that increases the size of solvable model instances. The second paper, "Optimally Configuring a Measurement System to Detect Diversions from a Nuclear Fuel Cycle," introduces a simulation-optimization algorithm and an integer-programming model to find the best, or near-best, resource-limited nuclear fuel cycle measurement system with a high degree of confidence. Given location-dependent measurement method precisions, we (i) optimize the configuration of n methods at n locations of a hypothetical nuclear fuel cycle facility, (ii) find the most important location at which to improve method precision, and (iii) determine the effect of measurement frequency on near-optimal configurations and objective values. Our results correspond to existing outcomes but we obtain them at least an order of magnitude faster. The third paper, "Optimizing Nuclear Material Control and Accountability Measurement Systems," extends the integer program from the second paper to locate measurement methods in a larger, hypothetical nuclear fuel cycle scenario given fixed purchase and utilization budgets. 
This paper also presents two mixed-integer quadratic programming models to increase the precision of existing methods given a fixed improvement budget and to reduce the measurement uncertainty in the system while limiting improvement costs. We quickly obtain similar or better solutions compared to several intuitive analyses that take much longer to perform.
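The second paper's configuration question (which method goes at which location) can be illustrated at toy scale by brute force. This is not the dissertation's integer program or simulation-optimization algorithm, just the underlying assignment problem: place n measurement methods at n locations to minimize total measurement variance.

```python
from itertools import permutations

def best_configuration(variance):
    """Assign n measurement methods to n fuel-cycle locations so that the
    total measurement variance is minimized; exhaustive search, so only
    suitable for small n (the papers above use IP/heuristics to scale).

    variance[m][l] = measurement variance of method m at location l.
    """
    n = len(variance)
    best_assign, best_var = None, float('inf')
    for perm in permutations(range(n)):  # perm[m] = location assigned to method m
        total = sum(variance[m][perm[m]] for m in range(n))
        if total < best_var:
            best_assign, best_var = perm, total
    return best_assign, best_var
```

Because location-dependent precisions interact combinatorially, even this tiny version makes clear why an optimization formulation beats configuring each location greedily in isolation.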
Aerostructural optimization of a morphing wing for airborne wind energy applications
NASA Astrophysics Data System (ADS)
Fasel, U.; Keidel, D.; Molinari, G.; Ermanni, P.
2017-09-01
Airborne wind energy (AWE) vehicles maximize energy production by constantly operating at extreme wing loading, permitted by high flight speeds. Additionally, the wide range of wind speeds and the presence of flow inhomogeneities and gusts create a complex and demanding flight environment for AWE systems. Adaptation to different flow conditions is normally achieved by conventional wing control surfaces and, in the case of ground generator-based systems, by varying the reel-out speed. These control degrees of freedom enable the system to remain within the operational envelope, but cause significant penalties in terms of energy output. A significantly greater adaptability is offered by shape-morphing wings, which have the potential to achieve optimal performance at different flight conditions by tailoring their airfoil shape and lift distribution at different levels along the wingspan. Hence, the application of compliant structures for AWE wings is very promising. Furthermore, active gust load alleviation can be achieved through morphing, which leads to a lower weight and an expanded flight envelope, thus increasing the power production of the AWE system. This work presents a procedure to concurrently optimize the aerodynamic shape, compliant structure, and composite layup of a morphing wing for AWE applications. The morphing concept is based on distributed compliance ribs, actuated by electromechanical linear actuators, guiding the deformation of the flexible—yet load-carrying—composite skin. The goal of the aerostructural optimization is formulated as a high-level requirement, namely to maximize the average annual power production per wing area of an AWE system by tailoring the shape of the wing, and to extend the flight envelope of the wing by actively alleviating gust loads.
The results of the concurrent multidisciplinary optimization show a 50.7% increase of extracted power with respect to a sequentially optimized design, highlighting the benefits of morphing and the potential of the proposed approach.
Effects of optimism on recovery and mental health after a tornado outbreak.
Carbone, Eric G; Echols, Erin Thomas
2017-05-01
Dispositional optimism, a stable expectation that good things will happen, has been shown to improve health outcomes in a wide range of contexts, but very little research has explored the impact of optimism on post-disaster health and well-being. Data for this study come from the Centers for Disease Control and Prevention's Public health systems and mental health community recovery (PHSMHCR) Survey. Participants included 3216 individuals living in counties affected by the April 2011 tornado outbreak in Mississippi and Alabama. This study assesses the effect of dispositional optimism on post-disaster recovery and mental health. Dispositional optimism was found to have a positive effect on personal recovery and mental health after the disaster. Furthermore, it moderated the relationship between level of home damage and personal recovery as well as the relationship between home damage and post-traumatic stress disorder (PTSD), with stronger effects for those with increased levels of home damage. The utility of screening for optimism is discussed, along with the potential for interventions to increase optimism as a means of mitigating adverse mental health effects and improving the recovery of individuals affected by disasters and other traumatic events.
The effect of inflation rate on the cost of medical waste management system
NASA Astrophysics Data System (ADS)
Jolanta Walery, Maria
2017-11-01
This paper describes an optimization study aimed at analysing the impact of the parameter describing the inflation rate on the cost of the system and its structure. The study was conducted on the example of the analysis of the medical waste management system in north-eastern Poland, in the Podlaskie Province. The scope of operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of functioning of the analysed system was generated, whereas in the second, the influence of the system's input parameter, i.e. the inflation rate, on the economic efficiency index (E) and the spatial structure of the system was determined. With the assumed inflation rate in the range of 1.00 to 1.12, the highest cost of the system, PLN 2022.20/t, was reached at an inflation rate of 1.12 (an increase in the economic efficiency index E of ca. 27% in comparison with run 1).
Finite Energy and Bounded Attacks on Control System Sensor Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M
Control system networks are increasingly being connected to enterprise-level networks. These connections leave critical industrial control systems vulnerable to cyber-attacks. Most of the effort in protecting these cyber-physical systems (CPS) has gone into securing the networks with information security techniques and into addressing protection and reliability concerns at the control system level against random hardware and software failures. However, because information security techniques cannot protect against all intrusions, the control system must also be resilient to various signal attacks, for which new analysis and detection methods need to be developed. In this paper, sensor signal attacks are analyzed for observer-based controlled systems. The threat surface for sensor signal attacks is subdivided into denial-of-service, finite energy, and bounded attacks. In particular, the error signals between the states of attack-free systems and systems subject to these attacks are quantified. Optimal sensor and actuator signal attacks for finite and infinite horizon linear quadratic (LQ) control, in the sense of maximizing the corresponding cost functions, are computed. The behavior of the closed-loop system under optimal signal attacks is characterized. Illustrative numerical examples are provided, together with an application to a power network with distributed LQ controllers.
Liver-selective glucocorticoid antagonists: a novel treatment for type 2 diabetes.
von Geldern, Thomas W; Tu, Noah; Kym, Philip R; Link, James T; Jae, Hwan-Soo; Lai, Chunqiu; Apelqvist, Theresa; Rhonnstad, Patrik; Hagberg, Lars; Koehler, Konrad; Grynfarb, Marlena; Goos-Nilsson, Annika; Sandberg, Johnny; Osterlund, Marie; Barkhem, Tomas; Höglund, Marie; Wang, Jiahong; Fung, Steven; Wilcox, Denise; Nguyen, Phong; Jakob, Clarissa; Hutchins, Charles; Färnegårdh, Mathias; Kauppi, Björn; Ohman, Lars; Jacobson, Peer B
2004-08-12
Hepatic blockade of glucocorticoid receptors (GR) suppresses glucose production and thus decreases circulating glucose levels, but systemic glucocorticoid antagonism can produce adrenal insufficiency and other undesirable side effects. These hepatic and systemic responses might be dissected, leading to liver-selective pharmacology, when a GR antagonist is linked to a bile acid in an appropriate manner. Bile acid conjugation can be accomplished with a minimal loss of binding affinity for GR. The resultant conjugates remain potent in cell-based functional assays. A novel in vivo assay has been developed to simultaneously evaluate both hepatic and systemic GR blockade; this assay has been used to optimize the nature and site of the linker functionality, as well as the choice of the GR antagonist and the bile acid. This optimization led to the identification of A-348441, which reduces glucose levels and improves lipid profiles in an animal model of diabetes. Copyright 2004 American Chemical Society
NASA Astrophysics Data System (ADS)
Luo, D.; Guan, Z.; Wang, C.; Yue, L.; Peng, L.
2017-06-01
Distribution of different parts to assembly lines is significant for companies seeking to improve production. This research investigates the optimization of the distribution method of a logistics system in a third-party logistics company that provides professional services to an automobile manufacturing case company in China. It examines the leveling of material distribution and the unloading platform of the automobile logistics enterprise, and proposes a logistics distribution strategy, a material classification method, and logistics scheduling. Moreover, the Simio simulation technology is applied to the assembly line logistics system, which helps to find and validate an optimized distribution scheme through simulation experiments. Experimental results indicate that the proposed scheme solves the logistics balancing and material leveling problem and relieves congestion at the unloading platform more efficiently than the original method employed by the case company.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Module-level power electronics, such as DC power optimizers, microinverters, and those found in AC modules, are increasing in popularity in smaller-scale photovoltaic (PV) systems as their prices continue to decline. Therefore, it is important to provide PV modelers with guidelines about how to model these distributed power electronics appropriately in PV modeling software. This paper extends the work completed at NREL that provided recommendations to model the performance of distributed power electronics in NREL's popular PVWatts calculator [1], to provide similar guidelines for modeling these technologies in NREL's more complex System Advisor Model (SAM).
NASA Astrophysics Data System (ADS)
Mortensen, Henrik Lund; Sørensen, Jens Jakob W. H.; Mølmer, Klaus; Sherson, Jacob Friis
2018-02-01
We propose an efficient strategy to find optimal control functions for state-to-state quantum control problems. Our procedure first chooses an input state trajectory that can realize the desired transformation by adiabatic variation of the system Hamiltonian. The shortcut-to-adiabaticity formalism then provides a control Hamiltonian that realizes the reference trajectory exactly but on a finite time scale. As the final state is achieved with certainty, we define a cost functional that incorporates the resource requirements and a perturbative expression for robustness. We optimize this functional by systematically varying the reference trajectory. We demonstrate the method by application to population transfer in a laser driven three-level Λ-system, where we find solutions that are fast and robust against perturbations while maintaining a low peak laser power.
Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan
2017-01-01
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
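The blocks-and-threads configuration search that TLPOM performs can be sketched as a simple occupancy-style heuristic: enumerate warp-aligned thread counts and keep the one that maximizes resident warps under a per-SM register budget. This is an illustrative simplification with assumed hardware numbers, not the authors' actual model, which also couples TLP and ILP and accounts for energy.

```python
def best_launch_config(max_threads_per_block, regs_per_thread, regs_per_sm, warp_size=32):
    """Pick the threads-per-block count that maximizes resident warps per SM
    under a register budget (a simplified occupancy-style heuristic)."""
    best = None
    for threads in range(warp_size, max_threads_per_block + 1, warp_size):
        # How many blocks of this size fit in the SM's register file?
        blocks_per_sm = regs_per_sm // (threads * regs_per_thread)
        if blocks_per_sm == 0:
            continue
        resident_warps = blocks_per_sm * threads // warp_size
        if best is None or resident_warps > best[1]:
            best = (threads, resident_warps)
    return best

# Hypothetical GPU: 1024 max threads/block, 32 registers/thread, 64K registers/SM
config = best_launch_config(1024, 32, 65536)
```

A real tuner would also weigh shared-memory limits, instruction-level parallelism, and measured runtime, as the paper's TLPOM does.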
Acoustic design criteria in a general system for structural optimization
NASA Technical Reports Server (NTRS)
Brama, Torsten
1990-01-01
Passenger comfort is of great importance in most transport vehicles. For instance, in the new generation of regional turboprop aircraft, a low noise level is vital to be competitive on the market. The possibility of predicting noise levels analytically has improved rapidly in recent years, making it possible to take acoustic design criteria into account in early project stages. The development of the ASKA FE-system to include acoustic analysis has been carried out by the Saab Aircraft Division and the Aeronautical Research Institute of Sweden in a joint project. New finite elements have been developed to model the free fluid, porous damping materials, and the interaction between the fluid and structural degrees of freedom. The FE approach to acoustic analysis is best suited for lower frequencies, up to a few hundred Hz. For accurate analysis of interior cabin noise, large 3-D FE-models are built, but 2-D models are also considered useful for parametric studies and optimization. Interest here is focused on the introduction of an acoustic design criterion in the general structural optimization system OPTSYS available at the Saab Aircraft Division. The first implementation addresses a somewhat limited class of problems, formulated as: minimize the structural weight by modifying the dimensions of the structure while keeping the noise level in the cavity and other structural design criteria within specified limits.
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
Blana, Dimitra; Hincapie, Juan G; Chadwick, Edward K; Kirsch, Robert F
2013-01-01
Neuroprosthetic systems based on functional electrical stimulation aim to restore motor function to individuals with paralysis following spinal cord injury. Identifying the optimal electrode set for the neuroprosthesis is complicated because it depends on the characteristics of the individual (such as injury level), the force capacities of the muscles, the movements the system aims to restore, and the hardware limitations (number and type of electrodes available). An electrode-selection method has been developed that uses a customized musculoskeletal model. Candidate electrode sets are created based on desired functional outcomes and the hardware limitations of the proposed system. Inverse-dynamic simulations are performed to determine the proportion of target movements that can be accomplished with each set; the set that allows the most movements to be performed is chosen as the optimal set. The technique is demonstrated here for a system recently developed by our research group to restore whole-arm movement to individuals with high-level tetraplegia. The optimal set included selective nerve-cuff electrodes for the radial and musculocutaneous nerves; single-channel cuffs for the axillary, suprascapular, upper subscapular, and long-thoracic nerves; and muscle-based electrodes for the remaining channels. The importance of functional goals, hardware limitations, muscle and nerve anatomy, and surgical feasibility are highlighted.
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power depends quadratically on voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the workload. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadlines, execution times, periods, and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and produce optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size, given information about task size (maximum, minimum), execution time (best case, worst case), and deadlines, is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining throughput.
Such a scheme has the advantage of allowing modules on critical paths to be assigned the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned lower voltage levels (thus reducing power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimized. The algorithm is iterative and allocates the slack based on the Lagrange multiplier method.
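The Lagrange-multiplier condition behind minimum-energy voltage assignment can be sketched for the simplest case: tasks sharing one deadline, with a convex energy model E(f) ∝ f² (speed standing in for voltage/frequency). Equalizing the marginal energy cost across tasks then yields a single common speed. This is an illustrative simplification, not the dissertation's full buffered-scheduling algorithm.

```python
def min_energy_speeds(cycles, deadline):
    """For convex energy E(f) = c * f**2 and one shared deadline, running
    every task at one common speed (total work / deadline) minimizes energy:
    the Lagrange condition equalizes marginal energy cost across tasks."""
    common_speed = sum(cycles) / deadline
    return [common_speed] * len(cycles)

def energy(cycles, speeds):
    # E_i = cycles_i * speed_i**2  (speed stands in for supply voltage)
    return sum(c * s * s for c, s in zip(cycles, speeds))

speeds = min_energy_speeds([2.0, 3.0], deadline=1.0)
```

Any uneven schedule that still meets the deadline, e.g. speeds [4.0, 6.0] for the same two tasks, costs strictly more energy, which is the intuition the iterative algorithm exploits.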
NASA Astrophysics Data System (ADS)
Shorikov, A. F.
2016-12-01
In this article we consider a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each of these objects is described by corresponding linear or nonlinear discrete-time recurrent vector relations, and the control system consists of two levels: a basic level (control level I), which is the dominating level, and an auxiliary level (control level II), which is the subordinate level. The two levels have different criteria of functioning and are united by information and control connections defined in advance. In this article we study the problem of optimizing the guaranteed result for program control of the final state of a regional social and economic system in the presence of risk vectors. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final states of this system under incomplete information, together with a general scheme for its solution.
Optimal Correlations in Many-Body Quantum Systems
NASA Astrophysics Data System (ADS)
Amico, L.; Rossini, D.; Hamma, A.; Korepin, V. E.
2012-06-01
Information and correlations in a quantum system are closely related through the process of measurement. We explore such relation in a many-body quantum setting, effectively bridging between quantum metrology and condensed matter physics. To this aim we adopt the information-theory view of correlations and study the amount of correlations after certain classes of positive-operator-valued measurements are locally performed. As many-body systems, we consider a one-dimensional array of interacting two-level systems (a spin chain) at zero temperature, where quantum effects are most pronounced. We demonstrate how the optimal strategy to extract the correlations depends on the quantum phase through a subtle interplay between local interactions and coherence.
Microscopic heat engine and control of work fluctuations
NASA Astrophysics Data System (ADS)
Xiao, Gaoyang
In this thesis, we study novel behaviors of microscopic work and heat in systems involving few degrees of freedom. We first report that a quantum Carnot cycle should consist of two isothermal processes and two mechanical adiabatic processes if we want to maximize its heat-to-work conversion efficiency. We then find that the efficiency can be further optimized, and that it is generally system specific, lower than the Carnot efficiency, and dependent upon the temperatures of both the cold and hot reservoirs. We then move on to study the fluctuations of microscopic work. We find a principle of minimal work fluctuations related to the Jarzynski equality: in brief, an adiabatic process without energy level crossing yields the minimal fluctuations in exponential work, given a thermally isolated system initially prepared at thermal equilibrium. Finally, we investigate an optimal control approach to suppress work fluctuations and accelerate adiabatic processes. This optimal control approach can be applied to a wide variety of systems, even when we do not have full knowledge of the systems.
Lightning location system supervising Swedish power transmission network
NASA Technical Reports Server (NTRS)
Melin, Stefan A.
1991-01-01
For electric utilities, the ability to prevent or minimize lightning damage to personnel and power systems is of great importance. The Swedish State Power Board has therefore been using data since 1983 from a nationwide lightning location system (LLS) for accurately locating lightning ground strikes. Lightning data is distributed and presented on color graphic displays at regional power network control centers as well as at the national power system control center for optimal data use. The main objectives for use of LLS data are: supervising the power system for optimal and safe use of the transmission and generating capacity during periods of thunderstorms; warning maintenance and service crews at power lines and substations to suspend operations that are hazardous during lightning; rapidly positioning emergency crews to locate network damage in areas of detected lightning; and post-analysis of power outages and transmission faults in relation to lightning, using archived lightning data to determine appropriate design and insulation levels of equipment. Staff have found LLS data useful and economically justified, since both the availability of the power system and the level of personnel safety have increased.
Application configuration selection for energy-efficient execution on multicore systems
Wang, Shinan; Luo, Bing; Shi, Weisong; ...
2015-09-21
Balanced performance and energy consumption are incorporated in the design of modern computer systems. Several run-time factors, such as concurrency levels, thread mapping strategies, and dynamic voltage and frequency scaling (DVFS), should be considered in order to achieve optimal energy efficiency for a workload. Selecting appropriate run-time factors, however, is one of the most challenging tasks because they are architecture-specific and workload-specific. While most existing works concentrate on either static analysis of the workload or run-time prediction results, we present a hybrid two-step method that utilizes concurrency levels and DVFS settings to achieve the energy-efficient configuration for a workload. Experimental results based on a Xeon E5620 server with the NPB and PARSEC benchmark suites show that the model is able to predict the energy-efficient configuration accurately. On average, an additional 10% EDP (Energy Delay Product) saving is obtained by using run-time DVFS for the entire system. An off-line optimal solution is used for comparison with the proposed scheme. The experimental results show that the average extra EDP saved by the optimal solution is within 5% on selected parallel benchmarks.
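The final selection step can be sketched as an EDP-minimizing search over candidate (concurrency, DVFS) configurations. The energy and delay figures below are hypothetical measurements, and the sketch omits the paper's hybrid prediction model, which avoids having to measure every configuration.

```python
def pick_config(candidates):
    """candidates: list of (config, energy_joules, delay_seconds).
    Return the candidate minimizing EDP = energy * delay."""
    return min(candidates, key=lambda c: c[1] * c[2])

# Hypothetical measurements for one workload (config, energy J, delay s)
configs = [
    (("8 threads", "2.4 GHz"), 50.0, 2.0),    # EDP = 100
    (("16 threads", "1.6 GHz"), 40.0, 2.2),   # EDP = 88
    (("16 threads", "2.4 GHz"), 60.0, 1.5),   # EDP = 90
]
best = pick_config(configs)
```

Note that neither the fastest nor the lowest-energy configuration necessarily wins; EDP rewards the balance between the two, which is why concurrency throttling and DVFS must be chosen jointly.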
Artificial intelligence in robot control systems
NASA Astrophysics Data System (ADS)
Korikov, A.
2018-05-01
This paper analyzes modern concepts of artificial intelligence and known definitions of the term "level of intelligence". In robotics, an artificial intelligence system is defined as a system that operates intelligently and optimally. The author proposes to use optimization methods for the design of intelligent robot control systems. The article formalizes the problems of robotic control system design as a class of extremum problems with constraints. Solving these problems is rather complicated due to their high dimensionality, polymodality, and a priori uncertainty. Decomposing the extremum problems according to the method suggested by the author reduces them to a sequence of simpler problems that can be successfully solved by modern computing technology. Several possible approaches to solving such problems are considered in the article.
Optimal Solar PV Arrays Integration for Distributed Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omitaomu, Olufemi A; Li, Xueping
2012-01-01
Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise along with the variability and non-dispatchability of the PV systems that affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints to result in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
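The cardinality-constrained mean-variance selection can be sketched by exhaustive search over equal-weight rooftop subsets for a small number of buildings. The paper formulates and solves a mixed-integer quadratic program instead, and the mean/covariance data below are purely illustrative.

```python
from itertools import combinations

def portfolio_score(mean, cov, subset, risk_aversion):
    """Equal-weight mean-variance score for a chosen set of rooftops:
    expected output minus risk_aversion times portfolio variance."""
    w = 1.0 / len(subset)
    m = sum(mean[i] for i in subset) * w
    v = sum(cov[i][j] for i in subset for j in subset) * w * w
    return m - risk_aversion * v

def best_rooftops(mean, cov, k, risk_aversion=1.0):
    """Cardinality-constrained selection by brute force (fine for a handful
    of buildings; the paper uses a mixed-integer QP for realistic sizes)."""
    return max(combinations(range(len(mean)), k),
               key=lambda s: portfolio_score(mean, cov, s, risk_aversion))

# Illustrative data: rooftops 0 and 1 are sunny but highly correlated,
# rooftop 2 is less productive but uncorrelated with the others.
mean = [3.0, 3.0, 1.0]
cov = [[1.0, 0.99, 0.0],
       [0.99, 1.0, 0.0],
       [0.0, 0.0, 1.0]]
chosen = best_rooftops(mean, cov, k=2, risk_aversion=3.0)
```

With high risk aversion the search diversifies into the uncorrelated rooftop despite its lower output, which is exactly the mean-variance tradeoff the efficient frontier exposes to decision makers.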
NASA Astrophysics Data System (ADS)
Cheng, Xi; He, Li; Lu, Hongwei; Chen, Yizhong; Ren, Lixia
2016-09-01
A major concern associated with current shale-gas extraction is its high consumption of water resources. However, decision-making problems regarding water consumption and shale-gas extraction have not yet been solved through systematic approaches. This study develops a new bilevel optimization problem based on goals at two different levels: minimization of water demands at the lower level and maximization of system benefit at the upper level. The model is used to solve a real-world case across Pennsylvania and West Virginia. Results show that surface water would be the largest contributor to gas production (over 80.00% from 2015 to 2030) and groundwater would account for the smallest proportion (less than 2.00% from 2015 to 2030) in both districts over the planning span. Comparative analysis between the proposed model and conventional single-level models indicates that the bilevel model can provide coordinated schemes that comprehensively attain the goals of both water resources authorities and energy sectors. Sensitivity analysis shows that a change in the water use per unit of gas production (WU) has significant effects upon system benefit, gas production, and pollutant (i.e., barium, chloride, and bromide) discharge, but does not significantly change water demands.
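The bilevel structure, an upper level choosing a production plan to maximize benefit while a lower level decides the cheapest water allocation for that plan, can be sketched as a nested search. This is a toy illustration with invented numbers, not the paper's model, and the lower level here is a greedy fill standing in for a proper water-demand minimization.

```python
def lower_level(water_needed, sources):
    """Lower level: meet a water demand at minimum cost by filling the
    least costly sources first (greedy stand-in for the paper's program)."""
    total_cost = 0.0
    for name, capacity, unit_cost in sorted(sources, key=lambda s: s[2]):
        take = min(capacity, water_needed)
        total_cost += take * unit_cost
        water_needed -= take
        if water_needed <= 0:
            break
    return total_cost

def upper_level(prod_levels, gas_price, water_per_unit, sources):
    """Upper level: choose the production level maximizing revenue minus
    the water cost returned by the lower level."""
    return max(prod_levels,
               key=lambda q: q * gas_price - lower_level(q * water_per_unit, sources))

# Illustrative numbers only: (source, capacity, unit water cost)
sources = [("surface water", 100.0, 1.0), ("groundwater", 50.0, 5.0)]
best_q = upper_level([10.0, 40.0, 70.0], gas_price=4.0,
                     water_per_unit=2.0, sources=sources)
```

The nesting is the key point: the upper level never sees the water allocation directly, only the lower level's optimal response, which is what distinguishes a bilevel model from a single-level one.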
The emotion system promotes diversity and evolvability
Giske, Jarl; Eliassen, Sigrunn; Fiksen, Øyvind; Jakobsen, Per J.; Aksnes, Dag L.; Mangel, Marc; Jørgensen, Christian
2014-01-01
Studies on the relationship between the optimal phenotype and its environment have had limited focus on genotype-to-phenotype pathways and their evolutionary consequences. Here, we study how multi-layered trait architecture and its associated constraints prescribe diversity. Using an idealized model of the emotion system in fish, we find that trait architecture yields genetic and phenotypic diversity even in absence of frequency-dependent selection or environmental variation. That is, for a given environment, phenotype frequency distributions are predictable while gene pools are not. The conservation of phenotypic traits among these genetically different populations is due to the multi-layered trait architecture, in which one adaptation at a higher architectural level can be achieved by several different adaptations at a lower level. Our results emphasize the role of convergent evolution and the organismal level of selection. While trait architecture makes individuals more constrained than what has been assumed in optimization theory, the resulting populations are genetically more diverse and adaptable. The emotion system in animals may thus have evolved by natural selection because it simultaneously enhances three important functions, the behavioural robustness of individuals, the evolvability of gene pools and the rate of evolutionary innovation at several architectural levels. PMID:25100697
The emotion system promotes diversity and evolvability.
Giske, Jarl; Eliassen, Sigrunn; Fiksen, Øyvind; Jakobsen, Per J; Aksnes, Dag L; Mangel, Marc; Jørgensen, Christian
2014-09-22
Studies on the relationship between the optimal phenotype and its environment have had limited focus on genotype-to-phenotype pathways and their evolutionary consequences. Here, we study how multi-layered trait architecture and its associated constraints prescribe diversity. Using an idealized model of the emotion system in fish, we find that trait architecture yields genetic and phenotypic diversity even in absence of frequency-dependent selection or environmental variation. That is, for a given environment, phenotype frequency distributions are predictable while gene pools are not. The conservation of phenotypic traits among these genetically different populations is due to the multi-layered trait architecture, in which one adaptation at a higher architectural level can be achieved by several different adaptations at a lower level. Our results emphasize the role of convergent evolution and the organismal level of selection. While trait architecture makes individuals more constrained than what has been assumed in optimization theory, the resulting populations are genetically more diverse and adaptable. The emotion system in animals may thus have evolved by natural selection because it simultaneously enhances three important functions, the behavioural robustness of individuals, the evolvability of gene pools and the rate of evolutionary innovation at several architectural levels.
Optimal adaptive control for quantum metrology with time-dependent Hamiltonians.
Pang, Shengshi; Jordan, Andrew N
2017-03-09
Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of the dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, reaching T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out that additional control is necessary in that case.
Optimal adaptive control for quantum metrology with time-dependent Hamiltonians
Pang, Shengshi; Jordan, Andrew N.
2017-01-01
Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, which reaches T4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case. PMID:28276428
Increasing the technical level of mining haul trucks
NASA Astrophysics Data System (ADS)
Voronov, Yuri; Voronov, Artyom; Grishin, Sergey; Bujankin, Alexey
2017-11-01
Theoretical and methodological fundamentals of the optimal design of mining haul trucks are articulated. Methods based on a systems approach to integrated assessment of truck technical level, and methods for optimization of truck parameters depending on performance standards, are provided, along with the results of using these methods. The developed method allows not only assessing truck technical levels but also choosing the most promising models and providing quantitative evaluations of the decisions to be made at the design stage. These areas are closely connected with the problem of improving industrial output quality, which, as part of the “total quality control” ideology widespread in the Western world, is one of the major issues for the Russian economy.
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The performance seeking control (PSC) algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance, but because of an observability problem, component-level degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines; thus, when the PSC maximum thrust mode is applied, there will be less temperature margin available to be traded for increased thrust.
Optimal estimation for the satellite attitude using star tracker measurements
NASA Technical Reports Server (NTRS)
Lo, J. T.-H.
1986-01-01
An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.
An inverter/controller subsystem optimized for photovoltaic applications
NASA Technical Reports Server (NTRS)
Pickrell, R. L.; Merrill, W. C.; Osullivan, G.
1978-01-01
Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. This paper discusses the optimization of the inverter/controller design as part of an overall Photovoltaic Power System (PPS) designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: (1) a power system controller (PSC) to control continuously the solar array operating point at the maximum power level based on variable solar insolation and cell temperatures; and (2) an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy. It must be capable of operating connected to the utility line at a level set by an external controller (PSC).
Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey Dewayne
2004-01-01
The optimization tool described herein emphasizes the use of computer tools to model a system, focusing on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system and, more particularly, on the development of the optimized system using new techniques. The methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulation, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case that minimized Design, Development, Test and Evaluation cost than for the case that minimized Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case than for the case that minimized that cost. In other words, accepting a 53% increase in Design, Development, Test and Evaluation cost yields a 67% reduction in Gross Liftoff Weight. Second, the tool outputs define the sensitivity of propulsion parameters, technology, and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that maximizes specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume, and tank weight.
Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, as compared to 3162 pounds per square inch for the Design, Development, Test and Evaluation cost optimization case. Both values are close to the approximately 3000 pounds per square inch of the Space Shuttle Main Engine.
NASA Astrophysics Data System (ADS)
Saponara, M.; Tramutola, A.; Creten, P.; Hardy, J.; Philippe, C.
2013-08-01
Optimization-based control techniques such as Model Predictive Control (MPC) are considered extremely attractive for space rendezvous, proximity operations, and capture applications that require a high level of autonomy, optimal path planning, and dynamic safety margins. Such control techniques impose high computational demands for solving large optimization problems. The development and implementation, in a flight-representative avionic architecture, of an MPC-based Guidance, Navigation and Control system was investigated in the ESA R&T study “On-line Reconfiguration Control System and Avionics Architecture” (ORCSAT) of the Aurora programme. The paper presents the baseline HW and SW avionic architectures and verification test results obtained with a customised RASTA spacecraft avionics development platform from Aeroflex Gaisler.
Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia
NASA Astrophysics Data System (ADS)
Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg
2013-03-01
Accurate predictions of sea level over different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. Tidal harmonic analysis, the methodology generally used for obtaining a mathematical description of the tides, is data demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques: the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). Multiple linear regression (MLR) was used to select the optimal input combinations (lag times) of hourly sea level; the optimal combination was found to comprise the current sea level and the five previous hourly values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian, and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h, and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient, and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of optimal auto-regressive moving average (ARMA) models, using the coefficient of determination, root mean square error, and variance account statistics as comparison criteria. The results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt algorithms were most suitable for training the ANN models. The ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose over all prediction intervals.
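The MLR-based lag-selection step can be illustrated with a minimal sketch: fit ordinary least squares for each candidate lag count and keep the best fit. The function names and the RMSE criterion are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def lagged_matrix(series, n_lags):
    """Design matrix of n_lags previous hourly values; target is the next value."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

def select_lags(series, max_lags=8):
    """Pick the lag count whose OLS fit gives the lowest RMSE, a stand-in
    for the MLR input-selection step described in the abstract."""
    best = None
    for n in range(1, max_lags + 1):
        X, y = lagged_matrix(series, n)
        X1 = np.column_stack([np.ones(len(X)), X])  # add intercept
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        rmse = np.sqrt(np.mean((y - X1 @ coef) ** 2))
        if best is None or rmse < best[1]:
            best = (n, rmse)
    return best
```

In practice the selected lag structure would then define the input vector fed to the ANFIS and ANN models.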
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.
2012-03-13
Enormous military and commercial interest exists in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates the design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. The design and development of a portable TE power system that uses a JP-8 combustor as a high-temperature heat source, with optimal process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, the resulting hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermal/mechanical modeling of the thermoelectric converter. To this end, this work reports on the integration of system-level process flow simulations in the commercial software CHEMCAD with in-house thermoelectric converter and module optimization and heat exchanger analyses in COMSOL. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in the coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for the optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria.
This paper discusses the simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for the integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
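For context on the quoted >10% subsystem figure, the standard closed-form estimate of single-stage TE conversion efficiency at an average figure of merit ZT can be sketched as follows. This is the textbook relation, not the paper's coupled CHEMCAD/COMSOL model:

```python
import math

def te_efficiency(t_hot, t_cold, zt):
    """Textbook single-stage thermoelectric efficiency estimate:
    Carnot factor times the ZT-dependent device factor."""
    carnot = 1.0 - t_cold / t_hot
    m = math.sqrt(1.0 + zt)  # sqrt(1 + average ZT)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)
```

At a hot side of 900 K, a cold side of 300 K, and ZT = 1.5, this gives roughly 20% at the element level, which is consistent with segmented high-temperature elements supporting subsystem efficiencies above 10% after heat-exchanger and parasitic losses.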
Level-set techniques for facies identification in reservoir modeling
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.; McLaughlin, Dennis
2011-03-01
In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is an ill-posed geometrical inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. To address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is used to define a velocity in the level-set equation; the proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme; we solve this system efficiently by means of the representer method. We present synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
NASA Astrophysics Data System (ADS)
Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.
2005-08-01
Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.
Impacts of Intelligent Automated Quality Control on a Small Animal APD-Based Digital PET Scanner
NASA Astrophysics Data System (ADS)
Charest, Jonathan; Beaudoin, Jean-François; Bergeron, Mélanie; Cadorette, Jules; Arpin, Louis; Lecomte, Roger; Brunet, Charles-Antoine; Fontaine, Réjean
2016-10-01
Stable system performance is mandatory to warrant the accuracy and reliability of biological results relying on small animal positron emission tomography (PET) imaging studies. This simple requirement sets the ground for imposing routine quality control (QC) procedures to keep PET scanners at a reliable optimal performance level. However, such procedures can become burdensome for scanner operators, especially given the increasing number of data acquisition channels in newer-generation PET scanners. In systems using pixel detectors to achieve enhanced spatial resolution and contrast-to-noise ratio (CNR), the QC workload rapidly increases to unmanageable levels due to the number of independent channels involved. An artificial intelligence based QC system, referred to as Scanner Intelligent Diagnosis for Optimal Performance (SIDOP), was proposed to help reduce the QC workload by performing automatic channel fault detection and diagnosis. SIDOP consists of four high-level modules that employ machine learning methods to perform their tasks: Parameter Extraction, Channel Fault Detection, Fault Prioritization, and Fault Diagnosis. Ultimately, SIDOP submits a prioritized faulty channel list to the operator and proposes actions to correct the faults. To validate that SIDOP can perform QC procedures adequately, it was deployed on a LabPET™ scanner and multiple performance metrics were extracted. After multiple corrections of sub-optimal scanner settings, an 8.5% (95% confidence interval (CI): [7.6, 9.3]) improvement in the CNR, a 17.0% (CI: [15.3, 18.7]) decrease in the uniformity percentage standard deviation, and a 6.8% gain in global sensitivity were observed. These results confirm that SIDOP can indeed assist in performing QC procedures and restore performance to optimal figures.
Occupant-responsive optimal control of smart facade systems
NASA Astrophysics Data System (ADS)
Park, Cheol-Soo
Windows provide occupants with daylight, direct sunlight, visual contact with the outside, and a feeling of openness, and they enable the use of daylighting. Glazing may also cause a number of problems, such as undesired heat gain/loss in winter. An over-lit window can cause glare, another major complaint of occupants. Furthermore, cold or hot window surfaces induce asymmetric thermal radiation, which can result in thermal discomfort. To reduce the potential problems of window systems, double-skin facades and airflow window systems were introduced in the 1970s; they typically contain interstitial louvers and ventilation openings. The current problem with double-skin facades and airflow windows is that their operation requires adequate dynamic control to reach the expected performance. Many studies have recognized that only optimal control enables these systems to truly act as active energy savers and indoor environment controllers; however, an adequate solution for this dynamic optimization problem has thus far not been developed. The primary objective of this study is to develop occupant-responsive optimal control of smart facade systems. The control could be implemented as a smart controller that operates the motorized Venetian blind system and the opening ratio of the ventilation openings. The objective of the control is to combine the benefits of large windows with low energy demands for heating and cooling, while keeping visual well-being and thermal comfort at an optimal level. The control uses a simulation model with an embedded optimization routine that allows occupant interaction via the Web: an occupant can access the smart controller from a standard browser and choose a pre-defined mode (energy saving mode, visual comfort mode, thermal comfort mode, default mode, nighttime mode) or set a preferred mode (user-override mode) by moving preference sliders on the screen.
The most prominent feature of these systems is their capability to react dynamically to environmental input data through real-time optimization. The proposed occupant-responsive optimal control of smart facade systems could provide a breakthrough in this under-developed area and lead to renewed interest in smart facade systems.
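The mode-weighted trade-off between energy demand and visual comfort can be sketched with a toy single-time-step objective. All coefficients and function names here are illustrative assumptions, reduced to the blind angle only, and do not reproduce the dissertation's simulation model:

```python
def facade_cost(blind_angle, weights):
    """Toy objective: closing the blinds cuts solar heat gain (energy term)
    but moves daylight away from a comfortable mid level (visual term)."""
    daylight = 1.0 - blind_angle / 90.0  # 1 = blinds open, 0 = fully closed
    return weights["energy"] * daylight + weights["visual"] * abs(daylight - 0.5)

def best_blind_angle(weights, step=5):
    """Grid search standing in for the embedded optimization routine."""
    return min(range(0, 91, step), key=lambda a: facade_cost(a, weights))
```

Under these toy weights, an energy-saving mode closes the blinds fully, while a visual-comfort mode settles at an intermediate angle, which is the qualitative behavior the pre-defined modes are meant to produce.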
Multiobjective optimization of urban water resources: Moving toward more practical solutions
NASA Astrophysics Data System (ADS)
Mortazavi, Mohammad; Kuczera, George; Cui, Lijie
2012-03-01
The issue of drought security is of paramount importance for cities located in regions subject to severe prolonged droughts. The prospect of "running out of water" for an extended period would threaten the very existence of the city. Managing drought security for an urban water supply is a complex task involving trade-offs between conflicting objectives. In this paper a multiobjective optimization approach for urban water resource planning and operation is developed to overcome practically significant shortcomings identified in previous work. A case study based on the headworks system for Sydney (Australia) demonstrates the approach and highlights the potentially serious shortcomings of Pareto optimal solutions conditioned on short climate records, incomplete decision spaces, and constraints to which system response is sensitive. Where high levels of drought security are required, optimal solutions conditioned on short climate records are flawed. Our approach addresses drought security explicitly by identifying approximate optimal solutions in which the system does not "run dry" in severe droughts with expected return periods up to a nominated (typically large) value. In addition, it is shown that failure to optimize the full mix of interacting operational and infrastructure decisions and to explore the trade-offs associated with sensitive constraints can lead to significantly more costly solutions.
Designing Industrial Networks Using Ecological Food Web Metrics.
Layton, Astrid; Bras, Bert; Weissburg, Marc
2016-10-18
Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on the unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
Model-Based Design of Tree WSNs for Decentralized Detection †
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-01-01
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches. PMID:26307989
NASA Astrophysics Data System (ADS)
Crane, D. T.
2011-05-01
High-power-density, segmented, thermoelectric (TE) elements have been intimately integrated into heat exchangers, eliminating many of the loss mechanisms of conventional TE assemblies, including the ceramic electrical isolation layer. Numerical models comprising simultaneously solved, nonlinear, energy balance equations have been created to simulate these novel architectures. Both steady-state and transient models have been created in a MATLAB/Simulink environment. The models predict data from experiments in various configurations and applications over a broad range of temperature, flow, and current conditions for power produced, efficiency, and a variety of other important outputs. Using the validated models, devices and systems are optimized using advanced multiparameter optimization techniques. Devices optimized for particular steady-state operating conditions can then be dynamically simulated in a transient operating model. The transient model can simulate a variety of operating conditions including automotive and truck drive cycles.
A Hybrid Interval–Robust Optimization Model for Water Quality Management
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-01-01
Abstract In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval–robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements. PMID:23922495
Villarubia, Gabriel; De Paz, Juan F.; Bajo, Javier
2017-01-01
The use of electric bikes (e-bikes) has grown in popularity, especially in large cities where overcrowding and traffic congestion are common. This paper proposes an intelligent engine management system for e-bikes which uses the information collected from sensors to optimize battery energy and time. The intelligent engine management system consists of a built-in network of sensors in the e-bike, which is used for multi-sensor data fusion; the collected data is analysed and fused and on the basis of this information the system can provide the user with optimal and personalized assistance. The user is given recommendations related to battery consumption, sensors, and other parameters associated with the route travelled, such as duration, speed, or variation in altitude. To provide a user with these recommendations, artificial neural networks are used to estimate speed and consumption for each of the segments of a route. These estimates are incorporated into evolutionary algorithms in order to make the optimizations. A comparative analysis of the results obtained has been conducted for when routes were travelled with and without the optimization system. From the experiments, it is evident that the use of an engine management system results in significant energy and time savings. Moreover, user satisfaction increases as the level of assistance adapts to user behavior and the characteristics of the route. PMID:29088087
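The coupling of per-segment estimates with an evolutionary optimizer can be sketched minimally. The consumption/speed model below is a toy stand-in for the ANN estimates, and the (mu+lambda)-style mutation loop is an illustrative simplification of the paper's evolutionary algorithm; all names and coefficients are assumptions:

```python
import random

def route_cost(assist, seg_len, w_energy=0.1, w_time=1.0):
    """Toy stand-in for the ANN estimates: higher assist on a segment
    drains more battery but raises speed. Assist values lie in [0, 1]."""
    energy = sum(a * L for a, L in zip(assist, seg_len))
    time = sum(L / (10.0 + 15.0 * a) for a, L in zip(assist, seg_len))
    return w_energy * energy + w_time * time

def evolve(seg_len, pop=30, gens=200, seed=0):
    """Minimal mutation-based evolutionary search over assist levels."""
    rng = random.Random(seed)
    best = [rng.random() for _ in seg_len]
    best_c = route_cost(best, seg_len)
    for _ in range(gens):
        for _ in range(pop):
            cand = [min(1.0, max(0.0, a + rng.gauss(0, 0.1))) for a in best]
            c = route_cost(cand, seg_len)
            if c < best_c:
                best, best_c = cand, c
    return best, best_c
```

The energy/time weights play the role of the user's chosen preference, so different riders (or battery levels) yield different optimized assist profiles over the same route.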
NASA Technical Reports Server (NTRS)
Hahne, David E.; Glaab, Louis J.
1999-01-01
An investigation was performed to evaluate leading- and trailing-edge flap deflections for optimal aerodynamic performance of a High-Speed Civil Transport concept during takeoff and approach-to-landing conditions. The configuration used for this study was designed by the Douglas Aircraft Company during the 1970s. A 0.1-scale model of this configuration was tested in the Langley 30- by 60-Foot Tunnel with both the original leading-edge flap system and a new leading-edge flap system, which was designed with modern computational flow analysis and optimization tools. Leading- and trailing-edge flap deflections were generated for the original and modified leading-edge flap systems with the computational flow analysis and optimization tools. Although wind tunnel data indicated improvements in aerodynamic performance for the analytically derived flap deflections for both leading-edge flap systems, perturbations of the analytically derived leading-edge flap deflections yielded significant additional improvements in aerodynamic performance. In addition to the aerodynamic performance optimization testing, stability and control data were also obtained. An evaluation of the crosswind landing capability of the aircraft configuration revealed that insufficient lateral control existed as a result of high levels of lateral stability. Deflection of the leading- and trailing-edge flaps improved the crosswind landing capability of the vehicle considerably; however, additional improvements are required.
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon Phi processors, represents one of the largest HPC systems for open science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node-level and system-scale optimizations. We highlight multiple large-scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
Unintended greenhouse gas consequences of lowering level of service in urban transit systems
NASA Astrophysics Data System (ADS)
Griswold, Julia B.; Cheng, Han; Madanat, Samer; Horvath, Arpad
2014-12-01
Public transit is often touted as a ‘green’ transportation option and a way for users to reduce their environmental footprint by avoiding automobile emissions, but that may not be the case when systems run well below passenger capacity. In previous work, we explored an approach to optimizing the design and operations of transit systems for both costs and emissions, using continuum approximation models and assuming fixed demand. In this letter, we expand upon our previous work to explore how the level of service for users impacts emissions. We incorporate travel time elasticities into the optimization to account for demand shifts from transit to cars resulting from increases in transit travel time. We find that emissions reductions are moderated, but not eliminated, for relatively inelastic users. We consider two scenarios: in the first, only the agency faces an emissions budget; in the second, the entire city faces an emissions budget. In the latter scenario, the emissions reductions resulting from reductions in transit level of service are mitigated as users switch to automobiles.
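The demand-shift mechanism described above can be sketched with a constant-elasticity model. The elasticity value and emission figures below are illustrative assumptions, not the letter's calibrated parameters:

```python
def transit_demand(base_demand, base_time, new_time, elasticity=-0.4):
    """Constant-elasticity ridership response to a change in transit
    travel time: q = q0 * (t / t0) ** elasticity."""
    return base_demand * (new_time / base_time) ** elasticity

def net_emissions(service_em, base_demand, base_time, new_time,
                  car_em_per_rider, elasticity=-0.4):
    """Transit service emissions plus emissions of riders diverted to cars."""
    q = transit_demand(base_demand, base_time, new_time, elasticity)
    return service_em + (base_demand - q) * car_em_per_rider
```

With inelastic users (small |elasticity|), lowering the level of service diverts few riders, so the service-side emissions savings are only partially offset by new automobile emissions, which is the qualitative finding of the letter.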
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ploskey, Gene R.; Hughes, James S.; Khan, Fenton
The purpose of this report is to document the results of the acoustic optimization study conducted at John Day Dam during January and February 2008. The goal of the study was to optimize performance of the Juvenile Salmon Acoustic Telemetry System (JSATS) by determining deployment and data acquisition methods that minimize electrical and acoustic interference from other acoustic sampling devices. This would allow concurrent sampling by active and passive acoustic methods during the formal evaluations of the prototype surface flow outlets at the dam during the spring and summer outmigration seasons for juvenile salmonids. The objectives of the optimization study at John Day Dam were to: 1. Design and test prototypes and provide a total needs list of pipes and trolleys to deploy JSATS hydrophones on the forebay face of the powerhouse and spillway. 2. Assess the effect on the mean percentage of JSATS transmissions decoded from tags arrayed in the forebay and detected on the hydrophones by comparing: turbine unit OFF vs. ON; spill bay OPEN vs. CLOSED; dual frequency identification sonar (DIDSON) and acoustic Doppler current profiler (ADCP) both OFF vs. ON at a spill bay; and fixed-aspect hydroacoustic system OFF vs. ON at a turbine unit and a spill bay. 3. Determine the relationship between fixed-aspect hydroacoustic transmit level and the mean percentage of JSATS transmissions decoded. The general approach was to use hydrophones to listen for transmissions from JSATS tags deployed in vertical arrays in a series perpendicular to the face of the dam. We used acoustic telemetry equipment manufactured by Technologic and Sonic Concepts. In addition, we assessed old and new JSATS signal detectors and decoders and two different types of hydrophone baffling. The optimization study consisted of a suite of off/on tests. The primary response variable was the mean percentage of tag transmissions decoded.
We found no appreciable adverse effect on the mean percentage of JSATS transmissions decoded from turbine operations, spillway operations, DIDSON/ADCP acoustic energy, or PAS hydroacoustic systems at a transmit level of -12 dB, although there was a significant impact at all higher transmit levels (-11 to -6 dB). The main conclusion from this optimization study is that valid JSATS telemetry data can be collected simultaneously with a DIDSON/ADCP and a PAS hydroacoustic system at a transmit level of -12 dB. Multiple evaluation tools should be considered to increase the robustness and thoroughness of future fish passage evaluations at John Day and other dams.
Singh, Bhupinder; Garg, Babita; Chaturvedi, Subhash Chand; Arora, Sharry; Mandsaurwale, Rachana; Kapil, Rishi; Singh, Baljinder
2012-05-01
The current studies entail successful formulation of optimized gastroretentive tablets of lamivudine using the floating-bioadhesive potential of carbomers and cellulosic polymers, and their subsequent in-vitro and in-vivo evaluation in animals and humans. Effervescent floating-bioadhesive hydrophilic matrices were prepared and evaluated for in-vitro drug release, flotation and ex-vivo bioadhesive strength. The optimal composition of polymer blends was systematically chosen using central composite design and overlay plots. Pharmacokinetic studies were carried out in rabbits, and various levels of in-vitro/in-vivo correlation (IVIVC) were established. In-vivo gamma scintigraphic studies were performed in human volunteers using 99mTc to evaluate formulation retention in the gastric milieu. The optimized formulation exhibited excellent bioadhesive and flotation characteristics besides possessing adequate drug-release control and pharmacokinetic extension of plasma levels. The successful establishment of various levels of IVIVC substantiated the judicious choice of in-vitro dissolution media for simulating the in-vivo conditions. In-vivo gamma scintigraphic studies ratified the gastroretentive characteristics of the optimized formulation with a retention time of 5 h or more. Besides unravelling the polymer synergism, the study helped in developing an optimal once-a-day gastroretentive drug delivery system with improved bioavailability potential exhibiting excellent swelling, floating and bioadhesive characteristics. © 2012 The Authors. JPP © 2012 Royal Pharmaceutical Society.
An expert system for integrated structural analysis and design optimization for aerospace structures
NASA Technical Reports Server (NTRS)
1992-01-01
The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software were then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide the ability to build very large and practical expert systems, and provide an efficient way of storing knowledge. Functional specifications for the expert systems were then developed.
The ORL/AI shell was then used to develop a variety of expert system modules for modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The resulting software package, AutoDesign, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies and used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.
Research on Optimization of GLCM Parameter in Cell Classification
NASA Astrophysics Data System (ADS)
Zhang, Xi-Kun; Hou, Jie; Hu, Xin-Hua
2016-05-01
Real-time classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. The gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images, but its large amount of calculation makes it difficult to integrate into a real-time system. An optimization of the GLCM algorithm is provided based on correlation analysis of the GLCM parameters. The results of GLCM analysis and subsequent classification demonstrate that the optimized method lowers the time complexity significantly without loss of classification accuracy.
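To make the computation concrete, here is a minimal GLCM sketch (the level count, the single horizontal offset, and the two Haralick features below are common defaults chosen for illustration, not the parameter set the paper optimizes):

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix.
    The level count and horizontal offset are illustrative defaults."""
    m = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    m += m.T                      # count each pair in both directions
    return m / m.sum()

def glcm_features(p):
    """Two classic Haralick texture features computed from the GLCM."""
    i, j = np.indices(p.shape)
    return {"contrast": float(((i - j) ** 2 * p).sum()),
            "energy": float((p ** 2).sum())}

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
feats = glcm_features(p)
```

The nested pair-counting loop is the expensive part: cost grows with image size, offsets, and level count, which is why reducing the parameter set (as the paper does via correlation analysis) directly reduces runtime.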
Constrained optimization of sequentially generated entangled multiqubit states
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique
2009-08-01
We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.
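The matrix-product state (MPS) structure underlying the sequential-generation picture can be illustrated by decomposing a small state with repeated singular value decompositions and truncating each bond (a simplified stand-in for the constrained local optimization of the paper; the ancilla-level restriction itself is not modeled here):

```python
import numpy as np

def to_mps(psi, n, chi):
    """Decompose an n-qubit state into MPS tensors by sequential SVDs,
    truncating every bond dimension to at most chi."""
    tensors, rank = [], 1
    rest = psi.reshape(2, -1)
    for _ in range(n - 1):
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(chi, len(s))
        u, s, vh = u[:, :keep], s[:keep], vh[:keep]
        tensors.append(u.reshape(rank, 2, keep))
        rank = keep
        rest = (np.diag(s) @ vh).reshape(rank * 2, -1)
    tensors.append(rest.reshape(rank, 2, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS tensors back into a full state vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)

rng = np.random.default_rng(0)
n = 4
psi = rng.normal(size=2 ** n)
psi /= np.linalg.norm(psi)

exact = from_mps(to_mps(psi, n, chi=2 ** n))   # chi large enough: exact
fidelity = abs(psi @ exact)
approx = from_mps(to_mps(psi, n, chi=2))       # restricted bond dimension
trunc_overlap = abs(psi @ approx)
```

The bond-dimension cap `chi` plays the role of the limited number of ancilla levels: a small `chi` restricts which states can be generated exactly, and the optimization problem is to find the best state within that restricted family.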
Optimal Operation of Energy Storage in Power Transmission and Distribution
NASA Astrophysics Data System (ADS)
Akhavan Hejazi, Seyed Hossein
In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distribution of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restrictive assumptions on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids.
In the proposed stochastic optimization, we consider uncertainty from various elements, such as solar photovoltaics, electric vehicle chargers, and residential baseloads, in the form of discrete probability functions. In the last part of this thesis, we address other resources and concepts for enhancing the operation of power distribution and transmission systems. In particular, we propose a new framework to determine the best sites, sizes, and optimal payment incentives under special contracts for committed-type DG projects to offset distribution network investment costs. In this framework, the aim is to allocate DGs such that the profit gained by the distribution company is maximized while each DG unit's individual profit is also taken into account, to ensure that private DG investment remains economical.
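The non-parametric chance-constrained idea can be sketched with empirical quantiles (a toy model: the linear voltage sensitivities, the gamma-distributed load, and the single-node setting are illustrative assumptions, not the data-driven models developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
net_load = rng.gamma(shape=4.0, scale=0.5, size=2000)   # uncertain load samples (kW)

v_min, eps = 0.95, 0.05            # voltage limit (p.u.) and violation budget
s_load, s_dis = -0.004, 0.003      # assumed linear sensitivities (p.u. per kW)

def voltage(d, load):
    """Assumed linearized nodal voltage as a function of storage discharge d."""
    return 1.0 + s_load * load + s_dis * d

# Enforce P[voltage >= v_min] >= 1 - eps without assuming any parametric
# distribution: voltage is worst at high load, so it suffices that the
# empirical (1 - eps) load quantile satisfies the limit.
load_q = np.quantile(net_load, 1 - eps)
d_req = (v_min - 1.0 - s_load * load_q) / s_dis   # required discharge support

prob_ok = np.mean(voltage(d_req, net_load) >= v_min)
```

Because the constraint is imposed on an empirical quantile of the samples themselves, irregularly-shaped distributions and statistical dependencies are handled without any distributional fit, which is the essence of the non-parametric approach described above.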
Evaluation of the Lateral Performance of Roof Truss-to-Wall Connections in Light-Frame Wood Systems
Andrew DeRenzis; Vladimir Kochkin; Xiping Wang
2012-01-01
This testing program was designed to benchmark the performance of traditional roof systems and incrementally improved roof-to-wall systems with the goal of developing connection solutions that are optimized for performance and constructability. Nine full-size roof systems were constructed and tested with various levels and types of heel detailing to measure the lateral...
Alive and Well: Optimizing the Fitness of an Organization
ERIC Educational Resources Information Center
Wayne, David
2008-01-01
Grounded in the work of W. Edwards Deming, this article describes the basics of systems thinking, viewing a business as a system, and contrasts improving a system with solving a problem. The article uses the human body as a metaphor to describe the various aspects of viewing a business as a system at the concept level and maps the Deming cycle,…
Optimal Regulation of Virtual Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea
This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.
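The primal-dual flavor of such a framework can be illustrated on a toy DER aggregation problem (a generic primal-dual gradient sketch under assumed quadratic comfort costs and a single aggregate-tracking constraint, not the paper's linearized AC power-flow formulation; all numbers are invented):

```python
import numpy as np

pref = np.array([2.0, 1.0, 3.0])   # DERs' preferred set-points (kW, invented)
r_cmd = 7.0                        # aggregate regulation command (kW)

# Lagrangian of: min sum_i (p_i - pref_i)^2  s.t.  sum_i p_i = r_cmd
#   L(p, lam) = sum_i (p_i - pref_i)^2 + lam * (r_cmd - sum_i p_i)
p = pref.copy()
lam, step = 0.0, 0.2
for _ in range(500):
    p = p - step * (2 * (p - pref) - lam)   # primal gradient descent
    lam = lam + step * (r_cmd - p.sum())    # dual gradient ascent

tracking_err = abs(p.sum() - r_cmd)
```

At the saddle point each DER ends at `pref + 1/3` kW: the dual variable spreads the regulation burden evenly because all DERs have identical quadratic costs. In an online setting, `r_cmd` and `pref` would change between iterations and the same two update steps would track the moving optimum.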
System Analysis and Performance Benefits of an Optimized Rotorcraft Propulsion System
NASA Technical Reports Server (NTRS)
Bruckner, Robert J.
2007-01-01
The propulsion system of rotorcraft vehicles is the most critical system to the vehicle in terms of safety and performance. The propulsion system must provide both vertical lift and forward flight propulsion during the entire mission. Whereas propulsion is a critical element for all flight vehicles, it is particularly critical for rotorcraft due to their limited capability for safe, unpowered landing. This unparalleled reliability requirement has led rotorcraft power plants down a certain evolutionary path in which the system looks and performs quite similarly to those of the 1960s. By and large, the advancements in rotorcraft propulsion have come in terms of safety and reliability and not in terms of performance. The concept of the optimized propulsion system is a means by which both reliability and performance can be improved for rotorcraft vehicles. The optimized rotorcraft propulsion system, which couples an oil-free turboshaft engine to a highly loaded gearbox that provides axial load support for the power turbine, can be designed with current laboratory-proven technology. Such a system can provide up to a 60% weight reduction of the propulsion system of rotorcraft vehicles. Several technical challenges are apparent at the conceptual design level and should be addressed with current research.
An optimization model for the US Air-Traffic System
NASA Technical Reports Server (NTRS)
Mulvey, J. M.
1986-01-01
A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Towards this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large-scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which depicts the entire high-level (above 29,000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility, a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program, NLPNETG, was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.
NASA Astrophysics Data System (ADS)
Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang
2018-04-01
Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having their initial condition optimized only, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
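The twin-experiment idea can be sketched on a toy model: generate synthetic observations with a known "true" parameter, then recover that parameter by minimizing a misfit cost over the assimilation window (a finite-difference gradient stands in for the adjoint-based gradient of an actual 4D-Var system; the one-variable model below is illustrative and is not the ICM):

```python
import numpy as np

def run_model(alpha, T0=1.0, steps=50, dt=0.1):
    """Toy SST-like evolution: dT/dt = -alpha * T + seasonal forcing."""
    T, out = T0, []
    for k in range(steps):
        T = T + dt * (-alpha * T + 0.5 * np.sin(0.3 * k))
        out.append(T)
    return np.array(out)

alpha_true = 0.8
obs = run_model(alpha_true)      # synthetic "observations" (twin experiment)

def cost(alpha):
    """4D-Var-style misfit accumulated over the assimilation window."""
    return np.mean((run_model(alpha) - obs) ** 2)

# Recover alpha by gradient descent on the window-integrated cost; a
# finite-difference gradient replaces the adjoint-model gradient here.
alpha, lr, h = 0.2, 0.05, 1e-5
for _ in range(600):
    g = (cost(alpha + h) - cost(alpha - h)) / (2 * h)
    alpha -= lr * g

param_err = abs(alpha - alpha_true)
```

Because the observations were generated by the same model, the cost has its minimum at the true parameter, which is what makes the twin-experiment setting a clean test of parameter-recovery feasibility.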
Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO
NASA Astrophysics Data System (ADS)
Gao, C.; Zhang, R. H.
2017-12-01
Large biases exist in real-time ENSO prediction, which is attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te=αTe×FTe(SL). The introduced parameter, αTe, represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial condition is optimized are compared with experiments in which both the initial condition and this additional model parameter are optimized. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
Yobbi, Dann K.
2002-01-01
Tampa Bay depends on ground water for most of the water supply. Numerous wetlands and lakes in Pasco County have been impacted by the high demand for ground water. Central Pasco County, particularly the area within the Cypress Creek well field, has been greatly affected. Probable causes for the decline in surface-water levels are well-field pumpage and a decade-long drought. Efforts are underway to increase surface-water levels by developing alternative sources of water supply, thus reducing the quantity of well-field pumpage. Numerical ground-water flow simulations coupled with an optimization routine were used in a series of simulations to test the sensitivity of optimal pumpage to desired increases in surficial aquifer system heads in the Cypress Creek well field. The ground-water system was simulated using the central northern Tampa Bay ground-water flow model. Pumping solutions for 1987 equilibrium conditions and for a transient 6-month timeframe were determined for five test cases, each reflecting a range of desired target recovery heads at different head control sites in the surficial aquifer system. Results are presented in the form of curves relating average head recovery to total optimal pumpage. Pumping solutions are sensitive to the location of head control sites formulated in the optimization problem and as expected, total optimal pumpage decreased when desired target head increased. The distribution of optimal pumpage for individual production wells also was significantly affected by the location of head control sites. A pumping advantage was gained for test-case formulations where hydraulic heads were maximized in cells near the production wells, in cells within the steady-state pumping center cone of depression, and in cells within the area of the well field where confining-unit leakance is the highest. 
More water was pumped and the ratio of head recovery per unit decrease in optimal pumpage was more than double for test cases where hydraulic heads are maximized in cells located at or near the production wells. Additionally, the ratio of head recovery per unit decrease in pumpage was about three times more for the area where confining-unit leakance is the highest than for other leakance zone areas of the well field. For many head control sites, optimal heads corresponding to optimal pumpage deviated from the desired target recovery heads. Overall, pumping solutions were constrained by the limiting recovery values, initial head conditions, and by upper boundary conditions of the ground-water flow model.
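The simulation-optimization coupling used in studies like this one can be sketched with a response-matrix formulation, where a precomputed linear response of heads to pumpage cuts stands in for the ground-water flow model (the matrix, targets, and bounds below are invented for illustration, not values from this study):

```python
import numpy as np
from scipy.optimize import linprog

# Assumed response matrix: ft of head recovery at each control site per
# Mgal/d of pumpage reduction at each production well (invented numbers).
R = np.array([[0.8, 0.2, 0.1],
              [0.3, 0.7, 0.2],
              [0.1, 0.2, 0.9]])
target = np.array([1.0, 1.5, 0.5])    # desired head recovery per site (ft)
max_cut = np.array([3.0, 3.0, 3.0])   # available cut per well (Mgal/d)

# Minimize the total pumpage reduction subject to R @ x >= target,
# i.e. keep as much pumpage as possible while meeting recovery targets.
res = linprog(c=np.ones(3),
              A_ub=-R, b_ub=-target,
              bounds=list(zip(np.zeros(3), max_cut)))
cuts = res.x
recovery = R @ cuts
```

The sensitivity behavior reported above falls out of this structure: moving a head control site changes the rows of the response matrix, so the optimal distribution of cuts among wells, and the total optimal pumpage, shifts with it.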
Kan, Bin; Zhang, Jiangbin; Liu, Feng; Wan, Xiangjian; Li, Chenxi; Ke, Xin; Wang, Yunchuang; Feng, Huanran; Zhang, Yamin; Long, Guankui; Friend, Richard H; Bakulin, Artem A; Chen, Yongsheng
2018-01-01
Organic solar cell optimization requires careful balancing of the current-voltage output of the materials system. Here, such an optimization is reported, using ultrafast spectroscopy as a tool to optimize the material bandgap without altering the ultrafast photophysics. A new acceptor-donor-acceptor (A-D-A)-type small-molecule acceptor, NCBDT, is designed by modification of the D and A units of NFBDT. Compared to NFBDT, NCBDT exhibits an upshifted highest occupied molecular orbital (HOMO) energy level, mainly due to the additional octyl on the D unit, and a downshifted lowest unoccupied molecular orbital (LUMO) energy level, due to the fluorination of the A units. NCBDT has a low optical bandgap of 1.45 eV, which extends the absorption range toward the near-IR region, out to ≈860 nm. However, the 60 meV lowered LUMO level of NCBDT hardly changes the Voc level, and the elevation of the NCBDT HOMO does not have a substantial influence on the photophysics of the materials. Thus, for both NCBDT- and NFBDT-based systems, an unusually slow (≈400 ps) but ultimately efficient charge generation mediated by interfacial charge-pair states is observed, followed by effective charge extraction. As a result, the PBDB-T:NCBDT devices demonstrate an impressive power conversion efficiency of over 12%, among the best for solution-processed organic solar cells. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Yang, Shan-Shan; Pang, Ji-Wei; Jin, Xiao-Man; Wu, Zhong-Yang; Yang, Xiao-Yin; Guo, Wan-Qian; Zhao, Zhi-Qing; Ren, Nan-Qi
2018-03-01
Excess sludge production and non-compliant wastewater discharge from existing activated sludge processes face growing challenges; approaches that lower sludge production while raising sewage treatment efficiency are urgently needed. In this study, an anaerobic/anoxic/micro-aerobic/oxic-MBR combined with a micro-aerobic starvation sludge holding tank (A2MMBR-M) system is developed. Batch tests optimizing the staged dissolved oxygen (DO) in the micro-aerobic, first oxic, and second oxic tanks were carried out using a 3-factor, 3-level Box-Behnken design (BBD). The optimal actual values of X1, X2, and X3 were DO1 of 0.3-0.5 mg/L, DO2 of 3.5-4.5 mg/L, and DO3 of 3-4 mg/L. After the optimization tests, continuous-flow experiments comparing anaerobic/anoxic/oxic (AAO) and A2MMBR-M systems were conducted. Compared to the AAO system, a 37.45% reduction in discharged excess sludge was achieved in the A2MMBR-M system, and its COD, TN, and TP removal efficiencies were, respectively, 4.06%, 2.68%, and 4.04% higher. The A2MMBR-M system proved to be a promising wastewater treatment technology with enhanced in-situ sludge reduction and improved effluent quality; the staged optimized DO concentrations are the key controlling parameters for achieving simultaneous in-situ sludge reduction and nutrient removal.
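A 3-factor Box-Behnken design of the kind used here pairs every two factors at their ±1 corners while holding the third at its midpoint, plus replicated center runs. A sketch in coded units follows (the coded-to-actual DO mapping uses illustrative ranges, not the study's exact settings):

```python
import itertools
import numpy as np

def box_behnken_3(center_points=3):
    """3-factor Box-Behnken design in coded units: every pair of factors
    at the +/-1 corners while the third is held at 0, plus center runs."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * center_points
    return np.array(runs, dtype=float)

design = box_behnken_3()

# Map coded levels to DO set-points (mg/L); the (low, high) ranges below
# are illustrative assumptions, not the study's settings.
ranges = np.array([[0.2, 0.6],    # DO1: micro-aerobic tank
                   [3.0, 5.0],    # DO2: first oxic tank
                   [2.5, 4.5]])   # DO3: second oxic tank
mid = ranges.mean(axis=1)
half = (ranges[:, 1] - ranges[:, 0]) / 2
do_setpoints = mid + design * half
```

With 12 edge runs plus center points, the design supports fitting a full quadratic response surface in the three DO concentrations with far fewer runs than a 3-level full factorial (27 runs) would require.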
NASA Astrophysics Data System (ADS)
Jolanta Walery, Maria
2017-12-01
The article describes optimization studies analysing how changes in the capital and current costs of medical waste incineration affect the cost and structure of the waste management system. The study was conducted on the example of the medical waste management system in the Podlaskie Province, in north-eastern Poland. The operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of operating the analysed system was generated, whereas in the second, the influence of the system's input parameter, i.e. the capital and current costs of medical waste incineration, on the economic efficiency index (E) and the spatial structure of the system was determined. Optimization studies were conducted for a 25% increase in the capital and current costs of the incineration process, followed by 50%, 75% and 100% increases. As a result of the calculations, the highest cost of system operation, 3143.70 PLN/t, was obtained under the assumption of a 100% increase in the capital and current costs of the incineration process, with the economic efficiency index (E) increasing by about 97% relative to run 1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Economopoulou, M.A.; Economopoulou, A.A.; Economopoulos, A.P., E-mail: eco@otenet.gr
2013-11-15
Highlights: • A two-step (strategic and detailed optimal planning) methodology is used for solving complex MSW management problems. • A software package is outlined, which can be used for generating detailed optimal plans. • Sensitivity analysis compares alternative scenarios that address objections and/or wishes of local communities. • A case study shows the application of the above procedure in practice and demonstrates the results and benefits obtained. - Abstract: The paper describes a software system capable of formulating alternative optimal Municipal Solid Wastes (MSWs) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation level, constituting in effect a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan.
The formulated plan was able to: (a) serve 113 Municipalities and Communities that generate nearly 2 million t/y of commingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in-use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while at the same time reducing the present levels of traffic, fuel consumption and air emissions in the congested Athens basin.
Optimization of minoxidil microemulsions using fractional factorial design approach.
Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned
2016-01-01
The objective of this study was to apply a fractional factorial design and multi-response optimization using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables: Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4) on MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2, X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1, X2. Three regression equations were obtained and used to calculate predicted values of responses Y1, Y2 and Y4. The predicted values matched experimental values reasonably well, with high determination coefficients. Using the desirability function, the optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 and a high level of X3.
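The screening-plus-desirability workflow described above can be sketched in a few lines. The half-fraction generator (X4 aliased with X1·X2·X3) and the desirability bounds below are illustrative assumptions, not the study's actual design:

```python
from itertools import product

def fractional_factorial_2_4_1():
    """Half-fraction 2^(4-1) design: X4 is generated as X1*X2*X3."""
    runs = []
    for x1, x2, x3 in product([-1, 1], repeat=3):
        runs.append((x1, x2, x3, x1 * x2 * x3))  # generator: X4 = X1*X2*X3
    return runs

def desirability_larger_is_better(y, lo, hi):
    """Derringer-Suich desirability for a response to be maximized."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod_d = 1.0
    for d in ds:
        prod_d *= d
    return prod_d ** (1.0 / len(ds))
```

The run maximizing the overall desirability across the fitted responses (here Y1, Y2 and Y4) would then be selected as the optimized formulation.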
Hemmati, Reza; Saboori, Hedayat
2016-01-01
Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits in various areas of the electric power system. One of the main benefits of an ESS, especially a bulk unit, lies in smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Program (MILP) that can be readily solved by strong commercial solvers (for instance, CPLEX) and is appropriate for practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimal storage scheduling leads to considerable operating cost reduction, depending on the storage system characteristics. PMID:27222741
Hemmati, Reza; Saboori, Hedayat
2016-05-01
Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits in various areas of the electric power system. One of the main benefits of an ESS, especially a bulk unit, lies in smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Program (MILP) that can be readily solved by strong commercial solvers (for instance, CPLEX) and is appropriate for practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimal storage scheduling leads to considerable operating cost reduction, depending on the storage system characteristics.
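The load-leveling idea behind the paper's MILP can be illustrated without a solver by a greedy heuristic sketch: charge the storage in the lowest-load hours and discharge the same energy in the highest-load hours, subject to power and energy limits (the limits and load profile below are made-up numbers, and a real UC model would co-optimize with unit schedules):

```python
def level_load(load, p_max, e_max, eff=1.0):
    """Greedy load-levelling sketch: charge up to e_max of energy (at most
    p_max per hour) in the cheapest hours, discharge it in the peak hours."""
    T = len(load)
    order = sorted(range(T), key=lambda t: load[t])  # hours, lowest load first
    charge = [0.0] * T
    budget = e_max
    for t in order:
        c = min(p_max, budget)
        charge[t] = c
        budget -= c
        if budget <= 0:
            break
    # discharge the stored energy (times round-trip efficiency) at the peaks
    budget = (e_max - budget) * eff
    discharge = [0.0] * T
    for t in reversed(order):
        d = min(p_max, budget)
        discharge[t] = d
        budget -= d
        if budget <= 0:
            break
    return [load[t] + charge[t] - discharge[t] for t in range(T)]
```

With a unit-efficiency storage the total energy served is unchanged while the peak drops, which is exactly the effect the paper quantifies as operating cost reduction.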
1991-09-01
System (CAPMS) in lieu of using DODI 4151.15H. Facility utilization rate computation is not explicitly defined; it is merely identified as a ratio of...front of a bottleneck buffers the critical resource and protects against disruption of the system. This approach optimizes facility utilization by...run titled BUFFERED BASELINE. Three different levels of inventory were used to evaluate the effect of increasing the inventory level on critical
Jackson, George L; Zullig, Leah L; Phelan, Sean M; Provenzale, Dawn; Griffin, Joan M; Clauser, Steven B; Haggstrom, David A; Jindal, Rahul M; van Ryn, Michelle
2015-07-01
The current study was performed to determine whether patient characteristics, including race/ethnicity, were associated with patient-reported care coordination for patients with colorectal cancer (CRC) who were treated in the Veterans Affairs (VA) health care system, with the goal of better understanding potential goals of quality improvement efforts aimed at improving coordination. The nationwide Cancer Care Assessment and Responsive Evaluation Studies survey involved VA patients with CRC who were diagnosed in 2008 (response rate, 67%). The survey included a 4-item scale of patient-reported frequency ("never," "sometimes," "usually," and "always") of care coordination activities (scale score range, 1-4). Among 913 patients with CRC who provided information regarding care coordination, demographics, and symptoms, multivariable logistic regression was used to examine odds of patients reporting optimal care coordination. VA patients with CRC were found to report high levels of care coordination (mean scale score, 3.50 [standard deviation, 0.61]). Approximately 85% of patients reported a high level of coordination, including the 43% reporting optimal/highest-level coordination. There was no difference observed in the odds of reporting optimal coordination by race/ethnicity. Patients with early-stage disease (odds ratio [OR], 0.60; 95% confidence interval [95% CI], 0.45-0.81), greater pain (OR, 0.97 for a 1-point increase in pain scale; 95% CI, 0.96-0.99), and greater levels of depression (OR, 0.97 for a 1-point increase in depression scale; 95% CI, 0.96-0.99) were less likely to report optimal coordination. Patients with CRC in the VA reported high levels of care coordination. Unlike what has been reported in settings outside the VA, there appears to be no racial/ethnic disparity in reported coordination. However, challenges remain in ensuring coordination of care for patients with less advanced disease and a high symptom burden. Cancer 2015;121:2207-2213. 
© 2015 American Cancer Society.
Ant groups optimally amplify the effect of transiently informed individuals
NASA Astrophysics Data System (ADS)
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-07-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge.
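The transition between random-walk and ballistic motion, and the peak response to a single informed ant near that transition, can be caricatured with a textbook mean-field model (a simplification of the authors' detailed model): the group's mean pulling direction m obeys m = tanh(J·m + h), where J is the strength of conformism and h the weak bias from the informed individual. The response (m(h) − m(0))/h grows as J approaches the critical value 1:

```python
import math

def mean_field_response(J, h, iters=2000):
    """Fixed-point iteration of m = tanh(J*m + h).  m plays the role of the
    group's mean pulling direction; h is the weak bias supplied by a single
    transiently informed carrier."""
    m = 0.01
    for _ in range(iters):
        m = math.tanh(J * m + h)
    return m

def susceptibility(J, h=1e-3):
    """Linear response of the group to the informed individual's bias."""
    return (mean_field_response(J, h) - mean_field_response(J, 0.0)) / h
```

Below the transition (J < 1) the response scales like 1/(1 − J), so maximal amplification of individual knowledge occurs when the group is poised near criticality, mirroring the divergence of susceptibility described in the abstract.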
Elaziz, Mohamed Abd; Hemdan, Ahmed Monem; Hassanien, AboulElla; Oliva, Diego; Xiong, Shengwu
2017-09-07
The current economics of the fish protein industry demand rapid, accurate and expressive prediction algorithms at every step of protein production, especially under the challenge of global climate change. Such algorithms help to predict and analyze functional and nutritional quality and, consequently, to control food allergies in hyper-allergic patients. Moreover, it is expensive and time-consuming to determine these concentrations through laboratory tests, especially in large-scale projects. Therefore, this paper introduces a new intelligent algorithm using an adaptive neuro-fuzzy inference system based on the whale optimization algorithm. The algorithm is used to predict the concentration levels of bioactive amino acids in fish protein hydrolysates at different times during the year. The whale optimization algorithm is used to determine the optimal parameters of the adaptive neuro-fuzzy inference system. The results of the proposed algorithm are compared with others and indicate its higher performance.
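A compact sketch of the whale optimization algorithm is shown below, here minimizing a toy sphere function rather than tuning ANFIS membership parameters (the coupling to a neuro-fuzzy model is omitted, and the population size, iteration count and spiral form are arbitrary illustrative choices):

```python
import math, random

def woa_minimize(f, dim, bounds, n_agents=20, iters=200, seed=1):
    """Whale Optimization Algorithm sketch: encircling the best agent,
    random search, and a spiral update, minimizing f over a box."""
    rnd = random.Random(seed)
    lo, hi = bounds
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=f)[:]
    for t in range(iters):
        a = 2.0 * (1 - t / iters)              # linearly decreases 2 -> 0
        for i, x in enumerate(X):
            r1, r2 = rnd.random(), rnd.random()
            A, C = 2 * a * r1 - a, 2 * r2
            if rnd.random() < 0.5:
                # encircle the best agent if |A| < 1, else explore a random one
                ref = best if abs(A) < 1 else X[rnd.randrange(n_agents)]
                new = [ref[j] - A * abs(C * ref[j] - x[j]) for j in range(dim)]
            else:                              # spiral update around the best
                l = rnd.uniform(-1, 1)
                new = [abs(best[j] - x[j]) * math.exp(l) * math.cos(2 * math.pi * l)
                       + best[j] for j in range(dim)]
            X[i] = [min(hi, max(lo, v)) for v in new]
            if f(X[i]) < f(best):
                best = X[i][:]
    return best, f(best)
```

In the paper's setting, f would be the ANFIS prediction error on training data rather than a benchmark function.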
Efficiency Management in Spaceflight Systems
NASA Technical Reports Server (NTRS)
Murphy, Karen
2016-01-01
Efficiency in spaceflight is often approached as “faster, better, cheaper – pick two”. The high levels of performance and reliability required for each mission suggest that planners can only control for two of the three. True efficiency comes by optimizing a system across all three parameters. The functional processes of spaceflight become technical requirements on three operational groups during mission planning: payload, vehicle, and launch operations. Given the interrelationships among the functions performed by the operational groups, optimizing function resources from one operational group to the others affects the efficiency of those groups and therefore the mission overall. This paper helps outline this framework and creates a context in which to understand the effects of resource trades on the overall system, improving the efficiency of the operational groups and the mission as a whole. This allows insight into and optimization of the controlling factors earlier in the mission planning stage.
A techno-economic assessment of grid connected photovoltaic system for hospital building in Malaysia
NASA Astrophysics Data System (ADS)
Mat Isa, Normazlina; Tan, Chee Wei; Yatim, AHM
2017-07-01
Conventionally, electricity in hospital buildings is supplied by the utility grid, which uses a fuel mix including coal and gas. Owing to advances in renewable technology, many buildings are moving toward installing their own PV panels alongside the grid to exploit the advantages of renewable energy. This paper presents an analysis of a grid-connected photovoltaic (GCPV) system for a hospital building in Malaysia. The discussion emphasizes economic analysis based on the Levelized Cost of Energy (LCOE) and the Total Net Present Cost (TNPC) with respect to the annual interest rate. The analysis is performed using the Hybrid Optimization Model for Electric Renewables (HOMER) software, which provides optimization and sensitivity analysis results. The optimization results, followed by the sensitivity analysis, are also discussed in this article so that the impact of the grid-connected PV system can be evaluated. In addition, the benefit of the Net Metering (NeM) mechanism is also discussed.
Ant groups optimally amplify the effect of transiently informed individuals
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-01-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge. PMID:26218613
A distributed system for fast alignment of next-generation sequencing data.
Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D
2010-12-01
We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to those of a microarray sample. Results indicate that the distributed alignment system achieves an approximately linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
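The two scaling notions above, even work distribution across clients and the resulting speed-up, can be sketched directly; the serial-fraction model is the standard Amdahl's-law estimate, not a measurement from the paper:

```python
def partition_reads(n_reads, n_clients):
    """Split a set of sequencing reads as evenly as possible across clients,
    the basic work-distribution step of the BOINC-style system."""
    base, extra = divmod(n_reads, n_clients)
    return [base + (1 if i < extra else 0) for i in range(n_clients)]

def speedup(serial_frac, n_workers):
    """Amdahl's-law speed-up estimate: a serial fraction of the pipeline
    (e.g. merging alignment results) caps the achievable gain."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n_workers)
```

A perfectly parallel workload (serial fraction 0) gives the linear speed-up the authors report; any serial merge step pulls the curve below linear.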
A Technical Description of the Officer Procurement Model (TOPOPS). Final Report.
ERIC Educational Resources Information Center
Akman, Allan; And Others
The Total Objective Plan for the Officer Procurement System (TOPOPS) is an aggregate-level, computer-based model of the Air Force Officer procurement system developed to operate on the UNIVAC 1108 system. It is designed to simulate officer accession and training and achieve optimal solutions in terms of either cost minimization or accession…
Aljaberi, Ahmad; Chatterji, Ashish; Dong, Zedong; Shah, Navnit H; Malick, Waseem; Singhal, Dharmendra; Sandhu, Harpreet K
2013-01-01
The objective was to evaluate and optimize sodium lauryl sulfate (SLS) and magnesium stearate (Mg.St) levels, with respect to dissolution and compaction, in a high-dose, poorly soluble drug tablet formulation. A model poorly soluble drug was formulated using high-shear aqueous granulation. A D-optimal design was used to evaluate and model the effect of granulation conditions, size of milling screen, and SLS and Mg.St levels on tablet compaction and ejection. The compaction profiles were generated using a Presster© compaction simulator. Dissolution of the kernels was performed using a USP dissolution apparatus II, and intrinsic dissolution was determined using a stationary disk system. Unlike kernel dissolution, which failed to discriminate between tablets prepared with various SLS contents, the intrinsic dissolution rate showed that an SLS level of 0.57% was sufficient to achieve the required release profile while having minimal effect on compaction. The formulation factors that affect tablet compaction and ejection were identified and satisfactorily modeled. The design space of best factor settings to achieve optimal compaction and ejection properties was successfully constructed by RSM analysis. A systematic study design helped identify the critical factors and provided means to optimize the functionality of key excipients to design a robust drug product.
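The RSM step above fits a second-order polynomial to the designed-experiment responses. A minimal sketch for two coded factors is shown below; the synthetic coefficients are invented to check that the fit recovers a known surface, not values from the study:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of the full quadratic response-surface model
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# synthetic noise-free responses from a known surface, to check recovery
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
true = np.array([1.0, 2.0, -1.0, 0.5, 3.0, -2.0])
A = np.column_stack([np.ones(20), X[:, 0], X[:, 1], X[:, 0] * X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2])
y = A @ true
coef = fit_quadratic_rsm(X, y)
```

The fitted surface can then be contour-plotted and intersected with constraints to construct a design space, as the authors did.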
NASA Astrophysics Data System (ADS)
Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won
In this paper, we propose a video-zoom driven audio-zoom algorithm to provide audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is designed around a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. Finally, a real-time audio-zoom system is implemented on an ARM Cortex-A8 with a clock speed of 600 MHz after several levels of optimization are performed: algorithmic-level, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, test data 21.3 seconds in length is sampled at 48 kHz. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
NASA Astrophysics Data System (ADS)
Kawo, Nafyad Serre; Zhou, Yangxiao; Magalso, Ronnell; Salvacion, Lasaro
2018-05-01
A coupled simulation-optimization approach to optimize an artificial-recharge-pumping system for the water supply in the Maghaway Valley, Cebu, Philippines, is presented. The objective is to maximize the total pumping rate through a system of artificial recharge and pumping while meeting constraints such as groundwater-level drawdown and bounds on pumping rates at each well. The simulation models were coupled with groundwater management optimization to maximize production rates. Under steady-state natural conditions, the significant inflow to the aquifer comes from river leakage, whereas the natural discharge is mainly the subsurface outflow to the downstream area. Results from the steady artificial-recharge-pumping simulation model show that artificial recharge is about 20,587 m3/day and accounts for 77% of total inflow. Under transient artificial-recharge-pumping conditions, artificial recharge varies between 14,000 and 20,000 m3/day depending on the wet and dry seasons, respectively. The steady-state optimization results show that the total optimal abstraction rate is 37,545 m3/day and artificial recharge is increased to 29,313 m3/day. The transient optimization results show that the average total optimal pumping rate is 36,969 m3/day for the current weir height. The transient optimization results for an increase in weir height by 1 and 2 m show that the average total optimal pumping rates are increased to 38,768 and 40,463 m3/day, respectively. It is concluded that the increase in the height of the weir can significantly increase the artificial recharge rate and production rate in Maghaway Valley.
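The groundwater-management step of such a coupled approach is often posed as a linear program via a response matrix: drawdown at each control point is a linear superposition of the pumping rates. The sketch below maximizes total pumping under drawdown limits; the response coefficients, drawdown caps and well capacities are made-up illustrative numbers, and the study used a calibrated simulation model rather than an assumed matrix:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response matrix: metres of drawdown at each of three control
# points per (m3/day) pumped at each of three wells (linear superposition).
R = np.array([[0.004, 0.001, 0.001],
              [0.001, 0.004, 0.001],
              [0.001, 0.001, 0.004]])
s_max = np.array([30.0, 30.0, 30.0])   # allowable drawdown at each point (m)
q_max = 8000.0                         # per-well pumping capacity (m3/day)

# maximize q1 + q2 + q3  <=>  minimize -(q1 + q2 + q3)
res = linprog(c=[-1, -1, -1], A_ub=R, b_ub=s_max,
              bounds=[(0, q_max)] * 3, method="highs")
total = -res.fun                       # optimal total pumping rate
```

For this symmetric toy case the optimum is 5,000 m3/day at each well (15,000 total), with every drawdown constraint binding, the same structure as the study's binding drawdown limits at the optimal abstraction rate.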
Evolutionary Tradeoffs between Economy and Effectiveness in Biological Homeostasis Systems
Szekely, Pablo; Sheftel, Hila; Mayo, Avi; Alon, Uri
2013-01-01
Biological regulatory systems face a fundamental tradeoff: they must be effective but at the same time also economical. For example, regulatory systems that are designed to repair damage must be effective in reducing damage, but economical in not making too many repair proteins because making excessive proteins carries a fitness cost to the cell, called protein burden. In order to see how biological systems compromise between the two tasks of effectiveness and economy, we applied an approach from economics and engineering called Pareto optimality. This approach allows calculating the best-compromise systems that optimally combine the two tasks. We used a simple and general model for regulation, known as integral feedback, and showed that best-compromise systems have particular combinations of biochemical parameters that control the response rate and basal level. We find that the optimal systems fall on a curve in parameter space. Due to this feature, even if one is able to measure only a small fraction of the system's parameters, one can infer the rest. We applied this approach to estimate parameters in three biological systems: response to heat shock and response to DNA damage in bacteria, and calcium homeostasis in mammals. PMID:23950698
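The integral-feedback model underlying this analysis can be sketched as a discrete-time simulation; the gain, basal production level and disturbance below are arbitrary illustrative numbers, not fitted parameters from the paper:

```python
def integral_feedback(k_i, basal, setpoint=1.0, dt=0.01, steps=5000,
                      disturbance=0.3):
    """Integral-feedback regulator sketch: the controller output u integrates
    the error (setpoint - y), driving a first-order process back to its set
    point after a step disturbance.  k_i sets the response rate; 'basal' is
    the basal production level."""
    y, u = setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        u += k_i * err * dt                       # integral action
        y += (basal + u + disturbance - y) * dt   # first-order process
    return y
```

Integral action gives zero steady-state error for any basal level and disturbance, which is why the effectiveness-versus-economy tradeoff in the paper is carried by the parameters controlling response rate and basal level rather than by the steady state itself.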
Evolutionary tradeoffs between economy and effectiveness in biological homeostasis systems.
Szekely, Pablo; Sheftel, Hila; Mayo, Avi; Alon, Uri
2013-01-01
Biological regulatory systems face a fundamental tradeoff: they must be effective but at the same time also economical. For example, regulatory systems that are designed to repair damage must be effective in reducing damage, but economical in not making too many repair proteins because making excessive proteins carries a fitness cost to the cell, called protein burden. In order to see how biological systems compromise between the two tasks of effectiveness and economy, we applied an approach from economics and engineering called Pareto optimality. This approach allows calculating the best-compromise systems that optimally combine the two tasks. We used a simple and general model for regulation, known as integral feedback, and showed that best-compromise systems have particular combinations of biochemical parameters that control the response rate and basal level. We find that the optimal systems fall on a curve in parameter space. Due to this feature, even if one is able to measure only a small fraction of the system's parameters, one can infer the rest. We applied this approach to estimate parameters in three biological systems: response to heat shock and response to DNA damage in bacteria, and calcium homeostasis in mammals.
Improving Sensorimotor Function and Adaptation using Stochastic Vestibular Stimulation
NASA Technical Reports Server (NTRS)
Galvan, R. C.; Bloomberg, J. J.; Mulavara, A. P.; Clark, T. K.; Merfeld, D. M.; Oman, C. M.
2014-01-01
Astronauts experience sensorimotor changes during adaptation to G-transitions that occur when entering and exiting microgravity. Post space flight, these sensorimotor disturbances can include postural and gait instability, visual performance changes, manual control disruptions, spatial disorientation, and motion sickness, all of which can hinder the operational capabilities of the astronauts. Crewmember safety would be significantly increased if sensorimotor changes brought on by gravitational changes could be mitigated and adaptation could be facilitated. The goal of this research is to investigate and develop the use of electrical stochastic vestibular stimulation (SVS) as a countermeasure to augment sensorimotor function and facilitate adaptation. For this project, SVS will be applied via electrodes on the mastoid processes at imperceptible amplitude levels. We hypothesize that SVS will improve sensorimotor performance through the phenomenon of stochastic resonance, which occurs when the response of a nonlinear system to a weak input signal is optimized by the application of a particular nonzero level of noise. In line with the theory of stochastic resonance, a specific optimal level of SVS will be found and tested for each subject [1]. Three experiments are planned to investigate the use of SVS in sensory-dependent tasks and performance. The first experiment will aim to demonstrate stochastic resonance in the vestibular system through perception-based motion-recognition thresholds obtained using a 6-degree-of-freedom Stewart platform in the Jenks Vestibular Laboratory at Massachusetts Eye and Ear Infirmary. A range of SVS amplitudes will be applied to each subject, and the subject-specific optimal SVS level will be identified as that which results in the lowest motion-recognition threshold, through previously established, well-developed methods [2,3,4].
The second experiment will investigate the use of optimal SVS in facilitating sensorimotor adaptation to system disturbances. Subjects will adapt to wearing minifying glasses, resulting in decreased vestibulo-ocular reflex (VOR) gain. The VOR gain will then be intermittently measured while the subject readapts to normal vision, with and without optimal SVS. We expect that optimal SVS will cause a steepening of the adaptation curve. The third experiment will test the use of optimal SVS in an operationally relevant aerospace task, using the tilt translation sled at NASA Johnson Space Center, a test platform capable of recreating the tilt-gain and tilt-translation illusions associated with the landing of a spacecraft after space flight. In this experiment, a perception-based manual control measure will be used to compare performance with and without optimal SVS. We expect performance to improve in this task when optimal SVS is applied. The ultimate goal of this work is to systematically investigate and further understand the potential benefits of stochastic vestibular stimulation in the context of human space flight so that it may be used in the future as a component of a comprehensive countermeasure plan for adaptation to G-transitions.
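Stochastic resonance, a nonzero noise level improving detection of a weak, sub-threshold signal, can be demonstrated with a toy threshold detector (the parameters below are illustrative and unrelated to the vestibular hardware):

```python
import math, random

def threshold_detector_correlation(noise_sd, threshold=1.0, amp=0.6,
                                   n=4000, seed=7):
    """Correlate a sub-threshold sinusoid with the output of a hard threshold
    detector.  With zero noise the signal never crosses the threshold, so a
    moderate amount of added noise improves detection (stochastic resonance);
    too much noise drowns the signal again."""
    rnd = random.Random(seed)
    xs, ys = [], []
    for i in range(n):
        s = amp * math.sin(2 * math.pi * i / 100.0)   # weak input signal
        out = 1.0 if s + rnd.gauss(0.0, noise_sd) > threshold else 0.0
        xs.append(s)
        ys.append(out)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return 0.0 if vx * vy == 0 else cov / math.sqrt(vx * vy)
```

Sweeping noise_sd and picking the amplitude that maximizes this signal-output correlation mirrors the per-subject search for the optimal SVS level described above.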
NASA Astrophysics Data System (ADS)
Li, Yi; Ye, Quanliang; Liu, An; Meng, Fangang; Zhang, Wenlong; Xiong, Wei; Wang, Peifang; Wang, Chao
2017-07-01
Urban rainwater management needs to achieve an optimal compromise among water resource augmentation, waterlogging alleviation, economic investment and pollutant reduction. Rainwater harvesting (RWH) systems, such as green rooftops, porous pavements, and green lands, have been successfully implemented as viable approaches to alleviate water-logging disasters and water scarcity problems caused by rapid urbanization. However, there is limited guidance for determining the construction areas of RWH systems, especially for stormwater runoff control under increasing extreme precipitation. This study first developed a multi-objective model to optimize the construction areas of green rooftops, porous pavements and green lands, considering the trade-offs among 24 h-interval RWH volume, stormwater runoff volume control ratio (R), economic cost, and rainfall runoff pollutant reduction. Pareto fronts of RWH system areas for 31 provinces of China were obtained through a nondominated sorting genetic algorithm. On the national level, the control strategies for the construction rate (the ratio between the area of a single RWH system and the total area of RWH systems) of green rooftops (ηGR), porous pavements (ηPP) and green lands (ηGL) were 12%, 26% and 62%, and the corresponding RWH volume and total suspended solids reduction was 14.84 billion m3 and 228.19 kilotons, respectively. Optimal ηGR, ηPP and ηGL in different regions varied from 1 to 33%, 6 to 54%, and 30 to 89%, respectively. In particular, green lands were the most important RWH system in 25 provinces, with ηGL more than 50%, ηGR mainly less than 15%, and ηPP mainly between 10 and 30%. Results also indicated that considering the objective MaxR made a non-significant difference for RWH system areas, whereas it exerted a great influence on the resulting stormwater runoff control.
The maximum daily rainfall that could be controlled increased by more than 200% after construction of the optimal RWH system, compared with before construction. The optimal RWH system areas present a general picture for urban development policy makers in China.
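The core ranking step of the nondominated sorting genetic algorithm used to obtain the Pareto fronts above can be sketched directly (a naive O(n²)-per-front version, with all objectives treated as minimized; NSGA-II adds crowding distance and genetic operators on top of this):

```python
def nondominated_sort(points):
    """Rank solutions into successive Pareto fronts.  A point is dominated if
    another point is at least as good in every objective and differs in at
    least one (all objectives minimized)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(all(points[j][k] <= points[i][k]
                                for k in range(len(points[i])))
                            and points[j] != points[i]
                            for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

In the study's setting each point would be a candidate (ηGR, ηPP, ηGL) allocation scored on cost, RWH volume, runoff control and pollutant reduction; the first front is the Pareto set reported per province.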
Bandyopadhyay, Shantanu; Katare, O P; Singh, Bhupinder
2012-12-01
The objective of the current work is to develop systematically optimized self-nanoemulsifying drug delivery systems (SNEDDS) of ezetimibe using long chain triglycerides (LCTs) and medium chain triglycerides (MCTs), employing Formulation by Design (FbD), and to evaluate their in vitro and in vivo performance. Equilibrium solubility studies indicated the choice of Maisine 35-1 and Capryol 90 as lipids, and of Labrasol and Tween 80 as emulgents, for formulating the LCT and MCT systems, respectively. Ternary phase diagrams were constructed to select the areas of nanoemulsion, and the amounts of lipid (X1) and emulgent (X2) were chosen as the critical factor variables. The SNEDDS were systematically optimized using a 3² central composite design, and the optimized formulations were located using an overlay plot. TEM studies on reconstituted SNEDDS demonstrated uniform shape and size of globules. The nanometer size range and high negative values of zeta potential depicted the non-coalescent nature of the optimized SNEDDS. Thermodynamic studies, cloud point determination and accelerated stability studies ascertained the stability of the optimized formulations. In situ perfusion (SPIP) studies in Sprague Dawley (SD) rats revealed remarkable enhancement in the absorptivity and permeability parameters of SNEDDS vis-à-vis the conventional marketed product. In vivo pharmacodynamic studies in SD rats indicated significantly superior modification in plasma lipid levels of optimized SNEDDS vis-à-vis the marketed product, inclusion complex and pure drug. The studies, therefore, indicate the successful formulation development of self-nanoemulsifying systems with distinctly improved bioavailability potential of ezetimibe. Copyright © 2012 Elsevier B.V. All rights reserved.
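A two-factor central composite design of the kind used above can be generated in a few lines. The sketch below assumes a face-centred design (axial distance α = 1, whose points coincide with a 3² grid) and three centre replicates; the actual α and replication in the study may differ:

```python
from itertools import product

def central_composite_design(alpha=1.0, center_runs=3):
    """Face-centred central composite design for two coded factors
    (e.g. lipid X1 and emulgent X2): 4 factorial points, 4 axial points,
    and replicated centre runs."""
    factorial = list(product([-1.0, 1.0], repeat=2))
    axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
    center = [(0.0, 0.0)] * center_runs
    return factorial + axial + center
```

Each design point is then decoded into actual lipid and emulgent amounts, the responses measured, and a quadratic model fitted to locate the optimized formulation on the overlay plot.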
AUTOMATED WATER LEVEL MEASUREMENTS IN SMALL-DIAMETER AQUIFER TUBES
DOE Office of Scientific and Technical Information (OSTI.GOV)
PETERSEN SW; EDRINGTON RS; MAHOOD RO
2011-01-14
Groundwater contaminated with hexavalent chromium, strontium-90, and uranium discharges into the Columbia River along approximately 16 km (10 mi) of the shoreline. Various treatment systems have been and will continue to be implemented to eliminate the impact of Hanford Site contamination on the river. To optimize the various remediation strategies, it is important to understand interactions between groundwater and the surface water of the Columbia River. An automated system to record water levels in aquifer sampling tubes installed in the hyporheic zone was designed and tested to (1) gain a more complete understanding of groundwater/river water interactions based on gaining and losing conditions of the Columbia River, (2) record and interpret data for consistent and defensible groundwater/surface water conceptual models that may be used to better predict subsurface contaminant fate and transport, and (3) evaluate the hydrodynamic influence of extraction wells in an expanded pump-and-treat system to optimize the treatment system. A system to measure water levels in small-diameter aquifer tubes was designed and tested in the laboratory and field. The system was configured to allow manual measurements to periodically calibrate the instrument and to permit aquifer tube sampling without removing the transducer tube. Manual measurements were collected with an e-tape designed and fabricated especially for this test. Results indicate that the transducer system accurately records groundwater levels in aquifer tubes. These data are being used to refine the conceptual and numeric models to better understand interactions in the hyporheic zone of the Columbia River and the adjacent river water and groundwater, and changes in hydrochemistry relative to groundwater flux as river water recharges the aquifer and then drains back out in response to changes in the river level.
Update on HCDstruct - A Tool for Hybrid Wing Body Conceptual Design and Structural Optimization
NASA Technical Reports Server (NTRS)
Gern, Frank H.
2015-01-01
HCDstruct is a Matlab® based software tool to rapidly build a finite element model for structural optimization of hybrid wing body (HWB) aircraft at the conceptual design level. The tool uses outputs from a Flight Optimization System (FLOPS) performance analysis together with a conceptual outer mold line of the vehicle, e.g. created by Vehicle Sketch Pad (VSP), to generate a set of MSC Nastran® bulk data files. These files can readily be used to perform a structural optimization and weight estimation using Nastran’s® Solution 200 multidisciplinary optimization solver. Initially developed at NASA Langley Research Center to perform increased fidelity conceptual level HWB centerbody structural analyses, HCDstruct has grown into a complete HWB structural sizing and weight estimation tool, including a fully flexible aeroelastic loads analysis. Recent upgrades to the tool include the expansion to a full wing tip-to-wing tip model for asymmetric analyses like engine out conditions and dynamic overswings, as well as a fully actuated trailing edge, featuring up to 15 independently actuated control surfaces and twin tails. Several example applications of the HCDstruct tool are presented.
Dong, Ruixia; Wang, Dongxu; Wang, Xiaoxiao; Zhang, Ke; Chen, Pingping; Yang, Chung S; Zhang, Jinsong
2016-12-01
Selenium participates in antioxidant defense mainly through a class of selenoproteins, including thioredoxin reductase. Epigallocatechin-3-gallate (EGCG) is the most abundant and biologically active catechin in green tea. Depending upon the dose and the biological system, EGCG may function either as an antioxidant or as an inducer of antioxidant defense via its pro-oxidant action or other unidentified mechanisms. By manipulating selenium status, the present study investigated the interactions of EGCG with antioxidant defense systems, including the thioredoxin system comprising thioredoxin and thioredoxin reductase, the glutathione system comprising glutathione and glutathione reductase coupled with glutaredoxin, and the Nrf2 system. In selenium-optimal mice, EGCG increased hepatic activities of thioredoxin reductase, glutathione reductase, and glutaredoxin. These effects of EGCG appeared not to be due to an overt pro-oxidant action, because melatonin, a powerful antioxidant, did not influence the increase. However, in selenium-deficient mice, with low basal levels of thioredoxin reductase 1, the same dose of EGCG did not elevate the above-mentioned enzymes; intriguingly, EGCG instead activated the hepatic Nrf2 response, leading to increased heme oxygenase 1 and NAD(P)H:quinone oxidoreductase 1 protein levels and thioredoxin activity. Overall, the present work reveals that EGCG is a robust inducer of the Nrf2 system only under selenium-deficient conditions. Under normal physiological conditions, in selenium-optimal mice, the thioredoxin and glutathione systems serve as the first-line defense against the stress induced by high doses of EGCG, sparing the activation of the Nrf2 system. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Selection of optimal complexity for ENSO-EMR model by minimum description length principle
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.
2012-12-01
One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of an evolution operator model. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding an optimal projection is a significant part of model selection because, on the one hand, the transformation of the data to some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e., the representation of the evolution operator, so we should find an optimal structure of the model together with the vector of phase variables. In this paper we propose to use the minimum description length principle (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin, and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, 80, 046207, 2009. S. Kravtsov, D. Kondrashov, and M. Ghil, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21), 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson, and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.
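A minimal sketch of the minimum-description-length idea applied to model-order selection, using a simplified, BIC-like two-part code length and synthetic data (the paper's exact criterion and the EMR model structure are not reproduced here):

```python
import numpy as np

def description_length(y, y_hat, k):
    """Two-part code length: data-misfit term plus parameter-cost term.
    This simplified (BIC-like) form stands in for the exact criterion of
    Molkov et al. (2009)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)  # toy signal + noise

# Score polynomial models of increasing order; the description length
# penalizes the extra coefficients that merely fit noise.
scores = {}
for order in range(1, 10):
    coef = np.polyfit(t, y, order)
    scores[order] = description_length(y, np.polyval(coef, t), order + 1)

best_order = min(scores, key=scores.get)
print(best_order)
```

The same tradeoff, applied jointly to the number of EOFs, polynomial order, and number of levels, underlies the model selection described in the abstract.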
Optimal Stimulus Amplitude for Vestibular Stochastic Stimulation to Improve Sensorimotor Function
NASA Technical Reports Server (NTRS)
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J. J.;
2014-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface. This technique to improve detection of vestibular signals uses a stimulus delivery system that is wearable or portable and provides imperceptibly low levels of white noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments were carried out to determine a robust paradigm to identify a vestibular threshold that can then be used to recommend optimal stimulation levels for SR training applications customized to each crewmember. Customizing stimulus intensity can maximize treatment effects. The amplitude of stimulation to be used in the VSR application has varied across studies in the literature such as 60% of nociceptive stimulus thresholds. We compared subjects' perceptual threshold with that obtained from two measures of body sway. Each test session was 463s long and consisted of several 15s sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5s periods of no stimulation. Subjects sat on a chair with their eyes closed and had to report their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. 
First, we determined the percent time during stimulation periods for which perception of motion (activity above a pre-defined threshold) was reported using the joystick, and body sway (two standard deviations of the noise level in the baseline measurement) was detected by the sensors. The percentage time at each stimulation level for motion detection was normalized with respect to the largest value, and a logistic regression curve fit was applied to these data. The threshold was defined at the 50% probability of motion detection. Comparison of the thresholds of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower and were not impacted by system noise. Further, to determine the optimal stimulation amplitude to improve balance, two sets of experiments were carried out. In the first set of experiments, all subjects received the same levels of stimuli, and the intensity of optimal performance was projected back onto each subject's vestibular threshold curve. In the second set of experiments, on different subjects, stimulation was administered at 20-400% of each subject's vestibular threshold obtained from joystick data. Preliminary results of our study show that, in general, using stimulation amplitudes at 40-60% of the perceptual motion threshold improved balance performance significantly compared to control (no stimulation). The amplitude of vestibular stimulation that improved balance function was predominantly in the range of +/- 100 to +/- 400 microA. We hypothesize that VSR stimulation will act synergistically with sensorimotor adaptability (SA) training to improve adaptability by increasing utilization of vestibular information, and therefore will help us to optimize and personalize an SA countermeasure prescription. This combination will help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.
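The threshold-fitting step described above (normalized detection fractions, logistic fit, 50% crossing) can be sketched as follows; the amplitudes, detection fractions, and the logit-space least-squares fit are illustrative assumptions, not the study's data or exact procedure:

```python
import numpy as np

def detection_threshold(amps, p_detect):
    """Fit a logistic psychometric curve p = 1/(1+exp(-(b0+b1*x))) by a
    linear least-squares fit in logit space, and return the amplitude at
    50% detection probability (the threshold)."""
    p = np.clip(p_detect, 0.01, 0.99)   # keep the logit finite
    logit = np.log(p / (1 - p))
    b1, b0 = np.polyfit(amps, logit, 1)
    return -b0 / b1                     # logit = 0  <=>  p = 0.5

# Hypothetical fraction of stimulation time that motion was reported,
# per stimulation amplitude (mA).
amps = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])
frac = np.array([0.02, 0.05, 0.20, 0.55, 0.80, 0.97, 0.99])

thr = detection_threshold(amps, frac)
print(round(thr, 3))
```

A training amplitude at 40-60% of this threshold would then be the candidate stimulation level, per the abstract's preliminary findings.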
Situational reaction and planning
NASA Technical Reports Server (NTRS)
Yen, John; Pfluger, Nathan
1994-01-01
One problem faced in designing an autonomous mobile robot system is that there are many parameters of the system to define and optimize. While these parameters can be obtained for any given situation, determining what they should be in all situations is difficult. The usual solution is to give the system general parameters that work in all situations, but this does not help the robot perform its best in a dynamic environment. Our approach is to develop a higher-level situation analysis module that adjusts the parameters by analyzing the goals and the history of sensor readings. By allowing the robot to change the system parameters based on its judgment of the situation, the robot will be able to better adapt to a wider set of possible situations. We use fuzzy logic in our implementation to reduce the number of basic situations the controller has to recognize. For example, a situation may be 60 percent open and 40 percent corridor, causing the optimal parameters to be somewhere between the optimal settings for the two extreme situations.
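A minimal sketch of the parameter blending in the example above, assuming hypothetical parameter names and preset values (the paper's actual fuzzy controller is more elaborate):

```python
def blend_parameters(memberships, presets):
    """Weighted average of per-situation parameter presets, with weights
    normalized over the situations that fired."""
    total = sum(memberships.values())
    blended = {}
    for name in next(iter(presets.values())):
        blended[name] = sum(
            m * presets[sit][name] for sit, m in memberships.items()
        ) / total
    return blended

# Illustrative optimal settings for the two extreme situations.
presets = {
    "open":     {"speed": 1.0, "obstacle_gain": 0.3},
    "corridor": {"speed": 0.4, "obstacle_gain": 0.9},
}

# 60 percent open, 40 percent corridor, as in the abstract's example.
params = blend_parameters({"open": 0.6, "corridor": 0.4}, presets)
print(params)
```

The blended parameters fall between the two presets, weighted by the situation memberships.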
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient, and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting Genetic Algorithm-II (NSGA-II) after the parameter uncertainties of the hydrological model had been quantified into the probability distribution of runoff as the input of the CCP model and the chance constraints had been converted to their deterministic equivalents. The proposed model was applied to identify Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The optimal Pareto-front results reflected the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e., q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought-intensity scenario corresponds to less available water; both lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework can help obtain Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
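The equity objective rests on the Gini coefficient; a minimal sketch of its computation over hypothetical per-user allocations follows (the study's formulation may normalize or weight users differently):

```python
import numpy as np

def gini(x):
    """Gini coefficient via the mean-absolute-difference definition:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x)); 0 = perfect equity."""
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :])   # all pairwise differences
    return diffs.sum() / (2 * len(x) ** 2 * x.mean())

# Hypothetical water allocations to four users (not Lake Dianchi data).
equal  = [25.0, 25.0, 25.0, 25.0]
skewed = [70.0, 20.0, 5.0, 5.0]
print(gini(equal), gini(skewed))
```

In the multi-objective setting, this value is minimized while the system benefit is maximized, producing the Pareto front described in the abstract.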
NASA Astrophysics Data System (ADS)
Mwakabuta, Ndaga Stanslaus
Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses and to economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. At the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. At the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. At the advanced level, an artificial intelligence technique is applied to minimize cost under Distributed Generation injection from private vendors. Soft computing techniques are finding increasing applications in solving optimization problems in large and complex practical systems. The dissertation focuses on the Genetic Algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities decide how to operate their systems economically. This would enable modular and flexible investments that have real benefits to the electric distribution system. Improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services are some of the advantages.
Petruzielo, F R; Toulouse, Julien; Umrigar, C J
2011-02-14
A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam
2014-01-01
In this paper, a unique system-level spacecraft design optimization will be presented. A Genetic Algorithm is used to design the global pattern of the reinforcing structure, while a gradient routine is used to adequately stiffen the sub-structure. The system-level structural design includes determining the optimal physical location (and number) of reinforcing beams of a lunar pallet lander deck structure. Design of the substructure includes determining the placement of secondary stiffeners and the number of rivets required for assembly. In this optimization, several considerations are taken into account. The primary objective was to raise the primary natural frequencies of the structure such that the Pallet Lander primary structure does not significantly couple with the launch vehicle. A secondary objective is to determine how to properly stiffen the reinforcing beams so that the beam web resists the shear buckling load imparted by the spacecraft components mounted to the pallet lander deck during launch and landing. A third objective is that the calculated stress does not exceed the allowable strength of the material. These design requirements must be met while minimizing the overall mass of the spacecraft. The final paper will discuss how the optimization was implemented, as well as the results. While driven by optimization algorithms, the primary purpose of this effort was to demonstrate the capability of genetic algorithms to enable design automation in the preliminary design cycle. By developing a routine that can automatically generate designs through the use of Finite Element Analysis, considerable design efficiencies, both in time and overall product, can be obtained over more traditional brute-force design methods.
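A toy sketch of the system-level idea, assuming a bitstring encoding of candidate beam locations and a made-up fitness surrogate in place of the finite element model:

```python
import random

random.seed(1)

N_SLOTS, POP, GENS = 12, 30, 40   # candidate beam locations, GA sizes

def fitness(bits):
    # Made-up surrogate, NOT a finite element model: each beam raises the
    # frequency proxy (net +0.2 after its mass penalty); adjacent beams
    # are partly redundant (-0.3 per adjacent pair).
    adj = sum(bits[i] and bits[i + 1] for i in range(N_SLOTS - 1))
    return 0.2 * sum(bits) - 0.3 * adj

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in range(N_SLOTS)] for _ in range(POP)]
best = max(pop, key=fitness)
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_SLOTS)       # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                # bit-flip mutation
            i = random.randrange(N_SLOTS)
            child[i] ^= 1
        nxt.append(child)
    pop = nxt
    gen_best = max(pop, key=fitness)
    if fitness(gen_best) > fitness(best):        # track best-so-far
        best = gen_best

print(best, fitness(best))
```

In the actual workflow each fitness evaluation would be an automatically generated finite element run, and a gradient routine would then size the sub-structure.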
Cyber War Game in Temporal Networks
Cho, Jin-Hee; Gao, Jianxi
2016-01-01
In a cyber war game where a network is fully distributed and characterized by resource constraints and high dynamics, attackers or defenders often face a situation that may require optimal strategies to win the game with minimum effort. Given the system goal states of attackers and defenders, we study what strategies attackers or defenders can take to reach their respective system goal state (i.e., winning system state) with minimum resource consumption. However, due to the dynamics of a network caused by a node’s mobility, failure or its resource depletion over time or action(s), this optimization problem becomes NP-complete. We propose two heuristic strategies in a greedy manner based on a node’s two characteristics: resource level and influence based on k-hop reachability. We analyze complexity and optimality of each algorithm compared to optimal solutions for a small-scale static network. Further, we conduct a comprehensive experimental study for a large-scale temporal network to investigate best strategies, given a different environmental setting of network temporality and density. We demonstrate the performance of each strategy under various scenarios of attacker/defender strategies in terms of win probability, resource consumption, and system vulnerability. PMID:26859840
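The two node characteristics driving the heuristics can be sketched as follows; the graph, resource values, and the additive score combining them are illustrative assumptions, not the paper's exact strategy definitions:

```python
from collections import deque

def k_hop_reach(adj, src, k):
    """Number of nodes reachable from src within k hops (BFS)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return len(seen) - 1

def greedy_target(adj, resources, k=2):
    """Greedily pick the node with the best combined resource/influence
    score; the additive weighting is an illustrative choice."""
    return max(adj, key=lambda n: resources[n] + k_hop_reach(adj, n, k))

# Small static example network: adjacency lists and node resource levels.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
resources = {0: 1.0, 1: 2.0, 2: 0.5, 3: 1.5, 4: 0.5}
print(greedy_target(adj, resources))
```

In the temporal-network setting of the paper, the adjacency lists and resource levels would be re-evaluated at each step as nodes move, fail, or deplete their resources.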
Sumi, Mayumi; Koga, Yoshiyuki; Tominaga, Jun-Ya; Hamanaka, Ryo; Ozaki, Hiroya; Chiang, Pao-Chang; Yoshida, Noriaki
2016-12-01
Most closing loops designed for producing higher moment-to-force (M/F) ratios require complex wire bending and are likely to cause hygiene problems and discomfort because of their complicated configurations. We aimed to develop a simple loop design that can produce optimal force and M/F ratio. A loop design that can generate a high M/F ratio and the ideal force level was investigated by varying the portion and length of the cross-sectional reduction of a teardrop loop and the loop position. The forces and moments acting on closing loops were calculated using structural analysis based on the tangent stiffness method. An M/F ratio of 9.3 (high enough to achieve controlled movement of the anterior teeth) and an optimal force level of approximately 250 g of force can be generated by activation of a 10-mm-high teardrop loop whose cross-section of 0.019 × 0.025 or 0.021 × 0.025 in was reduced in thickness by 50% for a distance of 3 mm from the apex, located between a quarter and a third of the interbracket distance from the canine bracket. The simple loop design that we developed delivers an optimal force and an M/F ratio for the retraction of anterior teeth, and is applicable in a 0.022-in slot system. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Enzymatic process optimization for the in vitro production of isoprene from mevalonate.
Cheng, Tao; Liu, Hui; Zou, Huibin; Chen, Ningning; Shi, Mengxun; Xie, Congxia; Zhao, Guang; Xian, Mo
2017-01-09
As an important bulk chemical for synthetic rubber, isoprene can be biosynthesized by robust microbes, but rational engineering and optimization are often required to make the in vivo process feasible because of the complexities of cellular metabolism. Alternative synthetic biochemistry strategies are in fast development to produce isoprene or isoprenoids in vitro. This study set up an in vitro enzymatic synthetic chemistry process using five enzymes in the lower mevalonate pathway to produce isoprene from mevalonate. We found that the level and ratio of the individual enzymes significantly affect the efficiency of the whole system. The optimized process using 10 balanced enzyme units (5.0 µM of MVK, PMK, and MVD; 10.0 µM of IDI; 80.0 µM of ISPS) could produce 6323.5 µmol/L/h (430 mg/L/h) of isoprene in a 2 ml in vitro system. In a scaled-up process (50 ml) using only 1 balanced enzyme unit (0.5 µM of MVK, PMK, and MVD; 1.0 µM of IDI; 8.0 µM of ISPS), the system could produce 302 mg/L of isoprene in 40 h, showing a higher production rate and a longer reaction phase in comparison with the in vivo control. By optimizing the enzyme levels of the lower MVA pathway, synthetic biochemistry methods could be set up for the enzymatic production of isoprene or isoprenoids from mevalonate.
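As a sanity check on the reported rates, the molar production rate converts to the stated mass rate using isoprene's molar mass (C5H8, about 68.12 g/mol):

```python
# Unit conversion of the reported production rate.
M_ISOPRENE = 68.12      # g/mol for C5H8
molar_rate = 6323.5e-6  # mol/L/h (6323.5 umol/L/h)

mass_rate_mg = molar_rate * M_ISOPRENE * 1000.0  # mg/L/h; ~430, per the abstract
print(round(mass_rate_mg, 1))
```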
NASA Astrophysics Data System (ADS)
Huang, C.; Hsu, N.
2013-12-01
This study imports Low-Impact Development (LID) technology for rainwater catchment systems into the Storm Water Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation, and proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of the capacity and quantity of rainwater catchment facilities, so that the reduced flooding under a variety of design forms can be simulated by SWMM. We further calculate the net benefit, equal to the reduction in inundation loss minus the facility cost, and the best solution of the simulation method becomes the initial solution of the optimization model. In the optimization method, we first apply the outcome of the simulation method and a Back-Propagation Neural Network (BPNN) to develop a water-level simulation model of the urban drainage system, in order to replace SWMM, whose operation is based on a graphical user interface and which is hard to combine with an optimization model and method. We then embed the BPNN-based simulation model into the developed optimization model, whose objective function is to minimize the negative net benefit. Finally, we establish a tabu search-based algorithm to optimize the planning solution. This study applies the developed method in Zhonghe Dist., Taiwan. Results showed that applying tabu search and the BPNN-based simulation model within the optimization model not only finds solutions that are better than those of the simulation method by 12.75%, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% for historical flood events.
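A minimal tabu-search sketch over a toy design space; the analytic cost function stands in for the BPNN surrogate of SWMM, and the move set, tabu-list length, and design encoding are illustrative choices:

```python
import random

random.seed(0)

# Toy design space: one barrel-capacity index per sub-catchment.
N_ZONES, LEVELS = 6, 5
TARGET = [3, 1, 4, 2, 0, 3]   # hypothetical best design

def cost(design):
    # Stand-in for (facility cost - flood-loss reduction); in the study
    # this evaluation would come from the BPNN surrogate of SWMM.
    return sum((d - t) ** 2 for d, t in zip(design, TARGET))

def neighbors(design):
    # Moves: raise or lower one zone's capacity index by one level.
    for i in range(N_ZONES):
        for delta in (-1, 1):
            v = design[i] + delta
            if 0 <= v < LEVELS:
                yield design[:i] + (v,) + design[i + 1:]

current = best = tuple(random.randrange(LEVELS) for _ in range(N_ZONES))
tabu = [current]
for _ in range(50):
    candidates = [d for d in neighbors(current) if d not in tabu]
    if not candidates:
        break
    current = min(candidates, key=cost)      # best non-tabu move
    tabu = (tabu + [current])[-8:]           # fixed-length tabu list
    if cost(current) < cost(best):
        best = current

print(best, cost(best))
```

The tabu list lets the search escape local minima by forbidding recently visited designs, which is the feature that lets it outperform the pure simulation method in the abstract.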
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorikov, A. F., E-mail: afshorikov@mail.ru
This article discusses a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each of these is described by a corresponding nonlinear discrete-time recurrent vector equation, and the control system consists of two levels: a basic level (control level I) that is dominating and a subordinate level (control level II). The two levels have different criteria of functioning and are united by informational and control connections defined in advance. In this paper we study the problem of optimization of the guaranteed result for program control of the final state of a regional social and economic system in the presence of risks. For this problem we propose an economic-mathematical model of two-level hierarchical minimax program control of the final state of a regional social and economic system in the presence of risks, and a general scheme for its solution.
NASA Technical Reports Server (NTRS)
1978-01-01
Alternate level 4 integration approaches were synthesized and evaluated to establish the most cost effective experiment integration approach. Program baseline system trade studies are described, as well as Spacelab equipment utilization. Programmatic analysis of the baseline program was evaluated; the 2/3 and 1/3 traffic models were also considered.
An Empirical Approach to Determining Advertising Spending Level.
ERIC Educational Resources Information Center
Sunoo, D. H.; Lin, Lynn Y. S.
To assess the relationship between advertising and consumer promotion and to determine the optimal short-term advertising spending level for a product, a research project was undertaken by a major food manufacturer. One thousand homes subscribing to a dual-system cable television service received either no advertising exposure to the product or…
Genetic learning in rule-based and neural systems
NASA Technical Reports Server (NTRS)
Smith, Robert E.
1993-01-01
The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GAs) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GAs have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GAs. GAs are then related to a class of rule-based machine learning systems called learning classifier systems (LCSs). An LCS implements a low-level production system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GAs in ecological search problems that arise in neural and fuzzy systems.
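Fitness-proportionate ("roulette wheel") selection is a canonical example of the GA mechanics referenced above; a minimal sketch with an illustrative population (the presentation's LCS simulator code is not reproduced here):

```python
import random

random.seed(42)

def roulette(population, fitnesses):
    """Select one individual with probability proportional to fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return individual
    return population[-1]   # guard against floating-point rounding

pop = ["a", "b", "c", "d"]
fits = [1.0, 1.0, 4.0, 2.0]

# Empirically, "c" holds half the total fitness and should be chosen
# about half the time.
counts = {x: 0 for x in pop}
for _ in range(10000):
    counts[roulette(pop, fits)] += 1
print(counts)
```

Repeated selection, crossover, and mutation on such a population are what give the GA its derivative-free search leverage.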
Biomechanical analysis of a new lumbar interspinous device with optimized topology.
Chen, Chen-Sheng; Shih, Shih-Liang
2018-01-06
Interspinous spacers used stand-alone preserve joint movement but provide little protection for diseased segments of the spine. Used as adjuncts to fusion, interspinous spacers offer rigid stability but may accelerate degeneration at adjacent levels. Our new device is intended to balance the stability and motion preservation provided by the implant. A new interspinous spacer was devised according to the results of topology optimization studies. Four finite element (FE) spine models were created: an intact spine without an implant, and spines implanted with the novel device, with the device for intervertebral assisted motion (DIAM system), and with the Dynesys system. All models were loaded with moments, and their ranges of motion (ROMs), peak disc stresses, and facet contact forces were analyzed. At L3-L4, the new device limited segment ROMs, shielded disc stresses, and unloaded facet contact forces more than the DIAM and Dynesys systems in almost all directions of movement. The ROMs, disc stresses, and facet contact forces of the new device at L2-L3 were slightly greater than those of the DIAM system but much lower than those of the Dynesys system in most directions. This study demonstrated that the new device provided more stability at the instrumented level than the DIAM system did, especially in lateral rotation and the bending direction. The device caused smaller adjacent-level ROMs, lower disc stresses, and lower facet contact forces than the Dynesys system did. Additionally, this study conducted topology optimization to design the new device and created a smaller implant for minimally invasive surgery.
Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes
Dobos, László; Király, András; Abonyi, János
2012-01-01
Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
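The framework's core loop (estimate the constraint-violation frequency by Monte-Carlo simulation of the controlled process, then keep the most profitable setpoint within the allowed risk) can be sketched as follows, assuming an illustrative Gaussian noise model and a profit that grows with the setpoint:

```python
import random

random.seed(7)

# Quality constraint, allowed violation frequency, process noise, samples.
LIMIT, RISK, SIGMA, N = 100.0, 0.05, 2.0, 5000

def violation_prob(setpoint):
    """Monte-Carlo estimate of the constraint-violation frequency; the
    Gaussian disturbance stands in for the simulated closed-loop process."""
    hits = sum(random.gauss(setpoint, SIGMA) > LIMIT for _ in range(N))
    return hits / N

# Direct search over candidate setpoints: profit grows with the setpoint,
# so keep the highest setpoint whose estimated risk stays acceptable.
best = None
for sp in [90 + 0.5 * i for i in range(20)]:
    if violation_prob(sp) <= RISK:
        best = sp

print(best)
```

The result sits a few noise standard deviations below the constraint, which is the "close to the constraint but within the risk level" behavior the abstract describes.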
Dependency of Optimal Parameters of the IRIS Template on Image Quality and Border Detection Error
NASA Astrophysics Data System (ADS)
Matveev, I. A.; Novik, V. P.
2017-05-01
Generation of a template containing the spatial-frequency features of the iris is an important stage of identification. The template is obtained by a wavelet transform in an image region specified by the iris borders. One of the main characteristics of an identification system is its recognition error; the equal error rate (EER) is used as the criterion here. The optimal values (in the sense of minimizing the EER) of the wavelet transform parameters depend on many factors: image quality, sharpness, size of characteristic objects, etc. It is hard to isolate these factors and their influences. This work presents an attempt to study the influence of the following factors on the EER: iris segmentation precision, defocus level, and noise level. Several public-domain iris image databases were involved in the experiments. The images were subjected to modeled distortions of the said types. The dependencies of the wavelet parameter and EER values on the distortion levels were built. It is observed that increases in segmentation error and image noise lead to an increase in the optimal wavelength of the wavelets, whereas an increase in defocus level leads to a decrease in this value.
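The EER criterion itself can be sketched as follows, with synthetic genuine/impostor score lists (higher score = more similar); the databases and matcher of the study are not reproduced:

```python
def equal_error_rate(genuine, impostor):
    """Scan thresholds and return the operating point where the false
    accept rate (impostors at or above threshold) and false reject rate
    (genuines below threshold) are closest."""
    best = None
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if best is None or abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

# Synthetic matcher scores for genuine and impostor comparisons.
genuine  = [0.9, 0.85, 0.8, 0.75, 0.6, 0.55]
impostor = [0.5, 0.45, 0.65, 0.3, 0.2, 0.1]

eer = equal_error_rate(genuine, impostor)
print(eer)
```

Minimizing this quantity over the wavelet parameters, per distortion level, yields the dependencies reported in the abstract.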
Insurance principles and the design of prospective payment systems.
Ellis, R P; McGuire, T G
1988-09-01
This paper applies insurance principles to the issues of optimal outlier payments and the designation of peer groups in Medicare's case-based prospective payment system for hospital care. Arrow's principle that full insurance after a deductible is optimal implies that, to minimize hospital risk, outlier payments should be based on hospital average loss per case rather than, as at present, on individual case-level losses. The principle of experience rating implies defining more homogeneous peer groups for the purpose of calculating average cost. The empirical significance of these results is examined using a sample of 470,568 discharges from 469 hospitals.
Portable parallel portfolio optimization in the Aurora Financial Management System
NASA Astrophysics Data System (ADS)
Laure, Erwin; Moritsch, Hans
2001-07-01
Financial planning problems are formulated as large-scale, stochastic, multiperiod, tree-structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we chose Java as the implementation language and used a high-level Java-based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.
[Pharmaceutical logistic in turnover of pharmaceutical products of Azerbaijan].
Dzhalilova, K I
2009-11-01
Development of a pharmaceutical logistics system model promotes an optimal strategy for pharmacy operation. The goal of such systems is to organize the turnover of pharmaceutical products in the required quantity and assortment, at a preset time and place, at the highest possible degree of consumption readiness, with minimal expenses and quality service. It is proposed that the organization of the optimal turnover chain in the region start from an approximate classification of medicaments by logistic characteristics. Supplier selection was performed by evaluating the timeliness of delivery, the quality of delivered products (according to the minimum acceptable quality level), and the time spent on order delivery.
A faster technique for rendering meshes in multiple display systems
NASA Astrophysics Data System (ADS)
Hand, Randall E.; Moorhead, Robert J., II
2003-05-01
Level-of-detail algorithms have been widely implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple display systems used in VR. It improves the visual quality of the system through the use of graphics hardware acceleration, and improves the framerate and running time through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.
Experimental Eavesdropping Based on Optimal Quantum Cloning
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Lemr, Karel; Černoch, Antonín; Soubusta, Jan; Miranowicz, Adam
2013-04-01
The security of quantum cryptography is guaranteed by the no-cloning theorem, which implies that an eavesdropper copying transmitted qubits in unknown states causes their disturbance. Nevertheless, in real cryptographic systems some level of disturbance has to be allowed to cover, e.g., transmission losses. An eavesdropper can attack such systems by replacing a noisy channel with a better one and by performing approximate cloning of transmitted qubits, which disturbs them but only below the noise level assumed by legitimate users. We experimentally demonstrate such symmetric individual eavesdropping on the quantum key distribution protocols of Bennett and Brassard (BB84) and the trine-state spherical code of Renes (R04) with two-level probes prepared using a recently developed photonic multifunctional quantum cloner [Lemr et al., Phys. Rev. A 85, 050307(R) (2012)]. We demonstrate that our optimal cloning device, with its high success rate, makes eavesdropping possible by hiding it in the usual transmission losses. We believe that this experiment can stimulate the quest for other operational applications of quantum cloning.
Variable cycle control model for intersection based on multi-source information
NASA Astrophysics Data System (ADS)
Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan
2018-05-01
In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow, and the characteristics of lane groups, a Lane-Group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
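As a minimal stand-in for the upper-level cycle-length optimization (the paper's bi-level model is considerably more elaborate), Webster's classic fixed-time signal formulas illustrate how a delay-minimizing cycle length and green splits are computed. The lost time and flow ratios below are illustrative values.

```python
# Webster's classic fixed-time signal-timing formulas, used here as a
# minimal stand-in for the upper-level cycle-length optimization.
L_total = 12.0            # total lost time per cycle (s), illustrative
y = [0.30, 0.25]          # critical flow ratios of the two phases, illustrative
Y = sum(y)

C0 = (1.5 * L_total + 5.0) / (1.0 - Y)          # Webster optimal cycle length (s)
greens = [(C0 - L_total) * yi / Y for yi in y]  # greens proportional to flow ratios
print(f"cycle ≈ {C0:.1f} s, effective greens ≈ {[round(g, 1) for g in greens]} s")
```

In the paper's formulation this closed-form estimate would be replaced by the bi-level search, with the lower-level fairness model feeding back into the cycle decision.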
Design considerations for the beamwaveguide retrofit of a ground antenna station
NASA Technical Reports Server (NTRS)
Veruttipong, T.; Withington, J.; Galindo-Israel, V.; Imbriale, W.; Bathker, D.
1987-01-01
A primary requirement of the NASA Deep Space Network (DSN) is to provide for optimal reception of very low signal levels. This requirement necessitates optimizing the quotient of antenna gain to total system operating noise level. Low overall system noise levels of 16 to 20 K are achieved by using cryogenically cooled preamplifiers closely coupled with an appropriately balanced antenna gain/spillover design. Additionally, high-power transmitters (up to 400 kW CW) are required for spacecraft emergency command and planetary radar experiments. The frequency bands allocated for deep space telemetry are narrow bands near 2.1 and 2.3 GHz (S-band), 7.1 and 8.4 GHz (X-band), and 32 and 34.5 GHz (Ka-band). In addition, planned operations for the Search for Extraterrestrial Intelligence (SETI) program require continuous low-noise receive coverage over the 1 to 10 GHz band. To summarize, DSN antennas must operate efficiently with low receive noise and high-power uplink over the 1 to 35 GHz band.
An automated method to find transition states using chemical dynamics simulations.
Martínez-Núñez, Emilio
2015-02-05
A procedure to automatically find the transition states (TSs) of a molecular system (MS) is proposed. It has two components: high-energy chemical dynamics simulations (CDS), and an algorithm that analyzes the geometries along the trajectories to find reactive pathways. Two levels of electronic structure calculations are involved: a low level (LL) is used to integrate the trajectories and also to optimize the TSs, and a higher level (HL) is used to reoptimize the structures. The method has been tested in three MSs: formaldehyde, formic acid (FA), and vinyl cyanide (VC), using MOPAC2012 and Gaussian09 to run the LL and HL calculations, respectively. Both the efficacy and efficiency of the method are very good, with around 15 TS structures optimized every 10 trajectories, which gives a total of 7, 12, and 83 TSs for formaldehyde, FA, and VC, respectively. The use of CDS makes it a powerful tool to unveil possible nonstatistical behavior of the system under study. © 2014 Wiley Periodicals, Inc.
A Web Centric Architecture for Deploying Multi-Disciplinary Engineering Design Processes
NASA Technical Reports Server (NTRS)
Woyak, Scott; Kim, Hongman; Mullins, James; Sobieszczanski-Sobieski, Jaroslaw
2004-01-01
Engineering organizations have a continuous need to improve their design processes. Current state-of-the-art techniques use computational simulations to predict design performance and optimize it through advanced design methods. These tools have been used mostly by individual engineers. This paper presents an architecture for achieving results at the organization level, beyond the individual level. The next set of gains in process improvement will come from improving the effective use of computers and software within a whole organization, not just by an individual. The architecture takes advantage of state-of-the-art capabilities to produce a Web-based system to carry engineering design into the future. To illustrate deployment of the architecture, a case study of implementing advanced multidisciplinary design optimization processes such as Bi-Level Integrated System Synthesis is discussed. Another example, rolling out a design process for Design for Six Sigma, is also described. Each example explains how an organization can effectively infuse engineering practice with new design methods and retain the knowledge over time.
A Modified Artificial Bee Colony Algorithm Application for Economic Environmental Dispatch
NASA Astrophysics Data System (ADS)
Tarafdar Hagh, M.; Baghban Orandi, Omid
2018-03-01
In conventional fossil-fuel power systems, the economic environmental dispatch (EED) problem optimally determines the output power of the generating units such that the total production cost and the emission level are minimized simultaneously, while all unit and system constraints are properly satisfied. To solve the EED problem, which is a non-convex optimization problem, a modified artificial bee colony (MABC) algorithm is proposed in this paper. Using the weighted-sum method, the algorithm is applied to two test systems, and the obtained results are compared with other reported results. The comparison clearly confirms the superiority and efficiency of the proposed method.
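The weighted-sum scalarization at the heart of the EED formulation can be sketched on a toy two-unit system. The coefficients are illustrative, and a closed-form equal-incremental-cost solution stands in for the MABC search (generator limits and the other constraints the paper handles are ignored here).

```python
import numpy as np

# Weighted-sum EED on a toy 2-unit system.
# Cost C_i(P) = a + b P + c P^2 ($/h); emission E_i(P) = d + e P + f P^2 (kg/h).
# Coefficients below are illustrative, not from the paper.
cost_b = np.array([2.0, 1.5]);  cost_c = np.array([0.010, 0.020])
emis_e = np.array([0.5, 1.0]);  emis_f = np.array([0.004, 0.002])
demand = 400.0   # MW
w = 0.7          # weight on cost vs. emission

# Weighted quadratic objective sum_i (b_i P_i + c_i P_i^2)
b = w * cost_b + (1 - w) * emis_e
c = w * cost_c + (1 - w) * emis_f

# Equal incremental-cost condition: 2 c_i P_i + b_i = lambda for all units,
# with sum(P_i) = demand  =>  lambda has a closed form.
lam = (demand + np.sum(b / (2 * c))) / np.sum(1 / (2 * c))
P = (lam - b) / (2 * c)
print("dispatch (MW):", np.round(P, 1), " total =", round(P.sum(), 1))
```

Sweeping the weight w from 0 to 1 traces an approximation of the cost-emission Pareto front, which is what the MABC explores for the non-convex case.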
NASA Astrophysics Data System (ADS)
Maser, Adam Charles
More electric aircraft systems, high power avionics, and a reduction in heat sink capacity have placed a larger emphasis on correctly satisfying aircraft thermal management requirements during conceptual design. Thermal management systems must be capable of dealing with these rising heat loads, while simultaneously meeting mission performance. Since all subsystem power and cooling requirements are ultimately traced back to the engine, the growing interactions between the propulsion and thermal management systems are becoming more significant. As a result, it is necessary to consider their integrated performance during the conceptual design of the aircraft gas turbine engine cycle to ensure that thermal requirements are met. This can be accomplished by using thermodynamic subsystem modeling and simulation while conducting the necessary design trades to establish the engine cycle. However, this approach also poses technical challenges associated with the existence of elaborate aircraft subsystem interactions. This research addresses these challenges through the creation of a parsimonious, transparent thermodynamic model of propulsion and thermal management systems performance with a focus on capturing the physics that have the largest impact on propulsion design choices. This modeling environment, known as Cycle Refinement for Aircraft Thermodynamically Optimized Subsystems (CRATOS), is capable of operating in on-design (parametric) and off-design (performance) modes and includes a system-level solver to enforce design constraints. A key aspect of this approach is the incorporation of physics-based formulations involving the concurrent usage of the first and second laws of thermodynamics, which are necessary to achieve a clearer view of the component-level losses across the propulsion and thermal management systems. 
This is facilitated by the direct prediction of the exergy destruction distribution throughout the system and the resulting quantification of available work losses over the time history of the mission. The characterization of the thermodynamic irreversibility distribution helps give the propulsion systems designer an absolute and consistent view of the tradeoffs associated with the design of the entire integrated system. Consequently, this leads directly to the question of the proper allocation of irreversibility across each of the components. The process of searching for the most favorable allocation of this irreversibility is the central theme of the research and must take into account production cost and vehicle mission performance. The production cost element is accomplished by including an engine component weight and cost prediction capability within the system model. The vehicle mission performance is obtained by directly linking the propulsion and thermal management model to a vehicle performance model and flying it through a mission profile. A canonical propulsion and thermal management systems architecture is then presented to experimentally test each element of the methodology separately: first the integrated modeling and simulation, then the irreversibility, cost, and mission performance considerations, and then finally the proper technique to perform the optimal allocation. A goal of this research is the description of the optimal allocation of system irreversibility to enable an engine cycle design with improved performance and cost at the vehicle-level. To do this, a numerical optimization was first used to minimize system-level production and operating costs by fixing the performance requirements and identifying the best settings for all of the design variables. 
There are two major drawbacks to this approach: It does not allow the designer to directly trade off the performance requirements and it does not allow the individual component losses to directly factor into the optimization. An irreversibility allocation approach based on the economic concept of resource allocation is then compared to the numerical optimization. By posing the problem in economic terms, exergy destruction is treated as a true common currency to barter for improved efficiency, cost, and performance. This allows the designer to clearly see how changes in the irreversibility distribution impact the overall system. The inverse design is first performed through a filtered Monte Carlo to allow the designer to view the irreversibility design space. The designer can then directly perform the allocation using the exergy destruction, which helps to place the design choices on an even thermodynamic footing. Finally, two use cases are presented to show how the irreversibility allocation approach can assist the designer. The first describes a situation where the designer can better address competing system-level requirements; the second describes a different situation where the designer can choose from a number of options to improve a system in a manner that is more robust to future requirements.
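The filtered Monte-Carlo step described above can be sketched as follows; the design variables, surrogate metrics, and feasibility filter are invented stand-ins for the thesis's propulsion and thermal management models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Filtered Monte-Carlo sketch: sample a toy 2-variable design space, keep
# only designs meeting a performance constraint, then inspect the trade space.
n = 20000
pr = rng.uniform(2.0, 40.0, n)         # notional overall pressure ratio
t4 = rng.uniform(1200.0, 2000.0, n)    # notional turbine inlet temperature (K)

# Toy surrogate metrics (illustrative, not from the thesis)
efficiency = 0.25 + 0.01 * np.log(pr) + 1e-4 * (t4 - 1200.0)
exergy_destruction = 1.0 - efficiency + 0.002 * pr   # lower is better

feasible = efficiency >= 0.35                        # performance filter
print(f"{feasible.sum()} of {n} sampled designs are feasible;")
print(f"min exergy destruction among them: {exergy_destruction[feasible].min():.3f}")
```

Plotting exergy destruction against the design variables for the feasible subset is the kind of view of the irreversibility design space the approach is meant to provide.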
SGO: A fast engine for ab initio atomic structure global optimization by differential evolution
NASA Astrophysics Data System (ADS)
Chen, Zhanghui; Jia, Weile; Jiang, Xiangwei; Li, Shu-Shen; Wang, Lin-Wang
2017-10-01
As high-throughput calculations and materials-genome approaches become more and more popular in materials science, the search for optimal ways to predict the atomic global-minimum structure is a high research priority. This paper presents a fast method for the global search of atomic structures at the ab initio level. The structures global optimization (SGO) engine consists of a high-efficiency differential evolution algorithm, accelerated local relaxation methods, and a plane-wave density functional theory code running on GPU machines. The purpose is to show what can be achieved by combining superior algorithms at the different levels of the searching scheme. SGO can search the global-minimum configurations of crystals, two-dimensional materials, and quantum clusters without prior symmetry restriction in a relatively short time (half an hour to several hours for systems with fewer than 25 atoms), thus making such a task a routine calculation. Comparisons with other existing methods such as minima hopping and genetic algorithms are provided. One motivation of our study is to investigate the properties of magnetic systems in different phases. The SGO engine is capable of surveying the local minima surrounding the global minimum, which provides information on the overall energy landscape of a given system. Using this capability we have found several new configurations for the test systems, explored their energy landscapes, and demonstrated that the magnetic moment of metal clusters fluctuates strongly among different local minima.
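The differential-evolution core of such a search can be sketched on a toy problem — finding the global minimum of a 3-atom Lennard-Jones cluster (energy -3 in reduced units) in place of an ab initio energy. The control parameters F and CR are typical textbook values, not those of SGO.

```python
import numpy as np

rng = np.random.default_rng(2)

def lj_energy(x):
    """Lennard-Jones energy (eps = sigma = 1) of atoms at coordinates x."""
    pts = x.reshape(-1, 3)
    e = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            r = np.linalg.norm(pts[i] - pts[j])
            e += 4.0 * (r**-12 - r**-6)
    return e

def differential_evolution(f, dim, pop=40, gens=600, F=0.6, CR=0.9):
    X = rng.uniform(-1.5, 1.5, (pop, dim))
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b - c)              # differential mutation
            cross = rng.random(dim) < CR          # binomial crossover mask
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fX[i]:                   # greedy selection
                X[i], fX[i] = trial, f_trial
    k = int(np.argmin(fX))
    return X[k], fX[k]

coords, energy = differential_evolution(lj_energy, dim=9)  # 3 atoms in 3-D
print(f"best LJ3 energy found ≈ {energy:.3f} (global minimum: -3.0)")
```

In SGO the inner energy evaluation is a GPU plane-wave DFT calculation followed by accelerated local relaxation, rather than this analytic potential.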
Kim, Sung-Chul; Lee, Hae-Kag; Lee, Yang-Sub; Cho, Jae-Hwan
2015-01-01
We sought to optimize image quality and reduce patient exposure dose through the proper activation combination of the automatic exposure control (AEC) chambers when examining the pelvic anteroposterior view using a standard human-body phantom. We set 7 combinations of the AEC chambers. The effective dose was obtained by measuring five times for each chamber activation combination. Five radiologists, each with more than five years of experience, evaluated the images through a picture archiving and communication system using a double-blind test, rating the 6 anatomical sites on a 3-point scale (improper, proper, perfect). When only the central chamber was activated, the effective dose was the highest, 0.287 mSv; it was lowest when only the top-left chamber was used, 0.165 mSv. After the subjective evaluation of the pelvic images by the five panel members was completed, there was no statistically meaningful difference among the 7 chamber combinations, and all had good image quality. When imaging the pelvic anteroposterior view with digital radiography, we were able to reduce the patient exposure dose by using the top-right chamber or the top two chambers.
Adaptive Critic Nonlinear Robust Control: A Survey.
Wang, Ding; He, Haibo; Liu, Derong
2017-10-01
Adaptive dynamic programming (ADP) and reinforcement learning are closely related when performing intelligent optimization. They are both regarded as promising methods involving the important components of evaluation and improvement, against the background of information technology such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, research on the robustness of ADP-based control strategies under uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design of continuous-time nonlinear systems. The ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey is beneficial to promote the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.
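For the linear-quadratic special case, the actor-critic (policy-iteration) structure the survey builds on reduces to Kleinman's iteration: a critic step that evaluates the current policy by solving a Lyapunov equation, and an actor step that improves the gain. The system matrices below are illustrative, and the model-based Lyapunov solve stands in for the data-driven critic updates the survey covers.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # open-loop stable toy plant
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.zeros((1, 2))                        # initial stabilizing gain
for _ in range(50):
    Acl = A - B @ K
    # Critic step: evaluate the policy by solving the Lyapunov equation
    #   Acl^T P + P Acl + Q + K^T R K = 0   (via Kronecker vectorization)
    M = np.kron(np.eye(2), Acl.T) + np.kron(Acl.T, np.eye(2))
    P = np.linalg.solve(M, -(Q + K.T @ R @ K).flatten()).reshape(2, 2)
    # Actor step: improve the gain from the critic's cost matrix
    K = np.linalg.solve(R, B.T @ P)
print("converged optimal gain K ≈", K)
```

At convergence P satisfies the algebraic Riccati equation, i.e. the critic has learned the optimal value function for this regulation problem.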
Multigrid one shot methods for optimal control problems: Infinite dimensional control
NASA Technical Reports Server (NTRS)
Arian, Eyal; Taasan, Shlomo
1994-01-01
The multigrid one-shot method for optimal control problems governed by elliptic systems is introduced for the infinite-dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high-pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems just a few times.
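The Lagrange-multiplier sensitivity computation at the core of the method can be sketched on a single grid (omitting the multigrid acceleration) for a discrete 1-D elliptic control problem. The target state and step size are illustrative choices for this toy setup.

```python
import numpy as np

# Adjoint-gradient sketch for a discrete elliptic control problem:
# state A u = B c, objective J = 1/2 ||u - u_d||^2, distributed control c.
n = 50
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n); off = -np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2  # 1-D Laplacian
B = np.eye(n)                          # control enters as a distributed source
x = np.linspace(h, 1.0 - h, n)
u_d = np.sin(np.pi * x)                # target state

c = np.zeros(n)
step = 100.0                           # descent step, hand-tuned to this scaling
for _ in range(200):
    u = np.linalg.solve(A, B @ c)            # state solve: A u = B c
    lam = np.linalg.solve(A.T, u - u_d)      # adjoint solve (Lagrange multiplier)
    c -= step * (B.T @ lam)                  # sensitivity-gradient descent step

misfit = 0.5 * np.sum((np.linalg.solve(A, B @ c) - u_d) ** 2)
print("final misfit:", misfit)
```

The one-shot method accelerates exactly this kind of iteration by updating the different scales of c on coarse grids instead of relying on a single fixed step.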
Optimizing Metabolite Production Using Periodic Oscillations
Sowa, Steven W.; Baldea, Michael; Contreras, Lydia M.
2014-01-01
Methods for improving microbial strains for metabolite production remain the subject of constant research. Traditionally, metabolic tuning has been mostly limited to knockouts or overexpression of pathway genes and regulators. In this paper, we establish a new method to control metabolism by inducing optimally tuned time-oscillations in the levels of selected clusters of enzymes, as an alternative strategy to increase the production of a desired metabolite. Using an established kinetic model of the central carbon metabolism of Escherichia coli, we formulate this concept as a dynamic optimization problem over an extended, but finite time horizon. Total production of a metabolite of interest (in this case, phosphoenolpyruvate, PEP) is established as the objective function and time-varying concentrations of the cellular enzymes are used as decision variables. We observe that by varying, in an optimal fashion, levels of key enzymes in time, PEP production increases significantly compared to the unoptimized system. We demonstrate that oscillations can improve metabolic output in experimentally feasible synthetic circuits. PMID:24901332
Polarimetry noise in fiber-based optical coherence tomography instrumentation
Zhang, Ellen Ziyi; Vakoc, Benjamin J.
2011-01-01
High noise levels in fiber-based polarization-sensitive optical coherence tomography (PS-OCT) have broadly limited its clinical utility. In this study we investigate the contribution of polarization mode dispersion (PMD) to the polarimetry noise. We develop numerical models of the PS-OCT system including PMD and validate these models with empirical data. Using these models, we provide a framework for predicting noise levels, for processing signals to reduce noise, and for designing an optimized system. PMID:21935044
CMDS9: Continuum Mechanics and Discrete Systems 9, Istanbul Technical University, Macka. Abstracts.
1998-07-01
that can only be achieved via cooperative behavior of the cells. It can be viewed as the action of a singular feedback between the micro-level (the...optimal micro-geometries of multicomponent mixtures. Also, we discuss dynamics of a transition in natural unstable systems that leads to a micro...failure process. This occurs once the impact load reaches a critical threshold level and results in a collection of oriented matrix micro-cracks
Immersed Boundary Methods for Optimization of Strongly Coupled Fluid-Structure Systems
NASA Astrophysics Data System (ADS)
Jenkins, Nicholas J.
Conventional methods for the design of tightly coupled multidisciplinary systems, such as fluid-structure interaction (FSI) problems, traditionally rely on manual revisions informed by a loosely coupled linearized analysis. These approaches are inaccurate for a multitude of applications, and they require an intimate understanding of the assumptions and limitations of the procedure in order to soundly optimize the design. Computational optimization, in particular topology optimization, has been shown to yield remarkable results for problems in solid mechanics using density interpolation schemes. In the context of FSI, however, well-defined boundaries play a key role in both the design problem and the mechanical model. Density methods neither accurately represent the material boundary nor provide a suitable platform to apply appropriate interface conditions. This thesis presents a new framework for shape and topology optimization of FSI problems that uses the Level Set Method (LSM) for the design problem, describing the geometry evolution in the optimization process. The Extended Finite Element Method (XFEM) is combined with a fictitiously deforming fluid domain (a stationary arbitrary Lagrangian-Eulerian method) to predict the FSI response. The novelty of the proposed approach lies in the fact that the XFEM explicitly captures the material boundary defined by the level set iso-surface. Moreover, the XFEM provides a means to discretize the governing equations, and weak immersed boundary conditions are applied with Nitsche's method to couple the fields. The flow is predicted by the incompressible Navier-Stokes equations, and a finite-deformation solid model is developed and tested for both hyperelastic and linear elastic problems. Transient and stationary numerical examples are presented to validate the FSI model and the numerical solver approach.
Pertaining to the optimization of FSI problems, the parameters of the discretized level set function are defined as explicit functions of the optimization variables, and the parametric optimization problem is solved by nonlinear programming methods. The gradients of the objective and constraints are computed by the adjoint method for the global monolithic fluid-solid system. Two types of design problems are explored for optimization of the fluid-structure response: 1) the internal structural topology is varied, preserving the fluid-solid interface geometry, and 2) the fluid-solid interface is manipulated directly, which leads to simultaneously configuring both the internal structural topology and the outer mold shape. The numerical results show that the LSM-XFEM approach is well suited for designing practical applications, while at the same time reducing the requirement for highly refined mesh resolution compared to traditional density methods. However, these results also emphasize the need for a more robust embedded boundary condition framework. Further, the LSM can exhibit greater dependence on the initial design seeding and can impede design convergence. In particular, for the strongly coupled FSI analysis developed here, the thinning and eventual removal of structural members can cause jumps in the evolution of the optimization functions.
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.
1983-01-01
A general algorithm is proposed which carries out the design process iteratively, starting at the top of the hierarchy and proceeding downward. Each subproblem is optimized separately for fixed controls from higher-level subproblems. An optimum sensitivity analysis is then performed which determines the sensitivity of the subproblem design to changes in higher-level subproblem controls. The resulting sensitivity derivatives are used to construct constraints which force the controlling subproblems into choosing their own designs so as to improve the lower-level subproblem designs while satisfying their own constraints. The applicability of the proposed algorithm is demonstrated by devising a four-level hierarchy to perform the simultaneous aerodynamic and structural design of a high-performance sailplane wing for maximum cross-country speed. Finally, the concepts discussed are applied to the two-level minimum-weight structural design of the sailplane wing. The numerical experiments show that discontinuities in the sensitivity derivatives may delay convergence, but that the algorithm is robust enough to overcome these discontinuities and produce low-weight feasible designs, regardless of whether the optimization is started from the feasible space or the infeasible one.
Zhang, Dezhi; Li, Shuangyan
2014-01-01
This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for a regional logistics network with environmental impact considerations. The proposed model addresses the interaction among the three logistics players in a fully competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economies of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demands of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete on logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given the logistics operators' service fares and frequencies. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the optimization model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209
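The multinomial-logit assignment underlying the lower-level user choice can be sketched as follows; the fares, frequencies, and disutility weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

fare = np.array([10.0, 12.0, 9.0])   # operators' service fares (illustrative)
freq = np.array([6.0, 10.0, 4.0])    # service frequencies (departures/day)
theta_fare, theta_wait = 0.3, 2.0    # illustrative disutility weights

# Perceived disutility: fare plus expected waiting time (half the headway)
disutility = theta_fare * fare + theta_wait * (0.5 / freq)

# Multinomial-logit split of demand across the competing operators
probs = np.exp(-disutility) / np.sum(np.exp(-disutility))
demand = 1000.0
print("operator demand shares:", np.round(probs * demand))
```

In the full model these shares feed back into the operators' fare/frequency competition and, one level up, into the authorities' welfare maximization.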
Liu, Zhen-Fei; Egger, David A.; Refaely-Abramson, Sivan; ...
2017-02-21
The alignment of the frontier orbital energies of an adsorbed molecule with the substrate Fermi level at metal-organic interfaces is a fundamental observable of significant practical importance in nanoscience and beyond. Typical density functional theory calculations, especially those using local and semi-local functionals, often underestimate level alignment, leading to inaccurate electronic structure and charge transport properties. Here, we develop a new fully self-consistent predictive scheme to accurately compute level alignment at certain classes of complex heterogeneous molecule-metal interfaces based on optimally tuned range-separated hybrid functionals. Starting from a highly accurate description of the gas-phase electronic structure, our method by construction captures important nonlocal surface polarization effects via tuning of the long-range screened exchange in a range-separated hybrid in a non-empirical and system-specific manner. We implement this functional in a plane-wave code and apply it to several physisorbed and chemisorbed molecule-metal interface systems. Our results are in quantitative agreement with experiments, for both the level alignment and work function changes. This approach constitutes a new practical scheme for accurate and efficient calculations of the electronic structure of molecule-metal interfaces.
NASA Astrophysics Data System (ADS)
Liu, Zhen-Fei; Egger, David A.; Refaely-Abramson, Sivan; Kronik, Leeor; Neaton, Jeffrey B.
2017-03-01
The alignment of the frontier orbital energies of an adsorbed molecule with the substrate Fermi level at metal-organic interfaces is a fundamental observable of significant practical importance in nanoscience and beyond. Typical density functional theory calculations, especially those using local and semi-local functionals, often underestimate the level alignment, leading to inaccurate electronic structure and charge transport properties. In this work, we develop a new fully self-consistent predictive scheme to accurately compute the level alignment at certain classes of complex heterogeneous molecule-metal interfaces based on optimally tuned range-separated hybrid functionals. Starting from a highly accurate description of the gas-phase electronic structure, our method by construction captures important nonlocal surface polarization effects via tuning of the long-range screened exchange in a range-separated hybrid in a non-empirical and system-specific manner. We implement this functional in a plane-wave code and apply it to several physisorbed and chemisorbed molecule-metal interface systems. Our results are in quantitative agreement with experiments, for both the level alignment and the work function changes. Our approach constitutes a new practical scheme for accurate and efficient calculations of the electronic structure of molecule-metal interfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
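The capacity of the binomial memory component can be illustrated with the classical binomial (revolve) bound; the function names and the simple linear scaling assumed for the disk level below are illustrative, not the authors' actual bandwidth-aware scheme.

```python
from math import comb

def revolve_bound(snaps, repeats):
    """Binomial checkpointing bound: the maximum number of time steps
    whose adjoint can be reversed with `snaps` stored checkpoints while
    recomputing any forward step at most `repeats` times."""
    return comb(snaps + repeats, snaps)

def two_level_bound(disk_checkpoints, mem_snaps, repeats):
    """Toy model of a two-level scheme: each disk checkpoint anchors one
    segment that is adjoined independently with the in-memory binomial
    scheme, so reversible step count grows linearly with disk capacity
    (ignoring disk bandwidth limits, which the real scheme models)."""
    return disk_checkpoints * revolve_bound(mem_snaps, repeats)

print(revolve_bound(3, 3))        # → 20
print(two_level_bound(10, 3, 3))  # → 200
```

The rapid growth of the binomial coefficient is what makes memory checkpointing effective for long time integrations, and the disk level extends it further.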
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
Applying operations research to optimize a novel population management system for cancer screening.
Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J
2014-02-01
To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management.
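The next-event time-advance mechanism can be sketched as a minimal multiserver queue simulation; this is a generic single-phase toy model with hypothetical names, not TopCare's multiserver, multiphase network.

```python
import heapq
import random

def simulate_queue(arrival_rate, service_rate, servers, horizon, seed=0):
    """Minimal next-event time-advance simulation of a c-server queue
    with exponential interarrival and service times. Returns the number
    of completed services within the time horizon."""
    rng = random.Random(seed)
    # Event list keyed by time; "arrival" sorts before "departure" on ties.
    events = [(rng.expovariate(arrival_rate), "arrival")]
    busy = waiting = served = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            # Schedule the next arrival, then seize a server or join the queue.
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if busy < servers:
                busy += 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                waiting += 1
        else:  # departure: free the server or pull the next waiting patient
            served += 1
            if waiting > 0:
                waiting -= 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                busy -= 1
    return served

completed = simulate_queue(arrival_rate=1.0, service_rate=2.0, servers=1, horizon=1000.0)
```

Staffing experiments like those described would vary `servers` (delegates or navigators) and compare a backlog metric across runs.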
NASA Astrophysics Data System (ADS)
Villanueva Perez, Carlos Hernan
Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and accurate modeling of the physical response at the design boundaries. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converged to intuitive designs that closely resemble the results of previous 2D and density-based studies.
A new design approach to innovative spectrometers. Case study: TROPOLITE
NASA Astrophysics Data System (ADS)
Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob
2014-05-01
Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to the final system, is already mostly automated. However, this loop is part of a wider loop which is not. This wider loop starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine, making the design tasks extremely efficient. The optical designer's job is then not to design optical systems, but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present here this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.
Witt, Adam; Magee, Timothy; Stewart, Kevin; ...
2017-08-10
Managing energy, water, and environmental priorities and constraints within a cascade hydropower system is a challenging multiobjective optimization effort that requires advanced modeling and forecasting tools. Within the mid-Columbia River system, there is currently a lack of specific solutions for predicting how coordinated operational decisions can mitigate the impacts of total dissolved gas (TDG) supersaturation while satisfying multiple additional policy and hydropower generation objectives. In this study, a reduced-order TDG uptake equation is developed that predicts tailrace TDG at seven hydropower facilities on the mid-Columbia River. The equation is incorporated into a general multiobjective river, reservoir, and hydropower optimization tool as a prioritized operating goal within a broader set of system-level objectives and constraints. A test case is presented to assess the response of TDG and hydropower generation when TDG supersaturation is optimized to remain under state water-quality standards. Satisfaction of TDG as an operating goal is highly dependent on whether constraints that limit TDG uptake are implemented at a higher priority than generation requests. According to the model, an opportunity exists to reduce TDG supersaturation and meet hydropower generation requirements by shifting spillway flows to different time periods. In conclusion, a coordinated effort between all project owners is required to implement systemwide optimized solutions that satisfy the operating policies of all stakeholders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witt, Adam; Magee, Timothy; Stewart, Kevin
Managing energy, water, and environmental priorities and constraints within a cascade hydropower system is a challenging multiobjective optimization effort that requires advanced modeling and forecasting tools. Within the mid-Columbia River system, there is currently a lack of specific solutions for predicting how coordinated operational decisions can mitigate the impacts of total dissolved gas (TDG) supersaturation while satisfying multiple additional policy and hydropower generation objectives. In this study, a reduced-order TDG uptake equation is developed that predicts tailrace TDG at seven hydropower facilities on the mid-Columbia River. The equation is incorporated into a general multiobjective river, reservoir, and hydropower optimization tool as a prioritized operating goal within a broader set of system-level objectives and constraints. A test case is presented to assess the response of TDG and hydropower generation when TDG supersaturation is optimized to remain under state water-quality standards. Satisfaction of TDG as an operating goal is highly dependent on whether constraints that limit TDG uptake are implemented at a higher priority than generation requests. According to the model, an opportunity exists to reduce TDG supersaturation and meet hydropower generation requirements by shifting spillway flows to different time periods. In conclusion, a coordinated effort between all project owners is required to implement systemwide optimized solutions that satisfy the operating policies of all stakeholders.
NASA Astrophysics Data System (ADS)
Goltsch, Mandy
Design denotes the transformation of an identified need to its physical embodiment in a traditionally iterative approach of trial and error. Conceptual design plays a prominent role, but an almost infinite number of possible solutions at the outset of design necessitates fast evaluations. The corresponding practice of empirical equations and low-fidelity analyses becomes obsolete in the light of novel concepts. Ever increasing system complexity and resource scarcity mandate new approaches to adequately capture system characteristics. Contemporary concerns in atmospheric science and homeland security created an operational need for unconventional configurations. Unmanned long-endurance flight at high altitudes offers a unique showcase for the exploration of new design spaces and the incidental deficit of conceptual modeling and simulation capabilities. Structural and aerodynamic performance requirements necessitate lightweight materials and high aspect ratio wings, resulting in distinct structural and aeroelastic response characteristics that stand in close correlation with natural vibration modes. The present research effort revolves around the development of an efficient and accurate optimization algorithm for high aspect ratio wings subject to natural frequency constraints. Foundational cornerstones are beam dimensional reduction and modal perturbation redesign. Local and global analyses inherent to the former suggest corresponding levels of local and global optimization. The present approach departs from this suggestion. It introduces local-level surrogate models to enable a methodology that consists of multi-level analyses feeding into a single-level optimization. The innovative heart of the new algorithm originates in small perturbation theory. A sequence of small perturbation solutions allows the optimizer to make incremental movements within the design space. It enables a directed search that is free of costly gradients.
System matrices are decomposed based on a Timoshenko stiffness effect separation. The formulation of respective linear changes falls back on surrogate models that approximate cross sectional properties. Corresponding functional responses are readily available. Their direct use by the small perturbation based optimizer ensures constitutive laws and eliminates a previously necessary optimization at the local level. The scope of the present work is derived from an existing configuration such as a conceptual baseline or a prototype that experiences aeroelastic instabilities. Due to the lack of respective design studies in the traditional design process it is not uncommon for an initial wing design to have such stability problems. The developed optimization scheme allows the effective redesign of high aspect ratio wings subject to natural frequency objectives. Its successful application is demonstrated by three separate optimization studies. The implementation results of all three studies confirm that the gradient liberation of the new methodology brings about great computational savings. A generic wing study is used to indicate the connection between the proposed methodology and the aeroelastic stability problems outlined in the motivation. It is also used to illustrate an important practical aspect of structural redesign, i.e., a minimum departure from the existing baseline configuration. The proposed optimization scheme is naturally conducive to this practical aspect by using a minimum change optimization criterion. However, only an elemental formulation truly enables a minimum change solution. It accounts for the spanwise significance of a structural modification to the mode of interest. This idea of localized reinforcement greatly benefits the practical realization of structural redesign efforts. The implementation results also highlight the fundamental limitation of the proposed methodology. 
The exclusive consideration of mass and stiffness effects on modal response characteristics disregards other disciplinary problems such as allowable stresses or buckling loads. Both are of central importance to the structural integrity of an aircraft but are currently not accounted for in the proposed optimization scheme. The concluding discussion thus outlines the need for respective constraints and/or additional analyses to capture all requirements necessary for a comprehensive structural redesign study.
Comparison of cryogenic low-pass filters.
Thalmann, M; Pernau, H-F; Strunk, C; Scheer, E; Pietsch, T
2017-11-01
Low-temperature electronic transport measurements with high energy resolution require both effective low-pass filtering of high-frequency input noise and an optimized thermalization of the electronic system of the experiment. In recent years, elaborate filter designs have been developed for cryogenic low-level measurements, driven by the growing interest in fundamental quantum-physical phenomena at energy scales corresponding to temperatures in the few-millikelvin regime. However, a single filter concept is often insufficient to thermalize the electronic system to the cryogenic bath and eliminate spurious high-frequency noise. Moreover, the available concepts often provide inadequate filtering to operate at temperatures below 10 mK, which are now routinely available in dilution cryogenic systems. Herein we provide a comprehensive analysis of commonly used filter types, introduce a novel compact filter type based on ferrite compounds optimized for the frequency range above 20 GHz, and develop an improved filtering scheme providing adaptable broad-band low-pass characteristics for cryogenic low-level and quantum measurement applications at temperatures down to a few millikelvin.
Comparison of cryogenic low-pass filters
NASA Astrophysics Data System (ADS)
Thalmann, M.; Pernau, H.-F.; Strunk, C.; Scheer, E.; Pietsch, T.
2017-11-01
Low-temperature electronic transport measurements with high energy resolution require both effective low-pass filtering of high-frequency input noise and an optimized thermalization of the electronic system of the experiment. In recent years, elaborate filter designs have been developed for cryogenic low-level measurements, driven by the growing interest in fundamental quantum-physical phenomena at energy scales corresponding to temperatures in the few-millikelvin regime. However, a single filter concept is often insufficient to thermalize the electronic system to the cryogenic bath and eliminate spurious high-frequency noise. Moreover, the available concepts often provide inadequate filtering to operate at temperatures below 10 mK, which are now routinely available in dilution cryogenic systems. Herein we provide a comprehensive analysis of commonly used filter types, introduce a novel compact filter type based on ferrite compounds optimized for the frequency range above 20 GHz, and develop an improved filtering scheme providing adaptable broad-band low-pass characteristics for cryogenic low-level and quantum measurement applications at temperatures down to a few millikelvin.
NASA Technical Reports Server (NTRS)
Clem, Kirk A.; Nelson, George J.; Mesmer, Bryan L.; Watson, Michael D.; Perry, Jay L.
2016-01-01
When optimizing the performance of complex systems, a logical area of concern is improving the efficient use of energy. The energy available for a system to perform work is defined as the system's energy content. Interactions between a system's subsystems and the surrounding environment can be accounted for by understanding the various subsystem energy efficiencies. An energy balance of reactants and products, with their enthalpies and entropies, can be used to represent a chemical process. Heat transfer energy represents heat loads, and flow energy represents system flows and filters. These elements allow for a system-level energy balance. The energy balance equations are developed for the subsystems of the Environmental Control and Life Support (ECLS) system aboard the International Space Station (ISS). Using these equations with system information allows the calculation of the energy efficiency of the system, enabling comparisons of the ISS ECLS system to other systems and supporting an integrated systems analysis for system optimization.
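A system-level efficiency rollup of the kind described can be sketched as follows; the function and the numbers are illustrative assumptions, not the ECLS balance equations themselves.

```python
def system_efficiency(subsystem_flows):
    """Overall energy efficiency from per-subsystem energy flows.

    subsystem_flows: list of (energy_in, useful_energy_out) pairs in
    consistent units (e.g., watts). Losses to the environment are the
    difference between the two totals.
    """
    total_in = sum(e_in for e_in, _ in subsystem_flows)
    total_out = sum(e_out for _, e_out in subsystem_flows)
    return total_out / total_in

# Hypothetical two-subsystem example: 150 W in, 90 W of useful output.
eta = system_efficiency([(100.0, 60.0), (50.0, 30.0)])
print(eta)  # → 0.6
```

Such a rollup is what enables the cross-system comparisons mentioned above: two systems with different internal structure can be compared on a single efficiency figure.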
Power plant maintenance scheduling using ant colony optimization: an improved formulation
NASA Astrophysics Data System (ADS)
Foong, Wai Kuan; Maier, Holger; Simpson, Angus
2008-04-01
It is common practice in the hydropower industry to either shorten the maintenance duration or to postpone maintenance tasks in a hydropower system when there is expected unserved energy based on current water storage levels and forecast storage inflows. It is therefore essential that a maintenance scheduling optimizer can incorporate the options of shortening the maintenance duration and/or deferring maintenance tasks in the search for practical maintenance schedules. In this article, an improved ant colony optimization-power plant maintenance scheduling optimization (ACO-PPMSO) formulation that considers such options in the optimization process is introduced. As a result, both the optimum commencement time and the optimum outage duration are determined for each of the maintenance tasks that need to be scheduled. In addition, a local search strategy is presented in this article to boost the robustness of the algorithm. When tested on a five-station hydropower system problem, the improved formulation is shown to be capable of allowing shortening of maintenance duration in the event of expected demand shortfalls. In addition, the new local search strategy is also shown to have significantly improved the optimization ability of the ACO-PPMSO algorithm.
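The pheromone evaporation-and-deposit cycle at the core of any ACO variant can be sketched as follows; this is a generic elitist update with illustrative names, not the ACO-PPMSO formulation itself.

```python
def aco_pheromone_step(pheromone, schedule_costs, evaporation=0.1, q=1.0):
    """One ACO pheromone update: evaporate all trails, then reinforce
    the option that produced the cheapest schedule this iteration.

    pheromone: dict mapping a scheduling option (e.g., a commencement
    time or outage duration) to its trail level.
    schedule_costs: dict mapping each option to the cost of the best
    schedule built using it this iteration.
    """
    for option in pheromone:
        pheromone[option] *= (1.0 - evaporation)
    best = min(schedule_costs, key=schedule_costs.get)
    pheromone[best] += q / schedule_costs[best]  # cheaper => bigger deposit
    return pheromone

# Hypothetical choice between deferring a task and shortening its outage.
trails = {"defer": 1.0, "shorten": 1.0}
trails = aco_pheromone_step(trails, {"defer": 4.0, "shorten": 2.0})
```

Over many iterations, options that repeatedly appear in good schedules accumulate pheromone and are sampled more often, which is how the improved formulation can learn to shorten durations when shortfalls are expected.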
A system-level view of optimizing high-channel-count wireless biosignal telemetry.
Chandler, Rodney J; Gibson, Sarah; Karkare, Vaibhav; Farshchi, Shahin; Marković, Dejan; Judy, Jack W
2009-01-01
In this paper we perform a system-level analysis of a wireless biosignal telemetry system. We analyze each major system component (e.g., analog front end, analog-to-digital converter, digital signal processor, and wireless link), considering physical, algorithmic, and design limitations. Since there is a wide range of applications for wireless biosignal telemetry systems, each with its own unique set of requirements for key parameters (e.g., channel count, power dissipation, noise level, number of bits, etc.), our analysis is equally broad. The net result is a set of plots in which the power dissipation of each component, and of the system as a whole, is plotted as a function of the number of channels for different architectural strategies. These results are also compared to existing implementations of complete wireless biosignal telemetry systems.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing the modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g., soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in the optimization process.
The Sizing and Optimization Language, (SOL): Computer language for design problems
NASA Technical Reports Server (NTRS)
Lucas, Stephen H.; Scotti, Stephen J.
1988-01-01
The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.
NASA Astrophysics Data System (ADS)
Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong
2016-09-01
Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon with unknown time-delays in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function is to minimize the biological robustness. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements, and a cost sensitivity constraint for ensuring that an acceptable level of system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm is proposed to solve these nonlinear programming sub-problems based on the filled function method.
Finally, it is observed that the obtained optimal estimates for the time-delays are highly satisfactory via numerical simulations.
Shale gas wastewater management under uncertainty.
Zhang, Xiaodong; Sun, Alexander Y; Duncan, Ian J
2016-01-01
This work presents an optimization framework for evaluating different wastewater treatment/disposal options for water management during hydraulic fracturing (HF) operations. This framework takes into account both cost-effectiveness and system uncertainty. HF has enabled rapid development of shale gas resources. However, wastewater management has been one of the most contentious and widely publicized issues in shale gas production. The flowback and produced water (known as FP water) generated by HF may pose a serious risk to the surrounding environment and public health because this wastewater usually contains many toxic chemicals and high levels of total dissolved solids (TDS). Various treatment/disposal options are available for FP water management, such as underground injection, hazardous wastewater treatment plants, and/or reuse. In order to cost-effectively plan FP water management practices, including allocating FP water to different options and planning treatment facility capacity expansion, an optimization model named UO-FPW is developed in this study. The UO-FPW model can handle uncertain information expressed in the form of fuzzy membership functions and probability density functions in the modeling parameters. The UO-FPW model is applied to a representative hypothetical case study to demonstrate its applicability in practice. The modeling results reflect the tradeoffs between the economic objective (i.e., minimizing total system cost) and system reliability (i.e., the risk of violating fuzzy and/or random constraints while meeting FP water treatment/disposal requirements). Using the developed optimization model, decision makers can make and adjust appropriate FP water management strategies by refining the feasibility degrees for fuzzy constraints and the probability levels for random constraints if the solutions are not satisfactory.
The optimization model can be easily integrated into decision support systems for shale oil/gas lifecycle management.
Defining operating rules for mitigation of drought effects on water supply systems
NASA Astrophysics Data System (ADS)
Rossi, G.; Caporali, E.; Garrote, L.; Federici, G. V.
2012-04-01
Reservoirs play a pivotal role in the regulation and management of water supply systems, especially during drought periods, when optimization of reservoir releases under drought mitigation rules is particularly required. The hydrologic state of the system is evaluated by defining threshold values, expressed in probabilistic terms. Risk-deficit curves are used to reduce the ensemble of possible rules for simulation. Threshold values can be linked to specific actions in an operational context at different levels of severity, i.e., normal, pre-alert, alert, and emergency scenarios. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are defined considering the probability of satisfying a given fraction of the demand over a certain time horizon, and are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The threshold levels determine curves that define reservoir releases as a function of existing storage volume, and a demand reduction is associated with each threshold level. The rules to manage the system in drought conditions, the threshold levels, and the reductions are optimized using long-term simulations with different hypothesized states of the system. Synthetic flow sequences with the same statistical properties as the historical ones are produced to evaluate the system behaviour. The performance of different reduction values and different threshold curves is evaluated using various objective functions and performance indices. The methodology is applied to the Firenze-Prato-Pistoia urban area in central Tuscany, Italy. The considered demand centres are Firenze and Bagno a Ripoli, which together have, according to the 2001 ISTAT census, a total of 395,000 inhabitants.
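A threshold-based drought rule of the kind described can be sketched as follows; the threshold values and reduction fractions are illustrative assumptions, not the optimized values from the study.

```python
def release_rule(storage, thresholds, demand):
    """Map current reservoir storage to a drought severity scenario and
    the corresponding (reduced) delivery.

    thresholds: descending storage levels triggering pre-alert, alert,
    and emergency scenarios. The demand-reduction fractions below are
    illustrative; in practice they are decision variables tuned by
    long-term simulation.
    """
    reductions = {"normal": 0.0, "pre-alert": 0.1, "alert": 0.25, "emergency": 0.5}
    pre_alert, alert, emergency = thresholds
    if storage <= emergency:
        scenario = "emergency"
    elif storage <= alert:
        scenario = "alert"
    elif storage <= pre_alert:
        scenario = "pre-alert"
    else:
        scenario = "normal"
    return scenario, demand * (1.0 - reductions[scenario])

# Storage of 40 units against thresholds (70, 50, 30): alert scenario.
print(release_rule(40.0, (70.0, 50.0, 30.0), 100.0))  # → ('alert', 75.0)
```

Optimizing such a rule amounts to searching over the threshold levels and reduction fractions while scoring each candidate with the simulation-based performance indices mentioned above.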
Shimansky, Yury P; Kang, Tao; He, Jiping
2004-02-01
A computational model of a learning system (LS) is described that acquires the knowledge and skill necessary for optimal control of multisegmental limb dynamics (the controlled object, or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of the CO. The system's functional architecture comprises several adaptive components, each of which incorporates a number of mapping functions approximated with artificial neural nets. Besides the internal model of the CO's dynamics and the adaptive controller that computes the control law, the LS includes a new type of internal model, of the minimal cost (IM(mc)) of moving the controlled object between a pair of states. That internal model appears critical for the LS's capacity to develop an optimal movement trajectory. The IM(mc) interacts with the adaptive controller in a cooperative manner: the controller provides an initial approximation of an optimal control action, which is further optimized in real time based on the IM(mc), and the IM(mc) in turn provides information for updating the controller. The LS's performance was tested on the task of center-out reaching to eight randomly selected targets with a 2-DOF limb model. The LS reached an optimal level of performance in a few tens of trials. It also quickly adapted to movement perturbations produced by two different types of external force field. The results suggest that the proposed design of a self-optimized control system can serve as a basis for modeling motor learning that includes the formation and adaptive modification of the plan of a goal-directed movement.
Optimized coordinates in vibrational coupled cluster calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomsen, Bo; Christiansen, Ove; Yagi, Kiyoshi
The use of variationally optimized coordinates, which minimize the vibrational self-consistent field (VSCF) ground state energy with respect to orthogonal transformations of the coordinates, has recently been shown to improve the convergence of vibrational configuration interaction (VCI) towards the exact full VCI [K. Yagi, M. Keçeli, and S. Hirata, J. Chem. Phys. 137, 204118 (2012)]. The present paper proposes an incorporation of optimized coordinates into the vibrational coupled cluster (VCC), which has in the past been shown to outperform VCI in approximate calculations where similar restricted state spaces are employed in VCI and VCC. An embarrassingly parallel algorithm for variational optimization of coordinates for VSCF is implemented and the resulting coordinates and potentials are introduced into a VCC program. The performance of VCC in optimized coordinates (denoted oc-VCC) is examined through pilot applications to water, formaldehyde, and a series of water clusters (dimer, trimer, and hexamer) by comparing the calculated vibrational energy levels with those of the conventional VCC in normal coordinates and VCI in optimized coordinates. For water clusters, in particular, oc-VCC is found to gain orders of magnitude improvement in the accuracy, exemplifying that the combination of optimized coordinates localized to each monomer with the size-extensive VCC wave function provides a supreme description of systems consisting of weakly interacting sub-systems.
Decentralized regulation of dynamic systems. [for controlling large scale linear systems]
NASA Technical Reports Server (NTRS)
Chu, K. C.
1975-01-01
A special class of decentralized control problem is discussed in which the objectives of the control agents are to steer the state of the system to desired levels. Each agent is concerned about certain aspects of the state of the entire system. The state and control equations are given for linear time-invariant systems. Stability and coordination, and the optimization of decentralized control are analyzed, and the information structure design is presented.
NASA Technical Reports Server (NTRS)
Sander, Erik J.; Gosdin, Dennis R.
1992-01-01
Engineers regularly analyze SSME ground test and flight data with respect to engine systems performance. Recently, a redesigned SSME powerhead was introduced to engine-level testing, in part to increase engine operational margins through optimization of the engine internal environment. This paper presents an overview of the engine systems analysis results obtained by MSFC personnel and the conclusions reached from initial engine-level testing of the redesigned powerhead, as well as further redesigns incorporated to eliminate accelerated degradation of the main injector baffles and the main combustion chamber hot-gas wall. The conclusions are drawn from instrumented engine ground test data and hardware integrity analysis reports, and address initial engine test results with respect to the apparent design change effects on engine system and component operation.
Information pricing based on trusted system
NASA Astrophysics Data System (ADS)
Liu, Zehua; Zhang, Nan; Han, Hongfeng
2018-05-01
Personal information has become a valuable commodity in today's society, so our goal is to develop realistic price points and a pricing system. First, we improve the existing BLP system to prevent cascading incidents and design a 7-layer model. From the cost of encryption in each layer, we develop PI price points. Next, we use association rule mining algorithms from data mining to calculate the importance of information, in order to optimize the informational hierarchies of different attribute types within a multi-level trusted system. Finally, we use a normal distribution model to predict the encryption level distribution for users in different classes, and then calculate information prices through a linear programming model based on that distribution.
Application of Boiler Op for combustion optimization at PEPCO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maines, P.; Williams, S.; Levy, E.
1997-09-01
Title IV requires the reduction of NOx at all stations within the PEPCO system. To assist PEPCO plant personnel in achieving low heat rates while meeting NOx targets, Lehigh University's Energy Research Center and PEPCO developed a new combustion optimization software package called Boiler Op. The Boiler Op code contains an expert system, neural networks and an optimization algorithm. The expert system guides the plant engineer through a series of parametric boiler tests, required for the development of a comprehensive boiler database. The data are then analyzed by the neural networks and optimization algorithm to provide results on the boiler control settings which yield the best possible heat rate at a target NOx level or produce minimum NOx. Boiler Op has been used at both Potomac River and Morgantown Stations to help PEPCO engineers optimize combustion. With the use of Boiler Op, Morgantown Station operates under low NOx restrictions and continues to achieve record heat rate values, similar to pre-retrofit conditions. Potomac River Station achieves the regulatory NOx limit through the use of Boiler Op recommended control settings and without low-NOx burners. Importantly, software like Boiler Op cannot be used alone; its application must be in concert with human intelligence to ensure unit safety, reliability and accurate data collection.
Fractional Programming for Communication Systems—Part I: Power Control and Beamforming
NASA Astrophysics Data System (ADS)
Shen, Kaiming; Yu, Wei
2018-05-01
This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques that mostly handle only the single-ratio or max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
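The quadratic transform can be illustrated on a toy single-ratio problem. The paper's communication applications involve sums of SINR ratios; this scalar example, with an invented objective, is only meant to show the alternating iteration between the auxiliary variable and the original variable:

```python
import math

# Toy single-ratio fractional program: maximize f(x) = A(x)/B(x) with
# A(x) = x, B(x) = x^2 + 1, over x >= 0 (maximum at x = 1, f = 0.5).
# Quadratic transform: g(x, y) = 2*y*sqrt(A(x)) - y^2 * B(x), solved by
# alternately setting y to its optimum and maximizing g over x.

def quadratic_transform_solve(x=2.0, iters=50):
    for _ in range(iters):
        y = math.sqrt(x) / (x * x + 1.0)        # optimal y = sqrt(A)/B
        # For fixed y, d/dx [2*y*sqrt(x) - y^2*(x^2 + 1)] = 0 gives a
        # closed-form inner maximizer for this particular toy objective.
        x = (1.0 / (2.0 * y)) ** (2.0 / 3.0)
    return x

x_star = quadratic_transform_solve()
# the iterates converge to x = 1, the maximizer of x / (x^2 + 1)
```

In the paper's power-control setting the inner maximization is a convex problem solved numerically rather than in closed form, but the alternating structure is the same.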
Load leveling on industrial refrigeration systems
NASA Astrophysics Data System (ADS)
Bierenbaum, H. S.; Kraus, A. D.
1982-01-01
A computer model was constructed of a brewery with a 2000 horsepower compressor/refrigeration system. The various conservation and load management options were simulated using the validated model. The savings available from implementing the most promising options were verified by trials in the brewery. Results show that an optimized methodology for implementing load leveling and energy conservation consisted of: (1) adjusting (or tuning) refrigeration system controller variables to minimize unnecessary compressor starts; (2) carefully controlling (modulating) the primary refrigeration system operating parameters, compressor suction pressure and discharge pressure, to satisfy product quality constraints as well as in-process material cooling rates and temperature levels; (3) evaluating the energy cost savings associated with reject heat recovery; and (4) deciding whether to implement the reject heat recovery system based on a cost/benefit analysis.
Discovery of Unforeseen Lead Level Optimization Issues for High pH and Low DIC Conditions
A large northeast water utility serving over 500,000 retail and wholesale customers had historically been slightly below the 90th percentile Action Level for lead. The system had been operating at a pH of approximately 10.3, a DIC concentration of approximately 5 mg/L as C, and ...
Optimizing the Sustainment of U.S. Army Weapon Systems
2016-03-17
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale integration of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power was analyzed, and, by improving the learning factor and the inertia weight, an improved particle swarm optimization algorithm (IPSO) with better local and global search performance was proposed to solve the distributed generation planning problem for distribution networks. Results show that the proposed method can substantially reduce system network loss and improve the economic performance of system operation with distributed generation.
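As a rough sketch of the kind of improvement described (a linearly decreasing inertia weight and time-varying learning factors), one might write a PSO loop like the following. The update constants, bounds, and the sphere benchmark are illustrative assumptions, not the paper's IPSO settings:

```python
import random

# Minimal PSO with a linearly decreasing inertia weight (0.9 -> 0.4) and
# time-varying acceleration ("learning") factors, minimizing a benchmark
# function. All constants are common textbook choices, not the paper's.

def ipso(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]               # per-particle best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]       # global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters            # inertia weight decreases
        c1 = 2.5 - 2.0 * t / iters           # cognitive factor decreases
        c2 = 0.5 + 2.0 * t / iters           # social factor increases
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

best, val = ipso(lambda x: sum(xi * xi for xi in x))  # sphere function
```

The actual planning problem would replace the sphere function with a network-loss objective evaluated by a power-flow calculation.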
NASA Astrophysics Data System (ADS)
Chen, Yizhong; Lu, Hongwei; Li, Jing; Ren, Lixia; He, Li
2017-05-01
This study presents the mathematical formulation and implementation of a synergistic optimization framework based on an understanding of water availability and reliability together with the characteristics of multiple water demands. This framework simultaneously integrates a set of leader-followers-interactive objectives established by different decision makers during the synergistic optimization. The upper-level model (the leader's) determines the optimal pollutant discharge to satisfy the environmental target. The lower-level model (the follower's) accepts the dispatch requirement from the upper-level one and determines the optimal water-allocation strategy that maximizes economic benefits on behalf of the regional authority. The bi-level model significantly improves upon conventional programming methods through the mutual influence and restriction between the upper- and lower-level decision processes, particularly when limited water resources are available for multiple competing users. To solve the problem, a bi-level interactive solution algorithm based on satisfactory degree is introduced into the decision-making process to measure the extent to which the constraints are met and each objective approaches its optimum. The capabilities of the proposed model are illustrated through a real-world case study of the water resources management system in the district of Fengtai, located in Beijing, China. Feasible decisions concerning water resources allocation, wastewater emission and pollutant discharge are sequentially generated to balance the objectives subject to the given water-related constraints, which can enable stakeholders to grasp the inherent conflicts and trade-offs between environmental and economic interests. The performance of the developed bi-level model is enhanced by comparing with single-level models.
Moreover, in consideration of the uncertainty in water demand and availability, sensitivity analysis and policy analysis are employed for identifying their impacts on the final decisions and improving the practical applications.
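A much-simplified, hypothetical illustration of the max-min "satisfactory degree" idea: map each decision maker's objective onto a [0, 1] satisfaction via a linear membership function, then choose the upper-level decision that maximizes the smaller of the two satisfactions. Every function, bound, and constant below is invented for illustration and is not from the study:

```python
# Toy max-min compromise between a leader's environmental objective and
# a follower's economic objective, both functions of a discharge cap.

def satisfaction(value, worst, best):
    """Linear membership: 0 at the worst outcome, 1 at the best."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def benefit(cap):      # follower: more discharge allowance -> more output
    return 10.0 * cap - cap * cap

def pollution(cap):    # leader: wants discharge (and damage) kept low
    return cap

best_cap, best_lam = None, -1.0
for i in range(101):
    cap = i / 10.0                        # search caps over [0, 10]
    lam = min(satisfaction(benefit(cap), worst=0.0, best=25.0),
              satisfaction(-pollution(cap), worst=-10.0, best=0.0))
    if lam > best_lam:                    # keep the max-min compromise
        best_cap, best_lam = cap, lam
# the two satisfactions cross at cap = 2.5, where both equal 0.75
```

The real algorithm iterates between the two optimization levels rather than enumerating a grid, but the satisfaction-balancing criterion is the same in spirit.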
Systems design of transformation toughened blast-resistant naval hull steels
NASA Astrophysics Data System (ADS)
Saha, Arup
A systems approach to computational materials design has demonstrated a new class of ultratough, weldable secondary hardened plate steels combining new levels of strength and toughness while meeting processability requirements. A first prototype alloy has achieved property goals motivated by projected naval hull applications requiring extreme fracture toughness (Cv > 85 ft-lbs (115 J), corresponding to KId > 200 ksi·in^1/2 (220 MPa·m^1/2)) at strength levels of 150--180 ksi (1034--1241 MPa) yield strength in weldable, formable plate steels. A theoretical design concept was explored integrating the mechanism of precipitated nickel-stabilized dispersed austenite for transformation toughening in an alloy strengthened by combined precipitation of M2C carbides and BCC copper, both at an optimal ~3 nm particle size for efficient strengthening. This concept was adapted to plate steel design by employing a mixed bainitic/martensitic matrix microstructure produced by air-cooling after solution-treatment and constraining the composition to low carbon content for weldability. With optimized levels of copper and M2C carbide formers based on a quantitative strength model, a required alloy nickel content of 6.5 wt% was predicted for optimal austenite stability for transformation toughening at the desired strength level of 160 ksi (1100 MPa) yield strength. A relatively high Cu level of 3.65 wt% was employed to allow a carbon limit of 0.05 wt% for good weldability. Hardness and tensile tests conducted on the designed prototype confirmed predicted precipitation strengthening behavior in quench and tempered material. Multi-step tempering conditions were employed to achieve the optimal austenite stability, resulting in a significant increase of impact toughness to 130 ft-lb (176 J) at a strength level of 160 ksi (1100 MPa).
Comparison with the baseline toughness-strength combination determined by isochronal tempering studies indicates a transformation toughening increment of 60% in Charpy energy. The predicted Cu particle number densities, and the heterogeneous nucleation of optimal-stability, high-Ni, 5 nm austenite on nanometer-scale copper precipitates in the multi-step tempered samples, were confirmed using three-dimensional atom probe microscopy. Charpy impact tests and fractography demonstrate ductile fracture with Cv > 90 ft-lbs (122 J) down to -40°C, with a substantial toughness peak at 25°C consistent with the designed transformation toughening behavior. The properties demonstrated in this first prototype represent a substantial advance over existing naval hull steels.
Cross-layer Energy Optimization Under Image Quality Constraints for Wireless Image Transmissions.
Yang, Na; Demirkol, Ilker; Heinzelman, Wendi
2012-01-01
Wireless image transmission is critical in many applications, such as surveillance and environment monitoring. In order to make the best use of the limited energy of the battery-operated cameras, while satisfying the application-level image quality constraints, cross-layer design is critical. In this paper, we develop an image transmission model that allows the application layer (e.g., the user) to specify an image quality constraint, and optimizes the lower layer parameters of transmit power and packet length, to minimize the energy dissipation in image transmission over a given distance. The effectiveness of this approach is evaluated by applying the proposed energy optimization to a reference ZigBee system and a WiFi system, and also by comparing to an energy optimization study that does not consider any image quality constraint. Evaluations show that our scheme outperforms the default settings of the investigated commercial devices and saves a significant amount of energy at middle-to-large transmission distances.
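The cross-layer optimization described above can be sketched numerically as a grid search over transmit power and payload length that minimizes energy per delivered bit. The BER formula, header size, bit rate, and circuit power below are toy assumptions, not the paper's link model:

```python
import math

# Hypothetical cross-layer sketch: choose (transmit power, payload length)
# minimizing expected energy per successfully delivered payload bit.

HEADER = 128          # header + overhead bits per packet (assumed)
RATE = 250e3          # bit rate, b/s (assumed)
P_CIRC = 0.05         # circuit power, W (assumed)

def energy_per_bit(p_tx, payload, gamma=120.0):
    ber = 0.5 * math.exp(-gamma * p_tx)          # toy BER-vs-power model
    psr = (1.0 - ber) ** (payload + HEADER)      # packet success ratio
    t_packet = (payload + HEADER) / RATE         # airtime per packet, s
    # expected energy per successfully delivered payload bit (retransmit
    # until success, so divide by the success probability)
    return (p_tx + P_CIRC) * t_packet / (payload * psr)

best = min(
    ((p / 1000.0, l) for p in range(10, 201, 5) for l in range(64, 2049, 64)),
    key=lambda pl: energy_per_bit(*pl),
)
# best = (transmit power in W, payload length in bits) on the grid
```

An application-level image quality constraint would enter by fixing the number of payload bits that must be delivered, scaling the total energy without changing this per-bit trade-off.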
A Nonlinear Physics-Based Optimal Control Method for Magnetostrictive Actuators
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1998-01-01
This paper addresses the development of a nonlinear optimal control methodology for magnetostrictive actuators. At moderate to high drive levels, the output from these actuators is highly nonlinear and contains significant magnetic and magnetomechanical hysteresis. These dynamics must be accommodated by models and control laws to utilize the full capabilities of the actuators. A characterization based upon ferromagnetic mean field theory provides a model which accurately quantifies both transient and steady state actuator dynamics under a variety of operating conditions. The control method consists of a linear perturbation feedback law used in combination with an optimal open loop nonlinear control. The nonlinear control incorporates the hysteresis and nonlinearities inherent to the transducer and can be computed offline. The feedback control is constructed through linearization of the perturbed system about the optimal system and is efficient for online implementation. As demonstrated through numerical examples, the combined hybrid control is robust and can be readily implemented in linear PDE-based structural models.
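The hybrid control structure, a precomputed open-loop nominal input corrected online by linear feedback on the deviation from the nominal state, can be sketched on a toy scalar plant. The plant, gain, and nominal trajectory below are placeholders, not the magnetostrictive actuator model:

```python
# Schematic perturbation-feedback sketch: u = u_nominal + k * (x_nominal - x),
# applied to a toy linear plant dx/dt = -x + u integrated by forward Euler.

def simulate(x0, u_nominal, x_nominal, k=2.0, dt=0.01, steps=500):
    """Track a nominal trajectory with linear perturbation feedback."""
    x = x0
    for t in range(steps):
        u = u_nominal[t] + k * (x_nominal[t] - x)   # open loop + feedback
        x = x + dt * (-x + u)                       # toy plant step
    return x

steps = 500
x_nom = [1.0] * steps          # nominal state held at 1.0
u_nom = [1.0] * steps          # nominal input that sustains x = 1.0
x_final = simulate(x0=0.2, u_nominal=u_nom, x_nominal=x_nom)
# feedback drives the perturbed start (x = 0.2) toward the nominal state
```

In the paper the nominal pair comes from an offline nonlinear optimal control computation over the hysteretic model, and the gain from linearizing about that optimal trajectory; here both are trivial constants just to show the structure.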
The Calculation of Accurate Metal-Ligand Bond Energies
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W.; Partridge, Harry, III; Ricca, Alessandra; Arnold, James O. (Technical Monitor)
1997-01-01
The optimization of the geometry and calculation of zero-point energies are carried out at the B3LYP level of theory. The bond energies are determined at this level, as well as at the CCSD(T) level using very large basis sets. The successive OH bond energies to the first row transition metal cations are reported. For most systems there has been an experimental determination of the first OH. In general, the CCSD(T) values are in good agreement with experiment. The bonding changes from mostly covalent for the early metals to mostly electrostatic for the late transition metal systems.
Darmanin, Geraldine; Jaggard, Matthew; Hettiaratchy, Shehan; Nanchahal, Jagdeep; Jain, Abhilash
2013-06-01
It is common practice to elevate the limbs postoperatively to reduce oedema and hence optimise perfusion and facilitate rehabilitation. However, elevation may be counterproductive as it reduces the mean perfusion pressure. There are no clear data on the optimal position of the limbs even in normal subjects. The optimal position of limbs was investigated in 25 healthy subjects using a non-invasive micro-lightguide spectrophotometry system "O2C", which indirectly measures skin and superficial tissue perfusion through blood flow, oxygen saturation and relative haemoglobin concentration. We found a reduction in skin and superficial tissue blood flow of 17% (p=0.0001) on arm elevation (180° shoulder flexion) as compared to heart level and an increase in skin and superficial tissue blood flow of 25% (p=0.02) on forearm elevation of 45°. Lower limb skin and superficial tissue blood flow decreased by 15% (p=0.004) on elevation to 47 cm and by 70% on dependency (p=0.0001) compared to heart level. However, on elevation of the lower limb there was also a 28% reduction in superficial venous pooling (p=0.0001) compared to heart level. In the normal limb, the position for optimal superficial perfusion of the upper limb is with the arm placed at heart level and forearm at 45°. In the lower limb the optimal position for superficial perfusion would be at heart level. However, some degree of elevation may be useful if there is an element of venous congestion. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J.J.;
2015-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface [1]. This technique to improve detection of vestibular signals uses a stimulus delivery system that provides imperceptibly low levels of white noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments were carried out to determine a robust paradigm to identify a vestibular threshold that can then be used to recommend optimal stimulation levels for sensorimotor adaptability (SA) training applications customized to each crewmember. The amplitude of stimulation to be used in the VSR application has varied across studies in the literature such as 60% of nociceptive stimulus thresholds [2]. We compared subjects' perceptual threshold with that obtained from two measures of body sway. Each test session was 463s long and consisted of several 15s long sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5s periods of no stimulation. Subjects sat on a chair with their eyes closed and had to report their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. Comparison of threshold of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower. 
In the balance task, subjects stood on an unstable surface and had to maintain balance, and the stimulation was administered from 20-400% of subjects' vestibular threshold. Optimal stimulation amplitude was determined at which the balance performance was best compared to control (no stimulation). Preliminary results show that, in general, using stimulation amplitudes at 40-60% of perceptual motion threshold significantly improved the balance performance. We hypothesize that VSR stimulation will act synergistically with SA training to improve adaptability by increasing utilization of vestibular information and therefore will help us to optimize and personalize a SA countermeasure prescription. This combination may help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.
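The stochastic-resonance effect itself is easy to demonstrate with a toy threshold detector: a subthreshold periodic signal becomes detectable only at intermediate noise levels. The signal amplitude, threshold, noise levels, and correlation score below are arbitrary illustrations, unrelated to the actual vestibular stimulation parameters in the study:

```python
import math, random

# Classic stochastic-resonance toy: a subthreshold sinusoid crosses a
# detection threshold only with added noise; the correlation between the
# detector output and the signal peaks at a moderate noise level.

def detection_correlation(noise_sd, seed=7, n=5000, amp=0.8, thresh=1.0):
    rng = random.Random(seed)
    score = 0.0
    for i in range(n):
        s = amp * math.sin(2 * math.pi * i / 100.0)    # subthreshold signal
        out = 1.0 if s + rng.gauss(0.0, noise_sd) > thresh else 0.0
        score += out * s                               # output-signal correlation
    return score / n

quiet = detection_correlation(0.01)   # almost no threshold crossings
best = detection_correlation(0.5)     # moderate noise: signal leaks through
loud = detection_correlation(5.0)     # noise swamps the signal
# best > quiet and best > loud traces out the resonance curve
```

The study's analogue of "tuning the noise" is choosing the stimulation amplitude as a percentage of each subject's measured vestibular threshold.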
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y; Souri, S; Gill, G
Purpose: To statistically determine the optimal tolerance level in the verification of delivered dose against planned dose in an in vivo dosimetry system in radiotherapy. Methods: The LANDAUER MicroSTARii dosimetry system with screened nanoDots (optically stimulated luminescence dosimeters) was used for in vivo dose measurements. Ideally, the measured dose should match the planned dose and fall within a normal distribution. Any deviation from the normal distribution may be deemed a mismatch, and therefore a potential sign of dose misadministration. Randomly mis-positioned nanoDots can yield a continuum background distribution. The percentage difference of the measured dose from its corresponding planned dose (ΔD) can be used to analyze combined data sets for different patients. A model of a Gaussian plus a flat function was used to fit the ΔD distribution. Results: A total of 434 nanoDot measurements for breast cancer patients were collected over a period of three months. The fit yields a Gaussian mean of 2.9% and a standard deviation (SD) of 5.3%. The observed shift of the mean from zero is attributed to machine output bias and calibration of the dosimetry system. A pass interval of −2SD to +2SD was applied, and a mismatch background was estimated to be 4.8%. With such a tolerance level, one can expect that 99.99% of patients should pass the verification and at most 0.011% might have a potential dose misadministration that may not be detected after three repeated measurements. After implementation, a number of newly starting breast cancer patients were monitored, and the measured pass rate is consistent with the model prediction. Conclusion: It is feasible to implement an optimal tolerance level that maintains a low limit of potential dose misadministration while still keeping a relatively high pass rate in radiotherapy delivery verification.
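The arithmetic behind a ±2SD pass window can be reproduced directly from the normal distribution, assuming (as a simplification) that repeated measurements are independent and ignoring the flat mismatch background:

```python
import math

# Fraction of a Gaussian inside mean +/- n_sd standard deviations, and
# the chance that a true-to-plan measurement fails all three repeats.

def gaussian_fraction_within(n_sd):
    """P(|Z| <= n_sd) for a standard normal Z, via the error function."""
    return math.erf(n_sd / math.sqrt(2.0))

p_pass_one = gaussian_fraction_within(2.0)    # ~0.9545 per measurement
p_fail_three = (1.0 - p_pass_one) ** 3        # fail all 3 repeats
# p_fail_three is on the order of 1e-4, i.e. well over 99.9% of
# true-to-plan patients clear the +/-2SD window within three tries
```

This matches the order of magnitude quoted in the abstract; the exact 0.011% figure there presumably also folds in the fitted flat background.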
Systems-Level Synthetic Biology for Advanced Biofuel Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruffing, Anne; Jensen, Travis J.; Strickland, Lucas Marshall
2015-03-01
Cyanobacteria have been shown to be capable of producing a variety of advanced biofuels; however, product yields remain well below those necessary for large scale production. New genetic tools and high throughput metabolic engineering techniques are needed to optimize cyanobacterial metabolisms for enhanced biofuel production. Towards this goal, this project advances the development of a multiple promoter replacement technique for systems-level optimization of gene expression in a model cyanobacterial host: Synechococcus sp. PCC 7002. To realize this multiple-target approach, key capabilities were developed, including a high throughput detection method for advanced biofuels, enhanced transformation efficiency, and genetic tools for Synechococcus sp. PCC 7002. Moreover, several additional obstacles were identified for realization of this multiple promoter replacement technique. The techniques and tools developed in this project will help to enable future efforts in the advancement of cyanobacterial biofuels.
NASA Astrophysics Data System (ADS)
Lu, Lihao; Zhang, Jianxiong; Tang, Wansheng
2016-04-01
An inventory system for perishable items with limited replenishment capacity is introduced in this paper. The demand rate depends on the stock quantity displayed in the store as well as the sales price. With the goal of profit maximisation, an optimisation problem is addressed to seek the optimal joint dynamic pricing and replenishment policy, which is obtained by solving the optimisation problem with Pontryagin's maximum principle. A joint mixed policy, in which the sales price is a static decision variable and the replenishment rate remains a dynamic decision variable, is presented for comparison with the joint dynamic policy. Numerical results demonstrate the advantages of the joint dynamic policy, and further show the effects of different system parameters on the optimal joint dynamic policy and the maximal total profit.
Bao, Yijun; Li, Lizhuo; Guan, Yanlei; Wang, Wei; Liu, Yan; Wang, Pengfei; Huang, Xiaolong; Tao, Shanwei; Wang, Yunjie
2017-02-01
Anxiety and depression have been identified as common psychological distresses faced by the majority of patients with cancer. However, no studies have investigated the relationship between positive psychological variables (hope, optimism and general self-efficacy) and anxiety and depression among patients with central nervous system (CNS) tumors in China. Our hypothesis is that patients with higher levels of hope, optimism or general self-efficacy have lower levels of anxiety and depression when encountering stressful life events such as CNS tumors. Questionnaires, including the Hospital Anxiety and Depression Scale, the Herth Hope Index, the Life Orientation Scale-Revised and the General Self-Efficacy Scale, together with demographic and clinical records, were used to collect information about patients with CNS tumors in Liaoning Province, China. The study included 222 patients (effective response rate: 66.1%). Hierarchical linear regression analyses were performed to explore the associations among hope, optimism, general self-efficacy and anxiety/depression. The prevalence of anxiety and depression was 42.8% and 32.4%, respectively, among patients with CNS tumors. Hope and optimism were both negatively associated with anxiety and together accounted for 21.4% of the variance in anxiety. Similarly, hope and optimism were both negatively associated with depression and accounted for 32.4% of the variance in depression. The high prevalence of anxiety and depression among patients with CNS tumors should receive more attention in Chinese medical settings. To help reduce anxiety and depression, health care professionals should develop interventions to promote hope and optimism based on patients' specific needs. Copyright © 2016 John Wiley & Sons, Ltd.
Repetitively Pulsed High Power RF Solid-State System
NASA Astrophysics Data System (ADS)
Bowman, Chris; Ziemba, Timothy; Miller, Kenneth E.; Prager, James; Quinley, Morgan
2017-10-01
Eagle Harbor Technologies, Inc. (EHT) is developing a low-cost, fully solid-state architecture for the generation of the RF frequencies and power levels necessary for plasma heating and diagnostic systems at validation platform experiments within the fusion science community. In Year 1 of this program, EHT has developed a solid-state RF system that combines an inductive adder, a nonlinear transmission line (NLTL), and an antenna into a single system that can be deployed at fusion science experiments. EHT has designed and optimized a lumped-element NLTL suitable for RF generation near the lower-hybrid frequency at the High Beta Tokamak (HBT) located at Columbia University. In Year 2, EHT will test this system at the Helicity Injected Torus at the University of Washington and at HBT at Columbia. EHT will present results from Year 1 testing and optimization of the NLTL-based RF system. With support of DOE SBIR.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Rogers, J. L., Jr.
1986-01-01
A finite element based programming system for minimum weight design of truss-type structures subject to displacement and stress constraints and to lower and upper bounds on the design variables is presented. The programming system consists of a number of independent processors, each performing a specific task. These processors are interfaced through a well-organized database, making the tasks of modifying, updating, or expanding the programming system much easier in the friendly environment provided by many inexpensive personal computers. The proposed software can be viewed as an important step toward a 'dummy' finite element code for optimization. The programming system has been implemented on both large and small computers (such as the VAX, CYBER, IBM-PC, and APPLE), although the focus is on the latter. Examples are presented to demonstrate the capabilities of the code. The programming system can be used stand-alone or as part of a multilevel decomposition procedure to obtain optimum designs for very large scale structural systems. Furthermore, related research areas, such as the development of optimization algorithms (or, at a larger level, a structural synthesis program) for future parallel computers, may also benefit from this study.
Is there a preference for linearity when viewing natural images?
NASA Astrophysics Data System (ADS)
Kane, David; Bertalmío, Marcelo
2015-01-01
The system gamma of the imaging pipeline, defined as the product of the encoding and decoding gammas, is typically greater than one and is stronger for images viewed with a dark background (e.g. cinema) than those viewed in lighter conditions (e.g. office displays) [1-3]. However, for high dynamic range (HDR) images reproduced on a low dynamic range (LDR) monitor, subjects often prefer a system gamma of less than one [4], presumably reflecting the greater need for histogram equalization in HDR images. In this study we ask subjects to rate the perceived quality of images presented on an LDR monitor using various levels of system gamma. We reveal that the optimal system gamma is below one for images with an HDR and approaches or exceeds one for images with an LDR. Additionally, the highest quality scores occur for images where a system gamma of one is optimal, suggesting a preference for linearity (where possible). We find that subjective image quality scores can be predicted by computing the degree of histogram equalization of the lightness distribution. Accordingly, an optimal, image dependent system gamma can be computed that maximizes perceived image quality.
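The predictor described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the entropy-based equalization measure, the bin count, the gamma search grid and the synthetic dark-skewed input are all assumptions.

```python
import numpy as np

def equalization_score(lightness, bins=64):
    """Degree of histogram equalization of a lightness distribution:
    Shannon entropy of the histogram, normalized so that a perfectly
    uniform histogram scores 1.0."""
    hist, _ = np.histogram(lightness, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

def optimal_system_gamma(linear_lightness, gammas=np.linspace(0.2, 2.0, 91)):
    """Assumed predictor: the system gamma that maximizes histogram
    equalization of the displayed lightness values."""
    x = np.clip(linear_lightness, 0.0, 1.0)
    scores = [equalization_score(x ** g) for g in gammas]
    return float(gammas[int(np.argmax(scores))])

# Synthetic dark-skewed ("HDR-like") lightness data: most mass near zero,
# so a system gamma below one spreads the histogram toward uniformity.
rng = np.random.default_rng(0)
hdr_like = rng.power(0.3, 10000)
g_opt = optimal_system_gamma(hdr_like)
```

For a low-dynamic-range input whose lightness histogram is already near uniform, the same search returns a gamma near one, matching the preference-for-linearity result.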
NASA Technical Reports Server (NTRS)
Weber, Gary A.
1991-01-01
During the 90-day study, support was provided to NASA in defining a point-of-departure space transfer vehicle (STV). The resulting STV concept was performance optimized with a two-stage LTV/LEV configuration. Appendix A reports on the effort during this period of the study. From the end of the 90-day study until the March Interim Review, effort was placed on optimizing the two-stage vehicle approach identified in the 90-day effort. After the March Interim Review, the effort was expanded to perform a full architectural trade study with the intent of developing a decision database to support STV system decisions in response to changing SEI infrastructure concepts. Several of the architecture trade studies were combined in a System Architecture Trade Study. In addition to this trade, system optimization/definition trades and analyses were completed and some special topics were addressed. Program- and system-level trade study and analyses methodologies and results are presented in this section. Trades and analyses covered in this section are: (1) a system architecture trade study; (2) evolution; (3) safety and abort considerations; (4) STV as a launch vehicle upper stage; and (5) optimum crew and cargo split.
Cloud, Aerosol, and Volcanic Ash Retrievals Using ATSR and SLSTR with ORAC
NASA Astrophysics Data System (ADS)
McGarragh, Gregory; Poulsen, Caroline; Povey, Adam; Thomas, Gareth; Christensen, Matt; Sus, Oliver; Schlundt, Cornelia; Stapelberg, Stefan; Stengel, Martin; Grainger, Don
2015-12-01
The Optimal Retrieval of Aerosol and Cloud (ORAC) is a generalized optimal estimation system that retrieves cloud, aerosol and volcanic ash parameters using satellite imager measurements in the visible to infrared. Use of the same algorithm for different sensors and parameters leads to consistency that facilitates inter-comparison and interaction studies. ORAC currently supports ATSR, AVHRR, MODIS and SEVIRI. In this proceeding we discuss the ORAC retrieval algorithm applied to ATSR data including the retrieval methodology, the forward model, uncertainty characterization and discrimination/classification techniques. Application of ORAC to SLSTR data is discussed including the additional features that SLSTR provides relative to the ATSR heritage. The ORAC level 2 and level 3 results are discussed and an application of level 3 results to the study of cloud/aerosol interactions is presented.
NASA Astrophysics Data System (ADS)
Gondal, M. A.; Dastageer, M. A.; Al-Adel, F. F.; Naqvi, A. A.; Habibullah, Y. B.
2015-12-01
A sensitive laser-induced breakdown spectroscopy (LIBS) system was developed and optimized for use as a sensor for the detection of trace levels of lead and chromium in cosmetic eyeliner (kohl) of different price ranges (brands) available in the local market. Kohl is widely used in developing countries, by babies as well as adults, for beautification and for eye protection. The atomic transition lines at 405.7 nm and 425.4 nm were used as the marker lines for the detection of lead and chromium, respectively. The detection system was optimized by finding the appropriate gate delay between the laser excitation and the data acquisition system and by achieving optically thin plasma near the target under local thermodynamic equilibrium conditions. The detection system was calibrated for these two hazardous elements, and the kohl samples under investigation showed 8-15 ppm by mass of lead and 4-9 ppm by mass of chromium, which are higher than the safe permissible levels of these elements. The limits of detection of the LIBS system for lead and chromium were found to be 1 and 2 ppm, respectively.
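Calibration of a LIBS sensor of this kind reduces to a linear fit of marker-line intensity against standards of known concentration, with the limit of detection conventionally taken as three times the blank standard deviation divided by the calibration slope. The numbers below are invented for illustration; only the 3-sigma/slope convention is standard.

```python
import numpy as np

# Hypothetical calibration standards: known Pb concentrations (ppm) versus
# background-corrected intensity of the Pb marker line at 405.7 nm
# (arbitrary units). All numeric values here are assumptions.
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
intensity = np.array([0.02, 1.05, 2.01, 4.10, 7.95])

# Linear calibration curve: intensity = slope * concentration + intercept.
slope, intercept = np.polyfit(conc, intensity, 1)

def ppm_from_intensity(i):
    """Invert the linear calibration curve to get concentration in ppm."""
    return (i - intercept) / slope

# Limit of detection by the common 3-sigma convention.
blank_sd = 0.07  # assumed standard deviation of blank replicates
lod = 3.0 * blank_sd / slope
```

An unknown sample's concentration is then read off by measuring its marker-line intensity and applying `ppm_from_intensity`; values below `lod` are reported as not detected.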
Regulating Cortical Neurodynamics for Past, Present and Future
NASA Astrophysics Data System (ADS)
Liljenström, Hans
2002-09-01
Behaving systems, biological as well as artificial, need to respond quickly and accurately to changes in the environment. The response is dependent on stored memories, and novel situations should be learnt for the guidance of future behavior. A highly nonlinear system dynamics is required in order to cope with a complex and changing environment, and this dynamics should be regulated to match the demands of the current situation, and to predict future behavior. In many cases the dynamics should be regulated to minimize processing time. We use computer simulations of cortical structures in order to investigate how the neurodynamics of these systems can be regulated for optimal performance in an unknown and changing environment. In particular, we study how cortical oscillations can serve to amplify weak signals and sustain an input pattern for more accurate information processing, and how chaotic-like behavior could increase the sensitivity in initial, exploratory states. We mimic regulating mechanisms based on neuromodulators, intrinsic noise levels, and various synchronizing effects. We find optimal noise levels where system performance is maximized, and neuromodulatory strategies for an efficient pattern recognition, where the anticipatory state of the system plays an important role.
Formal development of a clock synchronization circuit
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
This talk presents the latest stage in the formal development of a fault-tolerant clock synchronization circuit. The development spans from a high-level specification of the required properties to a circuit realizing the core function of the system. An abstract description of an algorithm has been verified to satisfy the high-level properties using the mechanical verification system EHDM. This abstract description is recast as a behavioral specification input to the Digital Design Derivation system (DDD) developed at Indiana University. DDD provides a formal design algebra for developing correct digital hardware. Using DDD as the principal design environment, a core circuit implementing the clock synchronization algorithm was developed. The design process consisted of standard DDD transformations augmented with an ad hoc refinement justified using the Prototype Verification System (PVS) from SRI International. Subsequent to the above development, Wilfredo Torres-Pomales discovered an area-efficient realization of the same function. Establishing the correctness of this optimization requires reasoning in arithmetic, so a general verification is outside the domain of both DDD transformations and model-checking techniques. DDD represents digital hardware by systems of mutually recursive stream equations. A collection of PVS theories was developed to aid in reasoning about DDD-style streams. These theories include a combinator for defining streams that satisfy stream equations, and a means for proving stream equivalence by exhibiting a stream bisimulation. DDD was used to isolate the sub-system involved in Torres-Pomales' optimization. The equivalence between the original design and the optimized design was verified in PVS by exhibiting a suitable bisimulation. The verification depended upon type constraints on the input streams and made extensive use of the PVS type system. The dependent types in PVS provided a useful mechanism for defining an appropriate bisimulation.
Structural acoustic control of plates with variable boundary conditions: design methodology.
Sprofera, Joseph D; Cabell, Randolph H; Gibbs, Gary P; Clark, Robert L
2007-07-01
A method for optimizing a structural acoustic control system subject to variations in plate boundary conditions is provided. The assumed modes method is used to build a plate model with varying levels of rotational boundary stiffness to simulate the dynamics of a plate with uncertain edge conditions. A transducer placement scoring process, involving Hankel singular values, is combined with a genetic optimization routine to find spatial locations robust to boundary condition variation. Predicted frequency response characteristics are examined, and theoretically optimized results are discussed in relation to the range of boundary conditions investigated. Modeled results indicate that it is possible to minimize the impact of uncertain boundary conditions in active structural acoustic control by optimizing the placement of transducers with respect to those uncertainties.
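The Hankel-singular-value scoring step can be sketched on a surrogate modal model: for each candidate actuator site, build the state-space system, compute the controllability and observability Gramians, and score the site by the sum of Hankel singular values. The beam-like mode shapes, frequencies, damping and sensor site below are assumptions, not the paper's plate model, and an exhaustive scan stands in for the genetic routine.

```python
import numpy as np
from scipy.linalg import block_diag, solve_continuous_lyapunov

def modal_system(freqs, zetas, phi_act, phi_sens):
    """State-space model of lightly damped structural modes; phi_act and
    phi_sens are mode-shape values at the actuator and sensor locations."""
    blocks = [np.array([[0.0, 1.0], [-w * w, -2.0 * z * w]])
              for w, z in zip(freqs, zetas)]
    A = block_diag(*blocks)
    B = np.zeros((A.shape[0], 1))
    C = np.zeros((1, A.shape[0]))
    B[1::2, 0] = phi_act   # force enters the velocity states
    C[0, 0::2] = phi_sens  # displacement is sensed
    return A, B, C

def hankel_score(A, B, C):
    """Sum of Hankel singular values: joint controllability/observability."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A' = -B B'
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A' Wo + Wo A = -C' C
    return float(np.sqrt(np.abs(np.linalg.eigvals(Wc @ Wo))).sum())

# Surrogate structure: first three modes of a simply supported beam with
# shapes sin(n*pi*x); frequencies, damping and sensor site are assumed.
freqs = 2.0 * np.pi * np.array([10.0, 40.0, 90.0])
zetas = np.full(3, 0.01)
shapes = lambda x: np.sin(np.array([1, 2, 3]) * np.pi * x)
sensor_x = 0.3
candidates = np.linspace(0.05, 0.95, 19)
scores = [hankel_score(*modal_system(freqs, zetas, shapes(x), shapes(sensor_x)))
          for x in candidates]
best_x = float(candidates[int(np.argmax(scores))])
```

In the robust variant described above, the score would be aggregated over the family of boundary-stiffness models before the optimizer picks a location, so that a site near a node of any plausible mode set is penalized.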
A Century of Gestalt Psychology in Visual Perception II. Conceptual and Theoretical Foundations
Wagemans, Johan; Feldman, Jacob; Gepshtein, Sergei; Kimchi, Ruth; Pomerantz, James R.; van der Helm, Peter A.; van Leeuwen, Cees
2012-01-01
Our first review paper on the occasion of the centennial anniversary of Gestalt psychology focused on perceptual grouping and figure-ground organization. It concluded that further progress requires a reconsideration of the conceptual and theoretical foundations of the Gestalt approach, which is provided here. In particular, we review contemporary formulations of holism within an information-processing framework, allowing for operational definitions (e.g., integral dimensions, emergent features, configural superiority, global precedence, primacy of holistic/configural properties) and a refined understanding of its psychological implications (e.g., at the level of attention, perception, and decision). We also review four lines of theoretical progress regarding the law of Prägnanz—the brain’s tendency of being attracted towards states corresponding to the simplest possible organization, given the available stimulation. The first considers the brain as a complex adaptive system and explains how self-organization solves the conundrum of trading between robustness and flexibility of perceptual states. The second specifies the economy principle in terms of optimization of neural resources, showing that elementary sensors working independently to minimize uncertainty can respond optimally at the system level. The third considers how Gestalt percepts (e.g., groups, objects) are optimal given the available stimulation, with optimality specified in Bayesian terms. Fourth, Structural Information Theory explains how a Gestaltist visual system that focuses on internal coding efficiency yields external veridicality as a side-effect. To answer the fundamental question of why things look as they do, a further synthesis of these complementary perspectives is required. PMID:22845750
Kim, Mi Jung; Baek, Kon; Park, Chung-Mo
2009-08-01
Transient genetic transformation of plant organs is an indispensable way of studying gene function in plants. This study aimed to develop an optimized system for transient Agrobacterium-mediated transformation of Arabidopsis leaves. The beta-glucuronidase (GUS) reporter gene was employed to evaluate growth and biochemical parameters that influence the levels of transient expression. The effects of plant culture conditions, Agrobacterial genetic backgrounds, densities of Agrobacterial cell suspensions, and several detergents were analyzed. We found that optimization of plant culture conditions is the most critical factor among the parameters analyzed. Higher levels of transient expression were observed in plants grown under short-day conditions (SDs) than in plants grown under long-day conditions (LDs). Furthermore, incubation of the plants under SDs at high relative humidity (85-90%) for 24 h after infiltration greatly improved the levels of transient expression. Under the optimized culture conditions, expression of the reporter gene peaked 3 days after infiltration and decreased rapidly thereafter. Among the five Agrobacterial strains examined, LBA4404 produced the highest levels of expression. We also examined the effects of detergents, including Triton X-100, Tween-20, and Silwet L-77. Supplementation of the infiltration media with either 0.01% Triton X-100 or 0.01% Tween-20 improved the levels of expression by approximately 1.6-fold. Our observations indicate that transient transformation of Arabidopsis leaves in infiltration media supplemented with 0.01% Triton X-100, together with incubation of the infiltrated plants under SDs at high relative humidity, is necessary for maximal levels of expression.
Justesen, Ulrik Stenz; Holm, Anette; Knudsen, Elisa; Andersen, Line Bisgaard; Jensen, Thøger Gorm; Kemp, Michael; Skov, Marianne Nielsine; Gahrn-Hansen, Bente; Møller, Jens Kjølseth
2011-12-01
We compared two matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) systems (Shimadzu/SARAMIS and Bruker) on a collection of consecutive clinically important anaerobic bacteria (n = 290). The Bruker system had more correct identifications to the species level (67.2% versus 49.0%), but also more incorrect identifications (7.9% versus 1.4%). The system databases need to be optimized to increase identification levels. However, MALDI-TOF MS in its present version seems to be a fast and inexpensive method for identification of most clinically important anaerobic bacteria.
Optimal Trajectories For Orbital Transfers Using Low And Medium Thrust Propulsion Systems
NASA Technical Reports Server (NTRS)
Cobb, Shannon S.
1992-01-01
For many problems it is reasonable to expect that the minimum-time solution is also the minimum-fuel solution. However, if one allows the propulsion system to be turned off and back on, it is clear that these two solutions may differ. In general, high-thrust transfers resemble the well-known impulsive transfers, where the burn arcs are of very short duration. Low- and medium-thrust transfers differ in that their thrust acceleration levels yield longer burn arcs requiring more revolutions, thus making the low-thrust transfer computationally intensive. Here, we consider optimal low- and medium-thrust orbital transfers.
NASA Astrophysics Data System (ADS)
Golovanova, T. M.; Gryaznov, Yu M.; Dianov, Evgenii M.; Dobryakova, N. G.; Kiselev, A. V.; Prokhorov, A. M.; Shcherbakov, E. A.
1989-08-01
An investigation was made of the parameters of an integrated-optical spectrum analyzer consisting of a Ti:LiNbO3 crystal and a semiconductor laser with a built-in microobjective, spherical geodesic lenses, and an optimized system of interdigital (opposed-comb) transducers. The characteristics of this spectrum analyzer were as follows: the band of operating frequencies was 181 MHz (at the 3 dB level); the resolution was 2.8 MHz; the signal/noise ratio (under a control voltage of 4 V) was 20 dB.
An optimal brain can be composed of conflicting agents
Livnat, Adi; Pippenger, Nicholas
2006-01-01
Many behaviors have been attributed to internal conflict within the animal and human mind. However, internal conflict has not been reconciled with evolutionary principles, in that it appears maladaptive relative to a seamless decision-making process. We study this problem through a mathematical analysis of decision-making structures. We find that, under natural physiological limitations, an optimal decision-making system can involve “selfish” agents that are in conflict with one another, even though the system is designed for a single purpose. It follows that conflict can emerge within a collective even when natural selection acts on the level of the collective only. PMID:16492775
An inverter/controller subsystem optimized for photovoltaic applications
NASA Technical Reports Server (NTRS)
Pickrell, R. L.; Osullivan, G.; Merrill, W. C.
1978-01-01
Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the characteristics of the photovoltaic power source. Optimization of the inverter/controller design is discussed as part of an overall photovoltaic power system designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: a power system controller (PSC) to continuously hold the solar array operating point at the maximum power level under varying solar insolation and cell temperatures; and an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy.
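One standard way to realize this kind of continuous peak-power tracking is a perturb-and-observe loop, which repeatedly nudges the array operating voltage and keeps moving in whichever direction increases power. The sketch below uses a toy P(V) curve as a stand-in for the array; it illustrates the tracking idea only and is not the paper's controller.

```python
def array_power(v, v_oc=100.0, i_sc=8.0):
    """Toy photovoltaic P(V) curve (an assumed model, not the paper's):
    current is nearly constant at low voltage and collapses near V_oc."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1.0 - (v / v_oc) ** 9)

def perturb_and_observe(v0=50.0, step=0.5, iters=400):
    """Hill-climb the P(V) curve: keep perturbing the operating voltage in
    the direction that last increased power; reverse when power drops."""
    v, p = v0, array_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = array_power(v_next)
        if p_next < p:
            direction = -direction  # overshot the peak, reverse
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

In steady state the loop oscillates in a small limit cycle around the maximum power point; when insolation or temperature shifts the P(V) curve, the same loop re-tracks the new peak automatically.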
Flight Control Development for the ARH-70 Armed Reconnaissance Helicopter Program
NASA Technical Reports Server (NTRS)
Christensen, Kevin T.; Campbell, Kip G.; Griffith, Carl D.; Ivler, Christina M.; Tischler, Mark B.; Harding, Jeffrey W.
2008-01-01
In July 2005, Bell Helicopter won the U.S. Army's Armed Reconnaissance Helicopter competition to produce a replacement for the OH-58 Kiowa Warrior capable of performing the armed reconnaissance mission. To meet the U.S. Army requirement that the ARH-70A have Level 1 handling qualities for the scout rotorcraft mission task elements defined by ADS-33E-PRF, Bell equipped the aircraft with their generic automatic flight control system (AFCS). Under the constraints of the tight ARH-70A schedule, the development team used modern parameter identification and control law optimization techniques to optimize the AFCS gains to simultaneously meet multiple handling qualities design criteria. This paper will show how linear modeling, control law optimization, and simulation have been used to produce a Level 1 scout rotorcraft for the U.S. Army, while minimizing the amount of flight testing required for AFCS development and handling qualities evaluation of the ARH-70A.
Layout optimization with algebraic multigrid methods
NASA Technical Reports Server (NTRS)
Regler, Hans; Ruede, Ulrich
1993-01-01
Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods, based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
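The relative-placement formulation can be made concrete on a toy example. The sketch below solves the quadratic problem with plain conjugate gradients (no AMG, and without the overlap constraints), on an assumed 1-D chain netlist; the connectivity matrix and pad positions are illustrative, not a placement benchmark.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

# Relative placement as a quadratic program: minimize
#     f(x) = 1/2 x^T A x - b^T x,
# where A is the sparse, symmetric positive definite connectivity matrix
# of the netlist and b carries the positions of fixed pads. Toy instance:
# 5 movable cells in a chain between pads at positions 0 and 1.
n = 5
A = csr_matrix(np.diag(2.0 * np.ones(n))
               + np.diag(-np.ones(n - 1), 1)
               + np.diag(-np.ones(n - 1), -1))
b = np.zeros(n)
b[0], b[-1] = 0.0, 1.0  # pull-in terms from the two pads

x, info = cg(A, b, atol=1e-12)  # info == 0 signals convergence
# Minimizer: the cells spread evenly between the pads, x_i = i/6.
```

In the additive-AMG variant described above, the multigrid cycle would be supplied to `cg` as the preconditioner `M` rather than used as a stand-alone solver.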
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
General strategic bidding has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem has become more complex with the inclusion of transmission constraints. This paper simplifies the profit maximisation problem to a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered without KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14-bus and IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Workflow Optimization for Tuning Prostheses with High Input Channel
2017-10-01
AWARD NUMBER: W81XWH-16-1-0767. TITLE: Workflow Optimization for Tuning Prostheses with High Input Channel. PRINCIPAL INVESTIGATOR: Daniel Merrill. Distribution Unlimited. The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department... ...of Specific Aim 1 by driving a commercially available two-DoF wrist and single-DoF hand. The high-level control system will provide analog signals...
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm applying optimum sensitivity analysis to multilevel optimization. The research efforts were devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in developing a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single-level solutions was completed and tested.
NASA Astrophysics Data System (ADS)
Kalatzis, Fanis G.; Papageorgiou, Dimitrios G.; Demetropoulos, Ioannis N.
2006-09-01
The Merlin/MCL optimization environment and the GAMESS-US package were combined so as to offer an extended and efficient quantum chemistry optimization system, capable of implementing complex optimization strategies for generic molecular modeling problems. A communication and data exchange interface was established between the two packages, exploiting all Merlin features such as multiple optimizers, box constraints, user extensions and a high-level programming language. An important feature of the interface is its ability to perform dimer computations by eliminating the basis set superposition error using the counterpoise (CP) method of Boys and Bernardi. Furthermore, it offers CP-corrected geometry optimizations using analytic derivatives. The unified optimization environment was applied to construct portions of the intermolecular potential energy surface of the weakly bound H-bonded complex C6H6-H2O by utilizing the high-level Merlin Control Language. The H-bonded dimer HF-H2O was also studied by CP-corrected geometry optimization. The ab initio electronic structure energies were calculated using the 6-31G** basis set at the Restricted Hartree-Fock and second-order Møller-Plesset levels, while all geometry optimizations were carried out using a quasi-Newton algorithm provided by Merlin. Program summary: Title of program: MERGAM. Catalogue identifier: ADYB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYB_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed and others on which it has been tested: The program is designed for machines running the UNIX operating system.
It has been tested on the following architectures: IA32 (Linux with gcc/g77 v.3.2.3), AMD64 (Linux with the Portland Group compilers v.6.0), SUN64 (SunOS 5.8 with the Sun Workshop compilers v.5.2) and SGI64 (IRIX 6.5 with the MIPSpro compilers v.7.4). Installations: University of Ioannina, Greece. Operating systems or monitors under which the program has been tested: UNIX. Programming language used: ANSI C, ANSI Fortran-77. No. of lines in distributed program, including test data, etc.: 11282. No. of bytes in distributed program, including test data, etc.: 49458. Distribution format: tar.gz. Memory required to execute with typical data: Memory requirements mainly depend on the selection of a GAMESS-US basis set and the number of atoms. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: no. Nature of physical problem: Multidimensional geometry optimization is of great importance in any ab initio calculation, since it is usually one of the most CPU-intensive tasks, especially on large molecular systems. For example, the geometric and energetic description of van der Waals and weakly bound H-bonded complexes requires the construction of the relevant important portions of the multidimensional intermolecular potential energy surface (IPES), so that the various views held about the nature of these bonds can be quantitatively tested. Method of solution: The Merlin/MCL optimization environment was interconnected with the GAMESS-US package to facilitate geometry optimization in quantum chemistry problems. Constructing the important portions of the IPES requires the capability to program optimization strategies, and the Merlin/MCL environment was used to implement such strategies. In this work, a CP-corrected geometry optimization was performed on the HF-H2O complex and an MCL program was developed to study portions of the potential energy surface of the C6H6-H2O complex.
Restrictions on the complexity of the problem: The Merlin optimization environment and the GAMESS-US package must be installed. The MERGAM interface requires GAMESS-US input files constructed in Cartesian coordinates. This restriction stems from a design-time requirement not to allow reorientation of atomic coordinates; this rule always holds when the COORD = UNIQUE keyword is applied in a GAMESS-US input file. Typical running time: This depends on the size of the molecular system, the size of the basis set and the method of electron correlation. Execution of the test run took approximately 5 min on a 2.8 GHz Intel Pentium CPU.
Wind farm optimization using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting, and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified viral system algorithm applied to optimizing the locations of the components in a wind farm to maximize the energy output for a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalty for lack of system reliability. The viral system algorithm used in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe its behavior and to account for the stochastic nature of the renewable energy components, aiming to increase their power output and the reliability of these systems.
The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, giving a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the impact of the wind-wake effect of the wind turbines upon one another to describe and evaluate the reduction in the system's power production capacity depending on the layout distribution of the wind turbines.
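The Pareto-based selection underlying method (ii) can be sketched as a dominance filter. The snippet below is a minimal, generic illustration over hypothetical layout scores (power, negated cost, reliability), not the dissertation's actual evolutionary algorithm:

```python
def dominates(a, b):
    """True if candidate a is at least as good as b in every objective
    and strictly better in at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical layout scores: (power in MW, negated cost in M$, reliability).
layouts = [
    (480.0, -12.0, 0.97),
    (455.0, -10.5, 0.99),
    (430.0, -13.0, 0.95),   # dominated by the first layout
    (500.0, -14.0, 0.96),
]
front = pareto_front(layouts)
```

Cost is negated so that all three objectives are maximized uniformly; the surviving set is the Pareto front reported to the decision maker.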
Phillips, Steven P.; Carlson, Carl S.; Metzger, Loren F.; Howle, James F.; Galloway, Devin L.; Sneed, Michelle; Ikehara, Marti E.; Hudnut, Kenneth W.; King, Nancy E.
2003-01-01
Ground-water levels in Lancaster, California, declined more than 200 feet during the 20th century, resulting in reduced ground-water supplies and more than 6 feet of land subsidence. Facing continuing population growth, water managers are seeking solutions to these problems. Injection of imported, treated fresh water into the aquifer system when it is most available and least expensive, for later use during high-demand periods, is being evaluated as part of a management solution. The U.S. Geological Survey, in cooperation with the Los Angeles County Department of Public Works and the Antelope Valley-East Kern Water Agency, monitored a pilot injection program, analyzed the hydraulic and subsidence-related effects of injection, and developed a simulation/optimization model to help evaluate the effectiveness of using existing and proposed wells in an injection program for halting the decline of ground-water levels and avoiding future land subsidence while meeting increasing ground-water demand. A variety of methods were used to measure aquifer-system response to injection. Water levels were measured continuously in nested (multi-depth) piezometers and monitoring wells and periodically in other wells that were within several miles of the injection site. Microgravity surveys were done to estimate changes in the elevation of the water table in the absence of wells and to estimate specific yield. Aquifer-system deformation was measured directly and continuously using a dual borehole extensometer and indirectly using continuous Global Positioning System (GPS), first-order spirit leveling, and an array of tiltmeters. The injected water and extracted water were sampled periodically and analyzed for constituents, including chloride and trihalomethanes. 
Measured injection rates of about 750 gallons per minute (gal/min) per well at the injection site during a 5-month period showed that injection at or above the average extraction rates at that site (about 800 gal/min) was hydraulically feasible. Analyses of these data took many forms. Coupled measurements of gravity and water-level change were used to estimate the specific yield near the injection wells, which, in turn, was used to estimate areal water-table changes from distributed measurements of gravity change. Values of the skeletal components of aquifer-system storage, which are key subsidence-related characteristics of the system, were derived from continuous measurements of water levels and aquifer-system deformation. A numerical model of ground-water flow was developed for the area surrounding Lancaster and used to estimate horizontal and vertical hydraulic conductivities. A chemical mass balance was done to estimate the recovery of injected water. The ground-water-flow model was used to project changes in ground-water levels for 10 years into the future, assuming no injection, no change in pumping distribution, and forecasted increases in ground-water demand. Simulated ground-water levels decreased throughout the Lancaster area, suggesting that land subsidence would continue as would the depletion of ground-water supplies and an associated loss of well production capacity. A simulation/optimization model was developed to help identify optimal injection and extraction rates for 16 existing and 13 proposed wells to avoid future land subsidence and to minimize loss of well production capacity while meeting increasing ground-water demands. Results of model simulations suggest that these objectives can be met with phased installation of the proposed wells during the 10-year period. 
Water quality was not considered in the optimization, but chemical-mass-balance results indicate that a sustained injection program likely would have residual effects on the chemistry of ground water.
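The coupling of gravity and water-level measurements to estimate specific yield can be illustrated with the standard infinite-slab (Bouguer) approximation, in which a water-table change produces a gravity change of 2*pi*G*rho_w*Sy*delta_h. This is a textbook simplification, not the study's full analysis, and the numeric values below are illustrative:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
RHO_W = 1000.0   # density of water, kg/m^3

def specific_yield(delta_g_microgal, delta_h_m):
    """Specific yield implied by a gravity change (in microGal) that
    accompanies a water-table change delta_h (in meters), using the
    infinite-slab relation delta_g = 2*pi*G*rho_w*Sy*delta_h."""
    delta_g = delta_g_microgal * 1e-8      # 1 microGal = 1e-8 m/s^2
    return delta_g / (2.0 * math.pi * G * RHO_W * delta_h_m)
```

Under this approximation a fully saturated 1 m rise (Sy = 1) corresponds to roughly 42 microGal, so a measured gravity change of a few microGal per meter of water-table rise implies a specific yield of a few tenths.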
Gain and power optimization of the wireless optical system with multilevel modulation.
Liu, Xian
2008-06-01
When used in an outdoor environment to expedite networking access, the performance of wireless optical communication systems is affected by transmitter sway. In the design of such systems, much attention has been paid to developing power-efficient schemes. However, the bandwidth efficiency is also an important issue. One of the most natural approaches to promote bandwidth efficiency is to use multilevel modulation. This leads to multilevel pulse amplitude modulation in the context of intensity modulation and direct detection. We develop a model based on the four-level pulse amplitude modulation. We show that the model can be formulated as an optimization problem in terms of the transmitter power, bit error probability, transmitter gain, and receiver gain. The technical challenges raised by modeling and solving the problem include the analytical and numerical treatments for the improper integrals of the Gaussian functions coupled with the erfc function. The results demonstrate that, at the optimal points, the power penalty paid to the doubled bandwidth efficiency is around 3 dB.
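The four-level modulation error model can be sketched with the standard equally spaced 4-PAM symbol-error formula, which involves the erfc function the abstract mentions. This is a generic additive-Gaussian-noise approximation, not the paper's full optimization over transmitter power and transmitter/receiver gains:

```python
import math

def pam4_ser(avg_power, noise_sigma):
    """Approximate symbol-error rate of equally spaced 4-PAM under
    additive Gaussian noise.  With intensity-modulation levels
    0, d, 2d, 3d and average power P, the level spacing is d = 2P/3,
    and the standard result is SER = (3/4) * erfc(d / (2*sqrt(2)*sigma))."""
    d = 2.0 * avg_power / 3.0                               # adjacent-level spacing
    return 0.75 * math.erfc(d / (2.0 * math.sqrt(2.0) * noise_sigma))
```

Because the four levels must share the same average power budget that on-off keying spends on two, the adjacent-level spacing shrinks, which is the source of the roughly 3 dB power penalty the paper reports for doubled bandwidth efficiency.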
Adaptive Virtual Reality Training to Optimize Military Medical Skills Acquisition and Retention.
Siu, Ka-Chun; Best, Bradley J; Kim, Jong Wook; Oleynikov, Dmitry; Ritter, Frank E
2016-05-01
The Department of Defense has pursued the integration of virtual reality simulation into medical training and applications to fulfill the need to train 100,000 military health care personnel annually. Medical personnel transitions, both when entering an operational area and when returning to the civilian theater, are characterized by the need to rapidly reacquire skills that are essential but have decayed through disuse or infrequent use. Improved efficiency in reacquiring such skills is critical to reduce the likelihood of mistakes that may result in mortality and morbidity. We focus here on a study testing a theory of how the skills required for minimally invasive surgery are learned and retained by military surgeons. Our adaptive virtual reality surgical training system will incorporate an intelligent mechanism for tracking performance that will recognize skill deficiencies and generate an optimal adaptive training schedule. Our design models skill acquisition based on a skill retention theory. The complexity of appropriate training tasks is adjusted according to the level of retention and/or surgical experience. Based on preliminary work, our system will improve the capability to interactively assess the level of skill learning and decay, optimize skill relearning across levels of surgical experience, and positively impact skill maintenance. Our system could eventually reduce mortality and morbidity by providing trainees with the re-experience they need to help make the transition between operating theaters. This article reports some data that will support adaptive tutoring of minimally invasive surgery and similar surgical skills. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
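A minimal sketch of the kind of retention model such an adaptive trainer could build on is an exponential forgetting curve; the functional form, the `strength` parameter, and the 0.7 threshold below are illustrative assumptions, not the study's actual model:

```python
import math

def retention(days_since_practice, strength):
    """Exponential forgetting-curve sketch (Ebbinghaus-style): predicted
    probability of recalling a skill decays with time since practice,
    more slowly for higher memory strength."""
    return math.exp(-days_since_practice / strength)

def needs_refresher(days, strength, threshold=0.7):
    """Schedule a refresher session once predicted retention falls
    below a threshold (the threshold value is an arbitrary placeholder)."""
    return retention(days, strength) < threshold
```

An adaptive schedule then amounts to triggering training tasks whenever the predicted retention for a tracked skill crosses the threshold, with task complexity scaled to the trainee's experience level.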
Wagner, Peter D
2012-01-01
Exercise is the example par excellence of the body functioning as a physiological system. Conventionally we think of the O2 transport process as a major manifestation of that system, linking and integrating pulmonary, cardiovascular, hematological and skeletal muscular contributions to the task of getting O2 from the air to the mitochondria, and this process has been well described. However, exercise invokes system responses at levels additional to those of macroscopic O2 transport. One such set of responses appears to center on muscle intracellular PO2, which falls dramatically from rest to exercise. At rest, it approximates 4 kPa, but during heavy endurance exercise it falls to about 0.4-0.5 kPa, an amazingly low value for a tissue absolutely dependent on the continual supply of O2 to meet very high energy demands. One wonders why intracellular PO2 is allowed to fall to such levels. The proposed answer, presented in this review, is that a low intramyocyte PO2 is pivotal in: (a) optimizing oxygen's own physiological transport, and (b) stimulating adaptive gene expression that, after translation, enables greater exercise capacity, all the while maintaining PO2 at levels sufficient to allow oxidative phosphorylation to operate fast enough to support intense muscle contraction. Thus, during exercise, reductions of intracellular PO2 to less than 1% of that in the atmosphere enable an integrated response that fundamentally and simultaneously optimizes physiological, biochemical and molecular events that support not only the exercise as it happens but also the adaptive changes that increase exercise capacity over the longer term.
Melatonin and the circadian system: contributions to successful female reproduction.
Reiter, Russel J; Tamura, Hiroshi; Tan, Dun Xian; Xu, Xiao-Ying
2014-08-01
To summarize the role of melatonin and circadian rhythms in determining optimal female reproductive physiology, especially at the peripheral level. Databases were searched for the related English-language literature published up to March 1, 2014. Only papers in peer-reviewed journals are cited. Not applicable. Not applicable. Melatonin treatment, alterations of the normal light:dark cycle and light exposure at night. Melatonin levels in the blood and in the ovarian follicular fluid and melatonin synthesis, oxidative damage and circadian rhythm disturbances in peripheral reproductive organs. The central circadian regulatory system is located in the suprachiasmatic nucleus (SCN). The output of this master clock is synchronized to 24 hours by the prevailing light-dark cycle. The SCN regulates rhythms in peripheral cells via the autonomic nervous system and it sends a neural message to the pineal gland where it controls the cyclic production of melatonin; after its release, the melatonin rhythm strengthens peripheral oscillators. Melatonin is also produced in the peripheral reproductive organs, including granulosa cells, the cumulus oophorus, and the oocyte. These cells, along with the blood, may contribute melatonin to the follicular fluid, which has melatonin levels higher than those in the blood. Melatonin is a powerful free radical scavenger and protects the oocyte from oxidative stress, especially at the time of ovulation. The cyclic levels of melatonin in the blood pass through the placenta and aid in the organization of the fetal SCN. In the absence of this synchronizing effect, the offspring may exhibit neurobehavioral deficits. Also, melatonin protects the developing fetus from oxidative stress. Melatonin produced in the placenta likewise may preserve the optimal function of this organ. Both stable circadian rhythms and cyclic melatonin availability are critical for optimal ovarian physiology and placental function. 
Because light exposure after darkness onset at night disrupts the master circadian clock and suppresses elevated nocturnal melatonin levels, light at night should be avoided. Copyright © 2014 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Welstead, Jason
2014-01-01
This research focused on incorporating stability and control into a multidisciplinary design optimization on a Boeing 737-class advanced concept called the D8.2b. A new method of evaluating the aircraft handling performance using quantitative evaluation of the system response to disturbances, including perturbations, continuous turbulence, and discrete gusts, is presented. A multidisciplinary design optimization was performed using the D8.2b transport aircraft concept. The configuration was optimized for minimum fuel burn using a design range of 3,000 nautical miles. Optimization cases were run using fixed tail volume coefficients, static trim constraints, and static trim and dynamic response constraints. A Cessna 182T model was used to test the various dynamic analysis components, ensuring the analysis was behaving as expected. Results of the optimizations show that including stability and control in the design process drastically alters the optimal design, indicating that stability and control should be included in conceptual design to avoid system level penalties later in the design process.
Passive states as optimal inputs for single-jump lossy quantum channels
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Mari, Andrea; Lloyd, Seth; Giovannetti, Vittorio
2016-06-01
The passive states of a quantum system minimize the average energy among all the states with a given spectrum. We prove that passive states are the optimal inputs of single-jump lossy quantum channels. These channels arise from a weak interaction of the quantum system of interest with a large Markovian bath in its ground state, such that the interaction Hamiltonian couples only consecutive energy eigenstates of the system. We prove that the output generated by any input state ρ majorizes the output generated by the passive input state ρ0 with the same spectrum as ρ. Then, the output generated by ρ can be obtained by applying a random unitary operation to the output generated by ρ0. This is an extension of De Palma et al. [IEEE Trans. Inf. Theory 62, 2895 (2016)], 10.1109/TIT.2016.2547426, where the same result is proved for one-mode bosonic Gaussian channels. We also prove that at finite temperature this optimality property can fail already in a two-level system, where the best input is a coherent superposition of the two energy eigenstates.
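The defining property of a passive state, minimal average energy among all states with a fixed spectrum, can be checked numerically for a small system by brute force. The three-level spectrum and energy values below are made up for illustration:

```python
import numpy as np
from itertools import permutations

def passive_energy(spectrum, energies):
    """Average energy of the passive state: the largest eigenvalue is
    assigned to the lowest energy level, the next largest to the next
    level, and so on down the spectrum."""
    p = np.sort(spectrum)[::-1]   # eigenvalues, decreasing
    e = np.sort(energies)         # energy levels, increasing
    return float(p @ e)

# Illustrative three-level system (spectrum and energies are invented).
spec = [0.5, 0.3, 0.2]
en = [0.0, 1.0, 2.5]
e_passive = passive_energy(spec, en)

# Any other assignment of the same spectrum costs at least as much energy.
assert all(np.dot(perm, np.sort(en)) >= e_passive - 1e-12
           for perm in permutations(spec))
```

The anti-ordered pairing is exactly the rearrangement-inequality minimizer, which is why diagonal states with decreasing populations on increasing energy levels are passive.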
Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues
NASA Astrophysics Data System (ADS)
Chakravarthy, Srinivas R.; Rumyantsev, Alexander
2018-03-01
Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses, as well as academia, as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise or community to be utilized through a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
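The basic benefit of replication in an idealized Fork-Join setting can be illustrated with a Monte Carlo estimate of the minimum of several exponential service times. This sketch deliberately ignores queueing delay, the MAP arrival process, and the phase-type/Weibull services the paper actually studies:

```python
import random

def mean_latency(replicas, mu=1.0, n=20000, seed=42):
    """Monte Carlo estimate of the expected minimum of `replicas` i.i.d.
    exponential(mu) service times: the idealized latency of a request
    replicated to `replicas` idle servers, where the first finished
    copy completes the request."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += min(rng.expovariate(mu) for _ in range(replicas))
    return total / n
```

For exponential services the minimum of r copies is itself exponential with rate r*mu, so the estimate should fall roughly as 1/r; the trade-off studied in the paper is that each extra replica also consumes server capacity.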
A vibratory stimulation-based inhibition system for nocturnal bruxism: a clinical report.
Watanabe, T; Baba, K; Yamagata, K; Ohyama, T; Clark, G T
2001-03-01
For the single subject tested to date, the bruxism-contingent vibratory-feedback system for occlusal appliances effectively inhibited bruxism without inducing substantial sleep disturbance. Whether the reduction in bruxism would continue if the device no longer provided feedback and whether the force levels applied are optimal to induce suppression remain to be determined.
Implementing a bubble memory hierarchy system
NASA Technical Reports Server (NTRS)
Segura, R.; Nichols, C. D.
1979-01-01
This paper reports on the implementation of a magnetic bubble memory in a two-level hierarchical system. The hierarchy used a major-minor loop device and RAM under microprocessor control. Dynamic memory addressing, a dual-bus primary memory, and hardware data-modification detection are incorporated in the system to minimize access time. The objective of the system is to combine the advantages of bipolar memory with those of bubble domain memory, providing a smart, optimal memory system that is easy to interface and independent of the user's system.
NASA Astrophysics Data System (ADS)
Madhikar, Pratik Ravindra
The most crucial design consideration for an Aircraft Electric Power Distribution System (EPDS) is reliability. In an EPDS, power is distributed from top-level generators to bottom-level loads through various sensors, actuators, and rectifiers with the help of AC and DC buses and control switches. Because consumer demands are ever growing and safety is of utmost importance, loads increase and, as a result, so do power-management requirements. Therefore, the design of an EPDS should be optimized for maximum efficiency. This thesis discusses an integrated tool, based on a Need Based Design method and Fault Tree Analysis (FTA), for achieving an optimum EPDS design that provides maximum reliability in terms of continuous connectivity, power management, and minimum cost. If an EPDS is formulated as an optimization problem, it can be solved under connectivity, cost, and power constraints using a linear solver to obtain the desired output of maximum reliability at minimum cost. Furthermore, the thesis discusses the viability and implementation of the resulting topology on typical large-aircraft specifications.
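For purely linear costs with simple capacity bounds, the kind of linear optimization described above reduces to merit-order dispatch. The sketch below is a generic illustration with invented generator data, not the thesis's integrated FTA-based tool:

```python
def dispatch(demand, generators):
    """Greedy merit-order dispatch: serve demand from the cheapest
    generators first.  For linear costs with capacity bounds this
    greedy order is optimal.  generators: list of (cost_per_kw, capacity_kw)."""
    ordered = sorted(generators)              # cheapest first
    plan, remaining = [], demand
    for cost, capacity in ordered:
        take = min(capacity, remaining)       # use as much cheap power as possible
        plan.append(take)
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand exceeds total generator capacity")
    total_cost = sum(c * q for (c, _), q in zip(ordered, plan))
    return plan, total_cost

# Hypothetical two-generator bus serving a 100 kW load.
plan, cost = dispatch(100.0, [(3.0, 60.0), (2.0, 80.0)])
```

A full EPDS formulation would add the connectivity and reliability constraints the thesis describes, which is where a general linear solver becomes necessary.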
Robust active noise control in the loadmaster area of a military transport aircraft.
Kochan, Kay; Sachau, Delf; Breitbach, Harald
2011-05-01
The active noise control (ANC) method is based on the superposition of a disturbance noise field with a second anti-noise field generated by loudspeakers and monitored with error microphones. This method can be used to reduce the noise level inside the cabin of a propeller aircraft. However, during the design process of the ANC system, extensive measurements of transfer functions are necessary to optimize the loudspeaker and microphone positions. Sometimes, the transducer positions have to be tailored according to the optimization results to achieve sufficient noise reduction. The purpose of this paper is to introduce a controller design method for such narrow-band ANC systems. The method can be seen as an extension of common transducer-placement optimization procedures. In the presented method, individual weighting parameters for the loudspeakers and microphones are used. With this procedure, the tailoring of the transducer positions is replaced by the adjustment of controller parameters. Moreover, the ANC system will be robust because uncertainties are considered during the optimization of the controller parameters. The paper describes the necessary theoretical background for the method and demonstrates its efficiency in an acoustical mock-up of a military transport aircraft.
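The core cancellation idea behind ANC can be sketched with a plain LMS adaptive filter driving the error-microphone signal toward zero. This toy omits the secondary-path (filtered-x) compensation and the loudspeaker/microphone weighting central to the paper's method; the signal parameters are invented:

```python
import math

def lms_cancel(freq_hz, fs=8000, n=4000, taps=8, mu=0.01):
    """Minimal LMS sketch of active noise cancellation: adapt FIR
    weights so the 'anti-noise' y cancels a sinusoidal disturbance d
    observed at the error microphone.  Returns the final residual
    error magnitude (near zero once the filter has converged)."""
    w = [0.0] * taps              # adaptive FIR weights
    x_buf = [0.0] * taps          # tapped delay line of the reference
    e = 0.0
    for k in range(n):
        ref = math.sin(2 * math.pi * freq_hz * k / fs)   # reference signal
        x_buf = [ref] + x_buf[:-1]
        d = 0.8 * ref                                    # disturbance at the mic
        y = sum(wi * xi for wi, xi in zip(w, x_buf))     # anti-noise output
        e = d - y                                        # residual error
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x_buf)]
    return abs(e)
```

In a real cabin installation, the transfer functions from each loudspeaker to each microphone (the secondary paths) must be identified and compensated, which is exactly why the measurement and weighting effort discussed in the paper is needed.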