Path planning during combustion mode switch
Jiang, Li; Ravi, Nikhil
2015-12-29
Systems and methods are provided for transitioning between a first combustion mode and a second combustion mode in an internal combustion engine. A current operating point of the engine is identified and a target operating point for the internal combustion engine in the second combustion mode is also determined. A predefined optimized transition operating point is selected from memory. While operating in the first combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the selected optimized transition operating point. When the engine is operating at the selected optimized transition operating point, the combustion mode is switched from the first combustion mode to the second combustion mode. While operating in the second combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the target operating point.
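The three-phase sequence described in the abstract (approach the stored transition point, switch modes, then approach the target) can be sketched as a simple state machine. Everything here is illustrative: the operating point is reduced to a hypothetical (speed, load) pair and the actuator response to a first-order nudge; the patent itself does not specify these details.

```python
# Hypothetical sketch of the transition sequence: move toward a stored
# transition point in mode 1, switch modes there, then move to the target.

def step_toward(current, setpoint, rate=0.25):
    """Nudge an operating point (e.g. speed, load) toward a setpoint."""
    return tuple(c + rate * (s - c) for c, s in zip(current, setpoint))

def transition(current, transition_point, target, tol=1e-3):
    mode = 1
    trace = []
    while True:
        goal = transition_point if mode == 1 else target
        current = step_toward(current, goal)
        trace.append((mode, current))
        if mode == 1 and max(abs(c - g) for c, g in zip(current, goal)) < tol:
            mode = 2  # at the optimized transition point: switch combustion mode
        elif mode == 2 and max(abs(c - g) for c, g in zip(current, goal)) < tol:
            return mode, current, trace

mode, final, trace = transition((1500.0, 3.0), (2000.0, 4.0), (2500.0, 6.0))
```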
Patnaik, Lalit; Umanand, Loganathan
2015-10-26
The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.
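The tangency construction above can be mimicked numerically: fix a required average speed and search the v0-phi_m grid for the feasible pair with minimum power. The speed and power models below are toy stand-ins, not the paper's mechanics, so only the search pattern carries over.

```python
# Toy illustration (not the paper's models): pick the (v0, phi_m) pair that
# meets a required average speed at minimum mechanical power, by brute force.
import math

def avg_speed(v0, phi):      # assumed toy model of forward progress
    return v0 * math.cos(phi)

def power(v0, phi):          # assumed toy model of step-to-step energy cost
    return 0.5 * v0**3 * math.sin(phi)**2 / max(math.cos(phi), 1e-9)

def optimal_point(v_target, tol=0.02):
    """Scan a grid in the v0-phi plane; keep the cheapest feasible point."""
    best = None
    for i in range(1, 300):          # v0 grid: 0.01 .. 2.99
        for j in range(1, 90):       # phi grid: 1 .. 89 degrees
            v0, phi = i * 0.01, math.radians(j)
            if abs(avg_speed(v0, phi) - v_target) <= tol:
                p = power(v0, phi)
                if best is None or p < best[0]:
                    best = (p, v0, phi)
    return best

best = optimal_point(1.0)
```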
A multiple objective optimization approach to quality control
NASA Technical Reports Server (NTRS)
Seaman, Christopher Michael
1991-01-01
The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, determine what changes to the process inputs or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. If no tradeoff has to be made to move to a new operating point, then the process is not operating at an optimal point in any sense. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point.
The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
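Step (1) of the operator algorithm above hinges on whether a single input direction can decrease every quality criterion at once. A minimal sketch, assuming two process inputs and using a brute-force scan of directions (a real implementation would solve a small feasibility program):

```python
# Decide whether some input direction improves all quality criteria at once,
# by scanning candidate unit directions (toy 2-D input space).
import math

def common_descent_direction(gradients, n_dirs=360):
    """Return a unit direction d with g . d < 0 for every gradient g, or None."""
    for k in range(n_dirs):
        a = 2 * math.pi * k / n_dirs
        d = (math.cos(a), math.sin(a))
        if all(g[0] * d[0] + g[1] * d[1] < 0 for g in gradients):
            return d
    return None  # current point is Pareto-critical: a tradeoff is required

# Opposed gradients admit no common improvement; roughly aligned ones do.
assert common_descent_direction([(1.0, 0.0), (-1.0, 0.0)]) is None
assert common_descent_direction([(1.0, 0.1), (0.8, -0.1)]) is not None
```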
NASA Technical Reports Server (NTRS)
Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions differ from the somewhat arbitrary design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multi-point methods.
Kasper, Sigrid M; Dueholm, Margit; Marinovskij, Edvard; Blaakær, Jan
2017-03-01
To analyze the ability of magnetic resonance imaging (MRI) and systematic evaluation at surgery to predict optimal cytoreduction in primary advanced ovarian cancer, and to develop a preoperative scoring system for cancer staging. Preoperative MRI and standard laparotomy were performed in 99 women with either ovarian or primary peritoneal cancer. Using univariate and multivariate logistic regression analysis of a systematic description of the tumor in nine abdominal compartments obtained by MRI and during surgery, plus clinical parameters, a scoring system was designed that predicted non-optimal cytoreduction. Non-optimal cytoreduction at operation was predicted by the following: (A) presence of comorbidities in ASA group 3 or 4; (B) tumor presence in multiple different compartments; and (C) numbers of specified sites of organ involvement. The score includes: number of compartments involved (1-9 points); >1 subdiaphragmatic location with presence of tumor (1 point); deep organ involvement of liver (1 point), porta hepatis (1 point), spleen (1 point), mesentery/vessel (1 point), cecum/ileocecal (1 point), rectum/vessels (1 point); and ASA groups 3 and 4 (2 points). Use of the scoring system based on operative findings gave an area under the curve (AUC) of 91% (85-98%) for patients in whom optimal cytoreduction could not be achieved. The score AUC obtained by MRI was 84% (76-92%), and 43% of non-optimal cytoreduction patients were identified, with only 8% of potentially operable patients falsely classified as non-optimal cytoreduction cases at the most optimal cut-off value. Tumor in individual locations did not predict operability. This systematic scoring system based on operative findings and MRI may predict non-optimal cytoreduction. MRI is able to assess ovarian cancer with peritoneal carcinomatosis with satisfactory concordance with laparotomic findings.
This scoring system could be useful as a clinical guideline and should be evaluated and developed further in larger studies.
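The point assignments listed in the abstract translate directly into a small scoring helper. This is only a transcription for illustration; the field names are invented and the published cut-off values are not reproduced, so it must not be read as a validated clinical tool.

```python
# Direct transcription of the scoring items listed above (illustrative only).

def cytoreduction_score(n_compartments, multi_subdiaphragmatic, deep_sites, asa_group):
    """n_compartments: 1-9; deep_sites: subset of the six organ sites; asa_group: 1-4."""
    valid = {"liver", "porta_hepatis", "spleen", "mesentery_vessel",
             "cecum_ileocecal", "rectum_vessels"}
    assert set(deep_sites) <= valid and 1 <= n_compartments <= 9
    score = n_compartments                       # 1-9 points
    score += 1 if multi_subdiaphragmatic else 0  # >1 subdiaphragmatic location
    score += len(set(deep_sites))                # 1 point per deep organ site
    score += 2 if asa_group >= 3 else 0          # ASA groups 3 and 4
    return score

print(cytoreduction_score(4, True, {"liver", "spleen"}, 3))  # → 9
```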
Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Chen, Robert T. N.
1996-01-01
This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.
Optimization of bump and blowing to control the flow through a transonic compressor blade cascade
NASA Astrophysics Data System (ADS)
Mazaheri, K.; Khatibirad, S.
2018-03-01
Shock control bump (SCB) and blowing are two flow control methods, used here to improve the aerodynamic performance of transonic compressors. Both methods are applied to a NASA rotor 67 blade section and are optimized to minimize the total pressure loss. A continuous adjoint algorithm is used for multi-point optimization of a SCB to improve the aerodynamic performance of the rotor blade section, for a range of operational conditions around its design point. A multi-point and two single-point optimizations are performed in the design and off-design conditions. It is shown that the single-point optimized shapes have the best performance for their respective operating conditions, but the multi-point one has an overall better performance over the whole operating range. An analysis is given of how both the single- and multi-point optimized SCBs similarly change the wave structure between blade sections, resulting in a more favorable flow pattern. Interactions of the SCB with the boundary layer and the wave structure, and its effects on the separation regions, are also studied. We have also introduced the concept of blowing for control of shock wave and boundary-layer interaction. A geometrical model is introduced, and the geometrical and physical parameters of blowing are optimized at the design point. The performance improvements of blowing are compared with the SCB. The physical interactions of SCB with the boundary layer and the shock wave are analyzed. The effects of SCB on the wave structure in the flow domain outside the boundary-layer region are investigated. It is shown that the effects of the blowing mechanism are very similar to the SCB.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe operation of the SOFC system with maximum efficiency under load and uncertainty variations.
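A plain particle swarm sketch conveys how such an optimizer proposes operating set points. The efficiency surface below is an assumed toy function, and this is standard PSO rather than the paper's two-space variant or an actual SOFC model.

```python
# Minimal particle swarm optimization sketch on a toy efficiency surface.
import random

def efficiency(x):  # assumed toy surface, peak at (0.7, 0.3)
    return 1.0 - (x[0] - 0.7) ** 2 - (x[1] - 0.3) ** 2

def pso(obj, n=20, iters=60, w=0.6, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = max(pbest, key=obj)[:]              # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if obj(pos[i]) > obj(pbest[i]):
                pbest[i] = pos[i][:]
                if obj(pbest[i]) > obj(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(efficiency)
```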
Lin, Yu-Chih; Chang, Feng-Tang
2009-05-30
In this study, we attempted to enhance the removal efficiency of a honeycomb zeolite rotor concentrator (HZRC), operated at optimal parameters, for processing TFT-LCD volatile organic compounds (VOCs) with competitive adsorption characteristics. The results indicated that when the HZRC processed a VOCs stream of mixed compounds, compounds with a high boiling point took precedence in the adsorption process. In addition, compounds with a low boiling point already adsorbed onto the HZRC were displaced by the high-boiling-point compounds. To achieve optimal operating parameters for high VOCs removal efficiency, the results suggested controlling the inlet velocity to <1.5 m/s, reducing the concentration ratio to 8 times, increasing the desorption temperature to 200-225 °C, and setting the rotation speed to 6.5 rpm.
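The recommended settings in the last sentence can be restated as a simple range check, which is how such tuning results are often operationalized. Names and tolerances here are illustrative; the numeric ranges come from the abstract.

```python
# The recommended HZRC settings, restated as a range check (illustrative).

RECOMMENDED = {
    "inlet_velocity_m_s":  lambda v: v < 1.5,
    "concentration_ratio": lambda r: r <= 8,
    "desorption_temp_C":   lambda t: 200 <= t <= 225,
    "rotation_speed_rpm":  lambda s: abs(s - 6.5) < 0.5,
}

def check_hzrc(settings):
    """Return the names of settings outside the recommended ranges."""
    return [name for name, ok in RECOMMENDED.items() if not ok(settings[name])]

violations = check_hzrc({"inlet_velocity_m_s": 2.0, "concentration_ratio": 8,
                         "desorption_temp_C": 210, "rotation_speed_rpm": 6.5})
print(violations)  # → ['inlet_velocity_m_s']
```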
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position, as well as the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI 316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results are then compared with artificial neural networks and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through SVR compared to the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
2015-04-01
The Air Force Rapid Raptor Forward Arming and Refueling Point (FARP) concept is an initial approach at dispersing air operations. This work addresses optimizing the Rapid Raptor FARP, including the capability to conduct airfield surveys outside of a permissive environment.
NASA Astrophysics Data System (ADS)
Cao, Jia; Yan, Zheng; He, Guangyu
2016-06-01
This paper introduces an efficient algorithm, the multi-objective human learning optimization (MOHLO) method, to solve the AC/DC multi-objective optimal power flow problem (MOPF). Firstly, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived, which involves an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGA-II) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to locate optimal solutions. Above all, the MOHLO method can obtain a more complete Pareto front than the NSGA-II method. However, the choice of the optimal solution from the Pareto front depends mainly on whether the decision makers take an economic point of view or an energy saving and emission reduction point of view.
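The two building blocks named above, non-dominated sorting and the crowding distance index, are easy to show in isolation. The sketch below extracts only the first front and computes crowding distances for a minimization problem; it is generic NSGA-II machinery, not the MOHLO algorithm itself.

```python
# First non-dominated front and crowding distance, for minimization objectives.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def crowding_distance(front):
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points kept
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist

pts = [(1, 5), (2, 3), (4, 2), (3, 4), (5, 5)]
print(first_front(pts))  # → [(1, 5), (2, 3), (4, 2)]
```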
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu; ...
2017-10-10
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model, and significant savings in electricity cost could be achieved with network operational constraints satisfied.
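The building thermal dynamics folded into the D-OPF can be illustrated with a first-order room model. The coefficients below are assumed for illustration; the paper's MICP model is far richer (network constraints, battery degradation, PCC power factor, etc.).

```python
# A first-order building thermal model of the kind folded into the D-OPF above
# (coefficients are illustrative stand-ins).

def simulate(temps_out, hvac_power, T0=24.0, a=0.9, r=-2.0):
    """T[k+1] = a*T[k] + (1-a)*T_out[k] + r*P_hvac[k]  (cooling: r < 0)."""
    T, trace = T0, []
    for T_out, P in zip(temps_out, hvac_power):
        T = a * T + (1 - a) * T_out + r * P
        trace.append(round(T, 3))
    return trace

# With no cooling the room drifts toward the outdoor temperature;
# a constant 0.1-unit HVAC input holds it lower.
hot = [32.0] * 6
free = simulate(hot, [0.0] * 6)
cooled = simulate(hot, [0.1] * 6)
assert free[-1] > free[0] and cooled[-1] < free[-1]
```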
Piston Bowl Optimization for RCCI Combustion in a Light-Duty Multi-Cylinder Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, Reed M; Curran, Scott; Wagner, Robert M
2012-01-01
Reactivity Controlled Compression Ignition (RCCI) is an engine combustion strategy that produces low NOx and PM emissions with high thermal efficiency. Previous RCCI research has focused on single-cylinder heavy-duty engines. The current study investigates RCCI operation in a light-duty multi-cylinder engine at 3 operating points, selected by the Ad Hoc working group to cover a range of conditions seen in the US EPA light-duty FTP test. The fueling strategy for the engine experiments consisted of in-cylinder fuel blending using port fuel injection (PFI) of gasoline and early-cycle, direct injection (DI) of diesel fuel. At these 3 points, the stock engine configuration is compared to operation with both the original equipment manufacturer (OEM) and custom machined pistons designed for RCCI operation. The pistons were designed with assistance from the KIVA 3V computational fluid dynamics (CFD) code. By using a genetic algorithm optimization in conjunction with KIVA, the piston bowl profile was optimized for dedicated RCCI operation to reduce unburned fuel emissions and piston bowl surface area. By reducing these parameters, the thermal efficiency of the engine was improved while maintaining low NOx and PM emissions. Results show that with the new piston bowl profile and an optimized injection schedule, RCCI brake thermal efficiency was increased from 37%, with the stock EURO IV configuration, to 40% at the 2,600 rev/min, 6.9 bar BMEP condition, and NOx and PM emissions targets were met without the need for exhaust after-treatment.
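The genetic-algorithm-plus-CFD loop described above can be sketched with the CFD evaluation replaced by a stand-in objective. In a real run, fitness would come from a KIVA simulation of each candidate bowl profile; here a simple quadratic surrogate keeps the sketch self-contained.

```python
# Bare-bones genetic algorithm on a stand-in objective; a real run would
# score each candidate bowl-profile parameter vector with CFD.
import random

def fitness(x):  # surrogate: proxy for "unburned fuel + surface area", negated
    return -sum((xi - 0.5) ** 2 for xi in x)

def ga(n_genes=4, pop_size=30, gens=80, seed=2):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # mutation: reset one gene
                child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```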
NASA Astrophysics Data System (ADS)
Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.
2017-12-01
Supercritical helium loops at 4.2 K are the baseline cooling strategy for tokamak superconducting magnets (JT-60SA, ITER, DEMO, etc.). These loops use cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range along their entire length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy temperature constraints in different ways (playing on the bath temperature and on the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K consumes at least 220 W of electrical power). To find the optimal operating conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It shows that changes in the magnet temperature operating point (e.g. in case of a change in the plasma configuration) involve large changes in the optimal operating point of the cryodistribution. Recommendations are made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Programme 2014-2018 under Grant 633053.
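The trade-off in the abstract (every watt removed at 4.2 K costs at least 220 W of electricity) can be illustrated with a brute-force search over bath temperature and flow. The heat-load and magnet-temperature models below are invented stand-ins for the Simcryogenics model.

```python
# Toy version of the trade-off: choose a bath temperature and supercritical
# flow that keep the magnet below its limit at the lowest electrical cost.

CARNOT_PENALTY = 220.0  # W electric per W at ~4.2 K (from the abstract)

def magnet_temp(t_bath, flow):        # assumed model: more flow, better cooling
    return t_bath + 1.2 / flow

def cryo_load(t_bath, flow):          # assumed model: pumping work grows with flow
    return 5.0 + 3.0 * flow ** 2 + 2.0 * (4.6 - t_bath)

def best_operating_point(t_limit=4.6):
    """Grid search: bath 4.20-4.60 K, flow 0.5-5.0 (arbitrary units)."""
    candidates = ((tb / 100.0, f / 10.0)
                  for tb in range(420, 461) for f in range(5, 51))
    feasible = [(CARNOT_PENALTY * cryo_load(tb, f), tb, f)
                for tb, f in candidates if magnet_temp(tb, f) <= t_limit]
    return min(feasible)

cost, t_bath, flow = best_operating_point()
```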
A variable-gain output feedback control design approach
NASA Technical Reports Server (NTRS)
Haylo, Nesim
1989-01-01
A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighting the performance at many operating points. The solution is obtained by embedding the problem into Multi-Configuration Control (MCC), a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.
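The cost structure described above, one quadratic cost per operating point summed into a single design objective, can be shown on scalar toy plants. A single gain replaces the full variable-gain law, and the plant models are illustrative, so this is only the multi-model weighting idea, not the MCC method itself.

```python
# One feedback gain k chosen to minimize a weighted sum of quadratic costs
# over several scalar plants x' = a*x - k*x, each plant standing in for one
# operating point. All models and weights are illustrative.

def cost_for_plant(a, k, x0=1.0, dt=0.01, steps=2000, q=1.0, r=0.1):
    x, J = x0, 0.0
    for _ in range(steps):                 # forward-Euler rollout
        u = -k * x
        J += (q * x * x + r * u * u) * dt
        x += (a * x + u) * dt
        if abs(x) > 1e6:                   # unstable at this operating point
            return float("inf")
    return J

def variable_gain_design(plant_as, weights, k_grid):
    return min(k_grid,
               key=lambda k: sum(w * cost_for_plant(a, k)
                                 for a, w in zip(plant_as, weights)))

k_best = variable_gain_design([0.5, 1.0, 2.0], [1.0, 1.0, 1.0],
                              [i * 0.25 for i in range(1, 41)])
```

Note that a single-model design for the mildest plant (a = 0.5) could pick a gain below 2, which destabilizes the a = 2 operating point; the summed cost rules such gains out.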
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. Here in this correspondence, optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refining system, and then the Wiener-type model can be obtained through combining the mechanism model of Canadian Standard Freeness and the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective of pulp quality and SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. In conclusion, the simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, energy consumption is significantly reduced.
Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method
NASA Astrophysics Data System (ADS)
Huang, Feng; Li, Jing
2017-12-01
The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cables and is not the optimal operating state for the cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the constraint that the temperature of every cable stays below the maximum permissible temperature. The interior point method, which is very effective for nonlinear problems, is put forward to solve the extreme value problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method increases the total load current by about 5%. It greatly improves the economic performance of the cable cluster.
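The formulation above (maximize total current subject to per-cable temperature limits) can be sketched with a log-barrier method, the core idea behind interior point algorithms, on two mutually heating cables. The thermal coefficients are invented for illustration.

```python
# Maximize I1 + I2 subject to temperature limits that are linear in I^2,
# via a log-barrier ("interior point") scheme with gradient ascent.

T_AMB, T_MAX = 25.0, 90.0
R = [[0.020, 0.008], [0.008, 0.020]]       # mutual heating, degC per A^2

def temps(I):
    return [T_AMB + sum(R[i][j] * I[j] ** 2 for j in range(2)) for i in range(2)]

def solve(mu=50.0, steps=4000, lr=0.02):
    I = [10.0, 10.0]                       # strictly feasible start
    for _ in range(steps):
        slack = [T_MAX - t for t in temps(I)]
        # gradient of I1 + I2 + mu * sum(log(slack_i)) w.r.t. each current
        grad = [1.0 - mu * sum(2 * R[i][j] * I[j] / slack[i] for i in range(2))
                for j in range(2)]
        I = [I[j] + lr * grad[j] for j in range(2)]
        mu *= 0.999                        # gradually shrink the barrier weight
    return I

I = solve()
```

The barrier term blows up near a temperature limit, so iterates stay inside the feasible region while the linear objective pulls the currents upward.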
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Ji, Haoran; Wang, Chengshan
Distributed generators (DGs) including photovoltaic panels (PVs) have been integrated dramatically in active distribution networks (ADNs). Due to the strong volatility and uncertainty, the high penetration of PV generation immensely exacerbates voltage violations in ADNs. However, the emerging flexible interconnection technology based on soft open points (SOPs) provides increased controllability and flexibility to system operation. To fully exploit the regulation ability of SOPs to address the problems caused by PV, this paper proposes a robust optimization method to achieve the robust optimal operation of SOPs in ADNs. A two-stage adjustable robust optimization model is built to tackle the uncertainties of PV outputs, in which robust operation strategies of SOPs are generated to eliminate the voltage violations and reduce the power losses of ADNs. A column-and-constraint generation (C&CG) algorithm is developed to solve the proposed robust optimization model, which is formulated as a second-order cone program (SOCP) to facilitate accuracy and computational efficiency. Case studies on the modified IEEE 33-node system and comparisons with a deterministic optimization approach are conducted to verify the effectiveness and robustness of the proposed method.
Research on crude oil storage and transportation based on optimization algorithm
NASA Astrophysics Data System (ADS)
Yuan, Xuhua
2018-04-01
At present, optimization theory and methods have been widely used in the optimization scheduling and optimal operation schemes of complex production systems. The theoretical results are implemented in software built on the C++ Builder 6 development platform. A simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes. It can optimize the scheduling scheme of a crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating roof oil storage tank is developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling costs of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of a crude oil storage and transportation system.
Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors
NASA Astrophysics Data System (ADS)
Tun, Min Thaw; Sakaguchi, Daisaku
2016-06-01
A high pressure ratio and a wide operating range are required of turbochargers in diesel engines. A recirculation flow type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by an annular passage, and a stable recirculation flow forms at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields a casing design with improved adiabatic efficiency over a wide range of operating flow rates. A sensitivity analysis of the design parameters with respect to efficiency has been performed. The optimized casing design is found to provide an optimized recirculation flow rate, for which the increment of entropy rise is minimized in the grooves and passages of the rotating impeller.
A neural network strategy for end-point optimization of batch processes.
Krothapally, M; Palanki, S
1999-01-01
The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch-to-batch variation in process conditions, and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes utilizing neural networks. The strategy involves training two neural networks: one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin approach and the nonlinear programming approach. The efficacy of the scheme is illustrated via simulation of a fed-batch fermentation.
NASA Technical Reports Server (NTRS)
Brown, Jonathan M.; Petersen, Jeremy D.
2014-01-01
NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy, which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.
Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines
Presas, Alexandre; Valero, Carme; Egusquiza, Eduard
2018-01-01
Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical systems. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis turbines, one of these critical points, which usually limits the maximum output power, is the full load instability. These machines therefore usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests have been performed on a large Francis turbine located in Canada (rated power of 444 MW). Many different sensors were used to monitor several working parameters of the unit over its whole operating range. For these tests, more than 80 signals, including ten different types of sensors and several operating signals that define the operating point of the unit, were acquired simultaneously. The present study focuses on the optimization of the acquisition strategy, which includes the type, number, location and acquisition frequency of the sensors and the corresponding signal analysis needed to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power.
The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the necessary acquisition frequency. Finally, an economical and easily implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits the operating range of the unit to be extended by working close to the instability with a reasonable safety margin. PMID:29601512
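The correlation-based sensor ranking described above can be sketched as follows. The signals and sensor names are synthetic stand-ins; in the paper, the indicators come from the field-test measurements.

```python
# Ranking candidate sensors by linear correlation with the oscillating power.
# All signals and sensor names are invented stand-ins for the field-test data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
power_osc = np.sin(2 * np.pi * 0.7 * t)        # oscillating power (arbitrary units)

sensors = {
    "draft_tube_pressure": 3.0 * power_osc + 0.3 * rng.standard_normal(t.size),
    "bearing_vibration":   0.5 * power_osc + 1.0 * rng.standard_normal(t.size),
    "ambient_temperature": rng.standard_normal(t.size),      # uncorrelated
}

# absolute Pearson correlation of each indicator with the oscillating power
corr = {name: abs(np.corrcoef(sig, power_osc)[0, 1]) for name, sig in sensors.items()}
ranking = sorted(corr, key=corr.get, reverse=True)
print(ranking[0])        # strongest indicator for the protection logic
```

A real selection would also weigh the installation simplicity and acquisition frequency, as the abstract notes, not correlation alone.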
Neural network based optimal control of HVAC&R systems
NASA Astrophysics Data System (ADS)
Ning, Min
Heating, Ventilation, Air-Conditioning and Refrigeration (HVAC&R) systems have wide application in providing a desired indoor environment for different types of buildings. It is well acknowledged that 30%-40% of the total energy generated is consumed by buildings, and HVAC&R systems alone account for more than 50% of building energy consumption. Low operational efficiency, especially under partial-load conditions, and poor control are among the reasons for such high energy consumption. To improve energy efficiency, HVAC&R systems should be operated so as to maintain a comfortable and healthy indoor environment under dynamic ambient and indoor conditions with the least energy consumption. This research focuses on the optimal operation of HVAC&R systems. The optimization problem is formulated and solved to find the optimal set points for the chilled water supply temperature, discharge air temperature and AHU (air handling unit) fan static pressure such that the indoor environment is maintained with the least chiller and fan energy consumption. To achieve this objective, a dynamic system model is first developed to simulate system behavior under different control schemes and operating conditions. The model is modular in structure and includes a water-cooled vapor compression chiller model and a two-zone VAV system model. A fuzzy-set based extended transformation approach is then applied to investigate the uncertainties of this model caused by uncertain parameters and the sensitivities of the control inputs with respect to the model outputs of interest. A multi-layer feed-forward neural network is constructed and trained in unsupervised mode to minimize a cost function comprising the overall energy cost and a penalty cost incurred when one or more constraints are violated. After training, the network is implemented as a supervisory controller to compute the optimal settings for the system.
In order to implement the optimal set points predicted by the supervisory controller, a set of five adaptive PI (proportional-integral) controllers is designed, one for each of the five local control loops of the HVAC&R system. The five controllers are used to track the optimal set points and the zone air temperature set points. The parameters of these PI controllers are tuned online to reduce tracking errors, with updating rules derived from Lyapunov stability analysis. Simulation results show that, compared to the conventional night reset operation scheme, the optimal operation scheme saves around 10% energy under full-load conditions and 19% under partial-load conditions.
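A single adaptive PI loop of this kind can be sketched as below. The first-order zone model `x' = -x + u` and the gradient-style gain growth are illustrative assumptions; the paper derives its updating rules from Lyapunov stability analysis.

```python
# One adaptive PI loop on an assumed first-order zone model x' = -x + u.
# The gain updates below are a simple gradient-style stand-in for the
# Lyapunov-derived tuning rules of the paper.
dt, t_end = 0.01, 60.0
setpoint = 1.0
kp, ki, gamma = 0.5, 0.1, 0.02        # initial PI gains and adaptation rate
x, integ = 0.0, 0.0                   # zone state and error integral

for _ in range(int(t_end / dt)):
    e = setpoint - x
    integ += e * dt
    u = kp * e + ki * integ           # PI control law
    x += dt * (-x + u)                # plant step (explicit Euler)
    kp += gamma * e * e * dt          # grow gains while tracking error persists
    ki += gamma * e * abs(integ) * dt

print(round(x, 2))                    # state settles near the set point
```

The integral term guarantees zero steady-state error for this plant; the adaptation merely speeds up the slow closed-loop pole.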
Options for Robust Airfoil Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Li, Wu
2002-01-01
A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.
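The idea of avoiding point-optimization by descending on the worst sampled condition can be sketched with a toy one-dimensional model. The quadratic stand-in for drag and the two sample conditions are assumptions, not the Euler-flow problem of the paper.

```python
# "Smart descent" flavor: always step on the currently worst operating point,
# so no sampled condition degrades badly. The 1-D quadratic stand-in for drag
# and the two conditions are invented for illustration.
design_points = [1.0, 3.0]                 # toy operating conditions
def f(x, m):                               # stand-in for drag at condition m
    return (x - m) ** 2

x = 0.0
for k in range(1, 5001):
    worst = max(design_points, key=lambda m: f(x, m))
    x -= (1.0 / k) * 2.0 * (x - worst)     # subgradient step on max_m f(x, m)
print(round(x, 2))                         # minimax design lies between conditions
```

The diminishing step makes this a subgradient method on the pointwise maximum, which converges to the minimax design rather than the optimum of any single condition.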
Power-limited low-thrust trajectory optimization with operation point detection
NASA Astrophysics Data System (ADS)
Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng
2018-06-01
The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data of the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described using a finite number of operation points, which are characterized by different pairs of thruster input power. In order to guarantee the integral accuracy for the discrete power-limited problem, a power operation detection technique is embedded in the fourth-order Runge-Kutta algorithm with fixed step. Moreover, the logarithmic homotopy method and normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
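The design criterion, choosing quantizer parameters to maximize mutual information, can be sketched for a single BPSK-plus-Gaussian-noise observation. The noise level and the uniform symmetric quantizer family are illustrative assumptions; the paper optimizes the message quantizers inside belief propagation.

```python
# Choosing a quantizer step to maximize I(X; Q) for X in {-1,+1} observed
# through Gaussian noise and a uniform symmetric quantizer.
import numpy as np
from math import erf, sqrt, log2

SIGMA = 0.8                                # assumed channel noise std

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def mutual_info(step, bits=3):
    k = 2 ** (bits - 1)
    edges = [-np.inf] + [step * i for i in range(-k + 1, k)] + [np.inf]
    mi = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        p_cell = {x: Phi((hi - x) / SIGMA) - Phi((lo - x) / SIGMA) for x in (1, -1)}
        p = 0.5 * (p_cell[1] + p_cell[-1])
        for x in (1, -1):
            if p_cell[x] > 0.0:
                mi += 0.5 * p_cell[x] * log2(p_cell[x] / p)
    return mi

steps = np.linspace(0.05, 1.5, 60)
best = max(steps, key=mutual_info)         # grid search over the step size
print(round(best, 2), round(mutual_info(best), 3))
```

A finer quantizer partition can never lose information, so the 3-bit optimum dominates the 1-bit one at the same step.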
Optimal Design and Operation of Permanent Irrigation Systems
NASA Astrophysics Data System (ADS)
Oron, Gideon; Walker, Wynn R.
1981-01-01
Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum-cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are balanced at the minimum-cost point. The optimization procedure involves a nonlinear, mixed-integer approach capable of achieving a variety of optimal solutions leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, reduced flow rates in the pipe system, and outlet (sprinkler or emitter) discharge.
NASA Technical Reports Server (NTRS)
1991-01-01
Seagull Technology, Inc., Sunnyvale, CA, produced a computer program under a Langley Research Center Small Business Innovation Research (SBIR) grant called STAFPLAN (Seagull Technology Advanced Flight Plan) that plans optimal trajectory routes for small to medium sized airlines to minimize direct operating costs while complying with various airline operating constraints. STAFPLAN incorporates four input databases (weather, route data, aircraft performance, and flight-specific information such as times, payload, crew, and fuel cost) to provide the correct amount of fuel, the optimal cruise altitude, climb and descent points, the optimal cruise speed, and the flight path.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.
1987-01-01
A concept was developed for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes. The design problem is formulated in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include the derivation of necessary conditions for optimality for the general problem formulation and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories arbitrarily close to the unconstrained gain solution at each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.
Development of Chemical Process Design and Control for ...
This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy. The implemented control strategy combines a biologically inspired method with optimal control concepts to find more sustainable operating trajectories. The sustainability assessment of process operating points is carried out using the U.S. EPA's Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Objective Process Evaluator (GREENSCOPE) tool, which provides scores for the selected indicators in the economic, material efficiency, environmental and energy areas. The indicator scores describe process performance on a sustainability measurement scale, effectively determining which operating point is more sustainable when several steady states exist for one specific product. Through comparisons between a representative benchmark and the optimal steady states obtained through implementation of the proposed controller, a systematic decision can be made as to whether the controller is moving the process towards more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous fermentation process for fuel production, whose materi
Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology
NASA Astrophysics Data System (ADS)
Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu
2013-08-01
From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are attracting attention in distribution systems. Additionally, all-electric apartment houses and residences, such as DC smart houses, have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing of the power system becomes problematic. The "smart grid" concept has therefore become popular worldwide. This article presents a methodology for the optimal operation of a smart grid to minimize interconnection point power flow fluctuations. To achieve the proposed optimal operation, distributed controllable loads such as batteries and heat pumps are used. By minimizing the interconnection point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of photovoltaic generators, a heat pump, a battery, a solar collector, and loads. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.
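A minimal sketch of smoothing the interconnection point power flow with a controllable battery is given below. The load and PV profiles, battery ratings, and the greedy flatten-toward-the-mean rule are all assumptions; the paper dispatches a heat pump as well, within a full optimal-operation formulation.

```python
# Greedy battery schedule that flattens the interconnection point power flow
# toward its daily mean. Profiles, ratings and the rule are invented.
import numpy as np

rng = np.random.default_rng(1)
T = 96                                        # 15-minute slots over one day
load = 2.0 + 0.5 * rng.standard_normal(T)               # kW, synthetic
pv = 3.0 * np.sin(np.linspace(0.0, np.pi, T)) * rng.uniform(0.6, 1.0, T)
net = load - pv                               # flow without storage (kW)
target = net.mean()

p_max, e_max, dt = 1.5, 6.0, 0.25             # kW, kWh, hours per slot
soc = e_max / 2.0                             # start half charged
grid = np.empty(T)
for k, n in enumerate(net):
    p = np.clip(n - target, -p_max, p_max)    # discharge (+) when above target
    p = min(p, soc / dt)                      # cannot discharge an empty battery
    p = max(p, (soc - e_max) / dt)            # cannot charge a full battery
    soc -= p * dt
    grid[k] = n - p

print(grid.std() < net.std())                 # fluctuations are reduced
```

Because each step moves the flow toward the mean and never past it, the fluctuation at the interconnection point can only shrink.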
Nonlocal games and optimal steering at the boundary of the quantum set
NASA Astrophysics Data System (ADS)
Zhen, Yi-Zheng; Goh, Koon Tong; Zheng, Yu-Lin; Cao, Wen-Fei; Wu, Xingyao; Chen, Kai; Scarani, Valerio
2016-08-01
The boundary between classical and quantum correlations is well characterized by linear constraints called Bell inequalities. It is much harder to characterize the boundary of the quantum set itself in the space of no-signaling correlations. For the points on the quantum boundary that violate maximally some Bell inequalities, J. Oppenheim and S. Wehner [Science 330, 1072 (2010), 10.1126/science.1192065] pointed out a complex property: Alice's optimal measurements steer Bob's local state to the eigenstate of an effective operator corresponding to its maximal eigenvalue. This effective operator is the linear combination of Bob's local operators induced by the coefficients of the Bell inequality, and it can be interpreted as defining a fine-grained uncertainty relation. It is natural to ask whether the same property holds for other points on the quantum boundary, using the Bell expression that defines the tangent hyperplane at each point. We prove that this is indeed the case for a large set of points, including some that were believed to provide counterexamples. The price to pay is to acknowledge that the Oppenheim-Wehner criterion does not respect equivalence under the no-signaling constraint: for each point, one has to look for specific forms of writing the Bell expressions.
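The Oppenheim-Wehner property is easy to check numerically at the Tsirelson point of CHSH. The sketch below builds the CHSH Bell operator for the standard optimal measurements, recovers the maximal quantum value 2√2, and verifies that Bob's steered state is the top eigenstate of his effective operator B0 + B1.

```python
# CHSH at the Tsirelson point: maximal value 2*sqrt(2), and Bob's steered
# state is the top eigenstate of his effective operator B0 + B1.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
A0, A1 = Z, X                              # Alice's optimal observables
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)   # Bob's optimal observables

chsh = np.kron(A0, B0) + np.kron(A0, B1) + np.kron(A1, B0) - np.kron(A1, B1)
w = np.linalg.eigvalsh(chsh)
print(round(w.max(), 3))                   # 2.828 = 2*sqrt(2)

# Effective operator induced on Bob by setting A0 and the CHSH coefficients:
Beff = B0 + B1                             # equals sqrt(2)*Z here
vals, vecs = np.linalg.eigh(Beff)
steered = np.array([1, 0], dtype=complex)  # Bob's state after Alice gets Z=+1
overlap = abs(vecs[:, -1] @ steered.conj())
print(round(overlap, 3))                   # 1.0: steered state = top eigenstate
```

The interesting content of the paper is that this tangency property extends to many other points of the quantum boundary, not just this maximally violating one.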
Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization
NASA Astrophysics Data System (ADS)
Yamagishi, Masao; Yamada, Isao
2017-04-01
Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
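The hybrid steepest descent method itself is compact. The sketch below minimizes a simple quadratic over the fixed point set of a metric projection T (a nonexpansive operator whose fixed point set is a line); the specific T, objective, and step sizes are illustrative choices, not the linearized-augmented-Lagrangian operator proposed in the paper.

```python
# Hybrid steepest descent: minimize psi over Fix(T), T nonexpansive.
# Here T is the projection onto the line {x : x1 + x2 = 1} and
# psi(x) = (1/2)||x - a||^2; both are illustrative.
import numpy as np

n = np.array([1.0, 1.0])                 # Fix(T) = {x : n.x = 1}
def T(x):                                # metric projection: nonexpansive
    return x - (n @ x - 1.0) / (n @ n) * n

a = np.array([2.0, 0.0])
grad = lambda x: x - a                   # gradient of psi

x = np.zeros(2)
for k in range(1, 1001):
    y = T(x)
    x = y - (1.0 / k) * grad(y)          # HSDM step with diminishing step size
x = T(x)                                 # report the feasible iterate

print(np.round(x, 3))                    # projection of a onto the line
```

The second-stage objective is optimized over Fix(T) without ever forming the constraint set explicitly, which is exactly the situation the proposed operator is designed for when linear operators make projections expensive.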
Nonlinear Burn Control and Operating Point Optimization in ITER
NASA Astrophysics Data System (ADS)
Boyer, Mark; Schuster, Eugenio
2013-10-01
Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Zhao, Changhong; Zamzam, Ahmed S.
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
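The distributed coordination layer can be sketched with a scalar consensus ADMM between two "operators". The quadratic local objectives are invented stand-ins for the water and power cost functions; the point is only the update pattern in which each operator solves its own subproblem and a dual variable enforces agreement on the coupling variable.

```python
# Scalar consensus ADMM between a "water operator" (cost (x-3)^2/2) and a
# "power operator" (cost (z-1)^2) that must agree on a shared variable.
rho, x, z, u = 1.0, 0.0, 0.0, 0.0
for _ in range(200):
    x = (3.0 + rho * (z - u)) / (1.0 + rho)    # water operator's local update
    z = (2.0 + rho * (x + u)) / (2.0 + rho)    # power operator's local update
    u += x - z                                 # dual update enforcing consensus
print(round(x, 4))                             # joint optimum of the summed costs
```

Each update uses only that operator's own cost and the shared variable, so neither side needs the other's internal model, which is the practical appeal for coupled infrastructures.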
Optimizing the wireless power transfer over MIMO Channels
NASA Astrophysics Data System (ADS)
Wiedmann, Karsten; Weber, Tobias
2017-09-01
In this paper, the optimization of power transfer over wireless channels having multiple inputs and multiple outputs (MIMO) is studied. To this end, the transmitter, the receiver and the MIMO channel are modeled as multiports. The power transfer efficiency is described by a Rayleigh quotient, which is a function of the channel's scattering parameters and the incident waves from both the transmitter and receiver sides. This way, the power transfer efficiency can be maximized analytically by solving a generalized eigenvalue problem deduced from the Rayleigh quotient. As a result, the maximum power transfer efficiency achievable over a given MIMO channel is obtained. This maximum can be used as a performance bound to benchmark wireless power transfer systems. Furthermore, the optimal operating point which achieves this maximum is obtained. The optimal operating point is described by the complex amplitudes of the optimal incident and reflected waves of the MIMO channel, which supports the design of the optimal transmitter and receiver multiports. The proposed method applies to arbitrary MIMO channels, taking transmitter-side and/or receiver-side cross-couplings in both near-field and far-field scenarios into consideration. Special cases are briefly discussed to illustrate the method.
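The core computation, maximizing a Rayleigh quotient via a generalized eigenvalue problem, can be sketched directly. The matrices below are random Hermitian stand-ins for the quadratic forms that would be built from the channel's scattering parameters.

```python
# Maximizing a Rayleigh quotient eta(a) = a^H A a / a^H B a by solving the
# generalized eigenvalue problem A v = eta B v. A and B are random Hermitian
# stand-ins for forms built from the channel's S-parameters.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
m = 4                                        # number of ports
G = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
M = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = 0.1 * G.conj().T @ G                     # "delivered power" form (PSD)
B = np.eye(m) + 0.1 * M @ M.conj().T         # "fed power" form (PD)

w, V = eigh(A, B)                            # generalized eigenproblem
eta_max, a_opt = w[-1], V[:, -1]             # top eigenpair = optimum

def quotient(a):
    return float(np.real(a.conj() @ A @ a) / np.real(a.conj() @ B @ a))

print(round(eta_max, 4))                     # bound on the transfer efficiency
```

No random choice of incident waves can exceed the top generalized eigenvalue, which is what makes it usable as a benchmark bound.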
Multi-objective shape optimization of runner blade for Kaplan turbine
NASA Astrophysics Data System (ADS)
Semenova, A.; Chirkov, D.; Lyutov, A.; Cherny, S.; Skorospelov, V.; Pylev, I.
2014-03-01
Automatic runner shape optimization based on extensive CFD analysis has proved to be a useful design tool in hydraulic turbomachinery. The authors previously developed an efficient method for Francis runner optimization, which was successfully applied to the design of several runners with different specific speeds. In the present work this method is extended to the task of Kaplan runner optimization. Despite the relatively simpler blade shape, Kaplan turbines have several features that complicate the optimization problem. First, Kaplan turbines normally operate over a wide range of discharges, so the CFD analysis of each runner variant should be carried out for several operating points. Next, due to the high specific speed, draft tube losses have a great impact on the overall turbine efficiency and thus should be accurately evaluated. Finally, the flow in the blade tip and hub clearances significantly affects the velocity profile behind the runner and the draft tube behavior. All these features are accounted for in the present optimization technique. The parameterization of the runner blade surface using 24 geometrical parameters is described in detail. For each variant of runner geometry, steady-state three-dimensional turbulent flow computations are carried out in a domain that includes the wicket gate, runner, draft tube, and blade tip and hub clearances. The objectives are maximization of efficiency at the best efficiency and high discharge operating points, with simultaneous minimization of the cavitation area on the suction side of the blade. A multi-objective genetic algorithm is used to solve the optimization problem, requiring the analysis of several thousand runner variants. The method is applied to the optimization of the runner shape for several Kaplan turbines with different heads.
Automated design of image operators that detect interest points.
Trujillo, Leonardo; Olague, Gustavo
2008-01-01
This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.
Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui
2010-10-01
A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.
NASA Astrophysics Data System (ADS)
Ouyang, Bo; Shang, Weiwei
2016-03-01
The set of feasible tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and on convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six-degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.
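The underlying constrained problem can be sketched generically: find cable tensions within limits that reproduce the required wrench while staying close to mid-range. The structure matrix and tension limits below are hypothetical, and a generic SLSQP solve stands in for the paper's geometric projection algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 8))       # 6x8 structure matrix (hypothetical values)
t_min, t_max = 10.0, 100.0            # cable tension limits [N]
t_feas = rng.uniform(20.0, 90.0, 8)   # a feasible tension set, so the wrench is attainable
w = A @ t_feas                        # required wrench on the platform

# Stay close to mid-range tension while satisfying A t = w and the limits.
t_mid = np.full(8, 0.5 * (t_min + t_max))
res = minimize(lambda t: np.sum((t - t_mid) ** 2), t_mid,
               constraints={'type': 'eq', 'fun': lambda t: A @ t - w},
               bounds=[(t_min, t_max)] * 8, method='SLSQP')
t_opt = res.x
```

A purpose-built geometric method, as in the paper, exploits the low dimension of the two-cable redundancy space and is much faster than such a general-purpose solver.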
NASA Astrophysics Data System (ADS)
Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.
2017-05-01
In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on the cost is reflected in the objective, which contains the costs of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves the convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.
NASA Technical Reports Server (NTRS)
Kopasakis, George
1997-01-01
Performance Seeking Control attempts to find the operating condition that will generate optimal performance and control the plant at that operating condition. In this paper a nonlinear multivariable Adaptive Performance Seeking Control (APSC) methodology will be developed and it will be demonstrated on a nonlinear system. The APSC is comprised of the Positive Gradient Control (PGC) and the Fuzzy Model Reference Learning Control (FMRLC). The PGC computes the positive gradients of the desired performance function with respect to the control inputs in order to drive the plant set points to the operating point that will produce optimal performance. The PGC approach will be derived in this paper. The feedback control of the plant is performed by the FMRLC. For the FMRLC, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for the effective tuning of the FMRLC controller.
Control strategy of grid-connected photovoltaic generation system based on GMPPT method
NASA Astrophysics Data System (ADS)
Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen
2018-02-01
There are multiple local maximum power points when a photovoltaic (PV) array operates under partial shading conditions (PSC). However, traditional maximum power point tracking (MPPT) algorithms are easily trapped at local maximum power points (MPPs) and cannot find the global maximum power point (GMPP). To solve this problem, an improved global maximum power point tracking (GMPPT) method is proposed that combines a traditional MPPT method with a particle swarm optimization (PSO) algorithm. Different tracking algorithms are used under different operating conditions of the PV cells: when the environment changes, the improved PSO algorithm performs the global search, and the variable-step incremental conductance (INC) method then achieves MPPT around the local optimum. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, a comparative analysis of the tracking performance of the proposed control algorithm and the traditional MPPT method under uniform solar conditions and PSC validates the correctness, feasibility and effectiveness of the proposed control strategy.
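The two-stage idea (PSO for the global search, then a local MPPT refinement) can be illustrated on a synthetic two-peak P-V curve; the curve and the PSO coefficients below are illustrative stand-ins, not the paper's PV model:

```python
import numpy as np

def pv_power(v):
    """Synthetic P-V curve with two peaks, mimicking partial shading (not a PV model)."""
    return 80.0 * np.exp(-((v - 15.0) / 6.0) ** 2) + 100.0 * np.exp(-((v - 32.0) / 4.0) ** 2)

rng = np.random.default_rng(1)
n_particles, iters = 10, 40
pos = np.linspace(0.0, 40.0, n_particles)   # candidate operating voltages spread over range
vel = np.zeros(n_particles)
pbest, pbest_val = pos.copy(), pv_power(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.6 * vel + 1.6 * r1 * (pbest - pos) + 1.6 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 40.0)
    val = pv_power(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]
# gbest now approximates the global MPP voltage; in the paper's scheme a
# variable-step INC routine would then track around this point.
```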
Wang, Tiancai; He, Xing; Huang, Tingwen; Li, Chuandong; Zhang, Wei
2017-09-01
The economic emission dispatch (EED) problem aims to control generation cost and reduce the impact of waste gas on the environment. It has multiple constraints and nonconvex objectives. To solve it, the collective neurodynamic optimization (CNO) method, which combines a heuristic approach with a projection neural network (PNN), is applied to optimize the scheduling of an electrical microgrid with ten thermal generators and minimize the sum of generation and emission costs. Because the objective function has non-differentiable points when the valve-point effect (VPE) is considered, a differential inclusion approach is employed in the PNN model to handle them. Under certain conditions, the local optimality and convergence of the dynamic model for the optimization problem are analyzed. The capability of the algorithm is verified in a complicated setting in which transmission loss and prohibited operating zones are considered. In addition, the dynamic variation of load power on the demand side is considered, and the optimal scheduling of the generators over 24 h is described. Copyright © 2017 Elsevier Ltd. All rights reserved.
Flow range enhancement by secondary flow effect in low solidity circular cascade diffusers
NASA Astrophysics Data System (ADS)
Sakaguchi, Daisaku; Tun, Min Thaw; Mizokoshi, Kanata; Kishikawa, Daiki
2014-08-01
A high pressure ratio and a wide operating range are strongly required of compressors and blowers. The technical design issue is to suppress flow separation at small flow rates without deteriorating the efficiency at the design flow rate. Numerical simulation is very effective in the design procedure; however, its cost is generally high in practical design, and it is difficult to identify the optimal combination of the many design parameters. Multi-objective optimization has been proposed for solving this problem in the practical design process. In this study, a Low Solidity circular cascade Diffuser (LSD) in a centrifugal blower is successfully designed by means of a multi-objective optimization technique. An optimization code with a meta-model-assisted evolutionary algorithm is used together with the commercial CFD code ANSYS-CFX. The optimization aims at improving the static pressure coefficient at the design point and at the low flow rate condition while constraining the slope of the lift coefficient curve. Moreover, a small tip clearance of the LSD blade is applied in order to activate and stabilize the secondary flow effect at the small flow rate condition. The optimized LSD blade extends the operating range by 114% toward smaller flow rates compared with the baseline design, without deteriorating the diffuser pressure recovery at the design point. The diffuser pressure rise and the operating flow range of the optimized LSD blade are verified experimentally by an overall performance test. The detailed flow in the diffuser is also examined by means of a particle image velocimeter (PIV). The secondary flow is clearly captured by PIV and spreads over the whole LSD blade pitch. It is found that the optimized LSD blade shows a good improvement of the blade loading over the whole operating range, while at small flow rates the flow separation on the LSD blade is successfully suppressed by the secondary flow effect.
NASA Astrophysics Data System (ADS)
Pavlak, Gregory S.
Building energy use is a significant contributing factor to growing worldwide energy demands. In pursuit of a sustainable energy future, commercial building operations must be intelligently integrated with the electric system to increase efficiency and enable renewable generation. Toward this end, a model-based methodology was developed to estimate the capability of commercial buildings to participate in frequency regulation ancillary service markets. This methodology was integrated into a supervisory model predictive controller to optimize building operation in consideration of energy prices, demand charges, and ancillary service revenue. The supervisory control problem was extended to building portfolios to evaluate opportunities for synergistic effect among multiple, centrally-optimized buildings. Simulation studies performed showed that the multi-market optimization was able to determine appropriate opportunities for buildings to provide frequency regulation. Total savings were increased by up to thirteen percentage points, depending on the simulation case. Furthermore, optimizing buildings as a portfolio achieved up to seven additional percentage points of savings, depending on the case. Enhanced energy and cost savings opportunities were observed by taking the novel perspective of optimizing building portfolios in multiple grid markets, motivating future pursuits of advanced control paradigms that enable a more intelligent electric grid.
Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation
NASA Astrophysics Data System (ADS)
Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.
2016-12-01
Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite image to irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
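The optimal-interpolation update at the core of such a routine is the standard analysis equation x_a = x_b + K(y - H x_b) with gain K = B Hᵀ(H B Hᵀ + R)⁻¹; the toy grid, covariances and measurements below are assumptions for illustration:

```python
import numpy as np

# Toy OI: background field of 5 grid-point irradiances, 2 ground sensors.
xb = np.array([600., 620., 580., 640., 610.])   # satellite-derived background [W/m^2]
H = np.array([[1., 0., 0., 0., 0.],
              [0., 0., 0., 1., 0.]])            # observation operator: sensors at grid pts 0 and 3
y = np.array([550., 700.])                      # ground point measurements [W/m^2]

# Background error covariance: correlations decay with grid distance
# (assumed variance and length scale; the paper spreads information
# using cloud location and thickness instead).
d = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
B = 400.0 * np.exp(-d / 2.0)
R = 25.0 * np.eye(2)                            # measurement error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
xa = xb + K @ (y - H @ xb)                      # analysis: fused irradiance estimate
```

Because the measurement errors (R) are assumed much smaller than the background errors (B), the analysis is pulled strongly toward the ground sensors at the observed points while the correlation structure of B spreads the corrections to neighboring grid points.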
A CPS Based Optimal Operational Control System for Fused Magnesium Furnace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Tian-you; Wu, Zhi-wei; Wang, Hong
Fused magnesia smelting in a fused magnesium furnace (FMF) is an energy-intensive, high-temperature process with comprehensive complexities. Its operational index, the energy consumption per ton (ECPT), is defined as the electrical energy consumed per ton of acceptable-quality product and is difficult to measure online. Moreover, the dynamics of ECPT cannot be precisely modelled mathematically. The model parameters of the three-phase electrode currents, such as the molten pool level, its rate of variation, and the resistance, are uncertain and nonlinear functions of changes in both the smelting process and the raw material composition. In this paper, the proposed integrated optimal operational control algorithm is composed of a current set-point control, a current switching control and a self-optimized tuning mechanism. The tight conjoining of and coordination between the computational resources, including the integrated optimal operational control, embedded software, industrial cloud and wireless communication, and the physical resources of the FMF constitutes a cyber-physical system (CPS) based embedded optimal operational control system. This system has been successfully applied to a production line with ten fused magnesium furnaces in a factory in China, leading to a significantly reduced ECPT.
Optimization of a point-focusing, distributed receiver solar thermal electric system
NASA Technical Reports Server (NTRS)
Pons, R. L.
1979-01-01
This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.
Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations
NASA Technical Reports Server (NTRS)
Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.
2013-01-01
Collinear Earth-Moon libration points have emerged as locations with immediate applications. The associated libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to the relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission orbit as a platform, the fundamental behavior of the trajectories is explored using Poincare maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.
Liu, Jianguo; Yang, Bo; Chen, Changzhen
2013-02-01
The optimization of operating parameters for the isolation of peroxidase from horseradish (Armoracia rusticana) roots with ultrafiltration (UF) technology was studied systematically. The effects of the UF operating conditions on the transmission of proteins were quantified using parameter-scanning UF. These conditions included solution pH, ionic strength, stirring speed and permeate flux. Under optimized conditions, the purity of the horseradish peroxidase (HRP) obtained was greater than 84% after a two-stage UF process, and the recovery of HRP from the feedstock was close to 90%. The resulting peroxidase product was then analysed by isoelectric focusing, SDS-PAGE and circular dichroism to confirm its isoelectric point, molecular weight and molecular secondary structure. The effects of calcium ions on the specific activity of HRP were also determined experimentally.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for the discretization of the state variables. As an important state variable, the reservoir storage volume has a pronounced effect on the computational effort through its discretization. The error caused by storage volume discretization is examined by treating storage as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations on fuzzy numbers: instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with the optimal release policies derived from the fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP with a much finer discretized space. This advantage of the fuzzy SDP model is believed to be due to the smooth transitions between storage intervals, which benefit from soft boundaries.
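The fuzzy-arithmetic idea can be illustrated with triangular fuzzy numbers, where soft-boundary intervals replace crisp storage values; a minimal sketch assuming triangular membership functions (the study's actual storage membership functions may differ):

```python
def tfn_add(a, b):
    """Add two triangular fuzzy numbers (l, m, u): interval arithmetic on alpha-cuts."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def membership(x, tfn):
    """Triangular membership degree of crisp value x in (l, m, u); assumes l < m < u."""
    l, m, u = tfn
    if x <= l or x >= u:
        return 0.0
    return (x - l) / (m - l) if x <= m else (u - x) / (u - m)

# Example: a storage interval centered at 2 plus an inflow interval centered at 3
# yields a soft-bounded end-of-period storage interval centered at 5.
storage_next = tfn_add((1, 2, 3), (2, 3, 4))
```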
Gooi, Patrick; Ahmed, Yusuf; Ahmed, Iqbal Ike K
2014-07-01
We describe the use of a microscope-mounted wide-angle point-of-view camera to record optimal hand positions in ocular surgery. The camera is mounted close to the objective lens beneath the surgeon's oculars and faces the same direction as the surgeon, providing a surgeon's view. A wide-angle lens enables viewing of both hands simultaneously and does not require repositioning the camera during the case. Proper hand positioning and instrument placement through microincisions are critical for effective and atraumatic handling of tissue within the eye. Our technique has potential in the assessment and training of optimal hand position for surgeons performing intraocular surgery. It is an innovative way to routinely record instrument and operating hand positions in ophthalmic surgery and has minimal requirements in terms of cost, personnel, and operating-room space. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Miclosina, C. O.; Balint, D. I.; Campian, C. V.; Frunzaverde, D.; Ion, I.
2012-11-01
This paper deals with the optimization of axial hydraulic turbines of Kaplan type. The optimization of the runner blade is presented systematically from two points of view: hydrodynamic and constructive. These aspects are combined in order to obtain safer operation when unsteady effects occur in the turbine runner. The design and optimization of the runner blade are performed with the QTurbo3D software developed at the Center for Research in Hydraulics, Automation and Thermal Processes (CCHAPT) of "Eftimie Murgu" University of Resita, Romania. The QTurbo3D software makes it possible to design the meridian channel of hydraulic turbines, design the blades, and optimize the runner blade. 3D modeling and motion analysis of the runner blade operating mechanism are accomplished using SolidWorks software. The purpose of the motion study is to obtain the forces, torques and stresses in the runner blade operating mechanism that are needed to estimate its lifetime. This paper clearly states the importance of combining hydrodynamics with structural design in the optimization procedure for the runners of hydraulic turbines.
NASA Astrophysics Data System (ADS)
Canright, David; Osvik, Dag Arne
We explore ways to reduce the number of bit operations required to implement AES. One way involves optimizing the composite field approach for entire rounds of AES. Another way is integrating the Galois multiplications of MixColumns with the linear transformations of the S-box. Combined with careful optimizations, these reduce the number of bit operations to encrypt one block by 9.0%, compared to earlier work that used the composite field only in the S-box. For decryption, the improvement is 13.5%. This work may be useful both as a starting point for a bit-sliced software implementation, where reducing operations increases speed, and also for hardware with limited resources.
Motamed, Nima; Miresmail, Seyed Javad Haji; Rabiee, Behnam; Keyvani, Hossein; Farahani, Behzad; Maadi, Mansooreh; Zamani, Farhad
2016-03-01
The present study was carried out to determine the optimal cutoff points for the homeostatic model assessment (HOMA-IR) and the quantitative insulin sensitivity check index (QUICKI) in the diagnosis of metabolic syndrome (MetS) and non-alcoholic fatty liver disease (NAFLD). The baseline data of 5511 subjects aged ≥18 years from a cohort study in northern Iran were analyzed. Receiver operating characteristic (ROC) analysis was conducted to determine the discriminatory capability of HOMA-IR and QUICKI in the diagnosis of MetS and NAFLD, and the Youden index was utilized to determine the optimal cutoff points. The optimal cutoff points for HOMA-IR in the diagnosis of MetS and NAFLD were 2.0 [sensitivity=64.4%, specificity=66.8%] and 1.79 [sensitivity=66.2%, specificity=62.2%] in men, and 2.5 [sensitivity=57.6%, specificity=67.9%] and 1.95 [sensitivity=65.1%, specificity=54.7%] in women, respectively. Furthermore, the optimal cutoff points for QUICKI in the diagnosis of MetS and NAFLD were 0.343 [sensitivity=63.7%, specificity=67.8%] and 0.347 [sensitivity=62.9%, specificity=65.0%] in men, and 0.331 [sensitivity=55.7%, specificity=70.7%] and 0.333 [sensitivity=53.2%, specificity=67.7%] in women, respectively. Not only were the optimal cutoff points of HOMA-IR and QUICKI different for MetS and NAFLD, but different cutoff points were also obtained for men and women for each of the two conditions. Copyright © 2016 Elsevier Inc. All rights reserved.
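The Youden-index cutoff selection used in such studies is straightforward to sketch: scan candidate cutoffs along the ROC curve and keep the one maximizing J = sensitivity + specificity - 1. A minimal implementation (not the study's code):

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return (cutoff, J) maximizing J = sensitivity + specificity - 1.

    Assumes higher scores indicate the positive class, as with HOMA-IR.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_c, best_j = None, -1.0
    for c in np.unique(scores):
        pred = scores >= c                                  # classify by threshold
        sens = (pred & labels).sum() / labels.sum()         # true positive rate
        spec = (~pred & ~labels).sum() / (~labels).sum()    # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j
```

For an index like QUICKI, where lower values indicate the positive class, the comparison direction would simply be flipped.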
Genetic algorithms applied to the scheduling of the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Sponsler, Jeffrey L.
1989-01-01
A prototype system employing a genetic algorithm (GA) has been developed to support the scheduling of the Hubble Space Telescope. A non-standard knowledge structure is used and appropriate genetic operators have been created. Several different crossover styles (random point selection, evolving points, and smart point selection) are tested and the best GA is compared with a neural network (NN) based optimizer. The smart crossover operator produces the best results and the GA system is able to evolve complete schedules using it. The GA is not as time-efficient as the NN system and the NN solutions tend to be better.
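Of the crossover styles compared, random point selection is the simplest to sketch; below is a generic one-point crossover operator (the prototype's non-standard knowledge structure and "smart" point selection are not reproduced here):

```python
import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Random-point crossover: swap the tails of two equal-length chromosomes
    at a randomly chosen cut point, producing two children."""
    cut = rng.randrange(1, len(parent_a))   # cut strictly inside the chromosome
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2
```

A "smart" variant, in the spirit of the abstract, would bias the cut toward positions that preserve high-quality schedule fragments rather than choosing uniformly.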
An approach for aerodynamic optimization of transonic fan blades
NASA Astrophysics Data System (ADS)
Khelghatibana, Maryam
Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods, and it can be applied to both single-point and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving the 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of the choke margin at high speeds. Since capturing stall inception is numerically very expensive, the stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in better performance at the stall condition, which could enhance the stall margin.
An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed +0.28 points improvement of isentropic efficiency at design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed: First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation on the Pareto-optimal solutions of this optimization shows that the stall margin has been increased with improving near-stall efficiency. The second multi-point optimization case is performed with considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41, +0.56 and +0.9 points improvement in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover the optimized design maintains the required choking margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single and multi-point optimizations.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
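The steady-state mean squared estimation error referred to above is obtained from the solution of a discrete algebraic Riccati equation; a minimal sketch on a hypothetical two-state linear model (not the paper's engine model) using SciPy:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2-state linear model linearized at one operating point.
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])          # state transition matrix
C = np.array([[1.0, 0.0]])           # one sensor measurement
Q = 0.01 * np.eye(2)                 # process noise covariance
R = np.array([[0.1]])                # measurement noise covariance

# Steady-state a-priori error covariance of the Kalman filter: the filter
# Riccati equation is solved by passing (A^T, C^T, Q, R) to solve_discrete_are.
P = solve_discrete_are(A.T, C.T, Q, R)
mse = np.diag(P)                     # theoretical mean squared error per state
```

In the paper's setting, the tuner selection effectively shapes the model matrices so that the diagonal of such a covariance, for the parameters of interest, is minimized.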
NASA Technical Reports Server (NTRS)
1984-01-01
The plant comprises a solar pond electric power generation subsystem, an electric power transformer and switchyard, a large solar pond, a water treatment plant, and numerous storage and evaporation ponds. Because a solar pond stores thermal energy over a long period of time, plant operation at any point in time depends on past operation and on perceived future generation plans. This time, or past-history, factor introduces a new dimension into the design process. The design optimization of a plant must go beyond the examination of operational state points and consider the seasonal variations in insolation, solar pond energy storage, and the desired annual duty-cycle profile of the plant. Models or design tools will be required to optimize a plant design. These models should be developed to include a proper but not excessive level of detail, and each model should be targeted to a specific objective rather than conceived as a do-everything analysis tool, i.e., system design and not gradient-zone stability.
Increase of Gas-Turbine Plant Efficiency by Optimizing Operation of Compressors
NASA Astrophysics Data System (ADS)
Matveev, V.; Goriachkin, E.; Volkov, A.
2018-01-01
The article presents an optimization method for improving the working process of the axial compressors of gas turbine engines. The developed method automatically searches for the best compressor blade geometry using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. The compressor parameters were calculated at the working and stall points of the performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and a three-stage low-pressure compressor. As a result of the optimization, an efficiency improvement was achieved for all investigated compressors.
NASA Astrophysics Data System (ADS)
Gramajo, German G.
This thesis presents an algorithm for a search-and-coverage mission with increased autonomy in generating an ideal trajectory while explicitly considering the available energy in the optimization. Current algorithms used to generate trajectories depend on the operator providing a discrete set of turning-rate requirements to obtain an optimal solution; this work therefore proposes a modification to the algorithm so that it optimizes the trajectory over a range of turning rates instead of a discrete set. The thesis evaluates the algorithm under variations in turn duration, entry heading angle, and entry point. Comparative studies of the algorithm with the existing method indicate improved autonomy in choosing the optimization parameters while producing trajectories with better coverage area and a closer final distance to the desired terminal point.
Noise in Charge Amplifiers— A gm/ID Approach
NASA Astrophysics Data System (ADS)
Alvarez, Enrique; Avila, Diego; Campillo, Hernan; Dragone, Angelo; Abusleme, Angel
2012-10-01
Charge amplifiers are the standard solution for amplifying signals from capacitive detectors in high-energy physics experiments. In a typical front-end, the noise due to the charge amplifier, and particularly from its input transistor, limits the achievable resolution. The classic approach to attenuating noise effects in MOSFET charge amplifiers is to use the maximum power available, to use a minimum-length input device, and to set the input transistor width so as to achieve optimal capacitive matching at the input node. These conclusions, reached by analyses based on simple noise models, lead to sub-optimal results. In this work, a new approach to noise analysis for charge amplifiers, based on an extension of the gm/ID methodology, is presented. This method combines circuit equations with results from SPICE simulations, both valid for all operating regions and including all noise sources. The method, which allows one to find the optimal operating point of the charge amplifier input device for maximum resolution, shows that the minimum device length is not necessarily optimal, shows that flicker noise is responsible for the non-monotonic noise-versus-current function, and provides deeper insight into the noise-limit mechanisms from an alternative and more design-oriented point of view.
Control strategy optimization of HVAC plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio
In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performance as a function of their set-points and the environmental and occupied-space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable in real-time settings.
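A graph-based set-point search of the kind described can be sketched as a shortest path over a layered graph: one layer per hour, one node per candidate set-point, with edge costs combining running cost and a switching penalty. Everything below (the part-load set-points, demand profile, cost curve, and penalty) is an invented toy instance, not the paper's model.

```python
import math

# Hypothetical sketch: choose a plant set-point (part-load ratio) per hour
# to meet a demand at minimum cost, via dynamic programming on a layered
# graph. All numbers are assumptions for illustration.
set_points = [0.0, 0.5, 0.75, 1.0]      # candidate part-load ratios
demand = [0.3, 0.6, 0.9, 0.4]           # per-hour demand (same units)
switch_penalty = 0.05                   # cost of changing set-point

def energy_cost(load):
    # toy part-load efficiency curve (assumption, not from the paper)
    return 0.0 if load == 0 else 0.2 + load ** 1.5

# cost[s] = cheapest way to end the current hour at set-point s
cost = {s: 0.0 for s in set_points}
for d in demand:
    new_cost = {}
    for s in set_points:
        if s < d:                        # infeasible: demand not met
            new_cost[s] = math.inf
            continue
        best_prev = min(cost[p] + (switch_penalty if p != s else 0.0)
                        for p in set_points)
        new_cost[s] = best_prev + energy_cost(s)
    cost = new_cost

best_cost = min(cost.values())
```

The demand-matching constraint appears as infeasible (infinite-cost) nodes, so the shortest path automatically respects it, which is one reason graph formulations scale to complex plants.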
Robust Neighboring Optimal Guidance for the Advanced Launch System
NASA Technical Reports Server (NTRS)
Hull, David G.
1993-01-01
In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
PIV study of the wake of a model wind turbine transitioning between operating set points
NASA Astrophysics Data System (ADS)
Houck, Dan; Cowen, Edwin (Todd)
2016-11-01
Wind turbines are ideally operated at their most efficient tip speed ratio for a given wind speed. There is increasing interest, however, in operating turbines at other set points to increase the overall power production of a wind farm. Specifically, Goit and Meyers (2015) used LES to examine a wind farm optimized by unsteady operation of its turbines. In this study, the wake of a model wind turbine is measured in a water channel using PIV. We measure the wake response to a change in operational set point of the model turbine, e.g., from low to high tip speed ratio or vice versa, to examine how it might influence a downwind turbine. A modified torque transducer after Kang et al. (2010) is used to calibrate in situ voltage measurements of the model turbine's generator operating across a resistance to the torque on the generator. Changes in operational set point are made by changing the resistance or the flow speed, which change the rotation rate measured by an encoder. Single camera PIV on vertical planes reveals statistics of the wake at various distances downstream as the turbine transitions from one set point to another. From these measurements, we infer how the unsteady operation of a turbine may affect the performance of a downwind turbine through its incoming flow. Supported by the National Science Foundation and the Atkinson Center for a Sustainable Future.
The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. We then estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs), and we repeat the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramachandran, Thiagarajan; Kundu, Soumya; Chen, Yan
This paper develops and utilizes an optimization based framework to investigate the maximal energy efficiency potentially attainable by HVAC system operation in a non-predictive context. Performance is evaluated relative to the existing state of the art set point reset strategies. The expected efficiency increase driven by operation constraints relaxations is evaluated.
Optimal PID gain schedule for hydrogenerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orelind, G.; Wozniak, L.; Medanic, J.
1989-09-01
This paper describes the development and testing of a digital gain-switching governor for hydrogenerators. Optimal gains were found at different load points by minimizing a quadratic performance criterion prior to controller operation. During operation, the gain sets are switched in depending on the gate position and speed error magnitude. With gain switching operating, the digital governor was shown to have a substantial reduction of noise on the command signal and up to 42% faster responses to power requests. Non-linear control strategies enabled the digital governor to achieve a 2.5% to 2% reduction in speed overshoot on startups, and an 8% to 1% reduction in undershoot on load rejections, as compared to the analog governor.
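The switching rule described (gain sets selected by gate position and speed-error magnitude) amounts to a two-dimensional lookup. The sketch below is a hypothetical illustration of that structure; the gain values, band boundary, and error threshold are invented, not the paper's tuned values.

```python
# Hypothetical gain-scheduling lookup in the spirit described: PID gains
# switched on (gate-position band, large-error flag). Values are invented.
GAIN_TABLE = {
    # (gate band, large error?) -> (Kp, Ki, Kd)
    ("low",  False): (2.0, 0.5, 0.10),
    ("low",  True):  (3.5, 0.3, 0.20),
    ("high", False): (1.2, 0.8, 0.05),
    ("high", True):  (2.2, 0.4, 0.15),
}

def select_gains(gate_position, speed_error, gate_split=0.5, err_thresh=0.02):
    """Pick the active PID gain set for the current operating condition."""
    band = "low" if gate_position < gate_split else "high"
    large = abs(speed_error) > err_thresh
    return GAIN_TABLE[(band, large)]

kp, ki, kd = select_gains(gate_position=0.7, speed_error=0.05)
```

Each gain set would be tuned offline (here, the paper minimizes a quadratic criterion at each load point); the lookup only decides which pre-tuned set is active.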
Mechanism of bandwidth improvement in passively cooled SMA position actuators
NASA Astrophysics Data System (ADS)
Gorbet, R. B.; Morris, K. A.; Chau, R. C. C.
2009-09-01
The heating of shape memory alloy (SMA) materials leads to a thermally driven phase change which can be used to do work. An SMA wire can be thermally cycled by controlling electric current through the wire, creating an electro-mechanical actuator. Such actuators are typically heated electrically and cooled through convection. The thermal time constants and lack of active cooling limit the operating frequencies. In this work, the bandwidth of a still-air-cooled SMA wire controlled with a PID controller is improved through optimization of the controller gains. Results confirm that optimization can improve the ability of the actuator to operate at a given frequency. Overshoot is observed in the optimal controllers at low frequencies. This is a result of hysteresis in the wire's contraction-temperature characteristic, since different input temperatures can achieve the same output value. The optimal controllers generate overshoot during heating, in order to cause the system to operate at a point on the hysteresis curve where faster cooling can be achieved. The optimization results in a controller which effectively takes advantage of the multi-valued nature of the hysteresis to improve performance.
Space Instrument Optimization by Implementing of Generic Three Bodies Circular Restricted Problem
NASA Astrophysics Data System (ADS)
Nejat, Cyrus
2011-01-01
In this study, the main discussion emphasizes spacecraft operation, with a concentration on stationary points in space. To achieve these objectives, the circular restricted problem was solved for selected approaches. The equations of motion of the three-body restricted problem were demonstrated to apply in cases other than Lagrange's (1736-1813 A.D.) achievements, by means of the proposed CN (Cyrus Nejat) theorem along with appropriate comments. In addition to the five Lagrange points, two other points, CN1 and CN2, were found to be unstable equilibrium points at a very large distance with respect to the Lagrange points, but stable at infinity. A simulation of the Milky Way Galaxy and the Andromeda Galaxy was created to find the Lagrange points, CN points (Cyrus Nejat Points), and CN lines (Cyrus Nejat Lines). The equations of motion were rearranged in such a way that the transfer trajectory would be conical, by means of a decoupling concept. The main objective was to make a halo orbit transfer about the CN lines. The author therefore proposes that the corresponding sizing designs be developed by optimization techniques in future approaches, optimization techniques being suitable procedures for searching for the most ideal response of a system.
NASA Technical Reports Server (NTRS)
Riehl, John P.; Sjauw, Waldy K.
2004-01-01
Trajectory, mission, and vehicle engineers concern themselves with finding the best way for an object to get from one place to another. These engineers rely upon special software to assist them in this. For a number of years, many engineers have used the OTIS program for this assistance. With OTIS, an engineer can fully optimize trajectories for airplanes, launch vehicles like the space shuttle, interplanetary spacecraft, and orbital transfer vehicles. OTIS provides four modes of operation, with each mode providing successively stronger optimization capability. The most powerful mode uses a mathematical method called implicit integration to solve what engineers and mathematicians call the optimal control problem. OTIS 3.2, which was developed at the NASA Glenn Research Center, is the latest release of this industry workhorse and features new capabilities for parameter optimization and mission design. OTIS stands for Optimal Control by Implicit Simulation, and it is implicit integration that makes OTIS so powerful at solving trajectory optimization problems. Why is this so important? The optimization process not only determines how to get from point A to point B, but it can also determine how to do this with the least amount of propellant, with the lightest starting weight, or in the fastest time possible while avoiding certain obstacles along the way. There are numerous conditions that engineers can use to define optimal, or best. OTIS provides a framework for defining the starting and ending points of the trajectory (point A and point B), the constraints on the trajectory (requirements like "avoid these regions where obstacles occur"), and what is being optimized (e.g., minimize propellant). The implicit integration method can find solutions to very complicated problems when there is not a lot of information available about what the optimal trajectory might be. 
The method was first developed for solving two-point boundary value problems and was adapted for use in OTIS. Implicit integration usually allows OTIS to find solutions to problems much faster than programs that use explicit integration and parametric methods. Consequently, OTIS is best suited to solving very complicated and highly constrained problems.
Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation
NASA Astrophysics Data System (ADS)
Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.
1998-01-01
A method for the fully computerized determination and optimization of positions of target points and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial-and-error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of the target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time, dose peaks inside the target volume are minimized and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization, a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micromultileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.
NASA Astrophysics Data System (ADS)
Mehrpooya, Mehdi; Dehghani, Hossein; Ali Moosavian, S. M.
2016-02-01
A combined system containing a solid oxide fuel cell-gas turbine power plant, a Rankine steam cycle and an ammonia-water absorption refrigeration system is introduced and analyzed. In this process, power, heat and cooling are produced. Energy and exergy analyses, along with economic factors, are used to determine the optimum operating point of the system. The developed electrochemical model of the fuel cell is validated against experimental results. The thermodynamic package and main parameters of the absorption refrigeration system are validated. The power output of the system is 500 kW. An optimization problem is defined in order to find the optimal operating point. Decision variables are current density, temperature of the exhaust gases from the boiler, steam turbine pressure (high and medium), generator temperature and consumed cooling water. Results indicate that the electrical efficiency of the combined system is 62.4% (LHV). Produced refrigeration (at -10 °C) and heat recovery are 101 kW and 22.1 kW, respectively. The investment cost for the combined system (without the absorption cycle) is about 2917 kW-1.
Design and optimization of integrated gas/condensate plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Root, C.R.; Wilson, J.L.
1995-11-01
An optimized design is demonstrated for combining gas processing and condensate stabilization plants into a single integrated process facility. This integrated design economically provides improved condensate recovery versus use of a simple stabilizer design. A selection matrix showing likely application of this integrated process is presented for use on future designs. Several methods for developing the fluid characterization and for using a process simulator to predict future design compositions are described, which could be useful in other designs. Optimization of flowsheet equipment choices and of design operating pressures and temperatures is demonstrated, including the effect of both continuous and discrete process equipment size changes. Several similar designs using a turboexpander to provide refrigeration for liquids recovery and stabilizer reflux are described. Operating experience, both in the Overthrust area and from the P/15-D platform in the Dutch sector of the North Sea, has proven these integrated designs are effective. Concerns do remain around operation near or above the critical pressure that should be addressed in future work, including providing conservative separator designs, providing sufficient process design safety margin to meet dew point specifications, selecting the most conservative design values of predicted gas dew point and equipment size calculated with different equations of state, and possibly improving the accuracy of PVT calculations in the near-critical area.
Battery Storage Evaluation Tool, version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-02
The battery storage evaluation tool developed at Pacific Northwest National Laboratory is used to run a one-year simulation to evaluate the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. This tool is based on the optimal control strategies to capture multiple services from a single energy storage device. In this control strategy, at each hour, a lookahead optimization is first formulated and solved to determine the battery base operating point. The minute-by-minute simulation is then performed to simulate the actual battery operation.
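The hourly lookahead-then-simulate structure described above is a receding-horizon pattern. The sketch below is a generic toy version of that pattern, not PNNL's tool: a small dynamic program over state of charge picks the first-hour base operating point against a price forecast. Capacities, prices, and the energy-arbitrage-only objective are assumptions, and losses are ignored for brevity.

```python
# Receding-horizon sketch: each hour, a lookahead optimization over a short
# price forecast returns the battery's base operating point (first action).
# All numbers are illustrative; efficiency losses are ignored.
CAP = 4.0          # usable energy [MWh] (assumed)
P_MAX = 1.0        # charge/discharge power [MW] (assumed)

def lookahead_dispatch(soc, prices):
    """DP over the horizon; returns first-hour power (+ = discharge)."""
    best = {soc: (0.0, None)}                  # soc -> (profit, first action)
    for t, price in enumerate(prices):
        nxt = {}
        for s, (profit, first) in best.items():
            for p in (-P_MAX, 0.0, P_MAX):     # charge / idle / discharge
                s2 = round(s - p, 6)
                if not (0.0 <= s2 <= CAP):
                    continue                    # respects energy limits
                val = profit + p * price        # arbitrage revenue
                act = p if t == 0 else first
                if s2 not in nxt or val > nxt[s2][0]:
                    nxt[s2] = (val, act)
        best = nxt
    return max(best.values())[1]

prices = [20, 35, 50, 30, 15, 45]              # $/MWh forecast (assumed)
action = lookahead_dispatch(soc=2.0, prices=prices[:4])
```

In the full tool the lookahead would co-optimize several services at once; here the first action (charge at the cheap hour, ahead of the price peak) is then handed to a finer-grained simulation, mirroring the hourly-plan / minute-simulate split in the abstract.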
Joint CPT and N resonance in compact atomic time standards
NASA Astrophysics Data System (ADS)
Crescimanno, Michael; Hohensee, Michael; Xiao, Yanhong; Phillips, David; Walsworth, Ron
2008-05-01
Current development efforts toward small, low-power atomic time standards use current-modulated VCSELs to generate phase-coherent optical sidebands that interrogate the hyperfine structure of alkali atoms such as rubidium. We describe and use a modified four-level quantum optics model to study the optimal operating regime of the joint CPT- and N-resonance clock. Resonant and non-resonant light shifts as well as modulation comb detuning effects play a key role in determining the optimal operating point of such clocks. We further show that our model is in good agreement with experimental tests performed using Rb-87 vapor cells.
A coronagraph for operational space weather prediction
NASA Astrophysics Data System (ADS)
Middleton, Kevin F.
2017-09-01
Accurate prediction of the arrival of solar wind phenomena, in particular coronal mass ejections (CMEs), at Earth, and possibly elsewhere in the heliosphere, is becoming increasingly important given our ever-increasing reliance on technology. The potentially severe impact on human technological systems of such phenomena is termed space weather. A coronagraph is arguably the instrument that provides the earliest definitive evidence of CME eruption; from a vantage point on or near the Sun-Earth line, a coronagraph can provide near-definitive identification of an Earth-bound CME. Currently, prediction of CME arrival is critically dependent on ageing science coronagraphs whose design and operation were not optimized for space weather services. We describe the early stages of the conceptual design of SCOPE (the Solar Coronagraph for OPErations), optimized to support operational space weather services.
Optimization study on multiple train formation scheme of urban rail transit
NASA Astrophysics Data System (ADS)
Xia, Xiaomei; Ding, Yong; Wen, Xin
2018-05-01
The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the characteristics of unevenly distributed passenger flow, but research on this aspect is still not sufficiently developed. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with minimum passenger cost and operation cost as objectives, and operation frequency and passenger demand as constraints. The ideal point method is used to solve the model. Compared with the fixed marshalling operation model, this scheme saves 9.24% and 4.43% of the overall cost, respectively. This result not only confirms the validity of the model but also illustrates the advantages of the multiple train formation scheme.
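The ideal point method used above scalarizes the two objectives by picking the candidate closest to the "ideal point", the vector of each objective's best value taken separately. The sketch below illustrates the selection step on an invented toy instance (the scheme names and cost pairs are not from the paper).

```python
import math

# Ideal-point selection over two objectives (passenger cost, operation
# cost). Candidate schemes and their costs are invented for illustration.
candidates = {                      # scheme -> (passenger_cost, op_cost)
    "fixed_4car": (120.0, 80.0),
    "fixed_6car": (100.0, 95.0),
    "mixed_4_6":  (105.0, 78.0),
}

# Ideal point: best value of each objective over all candidates.
ideal = (min(c[0] for c in candidates.values()),
         min(c[1] for c in candidates.values()))

def distance_to_ideal(costs):
    """Euclidean distance to the (generally unattainable) ideal point."""
    return math.hypot(costs[0] - ideal[0], costs[1] - ideal[1])

best = min(candidates, key=lambda k: distance_to_ideal(candidates[k]))
```

Here the mixed formation wins because it is near-best on both objectives at once, which is exactly the trade-off the ideal point method rewards; in practice the objectives would be normalized before measuring distance.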
CHP Fundamentals, NFMT High Performance Buildings (Presentation) – June 3, 2015
This presentation discusses how CHP can improve energy efficiency at a building or facility, and play a major role in reducing carbon emissions, optimizing fuel flexibility, lowering operating costs, and earning LEED points.
Determination of the wind power systems load to achieve operation in the maximum energy area
NASA Astrophysics Data System (ADS)
Chioncel, C. P.; Tirian, G. O.; Spunei, E.; Gillich, N.
2018-01-01
This paper analyses the operation of the wind turbine, WT, at the maximum power point, MPP, by linking the load of the Permanent Magnet Synchronous Generator, PMSG, with the wind speed value. The load control methods for wind power systems aiming at optimum energy performance are based on the fact that the energy captured by the wind turbine significantly depends on the mechanical angular speed of the wind turbine. The presented control method consists in determining the optimal mechanical angular speed, ωOPTIM, using an auxiliary low-power wind turbine, WTAUX, operating without load at maximum angular velocity, ωMAX. The method relies on the fact that the ratio ωOPTIM/ωMAX has a constant value for a given wind turbine and does not depend on the time variation of the wind speed values.
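The constant-ratio property makes the control law simple: measure the unloaded auxiliary turbine's speed, scale it by the turbine constant to get the target speed, and adjust the generator load toward it. The sketch below illustrates that loop; the ratio value, the proportional correction, and the function names are assumptions, not the paper's implementation.

```python
# Sketch of the described MPP tracking: an unloaded auxiliary turbine runs
# at omega_max; the main turbine is loaded so as to track
# omega_opt = k * omega_max. The constant k and the gain are assumed.
K_RATIO = 0.7          # omega_opt / omega_max for this turbine (assumed)

def target_speed(omega_max_aux):
    """Optimal mechanical angular speed inferred from the auxiliary WT."""
    return K_RATIO * omega_max_aux

def load_adjustment(omega_measured, omega_max_aux, gain=0.5):
    """Simple proportional load correction toward the MPP speed."""
    error = omega_measured - target_speed(omega_max_aux)
    return gain * error    # > 0: increase generator load to slow the rotor

delta = load_adjustment(omega_measured=80.0, omega_max_aux=100.0)
```

Because the auxiliary turbine is unloaded, its speed tracks the wind directly, so the target speed updates with every gust without needing an anemometer or a power-curve model.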
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it requires neither computing gradients of the models nor "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote the convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through its mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems.
Another significant result is that the final solution is determined by the average model derived from multiple trials instead of a single computation, due to the randomness in the genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data.
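The hybrid encoding's core idea (crossover on the binary representation, mutation on the decimal digits) can be sketched for a single integer gene. This is an illustrative toy, not HEGA itself: the bit width, mutation rate, and clipping rule are assumptions.

```python
import random

# Minimal hybrid-encoding sketch: two-point crossover acts on the binary
# code, mutation acts on the decimal digits of the same gene.
BITS = 16

def to_bits(x):                 # integer genotype -> fixed-width bit list
    return [(x >> i) & 1 for i in range(BITS)]

def from_bits(b):
    return sum(bit << i for i, bit in enumerate(b))

def two_point_crossover(a, b, rng):
    """Swap a random bit segment between two parents (binary code)."""
    i, j = sorted(rng.sample(range(BITS), 2))
    ba, bb = to_bits(a), to_bits(b)
    ba[i:j], bb[i:j] = bb[i:j], ba[i:j]
    return from_bits(ba), from_bits(bb)

def decimal_mutation(x, rng, rate=0.1):
    """Randomly replace decimal digits; clip back into the binary range."""
    digits = list(str(x).zfill(5))
    for k in range(len(digits)):
        if rng.random() < rate:
            digits[k] = str(rng.randrange(10))
    return min(int("".join(digits)), 2 ** BITS - 1)

rng = random.Random(0)
child1, child2 = two_point_crossover(40000, 12345, rng)
mutant = decimal_mutation(child1, rng)
```

Note how one decimal-digit mutation can move the gene much further through the search space than a single bit flip would, which is the "larger scope" property the abstract attributes to decimal mutation.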
Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame
NASA Technical Reports Server (NTRS)
Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.
2014-01-01
Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.
A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena
NASA Technical Reports Server (NTRS)
Zingg, David W.
1996-01-01
This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
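One of the non-compact spatial operators in the class reviewed is the standard fourth-order, five-point central difference for the first derivative. The sketch below (a generic textbook scheme, chosen here for illustration rather than taken from the review) verifies its order of accuracy by checking that the error shrinks roughly sixteenfold when the grid spacing is halved.

```python
import math

# Fourth-order, five-point central difference for f'(x): a standard
# non-compact explicit operator of the kind surveyed in the review.
def d1_fourth_order(f, x, h):
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

# Order check on f = sin, whose derivative is known exactly:
# halving h should cut the truncation error by about 2^4 = 16.
e1 = abs(d1_fourth_order(math.sin, 1.0, 1e-2) - math.cos(1.0))
e2 = abs(d1_fourth_order(math.sin, 1.0, 5e-3) - math.cos(1.0))
ratio = e1 / e2
```

This per-point convergence rate is only half the story for wave problems: as the review emphasizes, the grid-points-per-wavelength requirement over long propagation distances is governed by the scheme's dispersion and dissipation errors, which is what the optimized schemes target.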
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael
1997-01-01
This paper discusses the calculation of sensitivities, or derivatives, for optimization problems involving systems governed by differential equations and other state relations. The subject is examined from the point of view of nonlinear programming, beginning with the analytical structure of the first and second derivatives associated with such problems and the relation of these derivatives to implicit differentiation and equality constrained optimization. We also outline an error analysis of the analytical formulae and compare the results with similar results for finite-difference estimates of derivatives. We then investigate the nature of the adjoint method and the adjoint equations and their relation to directions of steepest descent. We illustrate the points discussed with an optimization problem in which the variables are the coefficients in a differential operator.
Estimating the relative utility of screening mammography.
Abbey, Craig K; Eckstein, Miguel P; Boone, John M
2013-05-01
The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
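The estimation rests on a standard signal-detection relation: at an operating point that maximizes expected utility, the ROC slope m satisfies m = ((1 - p) / p) / U_rel, where p is disease prevalence, so the relative utility can be recovered as U_rel = (1 - p) / (p * m). The sketch below applies this inversion with the abstract's DMIST prevalence; the slope value 1.04 is an assumption chosen to be "near unity", not a number reported in the abstract.

```python
# Invert the optimal-slope relation to recover relative utility:
#   m = ((1 - p) / p) / U_rel   =>   U_rel = (1 - p) / (p * m)
def relative_utility(prevalence, roc_slope):
    return (1.0 - prevalence) / (prevalence * roc_slope)

# DMIST prevalence from the abstract (0.59% at 365-day verification);
# slope 1.04 is an assumed "near unity" value for illustration.
u = relative_utility(0.0059, 1.04)
```

With these inputs the recovered relative utility lands near the abstract's preferred estimate of 162, illustrating why, at such low prevalence, optimal ROC slopes near unity imply detections valued on the order of a hundred times a correct rejection.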
NASA Astrophysics Data System (ADS)
Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem
2017-11-01
Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis
NASA Astrophysics Data System (ADS)
Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert
2009-05-01
We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, were included in the optimization problem for each LSC method. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and the j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast, high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). 
Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located farther from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component, as was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than that of imaging task II, since the major frequency component of task I was perpendicular to that of task II, and because imaging task III did not have a strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work. A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potentially shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. 
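The exhaustive-map strategy in this abstract — no closed form for d'(P), so measure d' over a densely sampled parameter grid and pick the maximizer — reduces to a small search. The `measured_dprime` function below is a hypothetical smooth stand-in for the experimentally measured map, not the paper's data.

```python
import numpy as np

# Hypothetical stand-in for the experimentally measured detectability map:
# d' as a function of two filter parameters (e.g., the AD exponents δ and γ).
def measured_dprime(delta, gamma):
    return 5.0 * np.exp(-((delta - 1.2) ** 2 + (gamma - 0.8) ** 2))

# Densely sample the parameter space and take the operating point with max d'.
deltas = np.linspace(0.0, 3.0, 121)
gammas = np.linspace(0.0, 3.0, 121)
D = np.array([[measured_dprime(d, g) for g in gammas] for d in deltas])
i, j = np.unravel_index(np.argmax(D), D.shape)
best = (deltas[i], gammas[j])   # grid point maximizing d' for this stand-in map
```

In the paper the same argmax is taken per imaging task and per spatial location, which is why the optimum shifts between the posterior and anterior ROIs.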
© 2018 American Association of Physicists in Medicine.
The design of digital-adaptive controllers for VTOL aircraft
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Broussard, J. R.; Berry, P. W.
1976-01-01
Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.
Gallegos-Lopez, Gabriel
2012-10-02
Methods, system and apparatus are provided for increasing voltage utilization in a five-phase vector controlled machine drive system that employs third harmonic current injection to increase torque and power output by a five-phase machine. To do so, a fundamental current angle of a fundamental current vector is optimized for each particular torque-speed operating point of the five-phase machine.
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly)] is a computer program designed for computing angular-momentum biases for reaction wheels used for providing spacecraft pointing in various directions as required for scientific observations. RBOT is currently deployed to support the Cassini mission to prevent operation of reaction wheels at unsafely high speeds while minimizing time spent in an undesirable low-speed range, where elasto-hydrodynamic lubrication films in bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization problem in which the maximum wheel speed limit is a hard constraint and the cost functional increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
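The constrained formulation described above — a hard speed limit plus a cost that grows as wheels linger below the low-speed threshold — can be sketched with SciPy's Nelder-Mead simplex search. The momentum profile, inertia, and limits below are illustrative placeholders, not Cassini values; the hard constraint is folded in as a steep penalty, one common way to handle constraints with an unconstrained simplex method.

```python
import numpy as np
from scipy.optimize import minimize

I_W = 0.16      # wheel inertia, kg·m^2 (illustrative)
W_MAX = 200.0   # hard wheel-speed limit, rad/s
W_LOW = 30.0    # low-speed threshold where lubrication films degrade

# Hypothetical pre-computed per-wheel spacecraft momentum demand over time
t = np.linspace(0.0, 1.0, 200)
H = np.vstack([10 * np.sin(2 * np.pi * t),
               8 * np.cos(2 * np.pi * t),
               5 * np.sin(4 * np.pi * t)])

def cost(h0):
    w = (h0[:, None] - H) / I_W                 # wheel rates via momentum conservation
    over = np.maximum(np.abs(w) - W_MAX, 0.0)   # hard-limit violation
    low = np.maximum(W_LOW - np.abs(w), 0.0)    # time spent in the low-speed band
    return 1e6 * over.sum() + low.sum()

res = minimize(cost, x0=np.full(3, 2.0), method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-6, "maxiter": 2000})
biases = res.x   # momentum biases keeping wheels fast but under the hard limit
```

The simplex search only ever needs cost evaluations, which is why RBOT can exploit cheap pre-computed wheel-rate profiles rather than gradients.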
Swirling Flow Computation at the Trailing Edge of Radial-Axial Hydraulic Turbines
NASA Astrophysics Data System (ADS)
Susan-Resiga, Romeo; Muntean, Sebastian; Popescu, Constantin
2016-11-01
Modern hydraulic turbines require runners optimized over a range of operating points with respect to minimum weighted-average draft tube losses and/or flow instabilities. Tractable optimization methodologies must include realistic estimations of the swirling flow exiting the runner and further ingested by the draft tube, prior to runner design. The paper presents a new mathematical model and the associated numerical algorithm for computing the swirling flow at the trailing edge of a Francis turbine runner operated at arbitrary discharge. The general turbomachinery throughflow theory is particularized for an arbitrary hub-to-shroud line in the meridian half-plane, and the resulting boundary value problem is solved with the finite element method. The results obtained with the present model are validated against full 3D runner flow computations within a range of discharge values. The mathematical model incorporates the full information on the relative flow direction, as well as the curvatures of the hub-to-shroud line and the meridian streamlines, respectively. It is shown that the flow direction can be frozen within a range of operating points in the neighborhood of the best efficiency regime.
Policy tree optimization for adaptive management of water resources systems
NASA Astrophysics Data System (ADS)
Herman, Jonathan; Giuliani, Matteo
2017-04-01
Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points" that signal the need to update the policy. However, there remains a need for a general method to optimize the choice of the signposts to be used and their threshold values. This work contributes a general framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. Given a set of feature variables (e.g., reservoir level, inflow observations, inflow forecasts), the resulting policy defines both the optimal reservoir operations and the conditions under which such operations should be triggered. We demonstrate the approach using Folsom Reservoir (California) as a case study, in which operating policies must balance the risks of both floods and droughts. Numerical results show that the tree-based policies outperform those designed via Dynamic Programming. In addition, they display good adaptive capacity to the changing climate, successfully adapting the reservoir operations across a large set of uncertain climate scenarios.
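A policy tree of the kind optimized here is a hierarchy of signpost tests with operating rules at the leaves. The sketch below shows only the representation and its evaluation; the features, thresholds, and rule names are invented for illustration, and the genetic-programming search that evolves tree shape and thresholds is omitted.

```python
# A policy tree maps observed "signpost" features to operating rules.
# Internal nodes test a feature against a threshold; leaves name an action.
def make_node(feature, threshold, low, high):
    return {"feature": feature, "threshold": threshold, "low": low, "high": high}

def evaluate(tree, obs):
    if isinstance(tree, str):          # leaf: an operating rule
        return tree
    branch = tree["low"] if obs[tree["feature"]] <= tree["threshold"] else tree["high"]
    return evaluate(branch, obs)

# Hypothetical policy: flood control when storage is high; otherwise hedge
# releases when the inflow forecast is low, else operate normally.
policy = make_node(
    "storage", 0.8,
    make_node("forecast_inflow", 0.3, "hedge_releases", "normal_ops"),
    "flood_control_releases",
)

action = evaluate(policy, {"storage": 0.9, "forecast_inflow": 0.5})
# → "flood_control_releases" (storage above its threshold)
```

Genetic programming would score candidate trees by simulating reservoir operations under each scenario and evolve the structure via subtree crossover and mutation.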
Nurse Scheduling by Cooperative GA with Effective Mutation Operator
NASA Astrophysics Data System (ADS)
Ohki, Makoto
In this paper, we propose an effective mutation operator for the Cooperative Genetic Algorithm (CGA) applied to a practical Nurse Scheduling Problem (NSP). Nurse scheduling is a very difficult task, because the NSP is a complex combinatorial optimization problem for which many requirements must be considered. In real hospitals, the schedule changes frequently, and these changes yield various problems, for example, a fall in the nursing level. We describe a technique for reoptimizing the nurse schedule in response to such a change. The conventional CGA has strong local search ability by means of its crossover operator, but it often stagnates in unfavorable situations because its global search ability is weak. When the optimization stagnates over many generation cycles, the searching point, the population in this case, is likely caught in a wide local-minimum area. To escape such an area, a small change in the population is required. Based on this consideration, we propose a mutation operator activated according to the optimization speed. When the optimization stagnates, in other words, when the optimization speed decreases, the mutation applies small changes to the population, allowing it to escape from the local-minimum area. However, this mutation operator requires two well-defined parameters, which means users have to choose their values carefully. To solve this problem, we propose a periodic mutation operator defined by only one parameter. This simplified mutation operator is effective over a wide range of values of that parameter.
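The speed-dependent mutation idea can be sketched on a toy binary schedule: track the best fitness over a window, and when improvement stalls, perturb everyone except the elite. The fitness function, rates, and window below are illustrative, not the paper's NSP formulation.

```python
import random

def mutate(schedule, rate):
    # flip each gene with small probability
    return [1 - g if random.random() < rate else g for g in schedule]

def optimize(fitness, pop, generations=300, window=20):
    history = []
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        # "optimization speed": best-fitness gain over the last `window` generations
        stalled = gen >= window and history[-1] == history[-1 - window]
        if stalled:  # speed ~ 0: small mutation to escape the local-minimum area
            pop = [pop[0]] + [mutate(p, 0.05) for p in pop[1:]]
        # CGA-like local search: copy a gene segment from the elite into the rest
        cut = random.randrange(len(pop[0]))
        pop = [pop[0]] + [p[:cut] + pop[0][cut:] for p in pop[1:]]
    return pop[0]

random.seed(1)
target = [1] * 30                       # toy "ideal schedule"
def f(s):
    return sum(g == t for g, t in zip(s, target))

pop = [[random.randint(0, 1) for _ in range(30)] for _ in range(10)]
start = max(f(p) for p in pop)
best = optimize(f, pop)                 # elitism guarantees f(best) >= start
```

Keeping the elite untouched during mutation is what lets the population explore without ever losing the best schedule found so far.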
Leakage and sweet spots in triple-quantum-dot spin qubits: A molecular-orbital study
NASA Astrophysics Data System (ADS)
Zhang, Chengxian; Yang, Xu-Chen; Wang, Xin
2018-04-01
A triple-quantum-dot system can be operated as either an exchange-only qubit or a resonant-exchange qubit. While it is generally believed that the decisive advantage of the resonant-exchange qubit is the suppression of charge noise because it is operated at a sweet spot, we show that leakage is also an important factor. Through molecular-orbital-theoretic calculations, we show that when the system is operated in the exchange-only scheme, the leakage to states with double electron occupancy in the quantum dots is severe when rotations around the axis 120° from ẑ are performed. While this leakage can be reduced by either shrinking the dots or separating them further, the exchange interactions are suppressed at the same time, making the gate operations unfavorably slow. When the system is operated as a resonant-exchange qubit, the leakage is three to five orders of magnitude smaller. We have also calculated the optimal detuning point which minimizes the leakage for the resonant-exchange qubit, and have found that although it does not coincide with the double sweet spot for charge noise, they are rather close. Our results suggest that the resonant-exchange qubit has another advantage, namely that leakage can be greatly suppressed compared to the exchange-only qubit, and that operating at the double sweet spot should be optimal both for reducing charge noise and for suppressing leakage.
Ekoru, K; Murphy, G A V; Young, E H; Delisle, H; Jerome, C S; Assah, F; Longo-Mbenza, B; Nzambi, J P D; On'Kin, J B K; Buntix, F; Muyer, M C; Christensen, D L; Wesseh, C S; Sabir, A; Okafor, C; Gezawa, I D; Puepet, F; Enang, O; Raimi, T; Ohwovoriole, E; Oladapo, O O; Bovet, P; Mollentze, W; Unwin, N; Gray, W K; Walker, R; Agoudavi, K; Siziya, S; Chifamba, J; Njelekela, M; Fourie, C M; Kruger, S; Schutte, A E; Walsh, C; Gareta, D; Kamali, A; Seeley, J; Norris, S A; Crowther, N J; Pillay, D; Kaleebu, P; Motala, A A; Sandhu, M S
2017-10-03
Waist circumference (WC) thresholds derived from western populations continue to be used in sub-Saharan Africa (SSA) despite increasing evidence of ethnic variation in the association between adiposity and cardiometabolic disease and the availability of data from African populations. We aimed to derive a SSA-specific optimal WC cut-point for identifying individuals at increased cardiometabolic risk. We used individual-level cross-sectional data on 24 181 participants aged ⩾15 years from 17 studies conducted between 1990 and 2014 in eight countries in SSA. Receiver operating characteristic curves were used to derive optimal WC cut-points for detecting the presence of at least two components of metabolic syndrome (MS), excluding WC. The optimal WC cut-point was 81.2 cm (95% CI 78.5-83.8 cm) and 81.0 cm (95% CI 79.2-82.8 cm) for men and women, respectively, with comparable accuracy in men and women. Sensitivity was higher in women (64%, 95% CI 63-65) than in men (53%, 95% CI 51-55), and increased with the prevalence of obesity. Having a WC above the derived cut-point was associated with a twofold probability of having at least two components of MS (age-adjusted odds ratio 2.6, 95% CI 2.4-2.9, for men and 2.2, 95% CI 2.0-2.3, for women). The optimal WC cut-point for identifying men at increased cardiometabolic risk is lower (⩾81.2 cm) than current guidelines recommend (⩾94.0 cm), and similar to that for women in SSA. Prospective studies are needed to confirm these cut-points based on cardiometabolic outcomes. International Journal of Obesity advance online publication, 31 October 2017; doi:10.1038/ijo.2017.240.
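The ROC-based derivation of an optimal cut-point can be sketched with synthetic data: sweep candidate thresholds and keep the one maximizing Youden's J = sensitivity + specificity − 1, a standard choice of ROC operating point (the study's exact criterion may differ). The distributions below are invented stand-ins for the SSA cohort, not its data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic WC values (cm): participants with <2 vs >=2 MS components
wc_healthy = rng.normal(76, 9, 2000)
wc_at_risk = rng.normal(89, 10, 1000)

thresholds = np.linspace(60, 110, 501)
best_j, best_cut = -1.0, None
for cut in thresholds:
    sens = np.mean(wc_at_risk >= cut)   # true-positive rate at this cut-point
    spec = np.mean(wc_healthy < cut)    # true-negative rate
    j = sens + spec - 1                 # Youden's J statistic
    if j > best_j:
        best_j, best_cut = j, cut
```

Bootstrapping this sweep over resampled cohorts is one way to obtain the confidence intervals the abstract reports around the cut-points.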
NASA Astrophysics Data System (ADS)
Khusainov, R.; Klimchik, A.; Magid, E.
2017-01-01
The paper presents a comparative analysis of two approaches to defining leg trajectories for biped locomotion. The first operates only with the kinematic limitations of the leg joints and finds the maximum possible locomotion speed for given limits. The second approach defines leg trajectories from the dynamic stability point of view and utilizes the ZMP criterion. We show that the two methods give different trajectories and demonstrate that trajectories based on pure dynamic optimization cannot be realized due to joint limits. Kinematic optimization provides an unstable solution, which can be balanced by upper-body movement.
Temperature Scaling Law for Quantum Annealing Optimizers.
Albash, Tameem; Martin-Mayor, Victor; Hen, Itay
2017-09-15
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite-temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner, but possibly also as a power law, with problem size. We corroborate our results by experiment and simulations and discuss the implications of these results for practical annealers.
Optimal Power Scheduling for a Medium Voltage AC/DC Hybrid Distribution Network
Zhu, Zhenshan; Liu, Dichen; Liao, Qingfen; ...
2018-01-26
With the great increase of renewable generation as well as DC loads in the distribution network, DC distribution technology is receiving more attention, since the DC distribution network can improve operating efficiency and power quality by reducing the number of energy conversion stages. This paper presents a new architecture for the medium voltage AC/DC hybrid distribution network, where the AC and DC subgrids are looped by a normally closed AC soft open point (ACSOP) and DC soft open point (DCSOP), respectively. The proposed AC/DC hybrid distribution systems contain renewable generation (i.e., wind power and photovoltaic (PV) generation), energy storage systems (ESSs), soft open points (SOPs), and both AC and DC flexible demands. An energy management strategy for the hybrid system is presented based on the dynamic optimal power flow (DOPF) method. The main objective of the proposed power scheduling strategy is to minimize the operating cost and reduce the curtailment of renewable generation while meeting operational and technical constraints. The proposed approach is verified in five scenarios: a pure AC system, a hybrid AC/DC system, a hybrid system with an interlinking converter, a hybrid system with DC flexible demand, and a hybrid system with SOPs. Results show that the proposed scheduling method can successfully dispatch the controllable elements, and that the presented architecture for the AC/DC hybrid distribution system is beneficial for reducing operating cost and renewable generation curtailment.
Al-Lawati, Jawad A; Jousilahti, Pekka
2008-01-01
There are no data on optimal cut-off points to classify obesity among Omani Arabs; the existing cut-off points were obtained from studies of European populations. The aim was to determine gender-specific optimal cut-off points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) associated with elevated prevalent cardiovascular disease (CVD) risk among Omani Arabs, using a community-based cross-sectional study. The survey was conducted in the city of Nizwa in Oman in 2001. The study contained a probabilistic random sample of 1421 adults aged ≥20 years. Prevalent CVD risk was defined as the presence of at least two of the following three risk factors: hyperglycaemia, hypertension and dyslipidaemia. Logistic regression and receiver-operating characteristic (ROC) curve analyses were used to determine optimal cut-off points for BMI, WC and WHR in relation to the area under the curve (AUC), sensitivity and specificity. Over 87% of Omanis had at least one CVD risk factor (38% had hyperglycaemia, 19% hypertension and 34.5% high total cholesterol). All three indices, BMI (AUC = 0.766), WC (AUC = 0.772) and WHR (AUC = 0.767), predicted prevalent CVD risk factors equally well. The optimal cut-off points for men and women, respectively, were 23.2 and 26.8 kg/m² for BMI, 80.0 and 84.5 cm for WC, and 0.91 and 0.91 for WHR. To identify Omani subjects of Arab ethnicity at high risk of CVD, cut-off points lower than currently recommended for BMI, WC and WHR are needed for men, while higher cut-off points are suggested for women.
Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode
NASA Astrophysics Data System (ADS)
Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.
2012-12-01
Nowadays, using satellites to observe the ground is a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy place ever greater demands on space technology because of satellites' wide coverage, timeliness, and freedom from area and national boundaries. At the same time, because of the wide use of all kinds of satellites, sensors, repeater satellites and ground receiving stations, ground control systems now face great challenges. Therefore, how to extract the best value from satellite resources, so as to make full use of them, becomes an important problem for the ground control system. Satellite scheduling distributes resources to tasks without conflict so as to complete as many tasks as possible and meet users' requirements, while considering the constraints of satellites, sensors and ground receiving stations. Considering the size of the task, tasks can be divided into point tasks and area tasks; this paper considers only point targets. The paper first gives a description of the satellite scheduling problem and a brief introduction to the theory of satellite scheduling. We also analyze the resource and task constraints in scheduling satellites. The input and output flow of the scheduling process is also briefly described. On the basis of these analyses, we put forward a scheduling model, named the multi-variable optimization model, for the multi-satellite and point-target task on swinging mode. In the multi-variable optimization model, the scheduling problem is transformed into a parametric optimization problem. The parameter we wish to optimize is the swinging angle of every time window. 
In view of efficiency and accuracy, some important problems relating to satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed. Many strategies to improve the efficiency of this model are also put forward. In order to solve the model, we introduce the concept of the activity sequence map, by which the choice of activity and the start time of the activity can be decoupled. We also introduce three neighborhood operators to search the solution space. The front-movement remaining time and the back-movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Lastly, an algorithm to solve the problem and model is put forward based on a genetic algorithm. Population initialization, a crossover operator, a mutation operator, individual evaluation, a collision-decrease operator, a selection operator and a collision-elimination operator are designed in the paper. Finally, the scheduling result and the simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25 degrees. The results show that the model and the algorithm are more effective than those without swinging mode.
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
NASA Astrophysics Data System (ADS)
Ginting, E.; Tambunanand, M. M.; Syahputri, K.
2018-02-01
Evolutionary Operation (EVOP) is a method designed to be used during routine plant operation to enable high productivity. Quality is one of the critical factors for a company to win the competition. For this reason, product-quality research was carried out by gathering the company's production data and making direct observations on the factory floor, especially in the drying department, to identify the problem: the high water content of the mosquito coil. PT. X, which produces mosquito coils, attempted to reduce product defects caused by inaccurate operating conditions. One parameter of a good-quality mosquito coil is its water content: if the moisture content is too high, the product molds and breaks easily, and if it is too low, the product breaks easily and burns for fewer hours. Three factors affect the optimal water content: stirring time, drying temperature and drying time. To obtain the required conditions, the Evolutionary Operation (EVOP) method is used. EVOP is an efficient technique for optimizing two or three experimental variables using two-level factorial designs with a center point. The optimal operating conditions found in the experiment are a stirring time of 20 minutes, a drying temperature of 65°C and a drying time of 130 minutes. Based on the EVOP analysis, the optimum water content is 6.90%, which approaches the plant's target value of 7%.
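The EVOP machinery named here — a two-level factorial design with a center point, run as small deliberate perturbations around current operating conditions — can be sketched as follows. The response values are hypothetical water-content measurements, not PT. X's data.

```python
from itertools import product

# Current operating point (the center point of the factorial design)
center = {"stir_min": 20, "temp_C": 65, "dry_min": 130}

# Coded levels (-1, +1) for each of the 3 factors, in standard order
runs = list(product([-1, 1], repeat=3))

# Hypothetical measured water content (%) for the 8 corner runs + center run
y = [7.9, 7.3, 7.6, 7.0, 7.5, 6.8, 7.1, 6.5]
y_center = 6.9

# Main effect of factor k = mean(response at +1) - mean(response at -1)
effects = {}
for k, name in enumerate(center):
    plus = sum(yi for run, yi in zip(runs, y) if run[k] == 1)
    minus = sum(yi for run, yi in zip(runs, y) if run[k] == -1)
    effects[name] = plus / 4 - minus / 4

# Curvature check: corner average vs. center response
curvature = sum(y) / 8 - y_center
```

A negative effect means raising that factor lowers the water content; EVOP would shift the center toward the most favorable corner and repeat the cycle during routine production.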
Control strategies for wind farm power optimization: LES study
NASA Astrophysics Data System (ADS)
Ciri, Umberto; Rotea, Mario; Leonardi, Stefano
2017-11-01
Turbines in wind farms operate in off-design conditions as wake interactions occur for particular wind directions. Advanced wind farm control strategies aim at coordinating and adjusting turbine operations to mitigate power losses in such conditions. Coordination is achieved by controlling on upstream turbines either the wake intensity, through the blade pitch angle or the generator torque, or the wake direction, through yaw misalignment. Downstream turbines can be adapted to work in waked conditions and limit power losses, using the blade pitch angle or the generator torque. As wind conditions in wind farm operations may change significantly, it is difficult to determine and parameterize the variations of the coordinated optimal settings. An alternative is model-free control and optimization of wind farms, which does not require any parameterization and can track the optimal settings as conditions vary. In this work, we employ a model-free optimization algorithm, extremum-seeking control, to find the optimal set-points of generator torque, blade pitch and yaw angle for a three-turbine configuration. Large-Eddy Simulations are used to provide a virtual environment to evaluate the performance of the control strategies under realistic, unsteady incoming wind. This work was supported by the National Science Foundation, Grants No. 1243482 (the WINDINSPIRE project) and IIP 1362033 (I/UCRC WindSTAR). TACC is acknowledged for providing computational time.
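Extremum-seeking control, the model-free scheme named above, adjusts a set-point using only measured power. The sketch below uses the simplest two-point (finite-difference) gradient estimate rather than the sinusoidal-dither form common in the literature, and a made-up quadratic farm-power curve; the gains and the 12° optimum are illustrative only.

```python
# Hypothetical farm power vs. upstream yaw offset (deg); peak at 12 deg.
def farm_power(yaw):
    return 1.0 - 0.002 * (yaw - 12.0) ** 2

def extremum_seek(theta0=0.0, delta=0.25, gain=50.0, steps=200):
    theta = theta0
    for _ in range(steps):
        # probe both sides of the current set-point and estimate the gradient
        grad = (farm_power(theta + delta) - farm_power(theta - delta)) / (2 * delta)
        theta += gain * grad        # climb toward the power maximum
    return theta

yaw = extremum_seek()   # settles at the optimum without any wake model
```

Because the scheme never consults a wake model, it can keep tracking the optimum as the incoming wind changes, which is the property the LES study exploits.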
Optimal tracking and second order sliding power control of the DFIG wind turbine
NASA Astrophysics Data System (ADS)
Abdeddaim, S.; Betka, A.; Charrouf, O.
2017-02-01
In the present paper, the optimal operation of a grid-connected variable-speed wind turbine equipped with a Doubly Fed Induction Generator (DFIG) is presented. The proposed cascaded nonlinear controller is designed to meet two main objectives. In the outer loop, a maximum power point tracking (MPPT) algorithm based on fuzzy logic is designed to continuously extract the optimal aerodynamic energy, whereas in the inner loop, a second-order sliding mode control (2-SM) is applied to achieve smooth regulation of both stator active and reactive power. The simulation results show permanent tracking of the MPP regardless of the turbine power-speed slope; moreover, the proposed sliding mode control strategy offers attractive features, such as chattering-free operation, compared with the conventional first-order sliding technique (1-SM).
Engine With Regression and Neural Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2001-01-01
At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
NASA Astrophysics Data System (ADS)
Meyer, Quentin; Ronaszegi, Krisztian; Pei-June, Gan; Curnick, Oliver; Ashton, Sean; Reisch, Tobias; Adcock, Paul; Shearing, Paul R.; Brett, Daniel J. L.
2015-09-01
Selecting the ideal operating point for a fuel cell depends on the application and consequent trade-off between efficiency, power density and various operating considerations. A systematic methodology for determining the optimal operating point for fuel cells is lacking; there is also the need for a single-value metric to describe and compare fuel cell performance. This work shows how the 'current of lowest resistance' can be accurately measured using electrochemical impedance spectroscopy and used as a useful metric of fuel cell performance. This, along with other measures, is then used to generate an 'electro-thermal performance map' of fuel cell operation. A commercial air-cooled open-cathode fuel cell is used to demonstrate how the approach can be used; in this case leading to the identification of the optimum operating temperature of ∼45 °C.
Computer-aided placement of deep brain stimulators: from planning to intraoperative guidance
NASA Astrophysics Data System (ADS)
D'Haese, Pierre-Francois; Pallavaram, Srivatsan; Kao, Chris; Konrad, Peter E.; Dawant, Benoit M.
2005-04-01
The long term objective of our research is to develop a system that will automate as much as possible DBS implantation procedures. It is estimated that about 180,000 patients/year would benefit from DBS implantation. Yet, only 3000 procedures are performed annually. This is so because the combined expertise required to perform the procedure successfully is only available at a limited number of sites. Our goal is to transform this procedure into a procedure that can be performed by a general neurosurgeon at a community hospital. In this work we report on our current progress toward developing a system for the computer-assisted pre-operative selection of target points and for the intra-operative adjustment of these points. The system consists of a deformable atlas of optimal target points that can be used to select automatically the pre-operative target, of an electrophysiological atlas, and of an intra-operative interface. The atlas is deformed using a rigid then a non-rigid registration algorithm developed at our institution. Results we have obtained show that automatic prediction of target points is an achievable goal. Our results also indicate that electrophysiological information can be used to resolve structures not visible in anatomic images, thus improving both pre-operative and intra-operative guidance. Our intra-operative system has reached the stage of a working prototype that is clinically used at our institution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langrish, T.A.G.; Harvey, A.C.
2000-01-01
A model of a well-mixed fluidized-bed dryer within a process flowsheeting package (SPEEDUP™) has been developed and applied to a parameter sensitivity study, a steady-state controllability analysis and an optimization study. This approach is more general and would be more easily applied to a complex flowsheet than one which relied on stand-alone dryer modeling packages. The simulation has shown that industrial data may be fitted to the model outputs with sensible values of unknown parameters. For this case study, the parameter sensitivity study has found that the heat loss from the dryer and the critical moisture content of the material have the greatest impact on the dryer operation at the current operating point. An optimization study has demonstrated the dominant effect of the heat loss from the dryer on the current operating cost and the current operating conditions, and substantial cost savings (around 50%) could be achieved with a well-insulated and airtight dryer, for the specific case studied here.
Research on global path planning based on ant colony optimization for AUV
NASA Astrophysics Data System (ADS)
Wang, Hong-Jian; Xiong, Wei
2009-03-01
Path planning is an important issue for autonomous underwater vehicles (AUVs) traversing an unknown environment such as a sea floor, a jungle, or the outer celestial planets. For this paper, global path planning using large-scale chart data was studied, and the principles of ant colony optimization (ACO) were applied. This paper introduced the idea of a visibility graph based on the grid workspace model. It also introduced a series of pheromone-updating rules for the ACO planning algorithm. The operational steps of the ACO algorithm are proposed as a model for a global path planning method for AUVs. To mimic the process of smoothing a planned path, a cutting operator and an insertion-point operator were designed. Simulation results demonstrated that the ACO algorithm is suitable for global path planning: the operating path of the AUV can be quickly optimized, and it is shorter, safer, and smoother. The prototype system successfully demonstrated the feasibility of the concept, proving it can be applied to surveys of unstructured unmanned environments.
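The core ACO cycle the abstract relies on — ants construct tours biased by pheromone and edge length, then pheromone evaporates and is reinforced on good tours — can be sketched on a toy weighted graph. This is a generic shortest-path sketch, not the paper's grid/visibility-graph model or its cutting and insertion-point smoothing operators.

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=50, n_iters=40,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    """Minimal ant colony optimization for a shortest path.
    graph: {node: {neighbor: edge_length}}."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone per edge
    best, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: discard this ant
                    path = None
                    break
                # Transition rule: pheromone^alpha * (1/length)^beta.
                weights = [tau[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in choices]
                node = rng.choices(choices, weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((path, length))
                if length < best_len:
                    best, best_len = path, length
        # Evaporate, then deposit pheromone inversely proportional to length.
        for e in tau:
            tau[e] *= (1 - rho)
        for path, length in tours:
            for e in zip(path, path[1:]):
                tau[e] += q / length
    return best, best_len

g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
path, dist = aco_shortest_path(g, "A", "D")
```

On this toy graph the colony converges on A→B→C→D (length 3) rather than the greedy-looking direct edges, which is the behavior the paper exploits for chart-scale planning.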
Broadcasting a message in a parallel computer
Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN
2011-08-02
Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
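One simple Hamiltonian path on a 2D mesh is the boustrophedon ("snake") order, in which the root's message can be forwarded hop by hop so every node receives it exactly once from its predecessor. The patent does not specify this exact construction; this is an illustrative sketch of the idea, assuming the logical root sits at mesh coordinate (0, 0).

```python
def snake_path(rows, cols):
    """Boustrophedon Hamiltonian path over a rows x cols mesh starting at
    (0, 0): rows are traversed in alternating directions, so consecutive
    path entries are always mesh neighbors. The root forwards its message
    along this path, one point-to-point hop per step."""
    return [(r, c if r % 2 == 0 else cols - 1 - c)
            for r in range(rows) for c in range(cols)]

path = snake_path(3, 4)
```

Because every hop is a nearest-neighbor link, the broadcast uses only the point-to-point communications the network is optimized for, at the cost of path length rows×cols − 1 hops.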
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; it ran from 5.5 to seven times faster than the minimum-norm solution (the pseudoinverse), and at about the same rate as the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
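The pseudoinverse baseline that the patent compares against can be sketched for the simplest case of one moment objective and several redundant effectors: take the minimum-norm solution and clip it to the effector limits. This is the comparison method, not the patent's own facet-based allocator, and the effectiveness values below are hypothetical.

```python
def pseudoinverse_allocation(B, m, limits):
    """Minimum-norm control allocation for a single moment objective:
    u = B^T (B B^T)^-1 m, then clipped to the effector position limits.
    (This is the pseudoinverse baseline, not the patent's method.)"""
    bb = sum(b * b for b in B)                  # B B^T, a scalar for one axis
    u = [b * m / bb for b in B]                 # minimum-norm solution
    return [max(lo, min(hi, ui)) for ui, (lo, hi) in zip(u, limits)]

# Three redundant effectors with effectiveness B, commanded moment m = 2.0.
B = [1.0, 2.0, 1.0]
u = pseudoinverse_allocation(B, 2.0, limits=[(-1, 1)] * 3)
```

When no limit is active the achieved moment B·u equals the command exactly; the clipping step is precisely where the pseudoinverse gives up attainable capability, which is the shortfall the patent's method is designed to avoid.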
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search through the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to solving other combinatorial optimization problems.
Design of Launch Abort System Thrust Profile and Concept of Operations
NASA Technical Reports Server (NTRS)
Litton, Daniel; O'Keefe, Stephen A.; Winski, Richard G.; Davidson, John B.
2008-01-01
This paper describes how the Abort Motor thrust profile has been tailored and how optimizing the Concept of Operations of the Launch Abort System (LAS) of the Orion Crew Exploration Vehicle (CEV) aids in getting the crew safely away from a failed Crew Launch Vehicle (CLV). Unlike the passive nature of the Apollo system, the Orion Launch Abort Vehicle will be actively controlled, giving the program a more robust abort system with a higher probability of crew survival for an abort at all points throughout the CLV trajectory. By optimizing the concept of operations and thrust profile, the Orion program will be able to take full advantage of the active Orion LAS. The discussion involves an overview of the development of the abort motor thrust profile and the current abort concept of operations, as well as their effects on the performance of LAS aborts. Pad Abort (for performance) and Maximum Drag (for separation from the Launch Vehicle) are the two points that dictate the required thrust and the shape of the thrust profile. The results in this paper show that 95% success on all performance requirements is not currently met for Pad Abort. Future improvements to the current parachute sequence and other potential changes will mitigate the current problems and meet the abort performance requirements.
Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE
2009-01-01
Background Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States and their accuracy to screen for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective To compare the accuracy and to define ethnic and gender-specific optimal cut points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used in screening for high risk of CHD in the Latin-American and the US populations. Methods We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operating characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). Optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion WHR is the most accurate anthropometric indicator to screen for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
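Choosing an optimal cut point on a continuous marker can be sketched with Youden's J (sensitivity + specificity − 1), which is the zero-cost-ratio special case of the misclassification-cost criterion the study uses. The WHR values and labels below are hypothetical illustration data, not the study's.

```python
def optimal_cut_point(values, labels):
    """Pick the cut point maximizing Youden's J = sensitivity + specificity - 1
    for a 'predict positive if value >= cut' rule. (The study weighs errors
    with a misclassification-cost term; Youden's J is the unweighted case.)"""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        sens = sum(v >= cut for v in pos) / len(pos)
        spec = sum(v < cut for v in neg) / len(neg)
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

# Hypothetical WHR values: high-CHD-risk subjects (label 1) cluster above 0.91.
whr  = [0.80, 0.85, 0.88, 0.90, 0.92, 0.95, 0.99, 1.02]
risk = [0,    0,    0,    0,    1,    1,    1,    1]
cut = optimal_cut_point(whr, risk)
```

Adding per-error costs simply reweights the two terms in J, which is how the ethnic- and gender-specific cut points in the study shift relative to the unweighted optimum.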
Xu, Shi-Zhou; Wang, Chun-Jie; Lin, Fang-Li; Li, Shi-Xiang
2017-10-31
The multi-device open-circuit fault is a common fault of the ANPC (Active Neutral-Point Clamped) three-level inverter and affects the operating stability of the whole system. To improve operating stability, this paper first summarizes the main existing solutions and analyzes all possible states of the multi-device open-circuit fault. Second, an order-reduction optimal control strategy under multi-device open-circuit fault is proposed to realize fault-tolerant control, based on the topology and control requirements of the ANPC three-level inverter and on operating stability. This control strategy can handle faults in different operating states and can work in an order-reduction state under specific open-circuit faults of specific device combinations, sacrificing control quality to obtain stability-priority control. Finally, simulation and experiment prove the effectiveness of the proposed strategy.
NASA Astrophysics Data System (ADS)
El-Zoghby, Helmy M.; Bendary, Ahmed F.
2016-10-01
Maximum Power Point Tracking (MPPT) is now a widely used method for increasing photovoltaic (PV) efficiency. Conventional MPPT methods have many problems concerning accuracy, flexibility and efficiency. The MPP depends on the PV temperature and the solar irradiation, both of which vary randomly. In this paper an artificial-intelligence-based controller is presented, implementing an Adaptive Neuro-Fuzzy Inference System (ANFIS) to obtain maximum power from the PV array. The ANFIS inputs are the temperature and cell current, and the output is the optimal voltage at maximum power. During operation the trained ANFIS senses the PV current using a suitable sensor, and also senses the temperature, to determine the optimal operating voltage corresponding to the current at the MPP. This voltage is used to control the boost converter duty cycle. MATLAB simulation results show the effectiveness of the ANFIS, with sensing of the PV current, in achieving MPPT for the PV system.
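At run time, the trained controller described above is essentially a learned map from sensed conditions to the optimal operating voltage. As a minimal stand-in for the trained ANFIS, the sketch below interpolates a small temperature-to-MPP-voltage table; the table values are illustrative only, and a real ANFIS would also take the sensed cell current as a second input.

```python
from bisect import bisect_right

def interp1(xs, ys, x):
    """Piecewise-linear interpolation over a sorted table, clamped at the edges."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# Hypothetical learned map standing in for the trained ANFIS:
# PV module temperature (degC) -> optimal operating voltage at the MPP (V).
TEMPS = [15.0, 25.0, 35.0]
V_OPT = [18.2, 17.5, 16.8]   # illustrative values only

def mpp_voltage(temp_c):
    return interp1(TEMPS, V_OPT, temp_c)

v = mpp_voltage(20.0)
```

The returned voltage would then set the boost converter duty cycle, exactly as the abstract's control loop does with the ANFIS output.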
Programs To Optimize Spacecraft And Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Petersen, F. M.; Cornick, D.E.; Stevenson, R.; Olson, D. W.
1994-01-01
POST/6D POST is set of two computer programs providing ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near rotating planet. POST treats point-mass, three-degree-of-freedom case. 6D POST treats more-general rigid-body, six-degree-of-freedom (with point masses) case. Used to solve variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of vehicle in ascent, or orbit, and during entry into atmosphere, simulation and analysis of guidance and flight-control systems, dispersion-type analyses and analyses of loads, general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles, and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C language. Two machine versions available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running IRIX(TM) operating system (LAR-14869).
NASA Astrophysics Data System (ADS)
Miharja, M.; Priadi, Y. N.
2018-05-01
Promoting better public transport is a key strategy for coping with urban transport problems, which are mostly caused by heavy private vehicle usage. Better public transport service quality depends not only on a single public transport mode but also on the integration of services between modes. Fragmented inter-modal public transport service leads to longer trip chains and longer average travel times, causing public transport to fail to compete with private vehicles. This paper examines the optimization of operation system integration between the Trans Jakarta Bus, as the main public transport mode, and the Kopaja Bus, as the feeder service, in Jakarta. Using a scoring-interview method combined with standard parameters of operation system integration, this paper identifies the key factors that determine the success of integrating the two public transport operation systems. The study found that some key integration parameters, such as abolition of the "setoran" (revenue-deposit) system, passengers boarding and alighting at official stop points, and systematic payment, contribute positively to better service integration. However, parameters such as the fine system, the reliability of timing and transfer points, and the reliability of the information system still need improvement. These findings are very useful for the authority in setting the right strategy to improve operation system integration between the Trans Jakarta and Kopaja bus services.
Application of Contraction Mappings to the Control of Nonlinear Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Killingsworth, W. R., Jr.
1972-01-01
The theoretical and applied aspects of successive approximation techniques are considered for the determination of controls for nonlinear dynamical systems. Particular emphasis is placed upon the methods of contraction mappings and modified contraction mappings. It is shown that application of the Pontryagin principle to the optimal nonlinear regulator problem results in necessary conditions for optimality in the form of a two point boundary value problem (TPBVP). The TPBVP is represented by an operator equation and functional analytic results on the iterative solution of operator equations are applied. The general convergence theorems are translated and applied to those operators arising from the optimal regulation of nonlinear systems. It is shown that simply structured matrices and similarity transformations may be used to facilitate the calculation of the matrix Green functions and the evaluation of the convergence criteria. A controllability theory based on the integral representation of TPBVP's, the implicit function theorem, and contraction mappings is developed for nonlinear dynamical systems. Contraction mappings are theoretically and practically applied to a nonlinear control problem with bounded input control and the Lipschitz norm is used to prove convergence for the nondifferentiable operator. A dynamic model representing community drug usage is developed and the contraction mappings method is used to study the optimal regulation of the nonlinear system.
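The successive-approximation idea at the heart of the thesis — iterate x_{k+1} = g(x_k) and rely on the Banach fixed-point theorem for convergence when g is a contraction — can be shown on a classic scalar example. This is the underlying numerical scheme only, not the thesis's TPBVP integral operator.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Successive approximation x_{k+1} = g(x_k). Converges to the unique
    fixed point whenever g is a contraction:
    |g(x) - g(y)| <= L |x - y| with L < 1 on the region of interest."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# g(x) = cos(x) is a contraction on [0, 1] (|g'| <= sin(1) < 1);
# its fixed point is the Dottie number, approximately 0.739085.
x_star = fixed_point(math.cos, 0.5)
```

The same loop, with g replaced by the Green-function integral operator of the TPBVP, is what the contraction-mapping control computations in the thesis amount to; the convergence criteria there bound the operator's Lipschitz constant just as sin(1) bounds it here.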
Beqiri, Arian; Price, Anthony N; Padormo, Francesco; Hajnal, Joseph V; Malik, Shaihan J
2017-06-01
Cardiac magnetic resonance imaging (MRI) at high field presents challenges because of the high specific absorption rate and significant transmit field (B1+) inhomogeneities. Parallel transmission MRI offers the ability to correct for both issues at the level of individual radiofrequency (RF) pulses, but must operate within strict hardware and safety constraints. The constraints are themselves affected by sequence parameters, such as the RF pulse duration and TR, meaning that an overall optimal operating point exists for a given sequence. This work seeks to obtain optimal performance by performing a 'sequence-level' optimization in which pulse sequence parameters are included as part of an RF shimming calculation. The method is applied to balanced steady-state free precession cardiac MRI with the objective of minimizing TR, hence reducing the imaging duration. Results are demonstrated using an eight-channel parallel transmit system operating at 3 T, with an in vivo study carried out on seven male subjects of varying body mass index (BMI). Compared with single-channel operation, a mean-squared-error shimming approach leads to reduced imaging durations of 32 ± 3% with simultaneous improvement in flip angle homogeneity of 32 ± 8% within the myocardium. © 2017 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt
2018-01-12
With the shift of focus of the regulatory bodies, from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives productivity and yield for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m³ column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m³ column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration.
An alternative method of finding the suggested operating point was also evaluated, and it was shown to give very satisfactory results for well-mapped Pareto fronts. Copyright © 2017 Elsevier B.V. All rights reserved.
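A Pareto front of the kind visualized in the study is simply the set of non-dominated (productivity, yield) points among the evaluated operating conditions. A minimal sketch, with hypothetical candidate evaluations standing in for the chromatography model's output:

```python
def pareto_front(points):
    """Return the non-dominated points when both objectives are maximized
    (here: (productivity, yield) pairs). A point is dominated if some other
    point is at least as good in both objectives."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

# Hypothetical (productivity, yield) evaluations of candidate operating points.
candidates = [(0.30, 0.99), (0.38, 0.95), (0.41, 0.90),
              (0.35, 0.93), (0.41, 0.85), (0.25, 0.98)]
front = sorted(pareto_front(candidates))
```

Scanning the resulting front for the point where a constraint (here, the solubility limit) first becomes active is the study's suggested way of picking a single operating point from the trade-off curve.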
AS Migration and Optimization of the Power Integrated Data Network
NASA Astrophysics Data System (ADS)
Zhou, Junjie; Ke, Yue
2018-03-01
In the transformation of an integrated data network, the impact on business services has always been the most important factor in measuring the quality of the network transformation. Given the importance of the services carried by the data network, specific design proposals must be put forward during the transformation, supported by extensive demonstration and practice, to ensure that the transformation program meets the requirements of the enterprise data network. This paper demonstrates a scheme for migrating point-to-point access equipment in the reconstruction of a power integrated data network: migrating the BGP autonomous system to the domain specified in the industry standard, and smoothly migrating the intranet OSPF protocol to the IS-IS protocol. Through the optimized design, the power data network's traffic-forwarding performance was improved, forwarding paths were optimized, scalability was increased, the risk of potential loops was lowered, network stability was improved, and operational costs were saved.
An Energy Storage Assessment: Using Optimal Control Strategies to Capture Multiple Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Jin, Chunlian; Balducci, Patrick J.
2015-09-01
This paper presents a methodology for evaluating benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. In the proposed method, at each hour, a look-ahead optimization is first formulated and solved to determine the battery base operating point. A minute-by-minute simulation is then performed to simulate the actual battery operation. This methodology is used to assess energy storage alternatives in the Puget Sound Energy system. Different battery storage candidates are simulated for a period of one year to assess different value streams and overall benefits, as part of a financial feasibility evaluation of battery storage projects.
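The energy-arbitrage value stream in a look-ahead dispatch can be sketched with a deliberately simple rule: over the horizon, charge at the cheapest hours and discharge at the priciest ones, subject to the power rating and energy capacity. This toy stands in for the paper's hourly look-ahead optimization; round-trip losses, state-of-charge ordering across horizons, and the other services are ignored, and the prices are hypothetical.

```python
def arbitrage_schedule(prices, capacity, power):
    """Toy look-ahead arbitrage dispatch: charge (negative power) at the
    cheapest hours and discharge (positive power) at the most expensive
    ones. Assumes, for this example, that cheap hours precede expensive
    ones, so the battery is charged before it discharges."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    n = int(capacity / power)            # hours needed for a full charge
    charge_hours = set(hours[:n])        # cheapest hours -> charge
    discharge_hours = set(hours[-n:])    # priciest hours -> discharge
    return [(-power if h in charge_hours else
             power if h in discharge_hours else 0.0)
            for h in range(len(prices))]

# Hypothetical day-ahead prices ($/MWh) over an 8-hour horizon,
# for a 2 MWh battery with a 1 MW power rating.
prices = [20, 18, 25, 40, 55, 60, 35, 22]
dispatch = arbitrage_schedule(prices, capacity=2.0, power=1.0)
```

A real assessment, as in the paper, replaces this heuristic with a constrained optimization and then re-simulates the dispatch at minute resolution to capture the balancing and reliability services it also monetizes.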
Qualitative thermal characterization and cooling of lithium batteries for electric vehicles
NASA Astrophysics Data System (ADS)
Mariani, A.; D'Annibale, F.; Boccardi, G.; Celata, G. P.; Menale, C.; Bubbico, R.; Vellucci, F.
2014-04-01
The paper deals with the cooling of batteries. The first step was the thermal characterization of a single cell of the module, which consists of detecting the thermal field by means of thermographic tests during electric charging and discharging. The purpose was to identify possible critical hot points and to evaluate the cooling demand during the normal operation of an electric car. After that, a study on the optimal configuration for flattening the temperature profile and avoiding hot points was carried out. An experimental plant for evaluating the cooling capacity of the batteries, using air as the cooling fluid, was built in our laboratory at ENEA Casaccia. The plant is designed to allow testing at different flow rates and temperatures of the cooling air, useful for assessing operating thermal limits in different working conditions. Another experimental facility was built to evaluate how the thermal behaviour changes with water as the cooling fluid. Experimental tests were carried out on LiFePO4 batteries under different electric working conditions using the two loops. In the future, different types of batteries will be tested and the influence of various parameters on the heat transfer will be assessed to identify possible optimal operating solutions.
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD.
Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
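Since the abstract leans on dynamic mode decomposition, a minimal numpy sketch of exact DMD may help fix ideas; the function name and the test system below are illustrative, not the authors' code.

```python
import numpy as np

# Minimal sketch of exact DMD (illustrative): fit a linear operator advancing
# snapshot matrix X to Xp, truncated to rank r via the SVD of X.
def dmd(X, Xp, r):
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T / s   # r x r reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = (Xp @ Vh.conj().T / s) @ W           # DMD modes
    return eigvals, modes

# Usage: snapshots of a known linear system x_{k+1} = A x_k;
# DMD should recover the eigenvalues of A (0.8 and 0.9 here).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
X = np.zeros((2, 50))
X[:, 0] = [1.0, 1.0]
for k in range(49):
    X[:, k + 1] = A @ X[:, k]
eigvals, _ = dmd(X[:, :-1], X[:, 1:], r=2)
```

With purely linear dynamics the state itself spans a Koopman-invariant subspace, which is exactly the situation the paper generalizes with nonlinear observables.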
Optimizing Hydropower Day-Ahead Scheduling for the Oroville-Thermalito Project
NASA Astrophysics Data System (ADS)
Veselka, T. D.; Mahalik, M.
2012-12-01
Under an award from the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Water Power Program, a team of national laboratories is developing and demonstrating a suite of advanced, integrated analytical tools to assist managers and planners in increasing hydropower resources while enhancing the environment. As part of the project, Argonne National Laboratory is developing the Conventional Hydropower Energy and Environmental Systems (CHEERS) model to optimize day-ahead scheduling and real-time operations. We will present the application of CHEERS to the Oroville-Thermalito Project located in Northern California. CHEERS will aid California Department of Water Resources (CDWR) schedulers in making decisions about unit commitments and turbine-level operating points, using a system-wide approach to increase hydropower efficiency and the value of power generation and ancillary services. The model determines schedules and operations that are constrained by physical limitations, characteristics of plant components, operational preferences, reliability, and environmental considerations. The optimization considers forebay and afterbay implications, interactions between cascaded power plants, turbine efficiency curves and rough zones, and operator preferences. CHEERS simultaneously considers over time the interactions among all CDWR power and water resources, hydropower economics, reservoir storage limitations, and a set of complex environmental constraints for the Thermalito Afterbay and Feather River habitats. Power marketers, day-ahead schedulers, and plant operators provide system configuration and detailed operational data, along with feedback on model design and performance. CHEERS is integrated with CDWR data systems to obtain historic and initial conditions of the system as the basis from which future operations are then optimized.
Model results suggest alternative operational regimes that improve the value of CDWR resources to the grid while enhancing the environment and complying with water delivery obligations for non-power uses.
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
2017-03-15
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
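To make the "all solutions to the power flow equations" step concrete, here is a hypothetical 2-bus example (not one of the paper's test cases): under the stated loading, the lossless power flow equations admit both a high-voltage and a low-voltage solution, which Newton's method finds from different starting points.

```python
import numpy as np

# Toy 2-bus power flow (illustrative): slack bus V1 = 1 p.u. at angle 0, and a
# PQ load bus across a lossless line of reactance x. Unknowns: load-bus
# voltage magnitude V and angle th. Mismatch equations:
#   P + (V/x)*sin(th) = 0  and  Q + (V**2 - V*cos(th))/x = 0
def solve_pf(P, Q, x, V0, th0, iters=50):
    v = np.array([V0, th0], float)
    for _ in range(iters):                        # Newton-Raphson iteration
        V, th = v
        F = np.array([P + V * np.sin(th) / x,
                      Q + (V**2 - V * np.cos(th)) / x])
        J = np.array([[np.sin(th) / x,            V * np.cos(th) / x],
                      [(2 * V - np.cos(th)) / x,  V * np.sin(th) / x]])
        v = v - np.linalg.solve(J, F)
    return v

# The same equations have two solutions; different initial guesses find both.
hi_v = solve_pf(0.5, 0.2, 0.5, V0=1.0, th0=0.0)   # high-voltage solution
lo_v = solve_pf(0.5, 0.2, 0.5, V0=0.3, th0=-0.9)  # low-voltage solution
```

Enumerating every such solution reliably, rather than depending on starting points, is what the NPHC machinery in the paper provides.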
Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds
Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav
2016-01-01
The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has great potential in spatial data analysis and pattern recognition. This paper formulates the problem as a search for a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE), applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimization in multidimensional point clouds. PMID:27974884
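For readers unfamiliar with the base algorithm, a compact sketch of the classic real-valued DE/rand/1/bin scheme of Storn and Price follows; the paper's MDDE replaces these real-valued operators with discrete ones over point clouds.

```python
import numpy as np

# Classic DE/rand/1/bin sketch (generic, not the paper's MDDE):
# mutate with a scaled difference of two members, then binomial crossover.
def de(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, d))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True        # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]

# Usage: minimize the sphere function; the optimum is x = 0 with f = 0.
x, fx = de(lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)
```

The difference-vector mutation is what has no direct analogue for unordered discrete vertices, which is why the paper introduces a space-filling-curve ordering.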
Morales-Pérez, Ariadna A; Maravilla, Pablo; Solís-López, Myriam; Schouwenaars, Rafael; Durán-Moreno, Alfonso; Ramírez-Zamora, Rosa-María
2016-01-01
An experimental design methodology was used to optimize the synthesis of an iron-supported nanocatalyst as well as the inactivation process of Ascaris eggs (Ae) using this material. A factor screening design was used for identifying the significant experimental factors for nanocatalyst support (supported %Fe, (w/w), temperature and time of calcination) and for the inactivation process called the heterogeneous Fenton-like reaction (H2O2 dose, mass ratio Fe/H2O2, pH and reaction time). The optimization of the significant factors was carried out using a face-centered central composite design. The optimal operating conditions for both processes were estimated with a statistical model and implemented experimentally with five replicates. The predicted value of the Ae inactivation rate was close to the laboratory results. At the optimal operating conditions of the nanocatalyst production and Ae inactivation process, the Ascaris ova showed genomic damage to the point that no cell reparation was possible showing that this advanced oxidation process was highly efficient for inactivating this pathogen.
U.S. Army Delayed Entry Program Optimization Model
2004-08-01
United States Military Academy, West Point, New York 10996. Operations Research Center of Excellence, Technical Report No. DSE-TR-0428 (DTIC #: ADAXXXXX). Prepared by the Department of Systems Engineering, 2 Mahan Hall, West Point, NY 10996, for the client USAAC CAR, 1307 Third Ave., Fort Knox, KY 40121. Authors: ... Wolter, LTC Michael J. Kwinn, Jr., LTC John Halstead. Report No. DSE-R-0428.
Optimizing Disaster Relief: Real-Time Operational and Tactical Decision Support
1993-01-01
efficiencies in completing the tasks. Allocations recognize task priorities and the logistical effects of geographic proximity. In addition ... as if they are collocated. Arcs connect local pairs of zones to represent feasible direct point-to-point transportation and bear costs for ... data to the desired level of aggregation. We have tested ARES manually and by replacing the decision maker with the decision simulator, which
Acupuncture in the Management of Injury and Operative Pain Under Field Conditions.
1976-03-01
Keywords: Acupuncture Analgesia, Pain Control, Orofacial Acupuncture, Tooth Pulp, Regional Analgesia. This document reports ... experimental series. The primary acupuncture points which presently enjoy maximal favor in terms of the control of orofacial pain (there are three; see below) ... be used to identify optimal waveforms for three critical acupuncture "points" associated with orofacial pain control. The same model will then be
Optimized stereo matching in binocular three-dimensional measurement system using structured light.
Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong
2014-09-10
In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
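The "logical comparison instead of a complicated floating-point operation" idea can be illustrated with a toy numpy snippet (hypothetical data, not the authors' patterns): per-pixel binary codes are compared by XOR and a bit count, i.e., Hamming distance.

```python
import numpy as np

# Toy illustration: match binary per-pixel codes by Hamming distance, computed
# with XOR and a bit count instead of floating-point correlation.
rng = np.random.default_rng(0)
codes_left = rng.integers(0, 2, (5, 16), dtype=np.uint8)  # 5 pixels, 16-bit codes
codes_right = codes_left.copy()
codes_right[0, 3] ^= 1                                    # corrupt one bit

# All-pairs Hamming distances via broadcast XOR, then pick the best match.
ham = (codes_left[:, None, :] ^ codes_right[None, :, :]).sum(axis=-1)
best = ham.argmin(axis=1)
```

Because the comparison is a bitwise operation, an exhaustive scan over a pruned disparity range stays cheap, which is the efficiency argument the abstract makes.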
The Earth Phenomena Observing System: Intelligent Autonomy for Satellite Operations
NASA Technical Reports Server (NTRS)
Ricard, Michael; Abramson, Mark; Carter, David; Kolitz, Stephan
2003-01-01
Earth monitoring systems of the future may include large numbers of inexpensive small satellites, tasked in a coordinated fashion to observe both long term and transient targets. For best performance, a tool which helps operators optimally assign targets to satellites will be required. We present the design of algorithms developed for real-time optimized autonomous planning of large numbers of small single-sensor Earth observation satellites. The algorithms will reduce requirements on the human operators of such a system of satellites, ensure good utilization of system resources, and provide the capability to dynamically respond to temporal terrestrial phenomena. Our initial real-time system model consists of approximately 100 satellites and a large number of points of interest on Earth (e.g., hurricanes, volcanoes, and forest fires), with the objective of maximizing the total science value of observations over time. Options for calculating the science value of observations include: 1) total observation time, 2) number of observations, and 3) the quality (a function of, e.g., sensor type, range, and slant angle) of the observations. An integrated approach using integer programming, optimization and astrodynamics is used to calculate optimized observation and sensor tasking plans.
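As a toy stand-in for the tasking optimization (purely illustrative; a real planner would solve a time-expanded integer program with visibility constraints), assigning satellites to targets to maximize total science value is a small assignment problem:

```python
import numpy as np
from itertools import permutations

# value[i, j]: hypothetical science value if satellite i observes target j.
value = np.array([[9, 2, 5],
                  [4, 8, 1],
                  [3, 7, 6]])

# Brute-force the 3x3 assignment: one target per satellite, maximize the sum.
best = max(permutations(range(3)),
           key=lambda p: sum(value[i, p[i]] for i in range(3)))
total = sum(value[i, best[i]] for i in range(3))
```

For realistic fleet sizes the factorial search is replaced by integer programming or a polynomial-time assignment algorithm, but the objective has the same shape.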
Some Results on Proper Eigenvalues and Eigenvectors with Applications to Scaling.
ERIC Educational Resources Information Center
McDonald, Roderick P.; And Others
1979-01-01
Problems in avoiding the singularity problem in analyzing matrices for optimal scaling are addressed. Conditions are given under which the stationary points and values of a ratio of quadratic forms in two singular matrices can be obtained by a series of simple matrix operations. (Author/JKS)
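In the nonsingular case the result specializes to a familiar generalized eigenvalue computation, which a short numpy check illustrates (the matrices are made up; the paper's contribution is handling the singular case, where this reduction breaks down):

```python
import numpy as np

# Stationary values of the ratio r(x) = (x' A x) / (x' B x), with B positive
# definite, are the eigenvalues of B^{-1} A; the stationary points are the
# corresponding eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])
evals, evecs = np.linalg.eig(np.linalg.solve(B, A))

# Check: the ratio evaluated at each eigenvector equals its eigenvalue.
ratios = [np.real(v @ A @ v) / np.real(v @ B @ v) for v in evecs.T]
```

When B is singular, B cannot be inverted and the simple matrix operations above must be replaced by the series of operations the paper develops.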
Optimizing Web-Based Instruction: A Case Study Using Poultry Processing Unit Operations
ERIC Educational Resources Information Center
O' Bryan, Corliss A.; Crandall, Philip G.; Shores-Ellis, Katrina; Johnson, Donald M.; Ricke, Steven C.; Marcy, John
2009-01-01
Food companies and supporting industries need inexpensive, revisable training methods for large numbers of hourly employees due to continuing improvements in Hazard Analysis Critical Control Point (HACCP) programs, new processing equipment, and high employee turnover. HACCP-based food safety programs have demonstrated their value by reducing the…
Least-mean-square spatial filter for IR sensors.
Takken, E H; Friedman, D; Milton, A F; Nitzberg, R
1979-12-15
A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
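As background, the standard time-domain LMS recursion is only a few lines; this generic sketch is not the paper's spatial filter, just the underlying least-mean-square update, shown identifying a known FIR response.

```python
import numpy as np

# Generic LMS adaptive filter (textbook recursion, not the paper's IR-specific
# spatial filter): adapt weights w so that w . u[n] tracks the desired d[n].
def lms(x, d, n_taps, mu):
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ u                    # instantaneous error
        w += mu * e * u                     # steepest-descent weight update
    return w

# Usage: identify the taps of a known FIR system from input/output data.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, -0.3, 0.2])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]              # noiseless desired signal
w = lms(x, d, n_taps=4, mu=0.05)
```

In the surveillance setting of the paper, the same least-mean-square criterion is applied spatially, so the filter adapts to poorly characterized clutter statistics rather than to a training signal.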
We anticipate that future laboratory results will verify our preliminary findings that the BSF is capable of removing approximately 99% of enteric bacteria and roughly 90% of enteric viruses as currently configured. We hope that by understanding the operating conditions and me...
[The development of endoscope workstation].
Qi, L; Qi, L; Qiou, Q J; Yu, Q L
2001-01-01
This paper introduces an endoscope workstation that addresses the weak points of the multimedia endoscope databases used by most hospitals. The workstation was built on a pedal switch and the NTFS file system. The paper also describes how the program was optimized for quick data input. The workstation has improved the efficiency of the doctor's operation.
Operator induced multigrid algorithms using semirefinement
NASA Technical Reports Server (NTRS)
Decker, Naomi; Vanrosendale, John
1989-01-01
A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two and three dimensional model problems are presented, together with a two level analysis explaining these results.
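A minimal two-grid cycle for the 1D Poisson equation shows the standard ingredients the abstract builds on (smoothing, residual restriction, exact coarse solve, prolongation, correction); this is a generic sketch, not the zebra/semirefinement scheme itself.

```python
import numpy as np

def poisson(n, h):
    # Standard 3-point Laplacian on n interior points with spacing h.
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_grid(u, f, n, h, nu=2):
    A = poisson(n, h)
    w = 2.0 / 3.0                                   # weighted-Jacobi factor
    for _ in range(nu):                             # pre-smoothing
        u = u + w * (f - A @ u) * h**2 / 2.0
    r = f - A @ u                                   # fine-grid residual
    rc = (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2]) / 4  # full-weighting restriction
    nc = (n - 1) // 2
    ec = np.linalg.solve(poisson(nc, 2 * h), rc)    # exact coarse-grid solve
    e = np.zeros(n)                                 # linear prolongation
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    u = u + e                                       # coarse-grid correction
    for _ in range(nu):                             # post-smoothing
        u = u + w * (f - A @ u) * h**2 / 2.0
    return u

# Usage: -u'' = f with u(0) = u(1) = 0; a few cycles crush the error.
n, h = 31, 1.0 / 32
x = np.arange(1, n + 1) * h
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(5):
    u = two_grid(u, f, n, h)
```

The paper's operator-induced prolongation and implicit three-point restriction replace the generic transfer operators above to keep stencils small while making the two-grid cycle nearly direct.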
Richard, Gontran; Touhami, Seddik; Zeghloul, Thami; Dascalescu, Lucien
2017-02-01
Plate-type electrostatic separators are commonly employed for the selective sorting of conductive and non-conductive granular materials. The aim of this work is to identify the optimal operating conditions of such equipment, when employed for separating copper and plastics from either flexible or rigid electric wire wastes. The experiments are performed according to the response surface methodology, on samples composed of either "calibrated" particles, obtained by manually cutting of electric wires at a predefined length (4mm), or actual machine-grinded scraps, characterized by a relatively-wide size distribution (1-4mm). The results point out the effect of particle size and shape on the effectiveness of the electrostatic separation. Different optimal operating conditions are found for flexible and rigid wires. A separate processing of the two classes of wire wastes is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optimal Dispatch of Unreliable Electric Grid-Connected Diesel Generator-Battery Power Systems
NASA Astrophysics Data System (ADS)
Xu, D.; Kang, L.
2015-06-01
Diesel generator (DG)-battery power systems are often adopted by telecom operators, especially in semi-urban and rural areas of developing countries. Unreliable electric grids (UEG), which have frequent and lengthy outages, are peculiar to these regions, and the DG-UEG-battery power system is an important kind of hybrid power system. System dispatch is one of the key factors in hybrid power system integration. In this paper, the dispatch of a DG-UEG-lead acid battery power system is studied for a UEG with relatively ample electricity in the Central African Republic (CAR) and a UEG with poor electricity in the Congo Republic (CR). The mathematical models of the power system and the UEG are developed for the system operation simulation program. The net present cost (NPC) of the power system is the main evaluation index, and the state of charge (SOC) set points and battery bank charging current are the optimization variables. For the UEG in CAR, the optimal dispatch solution is SOC start and stop points of 0.4 and 0.5, which belongs to the Micro-Cycling strategy, with a charging current of 0.1 C. For the UEG in CR, the optimal dispatch solution is SOC start and stop points of 0.1 and 0.8, which belongs to the Cycle-Charging strategy, again with 0.1 C. A charging current of 0.1 C is suitable for both grid scenarios compared to 0.2 C. The fact that a few very good candidate dispatch solutions have NPC values close to that of the optimal solution in both the CAR and CR scenarios makes dispatch strategy design easier in commercial practice.
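A hypothetical toy simulation (function name, units and rule details are illustrative simplifications, not the paper's model) shows how SOC start/stop set points shape DG cycling during an outage:

```python
# Hypothetical sketch of an SOC set-point dispatch rule: the diesel generator
# (DG) starts when SOC falls to soc_start and stops once SOC reaches soc_stop.
# The 0.1 C charge step per hour and the load model are illustrative.
def simulate(load_kw, grid_on, cap_kwh, soc_start, soc_stop, chg_c=0.1, soc0=0.8):
    soc, dg_on, dg_hours = soc0, False, 0
    for load, grid in zip(load_kw, grid_on):
        if grid:                                  # grid serves load, charges battery
            soc = min(1.0, soc + chg_c)
            dg_on = False
        else:
            if dg_on and soc >= soc_stop:
                dg_on = False
            if not dg_on and soc <= soc_start:
                dg_on = True
            if dg_on:                             # DG serves load and charges
                soc = min(1.0, soc + chg_c)
                dg_hours += 1
            else:                                 # battery alone serves load
                soc = max(0.0, soc - load / cap_kwh)
    return soc, dg_hours

# Usage: 10 h of grid followed by a 38 h outage; narrow 0.4/0.5 set points
# produce the short, frequent DG runs of a Micro-Cycling style strategy.
load = [1.0] * 48
grid = [True] * 10 + [False] * 38
soc, dg_hours = simulate(load, grid, cap_kwh=10.0, soc_start=0.4, soc_stop=0.5)
```

Widening the band (e.g. 0.1/0.8) yields fewer, longer DG runs, which is the Cycle-Charging behavior the abstract contrasts with Micro-Cycling.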
CO and NO emissions from pellet stoves: an experimental study
NASA Astrophysics Data System (ADS)
Petrocelli, D.; Lezzi, A. M.
2014-04-01
This work reports an experimental investigation of pellet stoves aimed at understanding which parameters influence CO and NO emissions and how to find and choose the optimal operating point. Tests are performed on three pellet stoves, varying the heating power, combustion chamber size and burner pot geometry. After a brief review of the factors which influence the production of these pollutants, we present and discuss the results of experimental tests aimed at ascertaining how the geometry of the combustion chamber and the distribution of primary and secondary air can modify the quantity of CO and NO in the flue gas. The experimental tests show that the production of CO is strongly affected by the excess air and by its distribution; in particular, effective control of the air distribution is critical. In these devices, low CO emissions require a proper setup to operate in the optimal range of excess air that minimizes CO production. To simplify the optimization process, we propose using instantaneous data on CO and O2 concentrations, instead of average values, because they allow a quick identification of the optimal point. It is shown that the optimal range of operation can be enlarged by proper burner pot design. Finally, it is shown that NO emissions are not a critical issue, since they are well below the threshold enforced by law, are not influenced by the distribution of air in the combustion chamber, and behave as a function of excess air in the same way for all the geometries investigated here.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic waverotor-topped engine, to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.
A derived heuristics based multi-objective optimization procedure for micro-grid scheduling
NASA Astrophysics Data System (ADS)
Li, Xin; Deb, Kalyanmoy; Fang, Yanjun
2017-06-01
With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides satisfying load balance constraints and the generator's rated power, several other practicalities, such as limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics in such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the specific operation scheduling is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs and then the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.
A fast Chebyshev method for simulating flexible-wing propulsion
NASA Astrophysics Data System (ADS)
Moore, M. Nicholas J.
2017-09-01
We develop a highly efficient numerical method to simulate small-amplitude flapping propulsion by a flexible wing in a nearly inviscid fluid. We allow the wing's elastic modulus and mass density to vary arbitrarily, with an eye towards optimizing these distributions for propulsive performance. The method to determine the wing kinematics is based on Chebyshev collocation of the 1D beam equation as coupled to the surrounding 2D fluid flow. Through small-amplitude analysis of the Euler equations (with trailing-edge vortex shedding), the complete hydrodynamics can be represented by a nonlocal operator that acts on the 1D wing kinematics. A class of semi-analytical solutions permits fast evaluation of this operator with O(N log N) operations, where N is the number of collocation points on the wing. This is in contrast to the minimum O(N^2) cost of a direct 2D fluid solver. The coupled wing-fluid problem is thus recast as a PDE with a nonlocal operator, which we solve using a preconditioned iterative method. These techniques yield a solver of near-optimal complexity, O(N log N), allowing one to rapidly search the infinite-dimensional parameter space of all possible material distributions and even perform optimization over this space.
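The Chebyshev collocation building block is standard; a numpy version of the Gauss-Lobatto differentiation matrix (after Trefethen's cheb) illustrates the spectral accuracy such methods rely on. This is generic scaffolding, not the authors' solver.

```python
import numpy as np

# Chebyshev differentiation matrix D and Gauss-Lobatto points x (Trefethen's
# cheb construction): D @ v approximates v'(x) for smooth v, to spectral
# accuracy.
def cheb(N):
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal by row sums
    return D, x

# Usage: differentiate exp(x); with only 17 points the error is tiny.
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
```

In the paper this collocation of the beam equation is coupled to the nonlocal hydrodynamic operator, whose fast O(N log N) evaluation is the main technical contribution.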
Artificial Bee Colony Optimization for Short-Term Hydrothermal Scheduling
NASA Astrophysics Data System (ADS)
Basu, M.
2014-12-01
Artificial bee colony optimization is applied to determine the optimal hourly schedule of power generation in a hydrothermal system. Artificial bee colony optimization is a swarm-based algorithm inspired by the food foraging behavior of honey bees. The algorithm is tested on a multi-reservoir cascaded hydroelectric system having prohibited operating zones and thermal units with valve point loading. The ramp-rate limits of thermal generators are taken into consideration. The transmission losses are also accounted for through the use of loss coefficients. The algorithm is tested on two hydrothermal multi-reservoir cascaded hydroelectric test systems. The results of the proposed approach are compared with those of differential evolution, evolutionary programming and particle swarm optimization. From numerical results, it is found that the proposed artificial bee colony optimization based approach is able to provide better solution.
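A compact sketch of the employed/onlooker/scout phases of artificial bee colony optimization follows, for box-constrained minimization; it is illustrative only, omitting the paper's hydrothermal constraints (ramp limits, losses, prohibited zones, valve-point loading).

```python
import numpy as np

# Basic ABC for minimizing f over the box [lo, hi] (illustrative sketch).
def abc(f, lo, hi, n_food=15, limit=30, cycles=200, seed=1):
    rng = np.random.default_rng(seed)
    d = len(lo)
    X = rng.uniform(lo, hi, (n_food, d))
    fit = np.array([f(p) for p in X])
    trials = np.zeros(n_food, dtype=int)
    best_x, best_f = X[fit.argmin()].copy(), fit.min()

    def neighbor(i):
        k = rng.integers(n_food - 1)
        k += k >= i                              # random partner k != i
        j = rng.integers(d)
        x = X[i].copy()
        x[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        return np.clip(x, lo, hi)

    def greedy(i):
        x = neighbor(i)
        fx = f(x)
        if fx < fit[i]:
            X[i], fit[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                  # employed bees
            greedy(i)
        p = fit.max() - fit + 1e-12              # fitness-biased probabilities
        p /= p.sum()
        for _ in range(n_food):                  # onlooker bees
            greedy(rng.choice(n_food, p=p))
        i = trials.argmax()                      # scout abandons a stale source
        if trials[i] > limit:
            X[i] = rng.uniform(lo, hi)
            fit[i], trials[i] = f(X[i]), 0
        if fit.min() < best_f:                   # global-best memory
            b = fit.argmin()
            best_x, best_f = X[b].copy(), fit[b]
    return best_x, best_f

# Usage: minimize the 2-D sphere function on [-5, 5]^2.
lo, hi = np.full(2, -5.0), np.full(2, 5.0)
x_best, f_best = abc(lambda p: float(np.sum(p**2)), lo, hi)
```

The scheduling application maps each food source to a candidate generation schedule, with constraint handling added on top of this skeleton.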
Zhang, Ziheng; Martin, Jonathan; Wu, Jinfeng; Wang, Haijiang; Promislow, Keith; Balcom, Bruce J
2008-08-01
Water management is critical to optimize the operation of polymer electrolyte membrane fuel cells. At present, numerical models are employed to guide water management in such fuel cells. Accurate measurements of water content variation in polymer electrolyte membrane fuel cells are required to validate these models and to optimize fuel cell behavior. We report a direct water content measurement across the Nafion membrane in an operational polymer electrolyte membrane fuel cell, employing double half k-space spin echo single point imaging techniques. The MRI measurements with T2 mapping were undertaken with a parallel plate resonator to avoid the effects of RF screening. The parallel plate resonator employs the electrodes inherent to the fuel cell to create a resonant circuit at RF frequencies for MR excitation and detection, while still operating as a conventional fuel cell at DC. Three stages of fuel cell operation were investigated: activation, operation and dehydration. Each profile was acquired in 6 min, with 6 µm nominal resolution and an SNR of better than 15.
Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2016-01-01
Communication systems are described that use geometrically shaped PSK constellations with increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding, and the locations of points within the geometrically shaped constellation change as the code rate changes.
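For reference, the conventional (uniformly spaced) M-PSK constellation that such shaped designs start from can be generated in a couple of lines; the patent's shaping moves these points to maximize parallel decoding capacity, which is not attempted here.

```python
import numpy as np

# Conventional M-PSK reference constellation: M unit-magnitude points with
# uniform phase spacing (the baseline the shaped constellations improve on).
M = 8
pts = np.exp(2j * np.pi * np.arange(M) / M)

# Minimum pairwise distance; the 2*eye term masks the zero self-distances.
min_dist = np.min(np.abs(pts[:, None] - pts[None, :]) + 2 * np.eye(M))
```

For 8-PSK the minimum chord length is 2 sin(pi/8); capacity-optimized shaping trades such symmetric spacing for better mutual information at the target SNR.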
1982-06-01
start/stop chiller optimization, and demand limiting were added. The system monitors a 7,000-ton chiller plant and controls 74 air handlers. The EMCS does ... Modify analog limits. g. Adjust setpoints of selected controllers. h. Select manual or automatic control modes. i. Enable and disable individual points ... or event schedules and controller setpoints; make nonscheduled starts and stops of equipment or disable field panels when required for routine
Parametric optimization of optical signal detectors employing the direct photodetection scheme
NASA Astrophysics Data System (ADS)
Kirakosiants, V. E.; Loginov, V. A.
1984-08-01
The problem of optimizing the parameters of an optical signal detection scheme is addressed using the concept of a receiver with direct photodetection. An expression is derived which accurately approximates the field of view (FOV) values obtained by direct computer minimization of the probability of missing a signal; optimum values of the receiver FOV were found for different atmospheric conditions characterized by the number of coherence spots and the intensity fluctuations of a plane wave. It is further pointed out that the criterion presented can possibly be used for parametric optimization of detectors operating in accordance with the Neyman-Pearson criterion.
Driving external chemistry optimization via operations management principles.
Bi, F Christopher; Frost, Heather N; Ling, Xiaolan; Perry, David A; Sakata, Sylvie K; Bailey, Simon; Fobian, Yvette M; Sloan, Leslie; Wood, Anthony
2014-03-01
Confronted with the need to significantly raise the productivity of remotely located chemistry CROs, Pfizer embraced a commitment to continuous improvement which leveraged the tools of both Lean Six Sigma and queue management theory to deliver positive, measurable outcomes. During 2012, cycle times were reduced by 48% by optimizing the work in progress and conducting a detailed workflow analysis to identify and address pinch points. Compound flow was increased by 29% by optimizing the request process and de-risking the chemistry. Underpinning both achievements was the development of close working relationships and productive communications between Pfizer and CRO chemists. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Jin, Chunlian; Balducci, Patrick J.
2013-12-01
This volume presents the battery storage evaluation tool developed at Pacific Northwest National Laboratory (PNNL), which is used to evaluate benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. This tool is based on optimal control strategies that capture multiple services from a single energy storage device. In this control strategy, at each hour, a look-ahead optimization is first formulated and solved to determine the battery base operating point. A minute-by-minute simulation is then performed to simulate the actual battery operation. This volume provides background and a manual for this evaluation tool.
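The hourly look-ahead plus minute-level simulation loop described above can be sketched as a receding-horizon rule. Everything below (the median-price threshold, the lossless battery model, the function and parameter names) is an illustrative assumption, not the PNNL tool's actual formulation:

```python
def plan_base_points(prices, horizon=4, capacity=4.0, power=1.0, soc0=2.0):
    """Each hour, look ahead over the price forecast and set the battery's
    base operating point (charge > 0, discharge < 0), respecting SOC limits."""
    soc, soc_traj, base = soc0, [soc0], []
    for t in range(len(prices)):
        window = prices[t:t + horizon]
        threshold = sorted(window)[len(window) // 2]   # median forecast price
        if prices[t] < threshold:                      # cheap hour: charge
            p = min(power, capacity - soc)
        elif prices[t] > threshold:                    # expensive hour: discharge
            p = -min(power, soc)
        else:
            p = 0.0
        soc += p
        base.append(p)
        soc_traj.append(soc)
    return base, soc_traj

prices = [20, 18, 35, 40, 22, 19, 45, 30]              # $/MWh forecast (invented)
base, soc = plan_base_points(prices)
```

A real implementation would replace the threshold rule with the look-ahead optimization (an LP or MILP) and layer the minute-by-minute dispatch simulation on top of the hourly base points.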
Prevalence scaling: applications to an intelligent workstation for the diagnosis of breast cancer.
Horsch, Karla; Giger, Maryellen L; Metz, Charles E
2008-11-01
Our goal was to investigate the effects that changes in the prevalence of cancer in a population have on the probability of malignancy (PM) output and on the optimal combination of a true-positive fraction (TPF) and a false-positive fraction (FPF) of a mammographic and sonographic automatic classifier for the diagnosis of breast cancer. We investigate how a prevalence-scaling transformation that is used to change the prevalence inherent in the computer estimates of the PM affects the numerical and histographic output of a previously developed multimodality intelligent workstation. Using Bayes' rule and the binormal model, we study how changes in the prevalence of cancer in the diagnostic breast population affect our computer classifiers' optimal operating points, as defined by maximizing the expected utility. Prevalence scaling affects the threshold at which a particular TPF and FPF pair is achieved. Tables giving the thresholds on the scaled PM estimates that result in particular pairs of TPF and FPF are presented. Histograms of PMs scaled to reflect clinically relevant prevalence values differ greatly from histograms of laboratory-designed PMs. The optimal pair (TPF, FPF) of our lower performing mammographic classifier is more sensitive to changes in clinical prevalence than that of our higher performing sonographic classifier. Prevalence scaling can be used to change computer PM output to reflect clinically more appropriate prevalence. Relatively small changes in clinical prevalence can have large effects on the computer classifier's optimal operating point.
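The prevalence-scaling transformation itself is a one-line application of Bayes' rule on the odds ratio. A minimal sketch (function name and example prevalences are ours, not the paper's):

```python
def scale_pm(pm, design_prev, clinical_prev):
    """Rescale a probability-of-malignancy estimate from the prevalence
    inherent in the training data to a target clinical prevalence by
    multiplying the odds by the prevalence odds ratio."""
    k = (clinical_prev / (1 - clinical_prev)) / (design_prev / (1 - design_prev))
    return pm * k / (pm * k + (1 - pm))

# a 50% laboratory PM, trained at 50% prevalence, maps to 0.05
# when rescaled to a 5% clinical prevalence
scaled = scale_pm(0.5, 0.5, 0.05)
```

A useful sanity property: a PM equal to the design prevalence always maps to the clinical prevalence, which is why the scaled histograms shift so dramatically.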
Determination of stresses in RC eccentrically compressed members using optimization methods
NASA Astrophysics Data System (ADS)
Lechman, Marek; Stachurski, Andrzej
2018-01-01
The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in the rectangular cross-sections are derived by integrating the equilibrium equations of cross-sections, taking account of the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists in solving the set of derived equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments have shown the existence of many points satisfying the equations with very good accuracy. Therefore, some operations from global optimization were included: starting fmincon from many points and clustering the results. The model is verified on a set of data encountered in engineering practice.
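The multistart-with-clustering strategy can be illustrated without Matlab: run a derivative-free local search from many random points inside the box and merge near-identical minima. The toy residual system and the compass-search routine below are stand-ins for the paper's cross-section equations and fmincon:

```python
import random

def compass_min(f, x0, lo, hi, step=0.25, tol=1e-9):
    """Derivative-free local search with box constraints (compass search)."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] = min(hi[i], max(lo[i], y[i] + d))
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def f(v):
    # toy residual system standing in for the derived equations:
    # x^2 + y^2 = 1 and x*y = 0.25, posed as a least-squares problem
    x, y = v
    return (x * x + y * y - 1.0) ** 2 + (x * y - 0.25) ** 2

random.seed(1)
starts = [[random.uniform(0, 2), random.uniform(0, 2)] for _ in range(30)]
sols = [compass_min(f, s, [0.0, 0.0], [2.0, 2.0]) for s in starts]
# merge near-identical minima (the "clusterization" step)
clusters = {tuple(round(c, 3) for c in x) for x, fx in sols if fx < 1e-10}
```

The cluster set collects the distinct solution points, mirroring how many fmincon runs collapse onto the handful of physically meaningful strain states.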
Vibrational self-consistent field theory using optimized curvilinear coordinates.
Bulik, Ireneusz W; Frisch, Michael J; Vaccaro, Patrick H
2017-07-28
A vibrational SCF model is presented in which the functions forming the single-mode functions in the product wavefunction are expressed in terms of internal coordinates and the coordinates used for each mode are optimized variationally. This model involves no approximations to the kinetic energy operator and does not require a Taylor-series expansion of the potential. The non-linear optimization of coordinates is found to give much better product wavefunctions than the limited variations considered in most previous applications of SCF methods to vibrational problems. The approach is tested using published potential energy surfaces for water, ammonia, and formaldehyde. Variational flexibility allowed in the current ansätze results in excellent zero-point energies expressed through single-product states and accurate fundamental transition frequencies realized by short configuration-interaction expansions. Fully variational optimization of single-product states for excited vibrational levels also is discussed. The highlighted methodology constitutes an excellent starting point for more sophisticated treatments, as the bulk characteristics of many-mode coupling are accounted for efficiently in terms of compact wavefunctions (as evident from the accurate prediction of transition frequencies).
NASA Astrophysics Data System (ADS)
Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng
2016-09-01
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of the inner flow between the original pump and the optimized one illustrates the improvement in performance. The optimization process can provide a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
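The final step, PSO searching the surrogate over (b2, β2, φ), can be sketched as follows. The quadratic "surrogate" and all its coefficients are invented stand-ins for the CFD-trained model, and the PSO is a generic textbook variant, not the authors' exact implementation:

```python
import random

def pso_max(f, bounds, n=20, iters=80, seed=3):
    """Minimal particle swarm maximizer over box bounds (sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [list(x) for x in xs]
    pbf = [f(x) for x in xs]
    gi = max(range(n), key=lambda i: pbf[i])
    gb, gbf = list(pb[gi]), pbf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.72 * vs[i][d]
                            + 1.49 * r1 * (pb[i][d] - xs[i][d])
                            + 1.49 * r2 * (gb[d] - xs[i][d]))
                xs[i][d] = min(bounds[d][1], max(bounds[d][0], xs[i][d] + vs[i][d]))
            fi = f(xs[i])
            if fi > pbf[i]:
                pb[i], pbf[i] = list(xs[i]), fi
                if fi > gbf:
                    gb, gbf = list(xs[i]), fi
    return gb, gbf

def surrogate(v):
    # hypothetical weighted efficiency surrogate for the two duty points;
    # coefficients are illustrative only
    b2, beta2, phi = v
    return 80 - 0.8 * (b2 - 14) ** 2 - 0.3 * (beta2 - 24) ** 2 - 0.05 * (phi - 118) ** 2

best, eff = pso_max(surrogate, [(10, 20), (18, 32), (90, 140)])
```

Because every evaluation is a cheap surrogate call rather than a CFX run, the swarm can afford thousands of evaluations, which is the whole point of the surrogate-plus-PSO architecture.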
NASA Astrophysics Data System (ADS)
Shrestha, K. P.; Chitrakar, S.; Thapa, B.; Dahlhaug, O. G.
2018-06-01
Erosion on hydro turbines mostly depends on impingement velocity, angle of impact, concentration, shape, size and distribution of the erodent particles and the substrate material. In the case of Francis turbines, sediment particles tend to erode more in off-design conditions than at the best efficiency point (BEP). Previous studies focused on optimized runner blade designs to reduce erosion at the design flow; however, the effect of the design change on other operating conditions was not studied. This paper demonstrates the performance of an optimized Francis turbine exposed to sediment erosion in various operating conditions. A comparative study has been carried out among five different shapes of runner and different sets of guide vane and stay vane angles. The effect of erosion is studied in terms of the average erosion density rate on the optimized Francis runner design, with the Lagrangian particle tracking method in CFD analysis. The numerical sensitivity of the results is investigated by comparing two turbulence models. Numerical results are validated against velocity measurements carried out in the actual turbine. Results show that runner blades are susceptible to more erosion at part load conditions compared to BEP, whereas for guide vanes, more erosion occurs at full load conditions. Out of the five shapes compared, Shape 5 provides an optimum combination of efficiency and erosion over the studied operating conditions.
Dynamic positioning configuration and its first-order optimization
NASA Astrophysics Data System (ADS)
Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu
2014-02-01
Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ the functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of the D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) (Semi) circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and D-optimal helical configuration which is close to the GPS constellation in 3-dimensional space. The initial design of GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. 
In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. It shows that the given total observation time determines the optimal frequency (repeatability) of moving known points and vice versa, and one way to improve the repeatability is to increase the rotational speed. Under Newton's law of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations: the first is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations, the second is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. It shows that an effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of the simplex configuration (such as the D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexible coverage and flexible repeatability. Alternately, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed. The proposed configuration optimization framework has taken the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and regular polyhedrons (regular tetrahedron, cube, regular octahedron, regular icosahedron, or regular dodecahedron) into account. It shows that the conclusions made by the proposed technique are more general and no longer limited by different sampling schemes.
By the conditional equation of the D-optimal nested helical configuration, relevant issues of GNSS constellation optimization are solved and some examples are performed with the GPS constellation to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
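The D-criterion comparison for discrete configurations sampled from the circle can be checked in a few lines. This sketch assumes unit-weight observations whose design rows are unit direction vectors to the known points, a deliberate simplification of the paper's Hilbert-space setting:

```python
import math

def d_criterion(angles):
    """det(A^T A) for a 2-D positioning design whose rows are the unit
    direction vectors [cos(a), sin(a)] to the known points."""
    sxx = sum(math.cos(a) ** 2 for a in angles)
    syy = sum(math.sin(a) ** 2 for a in angles)
    sxy = sum(math.sin(a) * math.cos(a) for a in angles)
    return sxx * syy - sxy ** 2

n = 8
uniform = [2 * math.pi * k / n for k in range(n)]   # circular configuration
clustered = [0.2 * k for k in range(n)]             # directions bunched together
# for n points the criterion is bounded by (n/2)^2, attained by the
# equally spaced (circular) configuration; the clustered design falls short
```

Running `d_criterion` on both lists shows the equally spaced sample hitting the (n/2)^2 bound while the clustered one loses determinant, which is the discrete shadow of the circular configuration's D-optimality.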
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei
One of the most important factors in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
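A genetic algorithm for a maintenance plan can be sketched on a toy cost model: each period without PM raises the expected corrective cost. The cost numbers and GA parameters below are illustrative assumptions, not the paper's simulation model:

```python
import random

T, PM_COST, CM_COST, RISK = 12, 1.0, 10.0, 0.04

def expected_cost(plan):
    """Expected cost of a binary PM plan over T periods; failure risk
    grows linearly with time since the last preventive action."""
    cost, age = 0.0, 0
    for do_pm in plan:
        if do_pm:
            cost += PM_COST
            age = 0
        else:
            age += 1
        cost += CM_COST * RISK * age   # expected corrective (CM) cost
    return cost

def ga(pop=30, gens=60, seed=7):
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(T)] for _ in range(pop - 1)]
    popn.append([0] * T)               # seed the no-PM baseline plan
    for _ in range(gens):
        popn.sort(key=expected_cost)
        elite = popn[: pop // 2]       # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, T)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:     # bit-flip mutation
                child[rng.randrange(T)] ^= 1
            children.append(child)
        popn = elite + children
    return min(popn, key=expected_cost)

best = ga()
```

Because the no-PM baseline is seeded into the population and elites survive every generation, the returned plan can never cost more than doing nothing, a cheap correctness check for this kind of planner.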
Integrated solar energy system optimization
NASA Astrophysics Data System (ADS)
Young, S. K.
1982-11-01
The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.
Safe-trajectory optimization and tracking control in ultra-close proximity to a failed satellite
NASA Astrophysics Data System (ADS)
Zhang, Jingrui; Chu, Xiaoyu; Zhang, Yao; Hu, Quan; Zhai, Guang; Li, Yanyan
2018-03-01
This paper presents a trajectory-optimization method for a chaser spacecraft operating in ultra-close proximity to a failed satellite. Based on the combination of active and passive trajectory protection, the constraints in the optimization framework are formulated for collision avoidance and successful docking in the presence of any thruster failure. The constraints are then handled by an adaptive Gauss pseudospectral method, in which the dynamic residuals are used as the metric to determine the distribution of collocation points. A finite-time feedback control is further employed in tracking the optimized trajectory. In particular, the stability and convergence of the controller are proved. Numerical results are given to demonstrate the effectiveness of the proposed methods.
Methodology and Method and Apparatus for Signaling With Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2014-01-01
Communication systems are described that use geometrically shaped constellations that have increased capacity compared to conventional constellations operating within a similar SNR band. In several embodiments, the geometrically shaped constellation is optimized based upon a capacity measure such as parallel decoding capacity or joint capacity. In many embodiments, a capacity optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding and the location of points within the geometrically shaped constellation changes as the code rate changes.
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance levels. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
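Because the electrolyser is directly coupled to the PV module, the realized operating point is the intersection of the PV I-V curve and the electrolyser polarization curve. A bisection sketch with invented model parameters (a single-diode-style PV curve and a simple polarization law, not the paper's model):

```python
import math

# illustrative parameters only: PV short-circuit current (A), diode
# saturation current (A), thermal-voltage factor; EL cell count,
# reversible voltage (V), ohmic and activation coefficients
ISC, I0, VT_A = 8.0, 1e-9, 1.9
N_CELLS, V_REV, R_EL, S_EL = 20, 1.48, 0.02, 0.05

def v_pv(i):
    """PV terminal voltage at current i (falls as i rises)."""
    return VT_A * math.log((ISC - i) / I0 + 1.0)

def v_el(i):
    """Electrolyser stack voltage at current i (rises with i)."""
    return N_CELLS * (V_REV + R_EL * i + S_EL * math.log(i + 1.0))

def operating_point(lo=1e-6, hi=ISC - 1e-6, tol=1e-10):
    """Bisect on g(i) = v_pv(i) - v_el(i), which changes sign once."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if v_pv(mid) > v_el(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

i_op = operating_point()
```

The optimisation in the paper effectively moves this intersection as close as possible to the PV maximum power point across the year's irradiance range by choosing the electrolyser size; here the curves are fixed and only the intersection is computed.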
Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry
NASA Astrophysics Data System (ADS)
Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun
2015-10-01
Beam pointing angle (BPA) is one of the key parameters that affects the operation performance of the laser Doppler velocimetry (LDV) system. By considering velocity sensitivity and echo power, for the first time, the optimized BPA of vehicle LDV is analyzed. Assuming mounting error is within ±1.0 deg, the reflectivity and roughness are variable for different scenarios, the optimized BPA is obtained in the range from 29 to 43 deg. Therefore, velocity sensitivity is in the range of 1.25 to 1.76 MHz/(m/s), and the percentage of normalized echo power at optimized BPA with respect to that at 0 deg is greater than 53.49%. Laboratory experiments with a rotating table are done with different BPAs of 10, 35, and 66 deg, and the results coincide with the theoretical analysis. Further, vehicle experiment with optimized BPA of 35 deg is conducted by comparison with microwave radar (accuracy of ±0.5% full scale output). The root-mean-square error of LDV's results is smaller than the Microstar II's, 0.0202 and 0.1495 m/s, corresponding to LDV and Microstar II, respectively, and the mean velocity discrepancy is 0.032 m/s. It is also proven that with the optimized BPA both high velocity sensitivity and acceptable echo power can simultaneously be guaranteed.
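The quoted numbers pin down a simple tradeoff model: the sensitivities at 29 and 43 deg are consistent with S(θ) = 2·sinθ/λ for λ ≈ 775 nm (our inference, not stated in the abstract), and the 53.49% echo figure matches a cos²θ roll-off. A sketch that recovers the optimized BPA from these two curves:

```python
import math

LAMBDA = 775e-9   # wavelength (m); inferred so that S(29 deg) is about 1.25 MHz/(m/s)

def sensitivity(theta_deg):
    """Doppler velocity sensitivity 2*sin(theta)/lambda in MHz per m/s."""
    return 2.0 * math.sin(math.radians(theta_deg)) / LAMBDA / 1e6

def echo_fraction(theta_deg):
    """Normalized echo power relative to 0 deg, modeled here as cos^2(theta)."""
    return math.cos(math.radians(theta_deg)) ** 2

# choose the largest angle (maximum sensitivity) whose echo power stays
# above the 53.49% floor reported in the abstract
candidates = [t / 10.0 for t in range(0, 601)]
feasible = [t for t in candidates if echo_fraction(t) >= 0.5349]
best_bpa = max(feasible, key=sensitivity)
```

With these assumed models the search lands at about 43 deg, the top of the 29 to 43 deg range reported above; the real analysis also folds in reflectivity, roughness, and the ±1.0 deg mounting error.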
Runway Operations Planning: A Two-Stage Solution Methodology
NASA Technical Reports Server (NTRS)
Anagnostakis, Ioannis; Clarke, John-Paul
2003-01-01
The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, ROP is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans was previously approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, ATC constraints) at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces ROP as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the ROP problem. Specific focus is given to including runway crossings in the planning process. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions.
In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program. Preliminary results from the algorithm implementation on real-world traffic data are included.
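The second-stage assignment can be illustrated on a tiny instance. The data and the brute-force enumeration below are invented stand-ins for the paper's integer program, which would scale to realistic slot counts:

```python
from itertools import permutations

# slot sequence fixed by stage 1 (weight classes and runway times);
# flight classes and earliest ready times are illustrative
slot_class = ["H", "M", "M", "H"]
slot_time = [0, 2, 4, 6]
flights = {"F1": ("H", 0), "F2": ("H", 3), "F3": ("M", 1), "F4": ("M", 2)}

def best_assignment():
    """Match flights to class slots, minimizing total delay
    (slot time minus ready time), respecting class compatibility."""
    names = list(flights)
    best, best_cost = None, float("inf")
    for perm in permutations(names):
        cost, ok = 0, True
        for slot, name in enumerate(perm):
            cls, ready = flights[name]
            if cls != slot_class[slot] or slot_time[slot] < ready:
                ok = False
                break
            cost += slot_time[slot] - ready
        if ok and cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

assignment, delay = best_assignment()
```

In this instance the heavy slots are forced (only F1 is ready for the first slot), and the two medium flights can fill their slots in either order at the same total delay, which is exactly the kind of tie a fairness objective would break.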
NASA Technical Reports Server (NTRS)
1981-01-01
The objective of the study was to generate the system design of a performance-optimized, advanced LOX/hydrogen expander cycle space engine. The engine requirements are summarized, and the development and operational experience with the expander cycle RL10 engine were reviewed. The engine development program is outlined.
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
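The acceptance test in paragraph (2) reduces to a least-squares fit plus two tolerance checks. A sketch (function name ours, thresholds taken from the rule's text):

```python
def linearity_check(conc, resp, full_scale):
    """Fit a best-fit straight line through the calibration points, then
    require each non-zero point within 2% of its value and the zero point
    within +/-0.3% of full scale."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(resp) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, resp)) / \
            sum((x - mx) ** 2 for x in conc)
    intercept = my - slope * mx
    for x, y in zip(conc, resp):
        fit = slope * x + intercept
        if x == 0:
            if abs(fit - y) > 0.003 * full_scale:
                return False
        elif abs(fit - y) > 0.02 * y:
            return False
    return True

# exactly linear calibration data passes the criteria
ok = linearity_check([0, 20, 40, 60, 80], [0.0, 20.4, 40.8, 61.2, 81.6], 100.0)
```

When the check passes, the rule allows concentration values to be computed from a single calibration response factor rather than the full curve.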
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale access of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power was analyzed, and, starting from improving the learning factor and the inertia weight, an improved particle swarm optimization algorithm (IPSO) was proposed to solve distributed generation planning for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can effectively reduce system network loss and improve the economic performance of system operation with distributed generation.
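The "improved learning factor and inertia weight" idea is commonly realized as linearly varying schedules: a large inertia weight and cognitive factor early (exploration), shifting to a large social factor late (exploitation). The endpoints below are a typical choice, not necessarily the paper's exact values:

```python
def ipso_params(t, t_max, w=(0.9, 0.4), c1=(2.5, 0.5), c2=(0.5, 2.5)):
    """Return (inertia weight, cognitive factor c1, social factor c2)
    linearly interpolated between start and end values over t_max steps."""
    frac = t / t_max
    def lerp(ab):
        return ab[0] + (ab[1] - ab[0]) * frac
    return lerp(w), lerp(c1), lerp(c2)
```

Each PSO iteration would call `ipso_params(t, t_max)` before the velocity update, so early particles roam the feasible siting/sizing space while late particles refine around the incumbent plan.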
Numerical Simulation of the Francis Turbine and CAD used to Optimized the Runner Design (2nd).
NASA Astrophysics Data System (ADS)
Sutikno, Priyono
2010-06-01
Hydro power is the most important renewable energy source on earth. The water is free of charge, and in the generation of electric energy in a hydroelectric power station the production of greenhouse gases (mainly CO2) is negligible. Hydro power generation stations are long-term installations that can be used for 50 years and more, so care must be taken to guarantee smooth and safe operation over the years. Maintenance is necessary, and critical parts of the machines have to be replaced if required. Within modern engineering, numerical flow simulation plays an important role in optimizing the hydraulic turbine in conjunction with the connected components of the plant. Especially for rehabilitation and upgrading of existing power plants, important points of concern are to predict the power output of the turbine, to achieve maximum hydraulic efficiency, to avoid or minimize cavitation, and to avoid or minimize vibrations over the whole operating range. Flow simulation can help to solve operational problems and to optimize the turbomachinery for hydroelectric generating stations or their components through intuitive optimization, mathematical optimization, parametric design, the reduction of cavitation through design, prediction of the draft tube vortex, and troubleshooting by using the simulation. The classic design by graphic-analytical methods is cumbersome and cannot clearly reveal the positive or negative aspects of the design options, so it became necessary to replace the classical design methods with an adequate design method using CAD software. Many options chosen during the design calculation at a specific step can be verified both as an ensemble and in detail. The final graphic post-processing is realized only for the optimal solution, through a 3D representation of the runner as a whole, for final approval of the geometric shape.
In this article, the redesign of a medium-head Francis-type hydraulic turbine runner was investigated, with the rated specific speed ns as the most important parameter.
A new adaptive light beam focusing principle for scanning light stimulation systems.
Bitzer, L A; Meseth, M; Benson, N; Schmechel, R
2013-02-01
In this article a novel principle to achieve optimal focusing conditions, or rather the smallest possible beam diameter, for scanning light stimulation systems is presented. It is based on the following methodology: First, a reference point on a camera sensor is introduced where optimal focusing conditions are adjusted, and the distance between the light-focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted using optimal focusing conditions at each measurement point on the sample surface, which are determined from the height difference between the camera sensor and the sample topography. This principle is independent of the measurement values, the optical or electrical properties of the sample, the light source used, or the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. In the following, the principle is implemented using an optical beam induced current (OBIC) system, but it can in principle be applied to any other scanning light stimulation system. Measurements demonstrating its operation are shown, using a polycrystalline silicon solar cell.
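The final step, turning the topography map into per-point focus corrections, can be sketched in a few lines. The geometry below (heights measured from an assumed sensor standoff, positive correction meaning the optic must retract) is our convention for illustration, not the authors' implementation:

```python
def focus_offsets(d_ref, topo_heights, d_sensor):
    """Per-point focus corrections for a scanning light stimulation system.

    d_ref        : optic-to-surface distance at which the beam was focused
                   on the camera-sensor reference point
    topo_heights : {(x, y): height} map from the displacement sensor
    d_sensor     : sensor standoff used when mapping the sample topography
    """
    # move the optic so each surface point sits at the focal distance d_ref;
    # a taller point (larger h) is closer, so the optic must retract less
    return {xy: (d_sensor - h) - d_ref for xy, h in topo_heights.items()}
```

During the scan, the stage (or optic) is shifted by the returned offset at each (x, y) before the measurement is taken, keeping the spot at its minimum diameter regardless of tilt or surface roughness.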
Two-stage fan. 4: Performance data for stator setting angle optimization
NASA Technical Reports Server (NTRS)
Burger, G. D.; Keenan, M. J.
1975-01-01
Stator setting angle optimization tests were conducted on a two-stage fan to improve efficiency at overspeed, stall margin at design speed, and both efficiency and stall margin at part speed. The fan has a design pressure ratio of 2.8, a flow rate of 184.2 lb/sec (83.55 kg/sec), and a 1st-stage rotor tip speed of 1450 ft/sec (441.96 m/sec). Performance was obtained at 70, 100, and 105 percent of design speed with different combinations of 1st-stage and 2nd-stage stator settings. One combination of settings, other than design, was common to all three speeds. At design speed, a 2.0 percentage point increase in stall margin was obtained at the expense of a 1.3 percentage point efficiency decrease. At 105 percent speed, efficiency was improved by 1.8 percentage points but stall margin decreased 4.7 percentage points. At 70 percent speed, no change in stall margin or operating line efficiency was obtained with stator resets, although considerable speed-flow regulation occurred.
Yu Wei; Matthew P. Thompson; Jessica R. Haas; Gregory K. Dillon; Christopher D. O’Connor
2018-01-01
This study introduces a large fire containment strategy that builds upon recent advances in spatial fire planning, notably the concept of potential wildland fire operation delineations (PODs). Multiple PODs can be clustered together to form a "box" that is referred to as the "response POD" (or rPOD). Fire lines would be built along the boundary of an rPOD to contain a...
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; George, Thomas; Tarbell, Mark A.
2007-04-01
Robotic reconnaissance operations are called for in extreme environments, not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also in potentially hazardous or inaccessible operational areas on Earth, such as mine fields, battlefield environments, enemy occupied territories, terrorist infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering and safety constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit atmosphere surface/subsurface) and multi-agent (satellite UAV/blimp surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed reconnaissance in real time. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability.
NASA Astrophysics Data System (ADS)
Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.
2015-12-01
We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. 
Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
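The randomized trace estimation mentioned above can be illustrated with a minimal Hutchinson-style sketch. In the paper the covariance operator is only available through PDE solves, so the estimator is written here against a matrix-vector product; the diagonal test matrix and sample count below are illustrative stand-ins:

```python
import numpy as np

def hutchinson_trace(apply_A, n, n_samples=200, rng=None):
    """Estimate trace(A) with Hutchinson's randomized estimator.

    apply_A: function computing A @ v for a vector v (A is n x n, implicit).
    Uses Rademacher probe vectors z with +/-1 entries: E[z^T A z] = trace(A).
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ apply_A(z)
    return total / n_samples

# Example: a matrix available only through matrix-vector products.
A = np.diag(np.arange(1.0, 11.0))          # trace = 55
est = hutchinson_trace(lambda v: A @ v, 10, n_samples=500, rng=0)
```

In the OED setting, `apply_A` would wrap the pair of PDE solves that apply the Gaussianized posterior covariance to a vector, which is why the cost per probe is a fixed number of forward/adjoint solves regardless of the discretization dimension.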
Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.
2009-04-01
Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experiences with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize dose distribution for Gammamed Plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length. Thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameters and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was used for all optimizations. The effects of various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points.
For both non-apex and apex models of vaginal cylinders, doses for the apex point and three dome points were higher for the apex model than for the non-apex model. Mean doses to the optimization points for both cylinder models and all cylinder diameters were 6 Gy, matching the prescription dose of 6 Gy. The iterative optimization routine resulted in the highest dose to the apex point and dome points. The mean dose at the optimization points was 6.01 Gy for iterative optimization, much higher than the 5.74 Gy for the geometric and equal-times routines. A step size of 1 cm gave the highest dose to the apex point, and this step size was superior in terms of mean dose to the optimization points. Selection of dose optimization points for the derivation of optimized dose distributions for vaginal cylinders affects the dose distributions.
Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm
Yan, Li; Xie, Hong; Chen, Changjun
2017-01-01
Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions in the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. A registration integrating the well-known ICP with the GA is further proposed to accelerate the optimization, and its optimization time decreases by about 50%. PMID:28850100
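As a rough illustration of GA-based rigid registration (not the paper's implementation, which uses GPS/orientation constraints and the Normalized Sum of Matching Scores fitness), the sketch below evolves a 2D rotation-plus-translation with a simple truncation-selection GA on synthetic correspondences; all parameters are illustrative:

```python
import numpy as np

def transform(pts, theta, tx, ty):
    """Apply a 2D rigid transform (rotation theta, translation tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return pts @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])

def fitness(params, src, dst):
    """Negative mean squared distance between transformed source and target."""
    return -np.mean(np.sum((transform(src, *params) - dst) ** 2, axis=1))

def ga_register(src, dst, pop_size=60, gens=150, rng=None):
    """Truncation-selection GA over (theta, tx, ty) with shrinking mutation."""
    rng = np.random.default_rng(rng)
    lo = np.array([-np.pi, -5.0, -5.0])
    hi = np.array([np.pi, 5.0, 5.0])
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for g in range(gens):
        scores = np.array([fitness(p, src, dst) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 4]]   # keep best quarter
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        sigma = 0.3 * (1 - g / gens) + 1e-3                      # anneal mutation
        children = np.clip(children + rng.normal(0, sigma, children.shape), lo, hi)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(p, src, dst) for p in pop])
    return pop[np.argmax(scores)]

# Synthetic check: recover a known rigid transform from correspondences.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(40, 2))
dst = transform(src, 0.6, 1.5, -0.8)
est = ga_register(src, dst, rng=2)
```

A real TLS registration would replace the known-correspondence error with a matching score over nearest neighbors, which is exactly where a GA helps: the fitness need not be differentiable.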
Welker, A; Wolcke, B; Schleppers, A; Schmeck, S B; Focke, U; Gervais, H W; Schmeck, J
2010-10-01
The introduction of the diagnosis-related groups reimbursement system has increased cost pressures. Due to the interaction of many different professional groups, analysis and optimization of internal coordination and scheduling in the operating room (OR) is mandatory. The aim of this study was to analyze the processes at a university hospital in order to optimize strategies by identifying potential weak points. Over a period of 6 weeks before and 4 weeks after the intervention, process time intervals in the OR of a tertiary care hospital (university hospital) were documented on a structured data collection sheet. The main reason for the lack of labor efficiency was underutilization of the OR. Multifactorial reasons, particularly in the management of perioperative interfaces, led to vacant ORs. A significant deficit was found in the use of OR capacity at the end of the daily OR schedule. After harmonization of the working hours of different staff groups and implementation of several other changes, an increase in efficiency was verified. These results indicate that optimization of perioperative processes contributes considerably to the success of OR organization. Additionally, the implementation of standard operating procedures and a generally accepted OR statute is mandatory. In this way an efficient OR management can contribute to the economic success of a hospital.
Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen
In the traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of load forecasting techniques can provide accurate predictions of future load power and more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions over a longer time period instead of a snapshot of the load at the moment of reconfiguration, providing information that allows the distribution system operator (DSO) to better operate the system reconfiguration and achieve optimal solutions. Thus, this paper proposes a short-term load forecasting based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a support vector regression (SVR) based forecaster and parallel parameter optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with minimum loss at the future time. The simulation results validate and evaluate the proposed approach.
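As a sketch of the forecast-then-optimize idea, the toy below fits an RBF kernel ridge regressor (a deliberate stand-in for the paper's SVR forecaster) to a synthetic daily load curve, with a small grid search in place of the parallel parameter optimization; the load model, feature encoding, and parameter grids are all illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF (Gaussian) kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelRidge:
    """RBF kernel ridge regression: closed-form stand-in for SVR."""
    def __init__(self, gamma=1.0, alpha=1e-3):
        self.gamma, self.alpha = gamma, alpha
    def fit(self, X, y):
        K = rbf_kernel(X, X, self.gamma)
        self.X_ = X
        self.coef_ = np.linalg.solve(K + self.alpha * np.eye(len(X)), y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X_, self.gamma) @ self.coef_

def grid_search(X, y, gammas, alphas, n_train):
    """Pick (gamma, alpha) minimizing error on a held-out tail of the series."""
    best, best_err = None, np.inf
    for g in gammas:
        for a in alphas:
            m = KernelRidge(g, a).fit(X[:n_train], y[:n_train])
            err = np.mean((m.predict(X[n_train:]) - y[n_train:]) ** 2)
            if err < best_err:
                best, best_err = (g, a), err
    return best

# Toy 2-day load curve (MW) predicted from cyclic hour-of-day features.
hours = np.arange(0, 48, 0.5)
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + 5 * np.cos(4 * np.pi * hours / 24)
X = np.column_stack([np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24)])
g, a = grid_search(X, load, [0.5, 1.0, 2.0], [1e-4, 1e-2], n_train=72)
model = KernelRidge(g, a).fit(X, load)
```

The forecasted load from such a model, rather than a single measured snapshot, would then feed the reconfiguration optimizer in the paper's pipeline.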
Cui, Borui; Gao, Dian-ce; Xiao, Fu; ...
2016-12-23
This article provides a method for comprehensively evaluating the cost-saving potential of active cool thermal energy storage (CTES) integrated with an HVAC system for demand management in non-residential buildings. The active storage is beneficial by shifting peak demand for peak load management (PLM) as well as providing longer duration and larger capacity of demand response (DR). In this research, a model-based optimal design method using a genetic algorithm is developed to optimize the capacity of active CTES, aiming to maximize the life-cycle cost saving with respect to the capital cost associated with storage capacity as well as incentives from both fast DR and PLM. In the method, the active CTES operates under a fast DR control strategy during DR events and under the storage-priority operation mode to shift peak demand during normal days. The optimal storage capacities, maximum annual net cost saving and corresponding power reduction set-points during DR events are obtained by using the proposed optimal design method. Lastly, this research provides guidance in comprehensive evaluation of the cost-saving potential of CTES integrated with HVAC systems for building demand management including both fast DR and PLM.
Study optimizes gas lift in Gulf of Suez field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Waly, A.A.; Darwish, T.A.; Osman Salama, A.
1996-06-24
A study using PVT data combined with fluid and multiphase flow correlations optimized gas lift in the Ramadan field, Nubia C, oil wells, in the Gulf of Suez. Selection of appropriate correlations followed by multiphase flow calculations at various points of injection (POI) were the first steps in the study. After determining the POI for each well from actual pressure and temperature surveys, the study constructed lift gas performance curves for each well. Actual and optimum operating conditions were compared to determine the optimal gas lift. The study indicated a net 2,115 bo/d could be gained from implementing its recommendations. The actual net oil gained as a result of this optimization and injected-gas reallocation was 2,024 bo/d. The paper discusses the Ramadan field, fluid properties, multiphase flow, production optimization, and results.
Switching and optimizing control for coal flotation process based on a hybrid model
Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang
2017-01-01
Flotation is an important part of coal preparation, and the flotation column is widely applied as efficient flotation equipment. This process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model for optimizing the reagent dosages is designed based on an expert system. Finally, the least squares support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305
Optimizing the Attitude Control of Small Satellite Constellations for Rapid Response Imaging
NASA Astrophysics Data System (ADS)
Nag, S.; Li, A.
2016-12-01
Distributed Space Missions (DSMs), such as formation flight and constellations, are being recognized as important solutions to increase measurement samples over space and time. Given the increasingly accurate attitude control systems emerging in the commercial market, small spacecraft now have the ability to slew and point within a few minutes of notice. In spite of hardware development in CubeSats at the payload (e.g. NASA InVEST) and subsystem (e.g. Blue Canyon Technologies) levels, software development for tradespace analysis in constellation design (e.g. Goddard's TAT-C), planning and scheduling development for single spacecraft (e.g. GEO-CAPE) and aerial flight path optimization for UAVs (e.g. NASA Sensor Web), there is a gap in open-source, open-access software tools for planning and scheduling distributed satellite operations in terms of pointing and observing targets. This paper will demonstrate results from a tool being developed for scheduling pointing operations of narrow field-of-view (FOV) sensors over a mission lifetime to maximize metrics such as global coverage and revisit statistics. Past research has shown the need for at least fourteen satellites to cover the Earth globally every day using a Landsat-like sensor. Tripling the FOV reduces the requirement to four satellites, but adds image distortion and BRDF complexities to the observed reflectance. If narrow-FOV sensors on a small satellite constellation were commanded by robust algorithms to slew dynamically, they could cover the global landmass in a coordinated manner much faster without compromising spatial resolution or incurring BRDF effects. Our algorithm to optimize constellation satellite pointing is based on a dynamic programming approach under the constraints of orbital mechanics and existing attitude control systems for small satellites.
As a case study for our algorithm, we minimize the time required to cover the 17000 Landsat images with maximum signal to noise ratio fall-off and minimum image distortion among the satellites, using Landsat's specifications. Attitude-specific constraints such as power consumption, response time, and stability were factored into the optimality computations. The algorithm can integrate cloud cover predictions, specific ground and air assets and angular constraints.
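A dynamic-programming formulation of slew-constrained pointing, in the spirit described above, can be sketched on a toy grid where the sensor may move at most one angle bin per time step; the reward matrix below is illustrative, not mission data:

```python
import numpy as np

def plan_pointing(reward, max_slew=1):
    """DP over (time step, pointing angle bin).

    reward[t, a] = value of pointing at angle bin a during step t.
    Between consecutive steps the sensor may move at most `max_slew`
    bins; the recursion maximizes total collected reward.
    """
    T, A = reward.shape
    best = reward[0].copy()                 # best value ending at each bin
    back = np.zeros((T, A), dtype=int)      # backpointers for path recovery
    for t in range(1, T):
        new = np.full(A, -np.inf)
        for a in range(A):
            lo, hi = max(0, a - max_slew), min(A, a + max_slew + 1)
            prev = int(np.argmax(best[lo:hi])) + lo
            back[t, a] = prev
            new[a] = best[prev] + reward[t, a]
        best = new
    # Recover the optimal pointing sequence by walking the backpointers.
    path = [int(np.argmax(best))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(best.max())

reward = np.array([[1, 0, 0],
                   [0, 0, 5],
                   [0, 3, 0]], dtype=float)
path, value = plan_pointing(reward, max_slew=1)   # path [1, 2, 1], value 8.0
```

A mission-scale version would derive `reward` from target visibility, SNR fall-off and distortion penalties, and fold power and stability limits into `max_slew`.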
Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan
2015-01-01
To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening for abnormal glucose metabolism and estimating prevalence in a general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2-hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by the two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent and 9.0% of participants were diagnosed with pre-diabetes using the 2003 ADA criteria and the 1999 WHO criteria, respectively. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimation of prevalence was reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for a two-step strategy in a Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic conditions. PMID:25785585
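The ROC-based choice of a screening cut-off like those above is commonly done by maximizing Youden's J over candidate thresholds; a minimal sketch, with toy FPG-like values that are illustrative rather than the study's data:

```python
import numpy as np

def youden_cutoff(neg, pos):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1.

    neg, pos: marker values for non-diseased and diseased subjects;
    higher marker values are taken to indicate disease.
    """
    cuts = np.unique(np.concatenate([neg, pos]))
    best_c, best_j = None, -np.inf
    for c in cuts:
        sens = np.mean(pos >= c)          # true positive fraction at cut c
        spec = np.mean(neg < c)           # true negative fraction at cut c
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Toy FPG-like data (mmol/l); values above the cut suggest dysglycemia.
neg = np.array([4.4, 4.6, 4.8, 5.0, 5.1, 5.2, 5.4])
pos = np.array([5.3, 5.6, 5.8, 6.1, 6.5, 7.0])
cut, j = youden_cutoff(neg, pos)          # cut 5.3, J = 6/7
```

In practice the cut-off is also weighed against screening cost per case, as the study does, rather than by J alone.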
NASA Astrophysics Data System (ADS)
Kanka, Jiri
2012-06-01
A fiber-optic long-period grating (LPG) operating near the dispersion turning point of its phase matching curve (PMC), referred to as a turn-around-point (TAP) LPG, is known to be extremely sensitive to external parameters. Moreover, in a TAP LPG the phase matching condition can be almost satisfied over a large spectral range, yielding broadband LPG operation. TAP LPGs have been investigated notably for use as broadband mode convertors and biosensors. So far, TAP LPGs have been realized in specially designed or post-processed conventional fibers, but not yet in photonic crystal fibers (PCFs), which allow a great degree of freedom in engineering the fiber's dispersion properties through control of the PCF structural parameters. We have developed a design optimization technique for TAP PCF LPGs employing the finite element method for PCF modal analysis in combination with the Nelder-Mead simplex method for minimizing an objective function based on target-specific PCF properties. Using this tool we have designed TAP PCF LPGs for specified wavelength ranges and refractive indices of the medium in the air holes. Possible TAP PCF-LPG operational regimes - dual-resonance, broadband mode conversion and transmitted-intensity-based operation - will be demonstrated numerically. The potential and limitations of TAP PCF-LPGs for evanescent chemical and biochemical sensing will be assessed.
Pressure Pulsation in a High Head Francis Turbine Operating at Variable Speed
NASA Astrophysics Data System (ADS)
Sannes, D. B.; Iliev, I.; Agnalt, E.; Dahlhaug, O. G.
2018-06-01
This paper presents the preliminary work of the author's master thesis, written at the Norwegian University of Science and Technology. Today, many Francis turbines experience the formation of cracks in the runner due to pressure pulsations, which can eventually cause failure. One way to reduce this effect is to change the operating point of the turbine by utilizing variable speed technology. This work presents the results from measurements of the Francis turbine at the Waterpower Laboratory at NTNU. Measurements of pressure pulsations and efficiency were made over the whole operating range of a high head Francis model turbine. The results will be presented in a diagram similar to the Hill chart, but with curves of constant peak-to-peak values instead of constant efficiency curves. This way, it is possible to find an optimal operating point for the same power production where the pressure pulsations are at their lowest. Six points were chosen for further analysis to investigate the effect of changing the speed by ±50 rpm. The analysis shows the best results for operation below BEP when the speed was reduced. The change in speed also introduced the possibility of other frequencies in the system. It is therefore important to avoid runner speeds that can cause resonance in the system.
Fundamental procedures of geographic information analysis
NASA Technical Reports Server (NTRS)
Berry, J. K.; Tomlin, C. D.
1981-01-01
Analytical procedures common to most computer-oriented geographic information systems are composed of fundamental map processing operations. A conceptual framework for such procedures is developed and basic operations common to a broad range of applications are described. Among the major classes of primitive operations identified are those associated with: reclassifying map categories as a function of the initial classification, the shape, the position, or the size of the spatial configuration associated with each category; overlaying maps on a point-by-point, a category-wide, or a map-wide basis; measuring distance; establishing visual or optimal path connectivity; and characterizing cartographic neighborhoods based on the thematic or spatial attributes of the data values within each neighborhood. By organizing such operations in a coherent manner, the basis for a generalized cartographic modeling structure can be developed which accommodates a variety of needs in a common, flexible and intuitive manner. The use of each is limited only by the general thematic and spatial nature of the data to which it is applied.
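Two of the primitive operations described above, reclassification and point-by-point overlay, reduce to simple per-cell array operations on co-registered rasters; a minimal sketch with illustrative layers and categories:

```python
import numpy as np

# Two co-registered raster layers: land cover codes and elevation (m).
cover = np.array([[1, 1, 2],
                  [2, 3, 3],
                  [1, 2, 3]])            # 1=forest, 2=grass, 3=water
elev  = np.array([[120, 180, 150],
                  [200, 90, 80],
                  [140, 210, 60]])

# Reclassify: map each cover category to a suitability score.
scores = {1: 3, 2: 2, 3: 0}
suit = np.vectorize(scores.get)(cover)

# Point-by-point overlay: suitable cells are non-water AND below 160 m.
mask = (cover != 3) & (elev < 160)
```

Category-wide and map-wide overlays, distance measures and neighborhood characterization compose from the same cell-aligned primitives, which is the modeling structure the paper argues for.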
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
NASA Astrophysics Data System (ADS)
Onut, S.; Kamber, M. R.; Altay, G.
2014-03-01
The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while meeting customer demands. Each point should be visited exactly once, by one vehicle, on one route. The total demand on a route should not exceed the capacity of the vehicle assigned to that route. VRPs vary according to real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. This study uses real-world data obtained from a company that operates in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS and the optimal solution is found in a reasonable time.
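The paper solves an exact model in GAMS; as a simpler illustration of the heterogeneous-fleet setting, the sketch below builds routes with a nearest-neighbour construction heuristic over hypothetical depot, customer and fleet data (all values invented for the example):

```python
import math

# Hypothetical depot and customers: id -> ((x, y), LPG demand in tonnes).
depot = (0.0, 0.0)
customers = {1: ((2, 1), 0.8), 2: ((5, 0), 1.5), 3: ((1, 4), 0.6),
             4: ((6, 3), 1.2), 5: ((2, 5), 0.9)}
# Heterogeneous fleet: (capacity in tonnes, cost per km).
fleet = [(2.0, 1.0), (2.0, 1.0), (3.0, 1.4)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_routes(customers, fleet, depot):
    """Each vehicle repeatedly visits the closest unserved customer
    whose demand still fits its remaining capacity, then returns."""
    unserved = dict(customers)
    routes, total_cost = [], 0.0
    for cap, cost_km in fleet:
        pos, route, left = depot, [], cap
        while True:
            feas = [(dist(pos, c[0]), i) for i, c in unserved.items() if c[1] <= left]
            if not feas:
                break
            d, i = min(feas)
            total_cost += d * cost_km
            pos = unserved[i][0]
            left -= unserved[i][1]
            route.append(i)
            del unserved[i]
        total_cost += dist(pos, depot) * cost_km   # return leg to depot
        routes.append(route)
    return routes, total_cost, unserved

routes, cost, unserved = greedy_routes(customers, fleet, depot)
```

Such a heuristic only yields a feasible starting solution; an exact MILP formulation of the kind the paper solves is needed to certify optimality.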
The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware
NASA Astrophysics Data System (ADS)
Kathiara, Jainik
There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores involve floating-point operations. Due to the complexity and expense of floating-point hardware, these algorithms are usually converted to fixed-point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and hence implementation of floating-point hardware becomes a feasible option. In this research we have implemented a high-performance, autonomous floating-point vector coprocessor (FPVC) that works independently within an embedded processor system. We present a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing the vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts the FPVC's configuration to provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
An inverter/controller subsystem optimized for photovoltaic applications
NASA Technical Reports Server (NTRS)
Pickrell, R. L.; Merrill, W. C.; Osullivan, G.
1978-01-01
Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. This paper discusses the optimization of the inverter/controller design as part of an overall Photovoltaic Power System (PPS) designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: (1) a power system controller (PSC) to control continuously the solar array operating point at the maximum power level based on variable solar insolation and cell temperatures; and (2) an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy. It must be capable of operating connected to the utility line at a level set by an external controller (PSC).
Whole Device Modeling of Compact Tori: Stability and Transport Modeling of C-2W
NASA Astrophysics Data System (ADS)
Dettrick, Sean; Fulton, Daniel; Lau, Calvin; Lin, Zhihong; Ceccherini, Francesco; Galeotti, Laura; Gupta, Sangeeta; Onofri, Marco; Tajima, Toshiki; TAE Team
2017-10-01
Recent experimental evidence from the C-2U FRC experiment shows that the confinement of energy improves with inverse collisionality, similar to other high beta toroidal devices, NSTX and MAST. This motivated the construction of a new FRC experiment, C-2W, to study the energy confinement scaling at higher electron temperature. Tri Alpha Energy is working towards catalysing a community-wide collaboration to develop a Whole Device Model (WDM) of Compact Tori. One application of the WDM is the study of stability and transport properties of C-2W using two particle-in-cell codes, ANC and FPIC. These codes can be used to find new stable operating points, and to make predictions of the turbulent transport at those points. They will be used in collaboration with the C-2W experimental program to validate the codes against C-2W, mitigate experimental risk inherent in the exploration of new parameter regimes, accelerate the optimization of experimental operating scenarios, and to find operating points for future FRC reactor designs.
Body mass index cut-points to identify cardiometabolic risk in black South Africans.
Kruger, H Salome; Schutte, Aletta E; Walsh, Corinna M; Kruger, Annamarie; Rennie, Kirsten L
2017-02-01
To determine optimal body mass index (BMI) cut-points for the identification of cardiometabolic risk in black South African adults. We performed a cross-sectional study of a weighted sample of healthy black South Africans aged 25-65 years (721 men, 1386 women) from the North West and Free State Provinces. Demographic, lifestyle and anthropometric measures were taken, and blood pressure, fasting serum triglycerides, high-density lipoprotein (HDL) cholesterol and blood glucose were measured. We defined elevated cardiometabolic risk as having three or more risk factors according to international metabolic syndrome criteria. Receiver operating characteristic curves were applied to identify an optimal BMI cut-point for men and women. BMI had good diagnostic performance in identifying clustering of three or more risk factors, as well as the individual risk factors low HDL-cholesterol and elevated fasting glucose and triglycerides, with areas under the curve >0.6, but not high blood pressure. Optimal BMI cut-points averaged 22 kg/m² for men and 28 kg/m² for women, with better sensitivity in men (44.0-71.9%) and in women (60.6-69.8%) than a BMI of 30 kg/m² (17.0-19.1% and 53.0-61.4%, respectively). Men and women with a BMI >22 and >28 kg/m², respectively, had a significantly increased probability of elevated cardiometabolic risk after adjustment for age, alcohol use and smoking. In black South African men, a BMI cut-point of 22 kg/m² identifies those at cardiometabolic risk, whereas a BMI of 30 kg/m² underestimates risk. In women, a cut-point of 28 kg/m², approaching the WHO obesity cut-point, identifies those at risk.
Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C
2017-06-01
The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of the Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biometrics 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well, while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three-class and the general k-class cases, which involve the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A, which has been used to characterize cognitive impairment in patients with Parkinson's disease.
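For three ordered classes, one common form of the generalized Youden criterion picks a threshold pair t1 < t2 maximizing the sum of the three true class fractions; a toy grid-search sketch (data invented, not the Trail Making Test sample):

```python
from itertools import combinations

def generalized_youden(c1, c2, c3):
    """Grid-search thresholds t1 < t2 maximizing TCF1 + TCF2 + TCF3.

    c1, c2, c3: marker values for the three ordered classes
    (class 1 low, class 2 intermediate, class 3 high).
    """
    candidates = sorted(set(c1) | set(c2) | set(c3))
    best = (None, None, -1.0)
    for t1, t2 in combinations(candidates, 2):
        tcf1 = sum(x <= t1 for x in c1) / len(c1)        # class 1 correctly low
        tcf2 = sum(t1 < x <= t2 for x in c2) / len(c2)   # class 2 in the middle band
        tcf3 = sum(x > t2 for x in c3) / len(c3)         # class 3 correctly high
        s = tcf1 + tcf2 + tcf3
        if s > best[2]:
            best = (t1, t2, s)
    return best

healthy = [1, 2, 2, 3, 4]
mild    = [3, 4, 5, 5, 6]
severe  = [6, 7, 8, 9, 9]
t1, t2, s = generalized_youden(healthy, mild, severe)
```

With overlapping class distributions several threshold pairs can tie on this empirical criterion, which is precisely why the article's confidence regions for the estimated true class fractions matter.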
SymPix: A Spherical Grid for Efficient Sampling of Rotationally Invariant Operators
NASA Astrophysics Data System (ADS)
Seljebotn, D. S.; Eriksen, H. K.
2016-02-01
We present SymPix, a special-purpose spherical grid optimized for efficiently sampling rotationally invariant linear operators. This grid is conceptually similar to the Gauss-Legendre (GL) grid, aligning sample points with iso-latitude rings located on Legendre polynomial zeros. Unlike the GL grid, however, the number of grid points per ring varies as a function of latitude, avoiding expensive oversampling near the poles and ensuring nearly equal sky area per grid point. The ratio between the number of grid points in two neighboring rings is required to be a low-order rational number (3, 2, 1, 4/3, 5/4, or 6/5) to maintain a high degree of symmetries. Our main motivation for this grid is to solve linear systems using multi-grid methods, and to construct efficient preconditioners through pixel-space sampling of the linear operator in question. As a benchmark and representative example, we compute a preconditioner for a linear system that involves the operator D̂ + B̂ᵀ N⁻¹ B̂, where B̂ and D̂ may be described as both local and rotationally invariant operators, and N is diagonal in the pixel domain. For a bandwidth limit of ℓ_max = 3000, we find that our new SymPix implementation yields average speed-ups of 360 and 23 for B̂ᵀ N⁻¹ B̂ and D̂, respectively, compared with the previous state-of-the-art implementation.
Preparing GMAT for Operational Maneuver Planning of the Advanced Composition Explorer (ACE)
NASA Technical Reports Server (NTRS)
Qureshi, Rizwan Hamid; Hughes, Steven P.
2014-01-01
The General Mission Analysis Tool (GMAT) is an open-source space mission design, analysis and trajectory optimization tool. GMAT is developed by a team of NASA, private industry, public and private contributors. GMAT is designed to model, optimize and estimate spacecraft trajectories in flight regimes ranging from low Earth orbit to lunar applications, interplanetary trajectories and other deep space missions. GMAT has also been flight qualified to support operational maneuver planning for the Advanced Composition Explorer (ACE) mission. ACE was launched in August, 1997 and is orbiting the Sun-Earth L1 libration point. The primary science objective of ACE is to study the composition of both the solar wind and the galactic cosmic rays. Operational orbit determination, maneuver operations and product generation for ACE are conducted by NASA Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF). This paper discusses the entire engineering lifecycle and major operational certification milestones that GMAT successfully completed to obtain operational certification for the ACE mission. Operational certification milestones such as gathering of the requirements for ACE operational maneuver planning, gap analysis, test plans and procedures development, system design, pre-shadow operations, training to FDF ACE maneuver planners, shadow operations, Test Readiness Review (TRR) and finally Operational Readiness Review (ORR) are discussed. These efforts have demonstrated that GMAT is flight quality software ready to support ACE mission operations in the FDF.
State-of-The-Art of Modeling Methodologies and Optimization Operations in Integrated Energy System
NASA Astrophysics Data System (ADS)
Zheng, Zhan; Zhang, Yongjun
2017-08-01
Rapid advances in low-carbon technologies and smart energy communities are reshaping future energy patterns. Uncertainty on both the production and demand sides is paving the way towards decentralized management. Current energy infrastructures cannot meet supply and consumption challenges, along with emerging environmental and economic requirements. The Integrated Energy System (IES), in which electric power, natural gas and heating are coupled with each other, could gradually become one of the main comprehensive and optimal energy solutions, offering high flexibility, friendly absorption of renewables and improved efficiency. Against these global energy trends, we present this literature review. First, the definition and characteristics of the IES are given, and modeling issues for energy subsystems and coupling elements are analyzed. It is pointed out that decomposed and integrated analysis methods are the key algorithms for IES optimization and operation problems; the IES market mechanisms are then explored. Finally, several future research directions for the IES, such as dynamic modeling, peer-to-peer trading and coupled market design, are discussed.
Optimizing the physical ergonomics indices for the use of partial pressure suits.
Ding, Li; Li, Xianxue; Hedge, Alan; Hu, Huimin; Feathers, David; Qin, Zhifeng; Xiao, Huajun; Xue, Lihao; Zhou, Qianxiang
2015-03-01
This study developed an ergonomic evaluation system for the design of high-altitude partial pressure suits (PPSs). Twenty-one Chinese males participated in an experiment in which three types of ergonomics indices (manipulative mission, operational reach and operational strength) were studied using a three-dimensional video-based motion capture system, a target-pointing board, a hand dynamometer, and a step-tread apparatus. In total, 36 ergonomics indices were evaluated and optimized using regression and fitting analysis. Indices found to be linearly related, and therefore redundant, were removed. The resulting optimal ergonomics index system can be used to conveniently and quickly evaluate the performance of different pressurized/non-pressurized suit designs, and will provide a theoretical basis and practical guidance for mission planners, suit designers and engineers designing equipment for human use and assessing partial pressure suits.
Innovative model-based flow rate optimization for vanadium redox flow batteries
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2016-11-01
In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operating points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The results show that efficiency is increased by up to 1.2 percentage points, and discharge capacity by up to 1.0 kWh, or 5.4%. A detailed loss analysis is carried out for the cycles with the maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of the losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
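The conventional baseline mentioned above, Faraday's first law scaled by a constant flow factor, can be sketched as follows; the cell count, concentration and factor values are assumed for illustration and are not taken from the paper:

```python
FARADAY = 96485.0  # Faraday constant, C/mol

def faraday_flow_rate(current_a, n_cells, conc_mol_m3, soc, charging=True, z=1):
    """Stoichiometric electrolyte flow rate (m^3/s) from Faraday's first law.

    The rate at which reactant must be supplied so the stack consumes it
    exactly. For charging, the usable reactant fraction of the incoming
    electrolyte is (1 - soc); for discharging it is soc.
    """
    usable = (1.0 - soc) if charging else soc
    return n_cells * current_a / (z * FARADAY * conc_mol_m3 * usable)

def controlled_flow_rate(current_a, n_cells, conc_mol_m3, soc, flow_factor,
                         charging=True):
    """Conventional control: stoichiometric rate scaled by a flow factor > 1."""
    return flow_factor * faraday_flow_rate(current_a, n_cells, conc_mol_m3,
                                           soc, charging)

# Illustrative numbers (assumed, not the paper's): 40 cells, 100 A,
# 1600 mol/m^3 vanadium, SoC 0.5, flow factor 8.
q = controlled_flow_rate(100.0, 40, 1600.0, 0.5, 8.0)  # m^3/s
```

The paper's point is that replacing the single flow factor with SoC- and current-dependent optimized rates recovers the efficiency lost by this one-size-fits-all scaling.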
Spacecraft Status Report: 2001 Mars Odyssey
NASA Technical Reports Server (NTRS)
Boyles, Carole
2012-01-01
Fourth extension of Odyssey mission continues, with orbital science investigations and relay services for landed assets. Mitigation of aging IMU and UHF transceiver. ODY has responded to Program Office/board recommendations. All Stellar mode has been certified for flight operations and is now standard for nadir point operations on the A-side. Investigating options to mitigate aging Battery. Gradual transfer to a later LMST orbit node to shorten eclipse durations. Reduce spacecraft loads during the longer eclipses. Optimize battery performance. ODY is preparing for E5 Proposal and Planetary Science Division FY12 Senior Review activities. ODY is on track to support MSL EDL and surface operations. ODY is managing consumables in order to remain in operations until 2020.
DOE Office of Scientific and Technical Information (OSTI.GOV)
B.R. Westphal; J.C. Price; R.D. Mariani
The pyroprocessing of used nuclear fuel via electrorefining requires the continued addition of uranium trichloride to sustain operations. Uranium trichloride is utilized as an oxidant in the system to allow separation of uranium metal from the minor actinides and fission products. The inventory of uranium trichloride had diminished to a point that production was necessary to continue electrorefiner operations. Following initial experimentation, cupric chloride was chosen as a reactant with uranium metal to synthesize uranium trichloride. Despite the variability in equipment and charge characteristics, uranium trichloride was produced in sufficient quantities to maintain operations in the electrorefiner. The results and conclusions from several experiments are presented along with a set of optimized operating conditions for the synthesis of uranium trichloride.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.
2015-07-01
This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual e-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.
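The dual subgradient mechanism described above can be sketched on a scalar toy problem in place of the SDP setting: minimize a quadratic cost subject to a coupling capacity constraint, with the dual iterate updated under a diminishing stepsize and the Lagrangian minimizer playing the role of the controller reference update. The problem and its numbers are invented for illustration:

```python
def dual_subgradient(a, cap, iters=2000):
    """Minimize sum_i (x_i - a_i)^2 subject to sum_i x_i <= cap
    via projected dual (sub)gradient ascent with diminishing stepsize 1/k."""
    lam = 0.0
    x = list(a)
    for k in range(1, iters + 1):
        # Primal step: minimize the Lagrangian for fixed lambda (closed form here);
        # this is the analogue of updating the dynamical-system references.
        x = [ai - lam / 2.0 for ai in a]
        # Dual step: ascend on the constraint violation, project onto lambda >= 0.
        step = 1.0 / k
        lam = max(0.0, lam + step * (sum(x) - cap))
    return x, lam

# Preferred setpoints sum to 6 but the coupling budget is 3,
# so the constraint binds: optimum x = [2, 1, 0], lambda = 2.
x, lam = dual_subgradient([3.0, 2.0, 1.0], cap=3.0)
```

The closed-form primal step hides what, in the paper's setting, is the physical dynamical system settling towards its reference; the diminishing stepsize is what licenses updating the dual faster than that settling occurs.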
Stochastic optimization of GeantV code by use of genetic algorithms
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
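The black-box tuning loop described, searching a parameter set using only point-wise fitness evaluations, can be illustrated with a minimal genetic algorithm (elitism, blend crossover, Gaussian mutation) on a stand-in objective; this is a generic sketch, not the GeantV tuner:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=40, generations=120, seed=1):
    """Minimize a black-box fitness using only point-wise evaluations."""
    rng = random.Random(seed)

    def clip(v, lo, hi):
        return min(max(v, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 4]          # elitism: keep the best quarter
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)        # parents drawn from the elite
            child = [(a + b) / 2.0 for a, b in zip(p1, p2)]   # blend crossover
            child = [clip(v + rng.gauss(0.0, 0.05 * (hi - lo)), lo, hi)
                     for v, (lo, hi) in zip(child, bounds)]   # Gaussian mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Stand-in for an expensive simulation-throughput objective (optimum at 0.5).
sphere = lambda p: sum((v - 0.5) ** 2 for v in p)
best = genetic_minimize(sphere, bounds=[(-5, 5)] * 3)
```

In the setting the abstract describes, each fitness evaluation is an expensive parallel simulation run, which is why surrogate operators that cut the number of evaluations are worth the extra machinery.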
Design principles and operating principles: the yin and yang of optimal functioning.
Voit, Eberhard O
2003-03-01
Metabolic engineering has as a goal the improvement of yield of desired products from microorganisms and cell lines. This goal has traditionally been approached with experimental biotechnological methods, but it is becoming increasingly popular to precede the experimental phase by a mathematical modeling step that allows objective pre-screening of possible improvement strategies. The models are either linear, representing the stoichiometry and flux distribution in pathways, or non-linear, accounting for the full kinetic behavior of the pathway, which is often significantly affected by regulatory signals. Linear flux analysis is simpler and requires less input information than a full kinetic analysis, and the question arises whether the consideration of non-linearities is really necessary for devising optimal strategies for yield improvements. The article analyzes this question with a generic, representative pathway. It shows that flux split ratios, which are the key criterion for linear flux analysis, are essentially sufficient for unregulated, but not for regulated, branch points. The interrelationships between regulatory design on one hand and optimal patterns of operation on the other suggest the investigation of operating principles that complement design principles, much as a user's manual complements the hardwiring of electronic equipment.
Modelisation et optimisation des systemes energetiques a l'aide d'algorithmes evolutifs
NASA Astrophysics Data System (ADS)
Hounkonnou, Sessinou M. William
Optimization of thermal and nuclear plants has many economic as well as environmental advantages. The search for new operating points, and the use of new tools to achieve this kind of optimization, are therefore the subject of many studies. In this context, this project aims to optimize energy systems, specifically the secondary loop of the Gentilly-2 nuclear plant, using both the extractions of the high- and low-pressure turbines and the extraction of the mixture coming from the steam generator. A detailed thermodynamic model of the various equipment of the secondary loop, such as the feedwater heaters, the moisture separator-reheater, the deaerator, the condenser and the turbine, is carried out. We use Matlab software (version R2007b, 2007) with the library for the thermodynamic properties of water and steam (XSteam for Matlab, Holmgren, 2006). A model of the secondary loop is then obtained by assembling the different pieces of equipment. Simulation of the equipment and of the complete cycle enabled us to identify two objective functions, namely the net output and the efficiency, which evolve in opposite directions as the extractions vary. Due to the complexity of the problem, we use a method based on genetic algorithms for the optimization. More precisely, we used a tool developed at the "Institut de genie nucleaire" named BEST (Boundary Exploration Search Technique), written in VBA* (Visual Basic for Applications), for its ability to converge more quickly and to carry out a more exhaustive search at the border of the optimal solutions. The use of DDE (Dynamic Data Exchange) enables us to link the simulator and the optimizer. The results obtained show that there still exist several combinations of extractions that make it possible to obtain a better operating point for improving the performance of the Gentilly-2 power station secondary loop. *Trademark of Microsoft
Air Traffic Management Technology Demonstration-1 Concept of Operations (ATD-1 ConOps), Version 2.0
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Johnson, William C.; Swenson, Harry N.; Robinson, John E.; Prevot, Tom; Callantine, Todd J.; Scardina, John; Greene, Michael
2013-01-01
This document is an update to the operations and procedures envisioned for NASA's Air Traffic Management (ATM) Technology Demonstration #1 (ATD-1). The ATD-1 Concept of Operations (ConOps) integrates three NASA technologies to achieve high throughput, fuel-efficient arrival operations into busy terminal airspace. They are Traffic Management Advisor with Terminal Metering (TMA-TM) for precise time-based schedules to the runway and points within the terminal area, Controller-Managed Spacing (CMS) decision support tools for terminal controllers to better manage aircraft delay using speed control, and Flight deck Interval Management (FIM) avionics and flight crew procedures to conduct airborne spacing operations. The ATD-1 concept provides de-conflicted and efficient operations of multiple arrival streams of aircraft, passing through multiple merge points, from top-of-descent (TOD) to the Final Approach Fix. These arrival streams are Optimized Profile Descents (OPDs) from en route altitude to the runway, using primarily speed control to maintain separation and schedule. The ATD-1 project is currently addressing the challenges of integrating the three technologies and their implementation in an operational environment. The ATD-1 goals include increasing the throughput of high-density airports, reducing controller workload, increasing efficiency of arrival operations and the frequency of trajectory-based operations, and promoting aircraft ADS-B equipage.
NASA's ATM Technology Demonstration-1: Integrated Concept of Arrival Operations
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Swenson, Harry N.; Prevot, Thomas; Callantine, Todd J.
2012-01-01
This paper describes operations and procedures envisioned for NASA's Air Traffic Management (ATM) Technology Demonstration #1 (ATD-1). The ATD-1 Concept of Operations (ConOps) demonstration will integrate three NASA technologies to achieve high throughput, fuel-efficient arrival operations into busy terminal airspace. They are Traffic Management Advisor with Terminal Metering (TMA-TM) for precise time-based schedules to the runway and points within the terminal area, Controller-Managed Spacing (CMS) decision support tools for terminal controllers to better manage aircraft delay using speed control, and Flight deck Interval Management (FIM) avionics and flight crew procedures to conduct airborne spacing operations. The ATD-1 concept provides de-conflicted and efficient operations of multiple arrival streams of aircraft, passing through multiple merge points, from top-of-descent (TOD) to touchdown. It also enables aircraft to conduct Optimized Profile Descents (OPDs) from en route altitude to the runway, using primarily speed control to maintain separation and schedule. The ATD-1 project is currently addressing the challenges of integrating the three technologies and their implementation in an operational environment. Goals of the ATD-1 demonstration include increasing the throughput of high-density airports, reducing controller workload, increasing efficiency of arrival operations and the frequency of trajectory-based operations, and promoting aircraft ADS-B equipage.
Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation
Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk
2015-01-01
A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important to protect against current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, the switch mechanism was optimized to improve the current-breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings so as to shorten the breaking time. To validate the optimization results, computational results were compared to physical tests with a high-speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, which was confirmed by experiments. The switch mechanism design process, including the contact-latch system, can be improved by using this approach. PMID:25918740
Khanna, Sankalp; Boyle, Justin; Good, Norm; Lind, James
2012-10-01
To investigate the effect of hospital occupancy levels on inpatient and ED patient flow parameters, and to simulate the impact of shifting discharge timing on occupancy levels. Retrospective analysis of inpatient and ED data from 23 reporting public hospitals in Queensland, Australia, across 30 months. Relationships between outcome measures were explored through the aggregation of the historical data into 21 912 hourly intervals. Main outcome measures included admission and discharge rates, occupancy levels, length of stay for admitted and emergency patients, and the occurrence of access block. The impact of shifting discharge timing on occupancy levels was quantified using observed and simulated data. The study identified three stages of system performance decline, or choke points, as hospital occupancy increased. These choke points were found to depend on hospital size, and reflect a system change from 'business-as-usual' to 'crisis'. Effecting early discharge of patients was also found to significantly (P < 0.001) reduce overcrowding and improve patient flow. Modern hospital systems have the ability to operate efficiently above an often-prescribed 85% occupancy level, with optimal levels varying across hospitals of different sizes. Operating above these optimal levels leads to performance deterioration around the occupancy choke points. Understanding these choke points and designing strategies to alleviate these flow bottlenecks would improve capacity management, reduce access block and improve patient outcomes. Effecting early discharge also helps alleviate overcrowding and the related stress on the system.
NASA Astrophysics Data System (ADS)
McCurdy, David R.; Krivanek, Thomas M.; Roche, Joseph M.; Zinolabedini, Reza
2006-01-01
The concept of a human rated transport vehicle for various near earth missions is evaluated using a liquid hydrogen fueled Bimodal Nuclear Thermal Propulsion (BNTP) approach. In an effort to determine the preliminary sizing and optimal propulsion system configuration, as well as the key operating design points, an initial investigation into the main system level parameters was conducted. This assessment considered not only the performance variables but also the more subjective reliability, operability, and maintainability attributes. The SIZER preliminary sizing tool was used to facilitate rapid modeling of the trade studies, which included tank materials, propulsive versus an aero-capture trajectory, use of artificial gravity, reactor chamber operating pressure and temperature, fuel element scaling, engine thrust rating, engine thrust augmentation by adding oxygen to the flow in the nozzle for supersonic combustion, and the baseline turbopump configuration to address mission redundancy and safety requirements. A high level system perspective was maintained to avoid focusing solely on individual component optimization at the expense of system level performance, operability, and development cost.
[Operation room management in quality control certification of a mainstream hospital].
Leidinger, W; Meierhofer, J N; Schüpfer, G
2006-11-01
We report the results of our study of the organisation of operating room (OR) capacity planned one year in advance. OR use is controlled using two global controlling figures: (a) the actual time difference between the expected optimal and the previously calculated OR running time, and (b) the punctuality of the start of the first operation in each OR. The focal point of the presented OR management concept is a consensus-oriented decision-making and steering process led by a coordinator, who achieves a high degree of acceptance by means of comprehensive transparency. Based on the accepted running time, the optimal productivity of the ORs (OP_A, in %) can be calculated. In this way, an increase in the overall capacity utilization (actual running time) of the ORs from 40% to over 55% was achieved. Nevertheless, enthusiasm and teamwork from all persons involved in the system, as well as a completely independent operating theatre manager, are vital for success. Using this concept, over 90% of the requirements of the new certification catalogue for hospitals in Germany were met.
Enhanced intelligence through optimized TCPED concepts for airborne ISR
NASA Astrophysics Data System (ADS)
Spitzer, M.; Kappes, E.; Böker, D.
2012-06-01
Current multinational operations show an increased demand for high-quality actionable intelligence for different operational levels and users. In order to achieve sufficient availability, quality and reliability of information, various ISR assets are orchestrated within operational theatres. Airborne Intelligence, Surveillance and Reconnaissance (ISR) assets in particular provide, due to their endurance, non-intrusiveness, robustness, wide spectrum of sensors and flexibility to mission changes, significant intelligence coverage of areas of interest. An efficient and balanced utilization of airborne ISR assets calls for advanced concepts for the entire ISR process framework, including Tasking, Collection, Processing, Exploitation and Dissemination (TCPED). Beyond this, the employment of current visualization concepts, shared information bases and information customer profiles, as well as an adequate combination of ISR sensors of different information ages with dynamic (online) re-tasking process elements, enables the optimization of interlinked TCPED processes towards higher process robustness, shorter process duration, more flexibility between ISR missions and, finally, adequate "entry points" for information requirements from operational users and commands. In addition, relevant trade-offs of distributed and dynamic TCPED processes are examined and future trends are depicted.
Modelling and optimization of a wellhead gas flowmeter using concentric pipes
NASA Astrophysics Data System (ADS)
Nec, Yana; Huculak, Greg
2017-09-01
A novel configuration of a landfill wellhead was analysed to measure the flow rate of gas extracted from sanitary landfills. The device provides access points for pressure measurement integral to flow rate computation similarly to orifice and Venturi meters, and has the advantage of eliminating the problem of water condensation often impairing the accuracy thereof. It is proved that the proposed configuration entails comparable computational complexity and negligible sensitivity to geometric parameters. Calibration for the new device was attained using a custom optimization procedure, operating on a quadri-dimensional parameter surface evincing discontinuity and non-smoothness.
Optimized positioning of autonomous surgical lamps
NASA Astrophysics Data System (ADS)
Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel
2017-03-01
We consider the problem of automatically finding optimal positions of surgical lamps throughout the whole surgical procedure, assuming that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real-time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data is available for use for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
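As a toy illustration of the Pareto-front idea described in this record (not code from the paper), a dominance filter over two conflicting objectives, here stand-ins for lamp movement and lamp obstruction, can be sketched as:

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated points (all objectives minimized)."""
    costs = np.asarray(costs, dtype=float)
    keep = np.ones(len(costs), dtype=bool)
    for i in range(len(costs)):
        if not keep[i]:
            continue
        # j dominates i if j is <= in every objective and < in at least one
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if np.any(dominated):
            keep[i] = False
    return np.where(keep)[0]

# Two objectives per candidate lamp configuration (toy values):
# [movement cost, obstruction cost]
costs = [[1.0, 5.0], [2.0, 3.0], [3.0, 1.0], [2.5, 3.5], [4.0, 4.0]]
front = pareto_front(costs)
print(front)  # -> [0 1 2]
```

The online optimizer would then pick one point from this front according to the surgeon's preference presets.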
Adelmann, S; Baldhoff, T; Koepcke, B; Schembecker, G
2013-01-25
The selection of solvent systems in centrifugal partition chromatography (CPC) is the most critical point in setting up a separation. Therefore, lots of research was done on the topic in the last decades. But the selection of suitable operating parameters (mobile phase flow rate, rotational speed and mode of operation) with respect to hydrodynamics and pressure drop limit in CPC is still mainly driven by experience of the chromatographer. In this work we used hydrodynamic analysis for the prediction of most suitable operating parameters. After selection of different solvent systems with respect to partition coefficients for the target compound the hydrodynamics were visualized. Based on flow pattern and retention the operating parameters were selected for the purification runs of nybomycin derivatives that were carried out with a 200 ml FCPC(®) rotor. The results have proven that the selection of optimized operating parameters by analysis of hydrodynamics only is possible. As the hydrodynamics are predictable by the physical properties of the solvent system the optimized operating parameters can be estimated, too. Additionally, we found that dispersion and especially retention are improved if the less viscous phase is mobile. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Gaveau, Jérémie; Paizis, Christos; Berret, Bastien; Pozzo, Thierry; Papaxanthis, Charalambos
2011-08-01
After an exposure to weightlessness, the central nervous system operates under new dynamic and sensory contexts. To find optimal solutions for rapid adaptation, cosmonauts have to decide whether parameters from the world or their body have changed and to estimate their properties. Here, we investigated sensorimotor adaptation after a spaceflight of 10 days. Five cosmonauts performed forward point-to-point arm movements in the sagittal plane 40 days before and 24 and 72 h after the spaceflight. We found that, whereas the shape of hand velocity profiles remained unaffected after the spaceflight, hand path curvature significantly increased 1 day after landing and returned to the preflight level on the third day. Control experiments, carried out by 10 subjects under normal gravity conditions, showed that loading the arm with varying loads (from 0.3 to 1.350 kg) did not affect path curvature. Therefore, changes in path curvature after spaceflight cannot be the outcome of a control process based on the subjective feeling that arm inertia was increased. By performing optimal control simulations, we found that arm kinematics after exposure to microgravity corresponded to a planning process that overestimated the gravity level and optimized movements in a hypergravity environment (∼1.4 g). With time and practice, the sensorimotor system was recalibrated to Earth's gravity conditions, and cosmonauts progressively generated accurate estimations of the body state, gravity level, and sensory consequences of the motor commands (72 h). These observations provide novel insights into how the central nervous system evaluates body (inertia) and environmental (gravity) states during sensorimotor adaptation of point-to-point arm movements after an exposure to weightlessness.
Decentralized DC Microgrid Monitoring and Optimization via Primary Control Perturbations
NASA Astrophysics Data System (ADS)
Angjelichinoski, Marko; Scaglione, Anna; Popovski, Petar; Stefanovic, Cedomir
2018-06-01
We consider emerging power systems with direct current (DC) MicroGrids, characterized by a high penetration of power electronic converters. We rely on the power electronics to propose a decentralized solution for autonomous learning of and adaptation to the operating conditions of the DC MicroGrids; the goal is to eliminate the need to rely on an external communication system for this purpose. The solution works within the primary droop control loops and uses only local bus voltage measurements. Each controller is able to estimate (i) the generation capacities of power sources, (ii) the load demands, and (iii) the conductances of the distribution lines. To define a well-conditioned estimation problem, we employ a decentralized strategy where the primary droop controllers temporarily switch between operating points in a coordinated manner, following amplitude-modulated training sequences. We study the use of the estimator in a decentralized solution of the Optimal Economic Dispatch problem. The evaluations confirm the usefulness of the proposed solution for autonomous MicroGrid operation.
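The idea of perturbing primary droop set-points to make local estimation well conditioned can be illustrated with a minimal single-bus sketch (the one-source/one-load model and all numeric values are illustrative, not the paper's estimator):

```python
import numpy as np

# Toy single-bus DC model:
# droop law v = v_ref - r_d * i, resistive load i = g_load * v,
# so the equilibrium bus voltage is v = v_ref / (1 + r_d * g_load).
r_d = 0.5      # droop resistance (known controller setting)
g_true = 2.0   # unknown load conductance to be estimated

# Training sequence: the controller steps its reference and records the bus voltage
v_refs = np.array([48.0, 48.5, 49.0, 49.5, 50.0])
v_meas = v_refs / (1.0 + r_d * g_true)
v_meas = v_meas + np.random.default_rng(0).normal(0.0, 1e-3, v_meas.size)  # sensor noise

# Each sample gives g = (v_ref - v) / (r_d * v); average the per-sample estimates
g_est = float(np.mean((v_refs - v_meas) / (r_d * v_meas)))
print(round(g_est, 3))  # close to the true conductance of 2.0
```

In the multi-bus setting of the paper the perturbations are coordinated amplitude-modulated sequences, but the principle, exciting the grid so that local voltage measurements become informative, is the same.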
Homodyning and heterodyning the quantum phase
NASA Technical Reports Server (NTRS)
Dariano, Giacomo M.; Macchiavello, C.; Paris, M. G. A.
1994-01-01
The double-homodyne and the heterodyne detection schemes for phase shifts between two synchronous modes of the electromagnetic field are analyzed in the framework of quantum estimation theory. The probability operator-valued measures (POM's) of the detectors are evaluated and compared with the ideal one in the limit of a strong local reference oscillator. The present operational approach leads to a reasonable definition of phase measurement, whose sensitivity is actually related to the output r.m.s. noise of the photodetector. We emphasize that the simple-homodyne scheme does not correspond to a proper phase-shift measurement, as it is just a zero-point detector. The sensitivity of all detection schemes is optimized at fixed energy with respect to the input state of radiation. It is shown that the optimal sensitivity can actually be achieved using suitable squeezed states.
Improved imaging algorithm for bridge crack detection
NASA Astrophysics Data System (ADS)
Lu, Jingxiao; Song, Pingli; Han, Kaihong
2012-04-01
This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, edge points are positioned more accurately than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating crack geometry characteristics, we use a skeleton-extraction method to measure single-crack length. To calculate crack area, we construct an area template by applying a logical bitwise AND operation to the crack image. Experiments show that the errors between the crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement and can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
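One common construction of an eight-direction Sobel operator rotates the border entries of the 3x3 kernel in 45-degree steps and takes the maximum absolute response; the sketch below illustrates that construction (it is generic, not the paper's specific optimization):

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Sobel kernel for the 0-degree direction; the other seven directions
# come from rotating the 8 border entries in 45-degree steps.
k0 = np.array([[-1.0, 0.0, 1.0],
               [-2.0, 0.0, 2.0],
               [-1.0, 0.0, 1.0]])

def rotate45(k):
    """Rotate the ring of 8 border entries of a 3x3 kernel by one position."""
    r = k.copy()
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[p] for p in ring]
    vals = vals[-1:] + vals[:-1]
    for p, v in zip(ring, vals):
        r[p] = v
    return r

kernels = [k0]
for _ in range(7):
    kernels.append(rotate45(kernels[-1]))

def edge_strength(img):
    """Maximum absolute response over all eight directional kernels."""
    return np.max([np.abs(convolve(img, k, mode='nearest')) for k in kernels], axis=0)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # vertical step edge between columns 3 and 4
print(edge_strength(img)[4, 3] > 0.0)  # -> True: strong response at the edge
```

Thresholding `edge_strength` and thinning the result to a skeleton would then give the crack centerline used for length measurement.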
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
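Exact DMD on snapshot pairs, the workhorse mentioned in this abstract, can be sketched in a few lines (a generic NumPy sketch, not the authors' code; the 2x2 linear system is illustrative):

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD: fit a linear operator A with Xp ~ A X via a rank-r SVD of X."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / S)  # projected operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / S) @ W            # exact DMD modes
    return eigvals, modes

# Snapshots of a known linear system x_{k+1} = A x_k; DMD recovers eig(A)
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
snaps = [np.array([1.0, 1.0])]
for _ in range(20):
    snaps.append(A @ snaps[-1])
snaps = np.array(snaps).T                  # states as columns
eigvals, _ = dmd(snaps[:, :-1], snaps[:, 1:], r=2)
print(np.sort(np.round(eigvals.real, 3)))  # -> [0.8 0.9]
```

For nonlinear systems, the same machinery is applied to nonlinear observable functions of the state rather than the raw state, which is exactly the subspace-selection question the paper studies.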
Non-traditional Sensor Tasking for SSA: A Case Study
NASA Astrophysics Data System (ADS)
Herz, A.; Herz, E.; Center, K.; Martinez, I.; Favero, N.; Clark, C.; Therien, W.; Jeffries, M.
Industry has recognized that maintaining SSA of the orbital environment going forward is too challenging for the government alone. Consequently there are a significant number of commercial activities in various stages of development standing-up novel sensors and sensor networks to assist in SSA gathering and dissemination. Use of these systems will allow government and military operators to focus on the most sensitive space control issues while allocating routine or lower priority data gathering responsibility to the commercial side. The fact that there will be multiple (perhaps many) commercial sensor capabilities available in this new operational model begets a common access solution. Absent a central access point to assert data needs, optimized use of all commercial sensor resources is not possible and the opportunity for coordinated collections satisfying overarching SSA-elevating objectives is lost. Orbit Logic is maturing its Heimdall Web system - an architecture facilitating “data requestor” perspectives (allowing government operations centers to assert SSA data gathering objectives) and “sensor operator” perspectives (through which multiple sensors of varying phenomenology and capability are integrated via machine-to-machine interfaces). When requestors submit their needs, Heimdall’s planning engine determines tasking schedules across all sensors, optimizing their use via an SSA-specific figure-of-merit. ExoAnalytic was a key partner in refining the sensor operator interfaces, working with Orbit Logic through specific details of sensor tasking schedule delivery and the return of observation data. Scant preparation on both sides preceded several integration exercises (walk-then-run style), which culminated in successful demonstration of the ability to supply optimized schedules for routine public catalog data collection – then adapt sensor tasking schedules in real-time upon receipt of urgent data collection requests.
This paper will provide a narrative of the joint integration process - detailing decision points, compromises, and results obtained on the road toward a set of interoperability standards for commercial sensor accommodation.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques.
Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
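The roughly factor-of-2 advantage of Gauss-Seidel over Jacobi reported above can be reproduced on a toy diagonally dominant linear system (a generic linear-algebra sketch, not a radiative transfer solver):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, maxit=10000):
    """Jacobi iteration: every unknown updated from the previous iterate."""
    x = np.zeros_like(b)
    D = np.diag(A)
    for k in range(1, maxit + 1):
        x_new = (b - (A @ x - D * x)) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k
        x = x_new
    return x, maxit

def gauss_seidel(A, b, tol=1e-10, maxit=10000):
    """Gauss-Seidel: each unknown updated using the freshest available values."""
    x = np.zeros_like(b)
    n = len(b)
    for k in range(1, maxit + 1):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, maxit

# Tridiagonal, diagonally dominant test system
n = 50
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)
xj, it_j = jacobi(A, b)
xgs, it_gs = gauss_seidel(A, b)
print(it_j, it_gs)  # Gauss-Seidel needs roughly half the iterations
```

For this class of matrices the Gauss-Seidel spectral radius is the square of the Jacobi one, which is the analogue of the factor-of-2 speed-up quoted in the abstract.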
First and second order derivatives for optimizing parallel RF excitation waveforms.
Majewski, Kurt; Ritter, Dieter
2015-09-01
For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations. Copyright © 2015 Elsevier Inc. All rights reserved.
First and second order derivatives for optimizing parallel RF excitation waveforms
NASA Astrophysics Data System (ADS)
Majewski, Kurt; Ritter, Dieter
2015-09-01
For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations.
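The rotation-concatenation idea for piecewise-constant fields can be sketched with quaternions (the field sequence, step count, sign convention and the neglect of relaxation are illustrative; this is not the authors' optimization code):

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def quat_mul(q, p):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    """Rotate vector v by unit quaternion q (q v q*)."""
    qv = np.concatenate([[0.0], v])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), qc)[1:]

# During each interval the magnetization precesses about the (constant)
# effective field B by the angle -gamma*|B|*dt; the net pulse effect is the
# quaternion product of the per-interval rotations.
gamma, dt = 2 * np.pi * 42.58e6, 1e-6     # proton gyromagnetic ratio (rad/s/T), 1 us steps
fields = [np.array([1e-5, 0.0, 0.0])] * 5  # constant 10 uT x-field over 5 steps
q = np.array([1.0, 0.0, 0.0, 0.0])
for B in fields:
    q = quat_mul(quat_from_axis_angle(B, -gamma * np.linalg.norm(B) * dt), q)
m = rotate(q, np.array([0.0, 0.0, 1.0]))   # initial magnetization along z
print(m)  # small nutation away from the z axis, |m| preserved
```

Differentiating this rotation chain with respect to the RF samples is what yields the first and second order derivatives the paper uses inside the interior-point solver.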
Relationship between Aircraft Noise Contour Area and Noise Levels at Certification Points
NASA Technical Reports Server (NTRS)
Powell, Clemans A.
2003-01-01
The use of sound exposure level contour area reduction has been proposed as an alternative or supplemental metric of progress and success for the NASA Quiet Aircraft Technology program, which currently uses the average of predicted noise reductions at three community locations. As the program has expanded to include reductions in airframe noise as well as reduction due to optimization of operating procedures for lower noise, there is concern that the three-point methodology may not represent a fair measure of benefit to airport communities. This paper addresses several topics related to this proposal: (1) an analytical basis for a relationship between certification noise levels and noise contour areas for departure operations is developed, (2) the relationship between predicted noise contour area and the noise levels measured or predicted at the certification measurement points is examined for a wide range of commercial and business aircraft, and (3) reductions in contour area for low-noise approach scenarios are predicted and equivalent reductions in source noise are determined.
Performance Gains of Propellant Management Devices for Liquid Hydrogen Depots
NASA Technical Reports Server (NTRS)
Hartwig, Jason W.; McQuillen, John B.; Chato, David J.
2013-01-01
This paper presents background, experimental design, and preliminary experimental results for the liquid hydrogen bubble point tests conducted at the Cryogenic Components Cell 7 facility at the NASA Glenn Research Center in Cleveland, Ohio. The purpose of the test series was to investigate the parameters that affect liquid acquisition device (LAD) performance in a liquid hydrogen (LH2) propellant tank, to mitigate risk in the final design of the LAD for the Cryogenic Propellant Storage and Transfer Technology Demonstration Mission, and to provide insight into optimal LAD operation for future LH2 depots. Preliminary test results show an increase in performance and screen retention over the low reference LH2 bubble point value for a 325 × 2300 screen in three separate ways, thus improving fundamental LH2 LAD performance. By using a finer mesh screen, operating at a colder liquid temperature, and pressurizing with a noncondensible pressurant gas, a significant increase in margin is achieved in bubble point pressure for LH2 screen channel LADs.
NASA Astrophysics Data System (ADS)
Farhat, I. A. H.; Alpha, C.; Gale, E.; Atia, D. Y.; Stein, A.; Isakovic, A. F.
The scaledown of magnetic tunnel junctions (MTJ) and related nanoscale spintronics devices poses unique challenges for energy optimization of their performance. We demonstrate the dependence of the switching current on the scaledown variable, while considering the influence of geometric parameters of the MTJ, such as the free layer thickness, tfree, the lateral size of the MTJ, w, and the anisotropy parameter of the MTJ. At the same time, we point out which values of the saturation magnetization, Ms, and anisotropy field, Hk, can lower the switching current and decrease the overall energy needed to operate an MTJ. It is demonstrated that scaledown via decreasing the lateral size of the MTJ, while allowing some other parameters to be unconstrained, can improve energy performance by a measurable factor, shown to be a function of both the geometric and physical parameters above. Given the complex interdependencies among both families of parameters, we developed a particle swarm optimization (PSO) algorithm that can simultaneously lower the energy of operation and the switching current density. Results obtained in the scaledown study and via PSO optimization are compared to experimental results. Support by Mubadala-SRC 2012-VJ-2335 is acknowledged, as are staff at Cornell-CNF and BNL-CFN.
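A minimal particle swarm optimization loop of the kind described can be sketched as follows (the two-parameter quadratic objective is a toy stand-in for the MTJ switching-energy model; swarm settings are conventional defaults, not the authors' values):

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # per-particle best positions
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                     # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, float(pbest_f.min())

# Toy stand-in objective: quadratic bowl in (t_free [nm], w [nm])
f = lambda p: (p[0] - 1.5) ** 2 + (p[1] - 40.0) ** 2
best, val = pso(f, bounds=[(0.5, 3.0), (10.0, 100.0)])
print(np.round(best, 2), round(val, 4))  # best is near (1.5, 40)
```

In the paper the objective would instead couple the switching current density and operating energy through the geometric and material parameters listed above.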
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Tumer, Irem
2005-01-01
In this paper, we will present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLV's that has been partially developed in the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be formulated as a constrained optimization problem where the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLV's, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho
2007-03-01
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the optimal point x to the lines formed by the corresponding points between In. and Ex. Using the nonlinear optimization, the optimal point was evaluated and compared between reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining the lung expansion. This lung motion analysis based on vector analysis and non-linear optimization shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
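The unconstrained objective described in this record, minimizing the sum of distances from a point x to the lines through corresponding In./Ex. landmarks, can be sketched with synthetic data (the balloon-like geometry and all values are illustrative, not the study's data):

```python
import numpy as np
from scipy.optimize import minimize

def dist_point_to_line(x, p, d):
    """Distance from point x to the line through p with unit direction d."""
    r = x - p
    return np.linalg.norm(r - (r @ d) * d)

# Balloon-like toy data: expansion lines all radiate from a common center,
# so each In./Ex. landmark pair defines a line through that center.
rng = np.random.default_rng(1)
center = np.array([10.0, 20.0, 30.0])        # assumed expansion center
p_in = center + rng.normal(0.0, 5.0, (20, 3))  # landmark positions (In.)
dirs = p_in - center
dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

def objective(x):
    return sum(dist_point_to_line(x, p, d) for p, d in zip(p_in, dirs))

res = minimize(objective, x0=np.zeros(3), method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8, 'maxiter': 20000})
print(np.round(res.x, 2))  # recovers the expansion center
```

Because the distance to a line is convex in x, the sum is convex and even a derivative-free method converges reliably; on real data the minimum would sit near whichever reference point (apex, hilum, COI) best explains the motion.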
AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM
NASA Technical Reports Server (NTRS)
Miko, J.
1994-01-01
Scientists at Goddard have developed an efficient and powerful program, An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform, which combines the performance of real and complex valued one-dimensional Fast Fourier Transforms (FFT's) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 mSec.
Documentation is included in the price of the program. Source code is written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly Languages. This program is available on a 5.25 inch 360K MS-DOS format diskette. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
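The row-column decomposition described in this record can be sketched with a modern array library (this reproduces the algorithm's structure, not the original DSP implementation):

```python
import numpy as np

def fft2_row_column(frame):
    """2D FFT by row-column decomposition, then power spectrum coefficients."""
    step1 = np.fft.fft(frame, axis=1)       # 1D FFT on each row
    step2 = np.fft.fft(step1, axis=0)       # 1D FFT on each column of the result
    power = step2.real**2 + step2.imag**2   # sum of squares -> power spectrum
    return step2, power

rng = np.random.default_rng(0)
frame = rng.normal(size=(64, 64))           # 64x64 real-valued input frame
spec, power = fft2_row_column(frame)

# Row-column decomposition matches a direct 2D FFT
print(np.allclose(spec, np.fft.fft2(frame)))  # -> True
```

The separability of the 2D DFT is what makes the row-column scheme exact rather than an approximation.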
Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging
NASA Astrophysics Data System (ADS)
Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.
2010-04-01
The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.
Point Cloud Based Relative Pose Estimation of a Satellite in Close Range
Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming
2016-01-01
Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board a chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the ground-truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating directly on point clouds and handling large pose variations. A field testing experiment was also conducted, and the results show that the proposed method is effective. PMID:27271633
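A minimal Iterative Closest Point loop of the kind used for pose tracking can be sketched as follows (nearest-neighbour correspondences plus a Kabsch rigid fit; the synthetic clouds are illustrative and no flash-LIDAR noise model is included):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch algorithm)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=50):
    """Iteratively match closest points and refit the rigid transform."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                   # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy model point cloud and a rotated/translated copy of it
rng = np.random.default_rng(0)
model = rng.normal(size=(200, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.1])
scene = model @ R_true.T + t_true
R_est, t_est = icp(model, scene)
print(np.allclose(R_est, R_true, atol=1e-3))  # True when tracking converges on clean data
```

ICP needs a reasonable initial pose to avoid wrong local minima, which is why the paper pairs it with a separate global-search initial acquisition step.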
A new algorithm for agile satellite-based acquisition operations
NASA Astrophysics Data System (ADS)
Bunkheila, Federico; Ortore, Emiliano; Circi, Christian
2016-06-01
Taking advantage of the high manoeuvrability and the accurate pointing of the so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first one the algorithm operates a geometric classification of the areas of interest and a partitioning of these areas into stripes which develop along the optimal scan directions; in the second one it computes the succession of the time windows in which the acquisition operations of the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and the performances of the proposed algorithm have been determined and discussed considering the case of the Periodic Sun-Synchronous Orbits.
Depth Perception In Remote Stereoscopic Viewing Systems
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Von Sydow, Marika
1989-01-01
Report describes theoretical and experimental studies of depth perception by human operators using stereoscopic video systems. The purpose of these studies is to optimize dual-camera configurations used to view the workspaces of remote manipulators at distances of 1 to 3 m from the cameras. According to the analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing the camera-to-object and intercamera distances and the camera focal length. The analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating the cameras around the center of a circle passing through the point of convergence of the viewing axes and the first nodal points of the two camera lenses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T; Jager, Yetta; March, Patrick
Reservoir releases are typically operated to maximize the efficiency of hydropower production and the value of hydropower produced. In practice, ecological considerations are limited to those required by law. We first describe reservoir optimization methods that include mandated constraints on environmental and other water uses. Next, we describe research to formulate and solve reservoir optimization problems involving both energy and environmental water needs as objectives. Evaluating ecological objectives is a challenge in these problems for several reasons. First, it is difficult to predict how biological populations will respond to flow release patterns. This problem can be circumvented by using ecological models. Second, most optimization methods require complex ecological responses to flow to be quantified by a single metric, preferably a currency that can also represent hydropower benefits. Ecological valuation of instream flows can make possible optimization methods that require a single currency for the effects of flow on energy and river ecology. Third, holistic reservoir optimization problems are unlikely to be structured such that simple solution methods can be used, necessitating the use of flexible numerical methods. One strong advantage of optimal control is the ability to plan for the effects of climate change. We present ideas for developing holistic methods to the point where they can be used for real-time operation of reservoirs. We suggest that developing ecologically sound optimization tools should be a priority for hydropower, in light of the increasing value placed on sustaining both the ecological and energy benefits of riverine ecosystems long into the future.
Perel, A; Berkenstadt, H; Ziv, A; Katzenelson, R; Aitkenhead, A
2004-11-01
In this preliminary study we wanted to explore the attitudes of anaesthesiologists to a point-of-care information system in the operating room. The study was conducted as a preliminary step in the process of developing such a system by the European Society of Anaesthesiologists (ESA). A questionnaire was distributed to all 2240 attendees of the ESA's annual meeting in Gothenburg, Sweden, which took place in April 2001. Of the 329 responders (response rate of 14.6%), 79% were qualified specialists, mostly from Western Europe, with more than 10 years of experience (68%). Most responders admitted to regularly experiencing a lack of medical knowledge relating to real-time patient care at least once a month (74%) or at least once a week (46%), and 39% admitted to having made errors during anaesthesia due to a lack of medical information that could otherwise be found in a handbook. The choice of a less optimal but more familiar approach to patient management due to lack of knowledge was reported by 37%. Eighty-eight percent of responders believe that having a point-of-care information system for anaesthesiologists in the operating room is either important or very important. This preliminary survey demonstrates that anaesthesiologists' lack of knowledge may be a significant source of medical errors in the operating room, and suggests that a point-of-care information system for the anaesthesiologist may be of value.
Laser scanning measurements on trees for logging harvesting operations.
Zheng, Yili; Liu, Jinhao; Wang, Dian; Yang, Ruixi
2012-01-01
Logging harvesters are high-performance modern forestry machines that can perform a series of continuous operations such as felling, delimbing, peeling and bucking with human intervention. Experiments show that, during alignment of the harvesting head to capture the trunk, the operator needs a great deal of observation, judgment and repeated operation, which leads to time and fuel losses. In order to improve operating efficiency and reduce operating costs, point clouds of standing trees are collected with a low-cost 2D laser scanner. A cluster-extraction algorithm and a filtering algorithm are used to separate each trunk from the point cloud. On the assumption that every cross section of the target trunk is approximately a standard circle, and combining the information from an Attitude and Heading Reference System, the radii and center locations of the trunks in the scanning range are calculated by the Fletcher-Reeves conjugate gradient algorithm. The method is validated through experiments in an aspen forest, and the optimized calculation time is compared with the previous work of other researchers. Moreover, the use of the calculation results for automatic trunk capture by the harvesting head during logging operations is discussed in particular.
Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, R.; Passerini, S.; Vilim, R. B.
In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. With respect to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy for operating the reactor at different power levels while respecting the system's physical constraints is presented. In order to achieve higher operational flexibility, while ensuring that the implemented control loops do not influence the system's inherent passive safety features, a dedicated supervisory control scheme for the dynamic definition of the corresponding set-points to be supplied to the PID controllers is designed. In particular, the traditional approach based on the adoption of tabulated lookup tables for set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows for the optimization of reference signals according to the system operating conditions, is proposed.
It's time to reinvent the general aviation airplane
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1988-01-01
Current designs for general aviation airplanes have become obsolete, and avenues for major redesign must be considered. New designs should incorporate recent advances in electronics, aerodynamics, structures, materials, and propulsion. Future airplanes should be optimized to operate satisfactorily in a positive air traffic control environment, to afford safety and comfort for point-to-point transportation, and to take advantage of automated manufacturing techniques and high production rates. These requirements have broad implications for airplane design and flying qualities, leading to a concept for the Modern Equipment General Aviation (MEGA) airplane. Synergistic improvements in design, production, and operation can provide a much-needed fresh start for the general aviation industry and the traveling public. In this investigation a small four-place airplane is taken as the reference, although the proposed philosophy applies across the entire spectrum of general aviation.
Thrust Augmentation Study of Cross-Flow Fan for Vertical Take-Off and Landing Aircraft
2012-09-01
Computational fluid simulations of the dual cross-flow fan (CFF) configuration were performed using ANSYS CFX to find the thrust generated, as well as the optimal operating point, by varying the gap between the CFFs.
Pipeline Optimization Program (PLOP)
2006-08-01
PLOP was developed within the framework of the Dredging Operations Decision Support System (DODSS, https://dodss.wes.army.mil/wiki/0). PLOP compiles industry standards and pump performance data centered on the best efficiency point (BEP). In the interest of an acceptable wear rate on the pump, industrial standards dictate that the flow rate remain within a percentage of the flow rate corresponding to the BEP; these facts on pump performance, industrial standards, and pipeline operation are encoded as pump acceptability rules.
An Optimized Configuration for the Brazilian Decimetric Array
NASA Astrophysics Data System (ADS)
Sawant, Hanumant; Faria, Claudio; Stephany, Stephan
The Brazilian Decimetric Array (BDA) is a radio interferometer designed to operate in the frequency ranges of 1.2-1.7, 2.8 and 5.6 GHz and to obtain images of radio sources with high dynamic range. A 5-antenna configuration implemented in BDA phase I is already operational. Phase II will provide a 26-antenna configuration forming a compact T-array, whereas phase III will add a further 12 antennas. However, the BDA site has topographic constraints that preclude the placement of these antennas along the lines defined by the three arms of the T-array, so some antennas must be displaced in a direction slightly transverse to these lines. This work investigates possible optimized configurations for all 38 antennas, spread over distances of 2.5 x 1.25 km; in particular, the optimal positions of the last 12 antennas had to be determined. A new optimization strategy is proposed in order to obtain the optimal array configuration. It is based on the entropy of the distribution of the sampled points in the Fourier plane. A stochastic model, Ant Colony Optimization, uses the entropy of this distribution to iteratively refine the candidate solutions. The proposed strategy can be used to determine antenna locations for free-shape arrays in order to provide uniform u-v coverage with minimum redundancy of sampled points in the u-v plane, which is less susceptible to errors due to unmeasured Fourier components. A different distribution could also be chosen for the coverage. The strategy also accommodates the topographical constraints of the available site. Furthermore, it provides an optimal configuration even with the predetermined placement of the 26 antennas that compose the central T-array; in this case, the optimal locations of the last 12 antennas were determined.
Performance results for the Fourier plane coverage, synthesized beam and sidelobe levels are shown for this optimized BDA configuration and are compared to the results of the standard T-array configuration, which cannot be implemented due to site constraints.
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, R.S.
1989-06-01
For a vehicle operating across arbitrarily contoured terrain, finding the most fuel-efficient route between two points can be viewed as a high-level global path-planning problem with traversal costs and stability dependent on the direction of travel (anisotropic). The problem assumes a two-dimensional polygonal map of homogeneous-cost regions for terrain representation, constructed from elevation information. The anisotropic energy cost of vehicle motion has a non-braking component dependent on horizontal distance, a braking component dependent on vertical distance, and a constant path-independent component. The behavior of minimum-energy paths is then proved to be restricted to a small but optimal set of traversal types. An optimal-path-planning algorithm, using a heuristic search technique, reduces the infinite number of paths between the start and goal points to a finite number by generating sequences of goal-feasible window lists from analysis of the polygonal map and application of pruning criteria. The pruning criteria consist of visibility analysis, heading analysis, and region-boundary constraints. Each goal-feasible window list specifies an associated convex optimization problem, and the best of all locally optimal paths through the goal-feasible window lists is the globally optimal path. These ideas have been implemented in a computer program, with results showing considerably better performance than the predicted exponential average-case behavior.
Assessing Performance of Multipurpose Reservoir System Using Two-Point Linear Hedging Rule
NASA Astrophysics Data System (ADS)
Sasireka, K.; Neelakantan, T. R.
2017-07-01
Reservoir operation is one of the important fields of water resource management. Innovative techniques in water resource management focus on optimizing the available water and on decreasing the environmental impact of water utilization on the natural environment. In the operation of a multi-reservoir system, efficient regulation of releases to satisfy demands for various purposes such as domestic supply, irrigation and hydropower can increase the benefit from the reservoir as well as significantly reduce the damage due to floods. Hedging rules are an emerging technique in reservoir operation that reduce the severity of droughts by accepting a number of smaller shortages. The key objective of this paper is to maximize the minimum power production and improve the reliability of water supply for municipal and irrigation purposes by using a hedging rule. In this paper, a Type II two-point linear hedging rule is applied to improve the operation of the Bargi reservoir in the Narmada basin in India. The results obtained from simulation of the hedging rule are compared with results from the Standard Operating Policy; the comparison shows that the application of the hedging rule significantly improved the reliability of water supply, the reliability of irrigation release, and firm power production.
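The mechanics of a two-point linear hedging rule can be sketched generically: release the full demand when water availability is high, and ration the release linearly between two trigger points when availability drops. This is an illustrative sketch, not the authors' calibrated rule for the Bargi reservoir; the trigger volumes `swa1`, `swa2` and the rationing floor `min_frac` are hypothetical parameters.

```python
def hedged_release(water_available, demand, swa1, swa2, min_frac=0.6):
    """Two-point linear hedging rule (generic illustration).

    water_available: storage plus anticipated inflow, same units as demand.
    swa1, swa2: lower and upper hedging trigger points (swa1 < swa2).
    min_frac: minimum fraction of demand released under maximum rationing.
    """
    if water_available >= swa2:
        return demand                      # normal (standard operating policy)
    if water_available <= swa1:
        return min_frac * demand           # maximum rationing
    # linear interpolation between the two hedging points
    frac = min_frac + (1 - min_frac) * (water_available - swa1) / (swa2 - swa1)
    return frac * demand
```

Accepting these small, early curtailments is what spreads a deficit over time instead of concentrating it in one severe shortage.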
Sanchis-Cano, Angel; Romero, Julián; Sacoto-Cabrera, Erwin J; Guijarro, Luis
2017-11-25
We analyze the feasibility of providing Wireless Sensor Network-data-based services in an Internet of Things scenario from an economic point of view. The scenario has two competing service providers with their own private sensor networks, a network operator and final users. The scenario is analyzed as two games using game theory. In the first game, sensors decide whether to subscribe to the network operator to upload the collected sensing data, based on a utility function related to the mean service time and the price charged by the operator. In the second game, users decide whether to subscribe to the sensor-data-based service of the service providers, based on a Logit discrete choice model related to the quality of the data collected and the subscription price. The sink and user subscription stages are analyzed using population games and discrete choice models, while the network operator and service provider pricing stages are analyzed using optimization and Nash equilibrium concepts, respectively. The model is shown to be feasible from an economic point of view for all the actors, provided there are enough interested final users, and it opens the possibility of developing more efficient models with different types of services.
NASA Astrophysics Data System (ADS)
Heitzman, Nicholas
There are significant fuel consumption consequences for non-optimal flight operations. This study analyzes and highlights areas of interest that affect fuel consumption in typical flight operations. By gathering information from actual flight operators (pilots, dispatchers, performance engineers, and air traffic controllers), real performance issues can be addressed and analyzed. A series of interviews was performed with various individuals in the industry and its organizations. The wide range of insight directed this study to focus on FAA regulations, airline policy, the ATC system, weather, and flight planning. The goal is to highlight where operational performance differs from design intent, in order to better connect optimization with actual flight operations. After further investigation and consensus from the experienced participants, the FAA regulations do not need any serious attention until newer technologies and capabilities are implemented. The ATC system is severely out of date and is one of the largest limiting factors in current flight operations. Although participants are pessimistic about its timely implementation, the FAA's NextGen program for a future National Airspace System should help improve the efficiency of flight operations. This includes situational awareness, weather monitoring, communication, information management, optimized routing, and cleaner flight profiles such as Required Navigation Performance (RNP) and Continuous Descent Approach (CDA). Working from the interview results, trade studies were performed using an in-house flight profile simulation of a Boeing 737-300, integrating the NASA legacy codes EDET and NPSS with a custom-written mission performance and point-performance "Skymap" calculator. From these trade studies, it was found that certain flight conditions affect flight operations more than others. With weather, traffic, and unforeseeable risks, flight planning is still limited by its high level of precaution.
From this study, it is recommended that air carriers increase their focus on defining policies for load scheduling, CG management, reduction in zero fuel weight and performance measurement systems, and on adapting to the regulations to best optimize the spirit of the requirements. Air carriers should also push harder to implement the FAA's NextGen system and move the industry into the future.
Self-learning control system for plug-in hybrid vehicles
DeVault, Robert C [Knoxville, TN
2010-12-14
A system is provided to instruct a plug-in hybrid electric vehicle how to use electric propulsion from a rechargeable energy storage device optimally to reach an electric recharging station, while maintaining as high a state of charge (SOC) as desired along the route prior to arriving at the recharging station at a minimum SOC. The system can include the step of calculating a straight-line distance and/or actual distance between an orientation point and the determined present location to determine when to optimally initiate a charge-depleting phase. The system can limit extended driving on a deeply discharged rechargeable energy storage device and reduce the number of deep discharge cycles for the device, thereby improving its effective lifetime. This "Just-in-Time" strategy can be initiated automatically without operator input, to accommodate the unsophisticated operator and without needing a navigation system/GPS input.
Gonen, Eran; Grossman, Gershon
2015-09-01
Conventional reciprocating pistons, normally found in thermoacoustic engines, tend to introduce complex impedance characteristics, including acoustic, mechanical, and electrical portions. System behavior and performance usually rely on proper tuning processes and selection of an optimal point of operation, affected substantially by complementary hardware, typically adjusted for the specific application. The present study proposes an alternative perspective on the alternator behavior, by considering the relative motion between gas and piston during the engine mode of operation. Direct analytical derivation of the velocity distribution inside a tight seal gap and the associated impedance is employed to estimate the electro-acoustic conversion efficiency, thus indicating how to improve the system performance. The influence of acoustic phase, gap dimensions, and working conditions is examined, suggesting the need to develop tighter and longer seal gaps, having increased impedance, to allow optimization for use in upcoming sustainable power generation solutions and smart grids.
Optimal threshold estimation for binary classifiers using game theory.
Sanchez, Ignacio Enrique
2016-01-01
Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
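The proposed operating point, where the empirical ROC curve crosses the descending diagonal (TPR = 1 - FPR, i.e. sensitivity equals specificity), can be located with a brute-force scan over score thresholds. The sketch below is illustrative, not the authors' implementation:

```python
import numpy as np

def minimax_threshold(scores, labels):
    """Return the score threshold where sensitivity ~= specificity,
    i.e. where the empirical ROC curve meets the descending diagonal
    TPR = 1 - FPR."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_t, best_gap = None, np.inf
    for t in np.unique(scores):
        pred = scores >= t                                # classify at t
        tpr = (pred & labels).sum() / labels.sum()        # sensitivity
        tnr = (~pred & ~labels).sum() / (~labels).sum()   # specificity
        gap = abs(tpr - tnr)
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t
```

At this threshold the accuracy is the same whatever the true prevalence, which is exactly the minimax robustness property the abstract argues for.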
Electronic circuitry development in a micropyrotechnic system for micropropulsion applications
NASA Astrophysics Data System (ADS)
Puig-Vidal, Manuel; Lopez, Jaime; Miribel, Pere; Montane, Enric; Lopez-Villegas, Jose M.; Samitier, Josep; Rossi, Carole; Camps, Thierry; Dumonteuil, Maxime
2003-04-01
Electronic circuitry is proposed and implemented to optimize the ignition process and the robustness of a microthruster. The principle is based on the integration of propellant material within a micromachined system. The operational concept is simply the combustion of an energetic propellant stored in a micromachined chamber. Each thruster contains three parts (heater, chamber, nozzle). Due to their one-shot characteristic, microthrusters are fabricated in a 2D array configuration. For this kind of system, one critical point is the optimization of the ignition process as a function of the power schedule delivered by the electronic devices. Particular attention has been paid to the design and implementation of an electronic chip to control and optimize ignition. The ignition process is triggered by electrical power delivered to a polysilicon resistor in contact with the propellant. The resistor is also used to sense the temperature of the propellant in contact with it. The temperature of the microthruster node before ignition is monitored via the electronic circuitry. A pre-heating step before ignition appears to be a good way to optimize the ignition process; pre-heating temperature and pre-heating time are critical parameters to be adjusted. Simulation and experimental results will contribute substantially to improving the micropyrotechnic system. This paper discusses all these points.
Predictive optimized adaptive PSS in a single machine infinite bus.
Milla, Freddy; Duarte-Mermoud, Manuel A
2016-07-01
Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators, reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using a predictive optimization algorithm, which adapts to changes in the inputs of the system. This approach is part of small-signal stability analysis, which uses equations in incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach has better performance than the classical PSS. In addition, the control effort of the POA-PSS is much less than that of the other approaches considered for comparison. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Application of da Vinci(®) Robot in simple or radical hysterectomy: Tips and tricks.
Iavazzo, Christos; Gkegkes, Ioannis D
2016-01-01
The first robotic simple hysterectomy was performed more than 10 years ago. These days, robot-assisted hysterectomy is accepted as an alternative surgical approach and is applied to both benign and malignant surgical entities. The two important points that should be taken into account to optimize postoperative outcomes in the early period of a surgeon's training are how to achieve optimal oncological and functional results. Overcoming any technical challenge, as with any innovative surgical method, leads to improved operation times as well as improved patient safety. The standardization of the technique and the recognition of critical anatomical landmarks are essential for optimal oncological and clinical outcomes in both simple and radical robot-assisted hysterectomy. Based on our experience, our intention is to present user-friendly tips and tricks to optimize the application of the da Vinci® robot in simple or radical hysterectomies.
Fuereder, Markus; Majeed, Imthiyas N; Panke, Sven; Bechtold, Matthias
2014-06-13
Teicoplanin aglycone columns allow efficient separation of amino acid enantiomers in aqueous mobile phases and enable robust and predictable simulated moving bed (SMB) separation of racemic methionine despite a dependency of the adsorption behavior on the column history (memory effect). In this work we systematically investigated the influence of the mobile phase (methanol content) and temperature on SMB performance using a model-based optimization approach that accounts for methionine solubility, adsorption behavior and back pressure. Adsorption isotherms became more favorable with increasing methanol content but methionine solubility was decreased and back pressure increased. Numerical optimization suggested a moderate methanol content (25-35%) for most efficient operation. Higher temperature had a positive effect on specific productivity and desorbent requirement due to higher methionine solubility, lower back pressure and virtually invariant selectivity at high loadings of racemic methionine. However, process robustness (defined as a difference in flow rate ratios) decreased strongly with increasing temperature to the extent that any significant increase in temperature over 32°C will likely result in operating points that cannot be realized technically even with the lab-scale piston pump SMB system employed in this study. Copyright © 2014. Published by Elsevier B.V.
An inverter/controller subsystem optimized for photovoltaic applications
NASA Technical Reports Server (NTRS)
Pickrell, R. L.; Osullivan, G.; Merrill, W. C.
1978-01-01
Conversion of solar array dc power to ac power motivated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. Optimization of the inverter/controller design is discussed as part of an overall photovoltaic power system designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: a power system controller (PSC) to continuously control the solar array operating point at the maximum power level under varying solar insolation and cell temperatures; and an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy.
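Continuous tracking of the array's maximum power point, as required of the PSC, is commonly realized with a perturb-and-observe hill-climbing loop. The sketch below is a generic illustration of that technique, not the controller described in the report; the power-measurement callback and step size are assumptions.

```python
def perturb_and_observe(measure_power, v_ref, step=0.5, iters=100):
    """Hill-climb the operating voltage toward the maximum power point.

    measure_power(v): returns array output power at operating voltage v.
    v_ref: initial operating-voltage setpoint; step: perturbation size.
    """
    p_prev = measure_power(v_ref)
    direction = 1.0
    for _ in range(iters):
        v_ref += direction * step          # perturb the operating point
        p = measure_power(v_ref)
        if p < p_prev:                     # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v_ref
```

The steady-state behaviour is an oscillation of about one step around the maximum power point, so the step size trades tracking speed against steady-state ripple.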
Optimizations on supply and distribution of dissolved oxygen in constructed wetlands: A review.
Liu, Huaqing; Hu, Zhen; Zhang, Jian; Ngo, Huu Hao; Guo, Wenshan; Liang, Shuang; Fan, Jinlin; Lu, Shaoyong; Wu, Haiming
2016-08-01
Dissolved oxygen (DO) is one of the most important factors influencing pollutant removal in constructed wetlands (CWs). However, problems of insufficient oxygen supply and inappropriate oxygen distribution commonly exist in traditional CWs. Detailed analyses of DO supply and distribution characteristics in different types of CWs are presented. It can be concluded that atmospheric reaeration (AR) is a promising route for oxygen intensification. The paper summarizes possible optimizations of DO in CWs to improve their decontamination performance. Process optimizations (tidal flow, drop aeration, artificial aeration, hybrid systems) and parameter optimizations (plant, substrate and operating) are discussed in detail. Since economic and technical shortcomings are still cited in current studies, future prospects for oxygen research in CWs conclude this review. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro
2016-07-01
This communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions, and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and positions of the field samples are then determined by optimizing the singular-value behaviour of the relevant operator.
NASA Astrophysics Data System (ADS)
Giri Prasad, M. J.; Abhishek Raaj, A. S.; Rishi Kumar, R.; Gladson, Frank; M, Gautham
2016-09-01
The present study is concerned with resolving problems pertaining to conventional cutting fluids. Two samples of nano cutting fluid were prepared by dispersing 0.01 vol% of MWCNTs, and a mixture of 0.01 vol% of MWCNTs and 0.01 vol% of nano ZnO, in soluble oil. Thermophysical properties such as kinematic viscosity, density and flash point, along with the tribological properties of the prepared nano cutting fluid samples, were experimentally investigated and compared with those of plain soluble oil. In addition, a milling process was carried out varying the process parameters and applying the different samples of cutting fluid, and an attempt was made to determine the optimal cutting condition using the Taguchi optimization technique.
Cheung, Gary; Patrick, Colin; Sullivan, Glenda; Cooray, Manisha; Chang, Catherina L
2012-01-01
Anxiety and depression are prevalent in patients with chronic obstructive pulmonary disease (COPD). This study evaluates the sensitivity and specificity of two self-administered anxiety rating scales in older people with COPD. The Geriatric Anxiety Inventory (GAI) and the Hospital Anxiety and Depression Scale (HADS) are established useful screening tools, but they have not previously been validated in this population. Older people with COPD completed the GAI and the HADS along with a structured diagnostic psychiatric interview, the Mini International Neuropsychiatric Interview (MINI). The outcomes of both rating scales were compared against the diagnosis of anxiety disorders based on the MINI. Receiver operating characteristic (ROC) curves were used to identify the optimal diagnostic cut points for each scale. Fourteen (25.5%) of the 55 participants were diagnosed with an anxiety disorder. Mean GAI and HADS-anxiety subscale scores were significantly higher in subjects with an anxiety disorder than in those without the diagnosis (p = 0.002 and 0.005 respectively). Both scales demonstrated moderate diagnostic value (area under the ROC curve was 0.83 for GAI and 0.79 for HADS). Optimal cut points were ≥3 (GAI) and ≥4 (HADS-anxiety subscale). At these cut points, the GAI had a sensitivity of 85.7% and specificity of 78.0%, and the HADS had a sensitivity of 78.6% and specificity of 70.7%. Our results support the use of the GAI and HADS as screening instruments for anxiety disorders in older people with COPD. The optimal cut points in this population were lower than previously recommended for both rating scales. The results of this study should be replicated before these cut points can be recommended for general use in older people with COPD.
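The cut-point selection the abstract describes — sweeping thresholds on a ROC curve and keeping the one that maximizes joint sensitivity and specificity (Youden's J) — can be sketched in a few lines. The data below are synthetic, not the study's; only the selection rule is illustrated.

```python
def optimal_cut_point(scores, labels):
    """Return the cut point maximizing Youden's J = sensitivity + specificity - 1.

    scores: rating-scale totals; labels: 1 = disorder present, 0 = absent.
    A case is screen-positive when its score >= the cut point.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        sens = sum(s >= cut for s in pos) / len(pos)   # true-positive rate
        spec = sum(s < cut for s in neg) / len(neg)    # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Synthetic illustration: cases (label 1) tend to score higher than non-cases.
scores = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 2, 3, 4, 5, 6, 7]
labels = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
cut, j = optimal_cut_point(scores, labels)
```

In practice the area under the full ROC curve (reported as 0.83 and 0.79 above) would be computed from the same sensitivity/specificity sweep.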
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.
2017-02-01
This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines a system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary optimization technique, oriented towards dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.
Preliminary design of a mobile lunar power supply
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.; Kenny, Barbara H.; Fulmer, Christopher R.
1991-01-01
A preliminary design for a Stirling isotope power system for use as a mobile lunar power supply is presented. Performance and mass of the components required for the system are estimated. These estimates are based on power requirements and the operating environment. Optimization routines are used to determine minimum-mass operating points. Shielding requirements for the isotope system are given as a function of the allowed dose, distance from the source, and the time spent near the source. The technologies used in the power conversion and radiator systems are taken from ongoing research in the Civil Space Technology Initiative (CSTI) program.
MST Pellet Injector Upgrades to Probe Beta and Density Limits and Impurity Particle Transport
NASA Astrophysics Data System (ADS)
Caspary, K. J.; Chapman, B. E.; Anderson, J. K.; Kumar, S. T. A.; Limbach, S. T.; Oliva, S. P.; Sarff, J. S.; Waksman, J.; Combs, S. K.; Foust, C. R.
2012-10-01
Upgrades to the pellet injector on MST will allow for significantly increased fueling capability enabling density limit studies for previously unavailable density regimes. Thus far, Greenwald fractions of 1.2 and 1.5 have been achieved in 500 kA and 200 kA improved confinement plasmas, respectively. The size of the pellet guide tubes, which constrain the lateral motion of the pellet in flight, was increased to accommodate pellets of up to 4.0 mm in diameter, capable of fueling to Greenwald fractions > 2.0 for MST's peak current of 600 kA. Exploring the effect of increased density on NBI deposition shows that for MST's NBI, core deposition of 25 keV neutrals is optimized for densities of 2 -- 3 x 10^19 m-3. This is key for beta limit studies in pellet fueled discharges with improved confinement where maximum NBI heating is desired. In addition, a modification to the injector has allowed operation using alternative pellet fuels with triple points significantly higher than that of deuterium (18.7 K). A small flow of helium into the pellet formation vacuum chamber introduces a controllable heat source capable of elevating the operating temperature of the injector. Injection of methane pellets with a triple point of 90.7 K results in a 12-fold increase in the core carbon impurity density. The flow rate is easily adjusted to optimize injector operating temperature for other fuel gases as well. Work supported by US DoE.
Policy Tree Optimization for Adaptive Management of Water Resources Systems
NASA Astrophysics Data System (ADS)
Herman, J. D.; Giuliani, M.
2016-12-01
Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points", which are threshold values of indicator variables that signal a change in policy. However, there remains a need for a general method to optimize the choice of indicators and their threshold values in a way that is easily interpretable for decision makers. Here we propose a conceptual framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. We demonstrate the approach using Folsom Reservoir, California as a case study, in which operating policies must balance the risk of both floods and droughts. Given a set of feature variables, such as reservoir level, inflow observations and forecasts, and time of year, the resulting policy defines the conditions under which flood control and water supply hedging operations should be triggered. Importantly, the tree-based rule sets are easy to interpret for decision making, and can be compared to historical operating policies to understand the adaptations needed under possible climate change scenarios. Several remaining challenges are discussed, including the empirical convergence properties of the method, and extensions to irreversible decisions such as infrastructure. Policy tree optimization, and corresponding open-source software, provide a generalizable, interpretable approach to designing adaptive policies under uncertainty for water resources systems.
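A policy tree of the kind described — a hierarchical set of logical rules over indicator variables, with threshold "tipping points" triggering a change in operations — can be made concrete with a small sketch. The feature names and threshold values below are hypothetical stand-ins, not the optimized Folsom rules; in the paper, genetic programming searches over the space of such trees.

```python
# A minimal policy-tree sketch (hypothetical thresholds, not the Folsom result):
# each internal node tests a feature; each leaf names an operating action.
def folsom_policy(storage_frac, inflow_forecast, day_of_year):
    """Return an operating action from a hand-written tree of logical rules.

    storage_frac: reservoir level as a fraction of capacity (0..1)
    inflow_forecast: normalized forecast inflow (0..1)
    day_of_year: 1..365, used to detect the flood season
    """
    flood_season = day_of_year >= 300 or day_of_year <= 90
    if flood_season and inflow_forecast > 0.8:   # signpost: high forecast inflow
        return "flood_control_release"
    if storage_frac < 0.4:                       # tipping point: low storage
        return "hedge_supply"
    return "normal_operations"
```

Because the rules are explicit if-then tests on observable indicators, a decision maker can read the optimized tree directly and compare it with historical operating policy — the interpretability property the abstract emphasizes.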
NASA Astrophysics Data System (ADS)
Yoo, Sung-Moon; Song, Young-Joo; Park, Sang-Young; Choi, Kyu-Hong
2009-06-01
A formation flying strategy with an Earth-crossing object (ECO) is proposed to avoid an Earth collision. Assuming that a future conceptual spacecraft equipped with a powerful laser ablation tool has already rendezvoused with a fictitious Earth-colliding object, the optimal laser operating duration and direction histories required to miss the Earth are accurately derived. Based on these results, the concept of formation flying between the object and the spacecraft is applied and analyzed to establish a design strategy for the spacecraft's orbital motion. A fictitious "Apophis"-like object on an Earth-impact trajectory is established, and two major deflection scenarios, covering both short and long laser operating durations, are designed and analyzed. The requirements on the onboard laser tool for both cases are also discussed. As a result, the optimal initial conditions for the spacecraft to maintain its trajectory relative to the object are found. These optimal initial conditions also satisfy the optimal laser operating conditions without additional expenditure of the spacecraft's own fuel to achieve formation flying with the ECO. The initial conditions found in this research can be used as the spacecraft's initial rendezvous points with the ECO when designing future deflection missions with laser ablation tools. The results of the proposed strategy are expected to advance conceptual studies, especially for future deflection missions using powerful laser ablation tools.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed a framework and the mathematical formulation for optimizing a communication network using mixed integer programming. The design yields a search space much smaller than that of the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operations requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching a continuous space. We propose solving the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to obtain a good initial starting point, then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate this planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with larger numbers of constraints and larger networks can be easily adapted and solved.
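The two ideas above — a penalty that relaxes integrality into a continuous search, and a global-then-local two-stage solve — can be sketched in miniature. This is not the paper's formulation: the penalty form, the coarse grid standing in for Particle Swarm Optimization, and the coordinate descent standing in for Sequential Quadratic Programming are all simplifications chosen to stay self-contained.

```python
import math
from itertools import product

def penalized(objective, x, weight=10.0):
    """Objective plus a penalty that vanishes only at integer coordinates,
    turning the mixed-integer program into a continuous search."""
    return objective(x) + weight * sum(math.sin(math.pi * v) ** 2 for v in x)

def two_stage_minimize(objective, bounds, coarse=0.5):
    """Stage 1: coarse grid search supplies a good starting point (the paper
    uses Particle Swarm Optimization here).  Stage 2: local coordinate
    descent refines it (a simple stand-in for Sequential Quadratic
    Programming).  Returns the rounded, integer-feasible schedule."""
    grids = [[lo + coarse * i for i in range(int((hi - lo) / coarse) + 1)]
             for lo, hi in bounds]
    best = min((list(p) for p in product(*grids)),
               key=lambda x: penalized(objective, x))
    step = coarse / 2
    while step > 1e-6:
        improved = False
        for i in range(len(best)):
            for delta in (-step, step):
                cand = best[:]
                cand[i] += delta
                if penalized(objective, cand) < penalized(objective, best):
                    best, improved = cand, True
        if not improved:
            step /= 2
    return [round(v) for v in best]

# Toy schedule: choose integer time slots closest to ideal contact times 3.2, 6.8.
slots = two_stage_minimize(lambda x: (x[0] - 3.2) ** 2 + (x[1] - 6.8) ** 2,
                           [(0, 10), (0, 10)])
```

The sine-squared penalty is zero exactly at integers, so minimizing the penalized objective over the continuous box drives the solution toward the nearest well-performing integer schedule.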
Nussberger, G; Schädelin, S; Mayr, J; Studer, D; Zimmermann, P
2018-04-01
Traumatic elbow dislocation (TED) is the most common injury of large joints in children. There is an ongoing debate on the optimal treatment for TED. We aimed to assess the functional outcome after operative and nonoperative treatment of TED. We analysed the medical records of patients with TED treated at the University Children's Hospital, Basel, between March 2006 and June 2015. Functional outcome was assessed using the Mayo Elbow Performance Score (MEPS) and the Quick Disabilities of the Arm, Shoulder and Hand (QuickDASH) Sport and Music Module score. These scores were compared between nonoperatively and operatively treated patients. A total of 37 patients (mean age 10.2 years, 5.2 to 15.3) were included. Of these, 21 (56.8%) children had undergone nonoperative treatment and 16 (43.2%) operative treatment. After a mean follow-up of 5.6 years (1.2 to 5.9), MEPS and QuickDASH Sport and Music Module scores in the nonoperative and operative groups were similar: MEPS 97.1 points (SD 4.6) versus 97.2 points (SD 2.6), 95% confidence interval (CI) -2.56 to 2.03, p = 0.53; QuickDASH Sport and Music Module score 3.9 points (SD 6.1) versus 3.1 points (SD 4.6), 95% CI 2.60 to 4.17, p = 0.94. We noted no significant differences in long-term functional outcome between the subgroups of children treated operatively versus nonoperatively for TED with accompanying fractures of the medial epicondyle and medial condyle. Functional outcome after TED was excellent, independent of the treatment strategy. If clear indications for surgery are absent, a nonoperative approach for TED should be considered. Level III - therapeutic, retrospective, comparative study.
NASA Astrophysics Data System (ADS)
Latha, P. G.; Anand, S. R.; Imthias, Ahamed T. P.; Sreejith, P. S., Dr.
2013-06-01
This paper studies the commercial impact of a pumped storage hydro plant on the operation of a stressed power system. It further computes the optimum capacity of a pumped storage scheme that can be provided on a commercial basis for a practical power system. Unlike the analyses of commercial aspects of pumped storage attempted in several papers, this paper is presented from the point of view of power system management of a practical system, considering the impact of the scheme on the economic operation of the system. A realistic case study is presented, as the many factors that influence pumped storage operation vary widely from one system to another. The suitability of pumped storage for the particular generation mix of a system is explored in depth. To substantiate the economic impact of pumped storage on the system, the problem is formulated as a short-term hydrothermal scheduling problem involving power purchase, which optimizes the quantum of power to be scheduled and the duration of operation. The optimization model is formulated in the algebraic modeling language AMPL and solved using the advanced MILP solver CPLEX.
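The economic core of pumped storage scheduling — buy energy to pump when prices are low, generate when they are high, and profit only if the price spread beats the round-trip efficiency loss — can be shown with a toy exhaustive search. The prices and efficiency below are hypothetical; the paper's AMPL/CPLEX model handles a full MILP with network and unit constraints rather than this single pump/generate pair.

```python
def schedule_pumped_storage(prices, eff=0.75, energy=1.0):
    """Pick the cheapest hour to pump and a dearer later hour to generate.

    prices: hourly market prices; eff: round-trip efficiency; energy: MWh moved.
    Exhaustive search over (pump hour, generate hour) pairs; worthwhile only
    when price_gen * eff > price_pump.
    """
    best = (0.0, None, None)   # (profit, pump hour, generate hour)
    for p in range(len(prices)):
        for g in range(p + 1, len(prices)):
            profit = energy * (prices[g] * eff - prices[p])
            if profit > best[0]:
                best = (profit, p, g)
    return best

prices = [20, 15, 18, 40, 55, 30]   # hypothetical hourly prices
profit, pump_hr, gen_hr = schedule_pumped_storage(prices)
```

With the numbers above the scheme pumps in the cheapest early hour and generates in the price peak; if eff were low enough that no pair satisfied price_gen * eff > price_pump, the best profit would remain zero and the plant would sit idle.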
Enzyme reactor design under thermal inactivation.
Illanes, Andrés; Wilson, Lorena
2003-01-01
Temperature is a very relevant variable for any bioprocess. Temperature optimization of bioreactor operation is a key aspect for process economics. This is especially true for enzyme-catalyzed processes, because enzymes are complex, unstable catalysts whose technological potential relies on their operational stability. Enzyme reactor design is presented with a special emphasis on the effect of thermal inactivation. Enzyme thermal inactivation is a very complex process from a mechanistic point of view. However, for the purpose of enzyme reactor design, it has been oversimplified frequently, considering one-stage first-order kinetics of inactivation and data gathered under nonreactive conditions that poorly represent the actual conditions within the reactor. More complex mechanisms are frequent, especially in the case of immobilized enzymes, and most important is the effect of catalytic modulators (substrates and products) on enzyme stability under operation conditions. This review focuses primarily on reactor design and operation under modulated thermal inactivation. It also presents a scheme for bioreactor temperature optimization, based on validated temperature-explicit functions for all the kinetic and inactivation parameters involved. More conventional enzyme reactor design is presented merely as a background for the purpose of highlighting the need for a deeper insight into enzyme inactivation for proper bioreactor design.
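The temperature trade-off the review describes — higher temperature accelerates catalysis but also accelerates enzyme inactivation — can be illustrated with the oversimplified one-stage first-order model the text mentions. All parameter values below are illustrative assumptions, not data from the review; the inactivation activation energy is simply chosen much larger than the catalytic one, as is typical for enzymes.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(k_ref, Ea, T, T_ref=310.0):
    """Rate constant at temperature T (K) from a reference value at T_ref."""
    return k_ref * math.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def batch_output(T, t_final=24.0):
    """Relative product formed in a batch of t_final hours when activity decays
    by one-stage first-order inactivation a(t) = exp(-k_d t):
    integral of k_cat * a(t) dt = k_cat * (1 - exp(-k_d t_final)) / k_d."""
    k_cat = arrhenius(1.0, 50e3, T)   # catalysis: moderate Ea (assumed)
    k_d = arrhenius(0.05, 150e3, T)   # inactivation: much higher Ea (assumed)
    return k_cat * (1.0 - math.exp(-k_d * t_final)) / k_d

# Sweep temperature; the optimum balances faster catalysis against faster decay.
temps = [293 + i for i in range(31)]   # 293-323 K
T_opt = max(temps, key=batch_output)
```

Because the inactivation rate has the larger activation energy, output first rises with temperature and then collapses, giving an interior optimum — the basic shape behind the temperature-explicit optimization scheme the review presents. Modulation by substrates and products would make k_d depend on composition as well, which is the review's main point.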
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
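The engine-model update step described above — a Kalman filter estimating efficiency factors that pull the steady-state variable model (SSVM) toward the actual engine — can be sketched as a scalar filter. The quantities and numbers below are hypothetical illustrations, not PSC flight values; the real filter estimates several factors jointly from multiple sensed outputs.

```python
def update_efficiency(eta, P, measured, predicted, sensitivity, R=0.04, Q=1e-4):
    """One scalar Kalman update of an engine efficiency factor.

    eta: current efficiency-factor estimate;  P: its variance
    measured: sensed engine output (e.g. fan speed)
    predicted: SSVM output evaluated at eta
    sensitivity: d(output)/d(eta) from the linear model
    R: measurement noise variance;  Q: random-walk drift variance (assumed)
    """
    P = P + Q                                   # predict: allow slow drift
    innovation = measured - predicted           # model-vs-engine mismatch
    S = sensitivity * P * sensitivity + R       # innovation variance
    K = P * sensitivity / S                     # Kalman gain
    eta = eta + K * innovation                  # correct the efficiency factor
    P = (1.0 - K * sensitivity) * P
    return eta, P

# Hypothetical trim loop: the linear model predicts output = sensitivity * eta.
eta, P, sens, true_eta = 1.0, 0.1, 2.0, 0.93
for _ in range(50):
    measurement = sens * true_eta               # noise-free for this sketch
    eta, P = update_efficiency(eta, P, measurement, sens * eta, sens)
```

After a few updates the estimate settles near the "true" efficiency, which is exactly the adjustment that keeps the compact onboard model matched to the engine before each linear-programming optimization pass.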
Split-personality transmission: shifts like an automatic, saves fuel like a manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, D.
1981-11-01
The design, operation, and performance of a British-invented automatic transmission which claims to achieve fuel economy values equal to those attained with manual shifts are described. Developed for both 4-speed and 6-speed transmissions, this transmission uses standard parts made for existing manual transmissions, rearranges the gear pairings, and relies on a microcomputer to pick the optimal shift points according to load requirements. (LCL)
Archer, Charles J [Rochester, MN; Hardwick, Camesha R [Fayetteville, NC; McCarthy, Patrick J [Rochester, MN; Wallenfelt, Brian P [Eden Prairie, MN
2009-06-23
Methods, parallel computers, and products are provided for identifying messaging completion on a parallel computer. The parallel computer includes a plurality of compute nodes coupled for data communications by at least two independent data communications networks: a binary tree data communications network, optimal for collective operations, that organizes the nodes as a tree, and a torus data communications network, optimal for point-to-point operations, that organizes the nodes as a torus. Embodiments include reading all counters at each node of the torus data communications network; calculating at each node a current node value in dependence upon the values read from the counters at that node; and determining for all nodes whether the current node value for each node is the same as a previously calculated node value for that node. If the current node value is the same as the previously calculated node value for all nodes of the torus data communications network, embodiments determine that messaging is complete; if the current node value differs from the previously calculated node value for any node, embodiments determine that messaging is currently incomplete.
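The quiescence test the patent describes — snapshot every node's counters, reduce each to a node value, and declare messaging complete only when no value changed between successive reads — can be sketched as follows. The counter layout (a sent/received pair per node) and the sum used as the node value are illustrative choices; the patent leaves the combination abstract.

```python
def node_value(counters):
    """Combine a node's counters into a single value (a sum is one simple
    choice; any function that changes whenever traffic moves would do)."""
    return sum(counters)

def messaging_complete(current_counters, previous_values):
    """Messaging is complete only if every node's combined counter value is
    unchanged since the previous read, i.e. no message moved in between.
    Returns (done, current snapshot) so the snapshot can seed the next check."""
    current_values = {node: node_value(c) for node, c in current_counters.items()}
    done = all(current_values[node] == previous_values.get(node)
               for node in current_values)
    return done, current_values

# Two successive snapshots of a 4-node torus: (sent, received) counters.
snap1 = {0: (5, 3), 1: (2, 4), 2: (7, 7), 3: (1, 1)}
snap2 = {0: (5, 3), 1: (2, 5), 2: (7, 7), 3: (1, 1)}   # node 1 still receiving
done, vals = messaging_complete(snap1, {})              # first read: unknown
done2, _ = messaging_complete(snap2, vals)              # node 1 changed: not done
```

Only when a read returns values identical to the previous read for every node does the check report completion; a single still-moving message (node 1 above) keeps the whole network "incomplete".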
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
NASA Technical Reports Server (NTRS)
Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.
1978-01-01
The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. A high speed of computation and extended grid domain were characteristics of FLO-22-V1. The new program was not the optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it proved that vector operations can readily be implemented to increase the computation rate of the algorithm.
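The "control vector" technique mentioned above — using element-wise relational operations to select upwind differencing at supersonic points instead of branching per grid point — is the essence of vectorizing such a relaxation scheme. A minimal sketch, with made-up data and a Boolean mask standing in for the STAR-100 control vector:

```python
# Masked ("control vector") differencing: a Boolean mask selects one-sided
# upwind differences at supersonic points and central differences elsewhere,
# so an entire grid line updates with uniform element-wise operations.
def masked_derivative(phi, supersonic, h=1.0):
    """First derivative of potential phi along a grid line: central where
    subsonic, one-sided upwind where the supersonic mask is set.
    Endpoints fall back to one-sided differences."""
    n = len(phi)
    central = [(phi[min(i + 1, n - 1)] - phi[max(i - 1, 0)]) /
               ((min(i + 1, n - 1) - max(i - 1, 0)) * h) for i in range(n)]
    upwind = [(phi[i] - phi[max(i - 1, 0)]) / h if i > 0 else 0.0
              for i in range(n)]
    return [u if m else c for c, u, m in zip(central, upwind, supersonic)]

phi = [0.0, 1.0, 4.0, 9.0, 16.0]           # phi = x^2 on a unit grid
mask = [False, False, True, True, False]   # pretend points 2-3 are supersonic
d = masked_derivative(phi, mask)
```

On vector hardware the two candidate difference arrays and the masked merge each map to single vector instructions, which is why the scheme gains speed without changing the underlying relaxation algorithm.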
Cardinal, Thiane Ristow; Vigo, Alvaro; Duncan, Bruce Bartholow; Matos, Sheila Maria Alvim; da Fonseca, Maria de Jesus Mendes; Barreto, Sandhi Maria; Schmidt, Maria Inês
2018-01-01
Waist circumference (WC) has been incorporated into the definition of the metabolic syndrome (MetS), but the exact WC cut-off points across populations are not clear. The Joint Interim Statement (JIS) suggested possible cut-offs for different populations and ethnic groups. However, the adequacy of these cut-offs for Brazilian adults has scarcely been investigated. The objective of the study is to evaluate possible WC thresholds to be used in the definition of MetS using data from the Longitudinal Study of Adult Health (ELSA-Brasil), a multicenter cohort study of civil servants (35-74 years old) of six Brazilian cities. We analyzed baseline data from 14,893 participants (6772 men and 8121 women). MetS was defined according to the JIS criteria, but excluding WC and thus requiring 2 of the 4 remaining elements. We used restricted cubic spline regression to graph the relationship between WC and MetS. We identified optimal cut-off points which maximized joint sensitivity and specificity (Youden's index) from receiver operating characteristic curves. We also estimated the C-statistics using logistic regression. We found no apparent threshold for WC in restricted cubic spline plots. The optimal cut-off for men was 92 cm, 2 cm lower than that recommended by JIS for Caucasian/Europid or Sub-Saharan African men but 2 cm higher than that recommended for ethnic Central and South American men. For women, the optimal cut-off was 86 cm, 6 cm higher than that recommended for Caucasian/Europid and ethnic Central and South American women. Optimal cut-offs did not vary across age groups or the most common race/color categories (except for Asian men, 87 cm). The sex-specific cut-offs for WC recommended by JIS differ from the optimal cut-offs we found for adult men and women of Brazil's most common ethnic groups.
Gharipour, Mojgan; Sadeghi, Masoumeh; Dianatkhah, Minoo; Nezafati, Pouya; Talaie, Mohammad; Oveisgharan, Shahram; Golshahi, Jafar
2016-01-01
High triglyceride (TG) and low high-density lipoprotein cholesterol (HDL-C) are important cardiovascular risk factors. The exact prognostic value of the TG/HDL-C ratio, a marker for cardiovascular events, is currently unknown among Iranians, so this study sought to determine the optimal cutoff point for the TG/HDL-C ratio in predicting cardiovascular disease events in the Iranian population. The Isfahan Cohort Study (ICS) is an ongoing, longitudinal, population-based study originally conducted on adults aged ≥ 35 years living in urban and rural areas of three districts in central Iran. After 10 years of follow-up, 5431 participants were re-evaluated using a standard protocol similar to the one used at baseline. At both measurements, participants underwent medical interviews, physical examinations, and fasting blood measurements. "High-risk" subjects were defined by the discrimination power of indices, assessed using receiver operating characteristic (ROC) analysis; the optimal cutoff point value for each index was then derived. The mean age of the participants was 50.7 ± 11.6 years. The TG/HDL-C ratio, at a threshold of 3.68, was used to screen for cardiovascular events among the study population. Subjects were divided into two groups ("low" and "high" risk) according to the TG/HDL-C concentration ratio at baseline. A slightly higher number of high-risk individuals were identified using the European cutoff points (63.7%) than with the ICS cutoff points (49.5%). The unadjusted hazard ratio (HR) was greatest in high-risk individuals identified by the ICS cutoff points (HR = 1.54, 95% CI [1.33-1.79]) versus the European cutoff points (HR = 1.38, 95% CI [1.17-1.63]). There were no remarkable changes after adjusting for differences in sex and age (HR = 1.58, 95% CI [1.36-1.84] vs HR = 1.44, 95% CI [1.22-1.71]) for the ICS and European cutoff points, respectively.
The threshold of TG/HDL ≥ 3.68 is the optimal cutoff point for predicting cardiovascular events in Iranian individuals. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. All rights reserved.
Reservoir system expansion scheduling under conflicting interests - A Blue Nile application
NASA Astrophysics Data System (ADS)
Geressu, Robel; Harou, Julien
2017-04-01
New water resource developments are facing increasing resistance due to their real and perceived potential to affect existing systems' performance negatively. Hence, scheduling new dams in multi-reservoir systems requires considering conflicting performance objectives to minimize impacts, create consensus among wider stakeholder groups and avoid conflict. However, because of the large number of alternative expansion schedules, planning approaches often rely on simplifying assumptions such as the appropriate gap between expansion stages or less flexibility in reservoir release rules than what is possible. In this study, we investigate the extent to which these assumptions could limit our ability to find better performing alternatives. We apply a many-objective sequencing approach to the proposed Blue Nile hydropower reservoir system in Ethiopia to find best investment schedules and operating rules that maximize long-term discounted net benefits, downstream releases and energy generation during reservoir filling periods. The system is optimized using 30 realizations of stochastically generated streamflow data, statistically resembling the historical flow. Results take the form of Pareto-optimal trade-offs where each point on the curve or surface represents a combination of new reservoirs, their implementation dates and operating rules. Results show a significant relationship between detail in operating rule design (i.e., changing operating rules as the multi-reservoir expansion progresses) and the system performance. For the Blue Nile, failure to optimize operating rules in sufficient detail could result in underestimation of the net worth of the proposed investments by up to 6 billion USD if a development option with low downstream impact (slow filling of the reservoirs) is to be implemented.
In vivo RF powering for advanced biological research.
Zimmerman, Mark D; Chaimanonart, Nattapon; Young, Darrin J
2006-01-01
An optimized remote powering architecture with a miniature, implantable RF power converter for an untethered small laboratory animal inside a cage is proposed. The proposed implantable device exhibits dimensions of less than 6 mm × 6 mm × 1 mm and a mass of 100 mg including a medical-grade silicone coating. The external system consists of a Class-E power amplifier driving a tuned 15 cm × 25 cm external coil placed underneath the cage. The implant device is located in the animal's abdomen in a plane parallel to the external coil and utilizes inductive coupling to receive power from the external system. A half-wave rectifier rectifies the received AC voltage and passes the resulting DC current to a 2.5 kΩ resistor, which represents the loading of an implantable microsystem. An optimal operating point with respect to operating frequency and the number of turns in each coil inductor was determined by analyzing the system efficiency. The determined optimal operating condition is based on a 4-turn external coil and a 20-turn internal coil operating at 4 MHz. With the Class-E amplifier consuming a constant power of 25 W, this operating condition is sufficient to supply the desired 3.2 V at 1.3 mA to the load over a cage size of 10 cm × 20 cm with an animal tilting angle of up to 60 degrees, the worst case considered for the prototype design. A voltage regulator can be designed to regulate the received DC power into a stable supply for the bio-implant microsystem.
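The kind of design sweep described — evaluating link efficiency over candidate operating frequencies and coil turn counts and picking the best feasible combination — can be illustrated with a toy model. Every parameter below (loss coefficients, coupling, the self-resonance limit) is an invented placeholder, so the sweep does not reproduce the paper's 4 MHz / 20-turn result; only the optimization pattern is shown. The efficiency expression in terms of the figure of merit k²Q₁Q₂ is the standard one for an inductive link with optimally matched load.

```python
import math

def coil_q(n_turns, f, a_l=1e-7, r_dc=0.05, r_ac=1e-11):
    """Quality factor of an n-turn coil under a toy loss model:
    L = a_l * n^2,  series R = r_dc * n + r_ac * n * f**1.5
    (the f**1.5 term stands in for skin/proximity losses that eventually
    make higher frequency counterproductive).  All coefficients assumed."""
    L = a_l * n_turns ** 2
    R = r_dc * n_turns + r_ac * n_turns * f ** 1.5
    return 2 * math.pi * f * L / R

def link_efficiency(k, q1, q2):
    """Maximum efficiency of an inductive link with coupling k and coil
    quality factors q1, q2 (figure-of-merit form, optimal load assumed)."""
    fom = k ** 2 * q1 * q2
    return fom / (1 + math.sqrt(1 + fom)) ** 2

# Sweep operating frequency and implant-coil turns; external coil fixed at
# 4 turns.  The f*n cap is a crude stand-in for the implant coil's
# self-resonance limit, which bounds the real optimum.
best = max(
    ((f, n, link_efficiency(0.01, coil_q(4, f), coil_q(n, f)))
     for f in (1e6, 2e6, 4e6, 8e6, 16e6)
     for n in (5, 10, 20, 40)
     if f * n <= 8e7),
    key=lambda t: t[2],
)
```

The sweep exhibits the qualitative behaviour the abstract relies on: coil Q first rises with frequency and then falls as AC losses dominate, so the efficiency surface has an interior optimum in the (frequency, turns) plane rather than favouring the extremes.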
Intel Xeon Phi accelerated Weather Research and Forecasting (WRF) Goddard microphysics scheme
NASA Astrophysics Data System (ADS)
Mielikainen, J.; Huang, B.; Huang, A. H.-L.
2014-12-01
The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF development is done in collaboration around the globe, and the model is used by academic atmospheric scientists, weather forecasters at operational centers, and others. The WRF contains several physics components, of which the most time consuming is the microphysics. One such scheme is the Goddard cloud microphysics scheme, a sophisticated scheme that incorporates a large number of improvements over earlier microphysics schemes. The Goddard scheme is well suited to massively parallel computation, as there are no interactions among horizontal grid points; we have therefore optimized its code. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. Unlike a GPU, the MIC is capable of executing a full operating system and entire programs rather than just kernels, and it supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. Obtaining maximum performance from the MIC nevertheless requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the Goddard microphysics scheme on a Xeon Phi 7120P by a factor of 4.7× and reduced the scheme's share of the total WRF processing time from 20.0% to 7.5%. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2670 by a factor of 2.8× compared to the original code.
Control allocation for gimballed/fixed thrusters
NASA Astrophysics Data System (ADS)
Servidia, Pablo A.
2010-02-01
Some overactuated control systems use a control distribution law, usually called a control allocator, between the controller and the set of actuators. Beyond the control allocator, the configuration of actuators may be designed to remain operable after a single point of failure, or for system optimization and/or decentralization objectives. For some types of actuators a control allocation is used even without redundancy, a good example being the design and operation of thruster configurations. In fact, since the thruster mass flow direction and magnitude can only be changed within certain limits, this must be considered in the feedback implementation. In this work, thruster configuration design is considered for the fixed (F), single-gimbal (SG) and double-gimbal (DG) thruster cases. The minimum number of thrusters for each case is obtained, and for the resulting configurations a specific control allocation is proposed using a nonlinear programming algorithm, under nominal and single-point-of-failure conditions.
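The core allocation problem — distributing a commanded torque over redundant thrusters whose outputs are bounded (a thruster cannot push backwards) — can be sketched as a small constrained least-squares solve. The geometry matrix and limits below are hypothetical, and projected gradient descent is used as a simple stand-in for the paper's nonlinear programming algorithm.

```python
def allocate(B, tau, u_max=1.0, iters=2000, lr=0.05):
    """Distribute a commanded torque/force tau over redundant fixed thrusters:
    minimize ||B u - tau||^2 subject to 0 <= u_i <= u_max, via projected
    gradient descent (a simple stand-in for a nonlinear programming solver).

    B: m x n effectiveness matrix (rows = axes, columns = thrusters)
    tau: commanded torque/force per axis.
    """
    m, n = len(B), len(B[0])
    u = [0.0] * n
    for _ in range(iters):
        # residual of the current allocation
        r = [sum(B[i][j] * u[j] for j in range(n)) - tau[i] for i in range(m)]
        for j in range(n):
            grad = 2 * sum(B[i][j] * r[i] for i in range(m))
            # gradient step, then projection onto the thrust limits
            u[j] = min(u_max, max(0.0, u[j] - lr * grad))
    return u

# Three unit thrusters acting on a 2-DOF body (hypothetical geometry):
B = [[1.0, -1.0, 0.0],   # roll contribution of each thruster
     [0.0,  0.5, 1.0]]   # pitch contribution of each thruster
u = allocate(B, [0.4, 0.7])
```

Because there are more thrusters than controlled axes, many allocations realize the same command; the bounds encode the physical one-sidedness of thrust, and a failure case can be simulated by zeroing the failed thruster's column of B and re-solving.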
Strategies for enhanced deammonification performance and reduced nitrous oxide emissions.
Leix, Carmen; Drewes, Jörg E; Ye, Liu; Koch, Konrad
2017-07-01
Deammonification performance and the associated nitrous oxide (N2O) emissions depend on operational conditions. While previous studies have investigated factors for high performance and for low emissions separately, this study investigated optimizing deammonification performance while simultaneously reducing N2O emissions. Using a design of experiments (DoE) method, two models were developed to predict the nitrogen removal rate and the N2O emissions during single-stage deammonification, considering three operational factors (pH value, feeding strategy, and aeration strategy). The emission factor varied between 0.7±0.5% and 4.1±1.2% under different DoE conditions. The nitrogen removal rate was predicted to be maximized at settings of pH 7.46 with intermittent feeding and aeration. Conversely, emissions were predicted to be minimized at the design edges, at pH 7.80 with single feeding and continuous aeration. The results suggested a weak positive correlation between the nitrogen removal rate and N2O emissions; thus, a single optimal operational set-point with both maximized performance and minimized emissions did not exist. Copyright © 2017 Elsevier Ltd. All rights reserved.
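The DoE approach above fits a response-surface model that predicts each response from the operational factors. A minimal sketch of that model-fitting step, using a standard two-factor quadratic response surface; the coded factor levels, coefficients, and synthetic responses below are illustrative assumptions, not the study's data:

```python
import numpy as np

def fit_response_surface(X, y):
    """Fit a two-factor quadratic response-surface model
        y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1**2 + b5*x2**2,
    the standard DoE form for predicting a response (e.g. nitrogen
    removal rate) from coded operational factors (e.g. pH, aeration)."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Hypothetical 3x3 full-factorial design in coded levels {-1, 0, +1},
# with noise-free responses generated from a known quadratic so the
# fit can be checked against the true coefficients.
levels = np.array([-1.0, 0.0, 1.0])
X = np.array([(a, b) for a in levels for b in levels])
true_beta = np.array([5.0, 1.0, -0.5, 0.0, -1.0, -1.0])
y = np.column_stack([np.ones(9), X[:, 0], X[:, 1], X[:, 0] * X[:, 1],
                     X[:, 0]**2, X[:, 1]**2]) @ true_beta
beta = fit_response_surface(X, y)
```

With real (noisy) measurements the same call returns the least-squares coefficients, and the fitted surface can then be searched for the predicted optimum settings.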
Optimization of fixed-range trajectories for supersonic transport aircraft
NASA Astrophysics Data System (ADS)
Windhorst, Robert Dennis
1999-11-01
This thesis develops near-optimal guidance laws that generate minimum-fuel, minimum-time, or minimum-direct-operating-cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to decouple the equations of motion, by time scale, into three sets of dynamics, two of which are analyzed in the main body of this thesis and one in the Appendix. The two-point boundary-value problems obtained by applying the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is applied to two different time-scale formulations: the first holds weight constant, while the second allows weight and range dynamics to propagate on the same time scale. Solutions for the first formulation are carried out only to zero order in the small parameter, while solutions for the second are carried out to first order. Calculations for an HSCT design were made to illustrate the method. Results show that the minimum-fuel trajectory consists of three segments: a minimum-fuel energy-climb, a cruise-climb, and a minimum-drag glide. The minimum-time trajectory also has three segments: a maximum-dynamic-pressure ascent, a constant-altitude cruise, and a maximum-dynamic-pressure glide. The minimum-direct-operating-cost trajectory is an optimal combination of the two; for realistic costs of fuel and flight time, it is very similar to the minimum-fuel trajectory. Moreover, if range is sufficiently long, the HSCT has three locally optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed; the final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, mixed subsonic and supersonic flight, and varying fuel-efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.
NASA Astrophysics Data System (ADS)
Rowell, S.; Popov, A. A.; Meijaard, J. P.
2010-04-01
The response of a motorcycle is heavily dependent on the rider's control actions, and consequently a means of replicating the rider's behaviour provides an important extension to motorcycle dynamics. The primary objective here is to develop effective path-following simulations and to understand how riders control motorcycles. Optimal control theory is applied to the tracking of a roadway by a motorcycle, using a non-linear motorcycle model operating in free control through a steering torque input. A path-following controller with road preview is designed by minimising tracking errors and control effort. Tight controls, with high weightings on performance, and loose controls, with high weightings on control power, are defined. Special attention is paid to the modelling of multipoint preview in local and global coordinate systems. The controller model is simulated over a standard single lane-change manoeuvre. It is argued that the local-coordinates point of view is more representative of the way a human rider operates and interprets information. The simulations suggest that, for accurate path following with short preview horizons under optimal control, the problem must be solved by the local-coordinates approach. Furthermore, some weaknesses of the optimal control approach are highlighted.
Optimization control of LNG regasification plant using Model Predictive Control
NASA Astrophysics Data System (ADS)
Wahid, A.; Adicandra, F. F.
2018-03-01
Optimization of a liquefied natural gas (LNG) regasification plant is important to minimize costs, especially operational costs. It is therefore important to choose the optimum LNG regasification plant design and to maintain the optimum operating conditions through the implementation of model predictive control (MPC). Optimal MPC tuning parameters, namely P (prediction horizon), M (control horizon), and T (sampling time), are obtained by a fine-tuning method. The optimality criterion for the design is the minimum amount of energy used, and for the control it is the integral of squared error (ISE). As a result, the optimum design is scheme 2, developed by Devold, with energy savings of 40%. To maintain the optimum conditions, MPC is required with P, M, and T as follows: tank storage pressure: 90, 2, 1; product pressure: 95, 2, 1; vaporizer temperature: 65, 2, 2; and heater temperature: 35, 6, 5, with ISE values in set-point tracking of 0.99, 1792.78, 34.89, and 7.54 respectively, i.e., control-performance improvements of 4.6%, 63.5%, 3.1%, and 58.2% over the PI controller. The energy saving that the MPC controllers can achieve under a disturbance of a 1 °C rise in sea-water temperature is 0.02 MW.
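The ISE criterion used above to score set-point tracking can be computed directly from a response trace. A minimal sketch, using a hypothetical first-order closed-loop response rather than the regasification plant model from the paper:

```python
import numpy as np

def integral_square_error(t, y, setpoint):
    """Integral of squared error (ISE), the control-performance
    criterion used to compare MPC and PI tuning: trapezoidal
    integration of (setpoint - y)^2 over the recorded trace."""
    e2 = (setpoint - np.asarray(y, dtype=float)) ** 2
    dt = np.diff(np.asarray(t, dtype=float))
    return float(np.sum(0.5 * (e2[1:] + e2[:-1]) * dt))

# Hypothetical first-order response approaching a set point of 1.0;
# the analytic ISE over [0, 10] is (1 - exp(-20))/2, about 0.5.
t = np.linspace(0.0, 10.0, 501)
y = 1.0 - np.exp(-t)
ise = integral_square_error(t, y, 1.0)
```

A lower ISE means the controlled variable spends less time far from the set point, which is how the paper ranks the MPC tunings against the PI baseline.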
Robust Airfoil Optimization in High Resolution Design Space
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon L.
2003-01-01
Robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation of off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet yields a fairly smooth airfoil shape, and (3) it allows the user to trade off the level of optimization against the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that our strategy produces reasonable airfoil shapes that are similar to the original airfoils but provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that it produces airfoils better than or equal to any designs produced by traditional design methods.
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.
1984-01-01
This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.
NASA Astrophysics Data System (ADS)
Theunynck, Denis; Peze, Thierry; Toumazou, Vincent; Zunquin, Gauthier; Cohen, Olivier; Monges, Arnaud
2005-03-01
It is interesting to consider whether the model of routing designed for races and great Navy operations could be transferred to commercial navigation and, if so, within which framework. Sail-boat routing earned its letters of nobility during great races like the « Route du Rhum » or the transatlantic « Jacques Vabre » race. It is the ultimate stage of the approach begun by the Navy at the time of great operations, such as D-Day (Overlord), June 6, 1944, in Normandy. Routing has, from the beginning, been based mainly on statistical knowledge and weather forecasts, but with the recent availability of reliable current forecasts, sail-boat routers and/or skippers now have to learn how to use both winds and currents to obtain the best performance, that is, to travel between two points in the shortest possible time under acceptable safety conditions. Are current forecasts useful only to racing sail boats? Of course not: they are a great help to fishermen, for whom knowledge of the currents is also knowledge of the sea temperature, which indicates the probability of fish presence. They are also used in offshore work to predict the roughness of the sea during operations. A less developed field of application is the route optimization of trading ships. The idea is to optimize the use of currents to increase the relative speed of ships with no increase in fuel expense. This new field will require that current forecasters learn about the specific needs of another type of client. There is also a need for teaching, because the future customers will have to learn how to use the information they will get. At this point, the introduction of current forecasts into racing sail-boat routing is only the first step. It is of great interest because it can rely on deep knowledge of routing. The main difference, of course, is that the wind direction and force are of greater importance to a sail boat than they are to a trading ship, for which the points of interest are fuel consumption and meeting the ETA. Despite that, sail-boat routing could be used as a prototype to determine the needs, both in terms of information and of training, of ship routers and skippers.
NASA Astrophysics Data System (ADS)
Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj
2018-02-01
N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms are proposed for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for consensus to occur. Two types of multi-agent systems are considered: (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm using the convexity of attainable sets and Helly's theorem is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
Intelligent control of mixed-culture bioprocesses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoner, D.L.; Larsen, E.D.; Miller, K.S.
A hierarchical control system is being developed and applied to a mixed-culture bioprocess in a continuous stirred-tank reactor. A bioreactor, with its inherent complexity and non-linear behavior, is an interesting yet difficult application for control theory. The bottom level of the hierarchy was implemented as a number of integrated set-point controls and data-acquisition modules. The second level contained a diagnostic system that used expert knowledge to determine the operational status of the sensors, actuators, and control modules. A diagnostic program was successfully implemented to detect stirrer malfunctions and to monitor liquid delivery rates and recalibrate the pumps when deviations from the desired flow rates occurred. The highest control level was a supervisory shell, developed using expert knowledge and the history of reactor operation, that determines the set points required to meet a set of production criteria. At this stage, the supervisory shell analyzed the data to determine the state of the system. In future implementations, this shell will determine the set points required to optimize a cost function using expert knowledge and adaptive learning techniques.
NASA Technical Reports Server (NTRS)
1995-01-01
The design of a High-Speed Civil Transport (HSCT) air-breathing propulsion system for multimission, variable-cycle operations was successfully optimized through a soft coupling of the engine performance analyzer NASA Engine Performance Program (NEPP) to COMETBOARDS, a multidisciplinary optimization tool developed at the NASA Lewis Research Center. The design optimization of this engine was cast as a nonlinear optimization problem, with engine thrust as the merit function and the bypass ratios, r-values of the fans, fuel flow, and other factors as the important active design variables. Constraints were specified on factors including the maximum speed of the compressors, positive surge margins for the compressors with specified safety factors, the discharge temperature, the pressure ratios, and the mixer extreme Mach number. Solving the problem with the most reliable single optimization algorithm available in COMETBOARDS provided feasible optimum results for only a portion of the aircraft flight regime, because of the large number of mission points (defined by altitudes, Mach numbers, flow rates, and other factors), the diverse constraint types, and the overall poor conditioning of the design space. Only the cascade optimization strategy of COMETBOARDS, devised especially for difficult multidisciplinary applications, could successfully solve a number of engine design problems over their full flight regimes. Furthermore, the cascade strategy converged to the same global optimum solution even when initiated from different design points. Multiple optimizers applied in a specified sequence, pseudorandom damping, and reduction of design-space distortion via a global scaling scheme are some of the key features of the cascade strategy. A COMETBOARDS solution for an HSCT engine concept (a Mach-2.4 mixed-flow turbofan), along with its configuration, is shown; the optimum thrust is normalized with respect to the NEPP results. COMETBOARDS added value in the design optimization of the HSCT engine.
Applications Performance Under MPL and MPI on NAS IBM SP2
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)
1994-01-01
On July 5, 1994, an IBM Scalable POWERparallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of an RS/6000 590 workstation module with a 66.5 MHz clock that can perform four floating-point operations per clock, for a peak performance of 266 Mflop/s. By the end of 1994, the 64-node IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS/6000 590 will help application scientists in porting, optimizing, and tuning codes from other machines, such as the CRAY C90 and the Paragon, to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating-point units, and data-cache optimization on the RS/6000 590 are illustrated, with examples giving the performance gain at each optimization step. The conversion of codes using Intel's message-passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and CRAY T3D.
Permanent magnet synchronous motor servo system control based on μC/OS
NASA Astrophysics Data System (ADS)
Shi, Chongyang; Chen, Kele; Chen, Xinglong
2015-10-01
When an Opto-Electronic Tracking system operates in complex environments, every subsystem must operate efficiently and stably. As an important part of the Opto-Electronic Tracking system, the performance of the PMSM (Permanent Magnet Synchronous Motor) servo system greatly affects the tracking system's accuracy and speed [1][2]. This paper applies the embedded real-time operating system μC/OS to the control of the PMSM servo system, implements the SVPWM (Space Vector Pulse Width Modulation) algorithm in the servo system, and optimizes the stability of the PMSM servo system. Addressing the characteristics of the Opto-Electronic Tracking system, this paper extends μC/OS with software-redundancy processes and with remote debugging and upgrading. As a result, the Opto-Electronic Tracking system performs efficiently and stably.
JPL-ANTOPT antenna structure optimization program
NASA Technical Reports Server (NTRS)
Strain, D. M.
1994-01-01
New antenna path-length-error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases, and these scalar displacements can be subjected to constraints during the optimization process. The path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Habeger, A. R.; Stevenson, R.
1974-01-01
The basic equations and models used in a computer program (6D POST) to optimize simulated trajectories with six degrees of freedom are documented. The 6D POST program was conceived as a direct extension of the program POST, which dealt with point masses, and considers the general motion of a rigid body with six degrees of freedom. It may be used to solve a wide variety of atmospheric flight mechanics and orbital transfer problems for powered or unpowered vehicles operating near a rotating oblate planet. Its principal features are an easy-to-use NAMELIST-type input procedure, an integrated set of Flight Control System (FCS) modules, and a general-purpose discrete-parameter targeting and optimization capability. It was written in FORTRAN IV for the CDC 6000 series computers.
Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.
Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J
2017-09-01
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
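Stripped of the scanning-fiber specifics, the run-to-run idea above is an iterative between-run update of an input parameter driven by a sparse scalar cost measurement. A minimal sketch under that reading; the quadratic "plant", the parameter values, and the finite-difference update (standing in for the paper's feedforward exact-inversion framework) are all illustrative assumptions:

```python
def run_to_run_optimize(measure_cost, u0, gamma=0.5, delta=1e-3, runs=50):
    """Run-to-run optimization sketch: between repetitions of a process,
    nudge a scalar input parameter down a finite-difference estimate of
    the gradient of a measured cost, converging to the best control point."""
    u = u0
    for _ in range(runs):
        # Two probe runs bracket the current setting to estimate the slope.
        gradient = (measure_cost(u + delta) - measure_cost(u - delta)) / (2 * delta)
        u -= gamma * gradient
    return u

# Hypothetical plant: a sparse energy measure minimized at u* = 2.7
# (e.g. a drive parameter that minimizes residual scan error).
u_opt = run_to_run_optimize(lambda u: (u - 2.7) ** 2, u0=0.0)
```

Because only a scalar cost is measured per run, this kind of controller can also track slow drifts (such as the temperature dependence reported in the paper) by simply continuing to iterate as the optimum moves.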
Fundamental limits of repeaterless quantum communications
Pirandola, Stefano; Laurenza, Riccardo; Ottaviani, Carlo; Banchi, Leonardo
2017-01-01
Quantum communications promises reliable transmission of quantum information, efficient distribution of entanglement and generation of completely secure keys. For all these tasks, we need to determine the optimal point-to-point rates that are achievable by two remote parties at the ends of a quantum channel, without restrictions on their local operations and classical communication, which can be unlimited and two-way. These two-way assisted capacities represent the ultimate rates that are reachable without quantum repeaters. Here, by constructing an upper bound based on the relative entropy of entanglement and devising a dimension-independent technique dubbed ‘teleportation stretching', we establish these capacities for many fundamental channels, namely bosonic lossy channels, quantum-limited amplifiers, dephasing and erasure channels in arbitrary dimension. In particular, we exactly determine the fundamental rate-loss tradeoff affecting any protocol of quantum key distribution. Our findings set the limits of point-to-point quantum communications and provide precise and general benchmarks for quantum repeaters. PMID:28443624
A method to approximate a closest loadability limit using multiple load flow solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong
A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle-node bifurcation point, using a pair of multiple load flow solutions. More precisely, the points obtainable by the method are the stationary points, including not only the CLL but also farthest and saddle points. An operating solution and a low-voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. These can be used in monitoring the loadability margin, in identifying weak spots in a power system, and in examining an optimal control against voltage collapse. Most of the computation time of the proposed method is spent calculating the load flow solution pair; the remaining computation time is less than that of an ordinary load flow.
Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli
2015-01-01
In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic-means method. Then, an algebraic circle-fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, the proposed method improves the accuracy of tree diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that the method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
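The hybrid fit described above (an algebraic circle fit refined by a non-linear geometric fit) can be sketched as follows. This is only a sketch of the idea: the algebraic stage here is the Cartesian Kasa fit rather than the paper's polar form, plain Gauss-Newton stands in for Levenberg-Marquardt, and the trunk radius, centre, noise level, and scan angles are hypothetical:

```python
import numpy as np

def fit_circle(x, y, iters=20):
    """Two-stage circle fit: an algebraic (Kasa) linear least-squares fit
    seeds an iterative geometric refinement of point-to-circle distances."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    # Algebraic stage: x^2 + y^2 = 2a*x + 2b*y + c is linear in (a, b, c).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    p = np.array([a, b, np.sqrt(c + a**2 + b**2)])  # (cx, cy, r)
    # Geometric stage: Gauss-Newton on residuals d_i - r.
    for _ in range(iters):
        dx, dy = x - p[0], y - p[1]
        d = np.hypot(dx, dy)
        res = d - p[2]
        J = np.column_stack([-dx / d, -dy / d, -np.ones_like(d)])
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        p += step
    return p  # centre (cx, cy) and radius r; DBH = 2*r

# Noisy partial arc from a hypothetical trunk cross-section:
# radius 0.15 m (DBH 0.30 m) centred at (1.0, 2.0).
rng = np.random.default_rng(0)
th = np.linspace(0.3, 2.2, 60)
x = 1.0 + 0.15 * np.cos(th) + rng.normal(0.0, 1e-3, th.size)
y = 2.0 + 0.15 * np.sin(th) + rng.normal(0.0, 1e-3, th.size)
cx, cy, r = fit_circle(x, y)
```

Seeding the geometric stage with the algebraic solution is what makes the hybrid both fast and robust: the linear fit is cheap but slightly biased on partial arcs, and the iterative refinement removes that bias in a few steps.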
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Ludovic; Vaeck, Nathalie; Justum, Yves
2015-04-07
Following a recent proposal of L. Wang and D. Babikov [J. Chem. Phys. 137, 064301 (2012)], we theoretically illustrate the possibility of using the motional states of a Cd⁺ ion trapped in a slightly anharmonic potential to simulate the single-particle time-dependent Schrödinger equation. The simulated wave packet is discretized on a spatial grid and the grid points are mapped onto the ion motional states, which define the qubit network. The localization probability at each grid point is obtained from the population in the corresponding motional state. The quantum gate is the elementary evolution operator corresponding to the time-dependent Schrödinger equation of the simulated system; the corresponding matrix can be estimated by any numerical algorithm. The radio-frequency field able to drive this unitary transformation among the qubit states of the ion is obtained by multi-target optimal control theory. The ion is assumed to be cooled in the ground motional state, and the preliminary step consists of initializing the qubits with the amplitudes of the initial simulated wave packet. The time evolution of the localization probability at the grid points is then obtained by successive applications of the gate and read-out of the motional-state populations. The gate field is always identical for a given simulated potential; only the field preparing the initial wave packet has to be optimized for different simulations. We check the stability of the simulation against decoherence due to fluctuating electric fields in the trap electrodes by applying dissipative Lindblad dynamics.
Fuzzy logic control of stand-alone photovoltaic system with battery storage
NASA Astrophysics Data System (ADS)
Lalouni, S.; Rekioua, D.; Rekioua, T.; Matagne, E.
Photovoltaic energy nowadays has increased importance in electrical power applications, since it is considered an essentially inexhaustible and broadly available energy resource. However, the output power provided by the photovoltaic conversion process depends on solar irradiation and temperature. Therefore, to maximize the efficiency of a photovoltaic energy system, it is necessary to track the maximum power point of the PV array. The present paper proposes a maximum power point tracking (MPPT) method, based on a fuzzy logic controller (FLC), applied to a stand-alone photovoltaic system. It uses sampled measurements of the PV array power and voltage and then determines the optimal increment required to reach the operating voltage that yields maximum power. This method achieves high accuracy around the optimum point compared to the conventional one. The stand-alone photovoltaic system used in this paper includes two bi-directional DC/DC converters and a lead-acid battery bank to bridge periods of scarce sunlight. One converter works as the MPP tracker, while the other regulates the batteries' state of charge and compensates the power deficit to provide continuous delivery of energy to the load. The obtained simulation results show the effectiveness of the proposed fuzzy logic controller.
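As a rough illustration of the MPPT idea, the classic perturb-and-observe rule below climbs the power-voltage curve toward its maximum; the paper's fuzzy controller serves the same purpose but sizes the voltage increment adaptively near the optimum. The PV curve, starting voltage, and step size here are hypothetical:

```python
def perturb_and_observe(power, v0=10.0, dv=0.1, steps=300):
    """Hill-climbing MPPT stand-in (perturb and observe): keep stepping
    the operating voltage in the same direction while the measured power
    rises, and reverse direction when it falls."""
    v = v0
    p_prev = power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = power(v)
        if p < p_prev:          # power fell: we stepped past the peak
            direction = -direction
        p_prev = p
    return v

# Hypothetical PV power curve with its maximum power point at 17 V.
v_opt = perturb_and_observe(lambda v: 60.0 - 0.2 * (v - 17.0) ** 2)
```

With a fixed step the operating point ends up oscillating around the maximum power point; replacing the fixed dv with an increment scaled by the measured power slope, as the fuzzy controller does, shrinks that residual oscillation.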
An Investigation of the Bomber and Tanker Mating Process in the Single Integrated Operations Plan.
1982-03-01
[Garbled excerpt of a Fortran program listing. The recoverable comment lines state that refueling locations are optimized to maximize bomber entry-point fuel, that bomber, tanker, and FRI data are inputs to the program, and that for each bomber track the appropriate refueling assignment is made; the remaining code is unrecoverable.]
2005-01-01
qubits. Suppression of Superconductivity in Granular Metals. Igor Beloborodov, Argonne National Laboratory, USA. We investigate the suppression of... Various strategies for extending coherence times of superconducting qubits have been proposed. We analyze the effect of fluctuations on a qubit operated at an optimal point in the free-induction decay and the spin-echo-like experiments. Motivated by the recent experimental findings we
Agamy, Mohammed; Elasser, Ahmed; Sabate, Juan Antonio; Galbraith, Anthony William; Harfman Todorovic, Maja
2014-09-09
A distributed photovoltaic (PV) power plant includes a plurality of distributed dc-dc converters. The dc-dc converters are configured to switch in coordination with one another such that at least one dc-dc converter transfers power to a common dc-bus based upon the total system power available from one or more corresponding strings of PV modules. Due to the coordinated switching of the dc-dc converters, each dc-dc converter transferring power to the common dc-bus continues to operate within its optimal efficiency range as well as to optimize the maximum power point tracking in order to increase the energy yield of the PV power plant.
Digital controllers for VTOL aircraft
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Broussard, J. R.; Berry, P. W.
1976-01-01
Using linear-optimal estimation and control techniques, digital-adaptive control laws have been designed for a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. Two distinct discrete-time control laws are designed to interface with velocity-command and attitude-command guidance logic, and each incorporates proportional-integral compensation for non-zero-set-point regulation, as well as reduced-order Kalman filters for sensor blending and noise rejection. Adaptation to flight condition is achieved with a novel gain-scheduling method based on correlation and regression analysis. The linear-optimal design approach is found to be a valuable tool in the development of practical multivariable control laws for vehicles which evidence significant coupling and insufficient natural stability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Belcher, AH; Wiersma, R
Purpose: In radiation therapy optimization the constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as part of the objective function, and approximated as an unconstrained problem. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization, and the problem was formulated in a graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi) and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, and it was 3–5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8–100 times faster than the other solvers. Conclusion: A graph-form ADMM can be applied to constrained quadratic IMRT optimization.
It is more computationally efficient than several other commercial and noncommercial optimizers, and it also uses significantly less computer memory.
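The splitting behind the graph-form ADMM above can be illustrated on a toy problem. The sketch below is a minimal, assumed implementation of ADMM for a box-constrained quadratic program (a stand-in for hard voxel dose constraints); the matrices and bounds are invented, not CORT data, and the update rules follow the standard augmented-Lagrangian form rather than the paper's specific proximal operators.

```python
import numpy as np

# ADMM for: min 0.5*x'Qx + c'x  s.t.  l <= x <= u
# (illustrative Q, c, l, u; not from the paper's dataset)

def admm_box_qp(Q, c, l, u, rho=1.0, iters=200):
    n = len(c)
    x, z, y = np.zeros(n), np.zeros(n), np.zeros(n)
    M = Q + rho * np.eye(n)                      # could be factorized once
    for _ in range(iters):
        x = np.linalg.solve(M, rho * z - y - c)  # x-update (quadratic prox)
        z = np.clip(x + y / rho, l, u)           # z-update: projection onto box
        y = y + rho * (x - z)                    # dual (multiplier) update
    return z                                     # z is always feasible

Q = np.diag([2.0, 4.0])
c = np.array([-2.0, -8.0])
l = np.array([0.0, 0.0])
u = np.array([0.5, 1.5])
x_opt = admm_box_qp(Q, c, l, u)
```

For this separable toy problem the solution is the unconstrained minimizer clipped to the box, so the iterate converges to (0.5, 1.5).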
Generator voltage stabilisation for series-hybrid electric vehicles.
Stewart, P; Gladwin, D; Stewart, J; Cowley, R
2008-04-01
This paper presents a controller for use in speed control of an internal combustion engine for series-hybrid electric vehicle applications. Particular reference is made to the stability of the rectified DC link voltage under load disturbance. In the system under consideration, the primary power source is a four-cylinder normally aspirated gasoline internal combustion engine, which is mechanically coupled to a three-phase permanent magnet AC generator. The generated AC voltage is subsequently rectified to supply a lead-acid battery, and permanent magnet traction motors via three-phase full bridge power electronic inverters. Two complementary performance objectives exist. Firstly to maintain the internal combustion engine at its optimal operating point, and secondly to supply a stable 42 V supply to the traction drive inverters. Achievement of these goals minimises the transient energy storage requirements at the DC link, with a consequent reduction in both weight and cost. These objectives imply constant velocity operation of the internal combustion engine under external load disturbances and changes in both operating conditions and vehicle speed set-points. An electronically operated throttle allows closed loop engine velocity control. System time delays and nonlinearities render closed loop control design extremely problematic. A model-based controller is designed and shown to be effective in controlling the DC link voltage, resulting in the well-conditioned operation of the hybrid vehicle.
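The control objective described above can be sketched with a toy discrete PI loop. This is not the paper's model-based controller: the first-order plant constants and PI gains below are illustrative assumptions, used only to show constant-velocity regulation under a step load disturbance.

```python
# Discrete PI speed control of a toy first-order "engine" model.
# tau, gain, kp, ki are invented; the real system includes time delays
# and nonlinearities that this sketch deliberately omits.

def simulate(kp=0.8, ki=2.0, dt=0.01, steps=1000, setpoint=300.0):
    tau, gain = 0.5, 50.0            # toy engine lag (s) and throttle gain
    speed, integ, load = setpoint, 0.0, 0.0
    for k in range(steps):
        if k == 100:
            load = 40.0              # step load disturbance (e.g. generator load)
        err = setpoint - speed
        integ += err * dt            # integral action removes steady-state error
        throttle = kp * err + ki * integ
        # first-order plant: tau * dspeed/dt = gain*throttle - load - speed
        speed += dt * (gain * throttle - load - speed) / tau
    return speed

final_speed = simulate()
```

With integral action the speed returns to the set-point despite the sustained load, which is the property that keeps the rectified DC link stable in the series-hybrid arrangement.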
Romero, Julián; Sacoto-Cabrera, Erwin J.
2017-01-01
We analyze the feasibility of providing Wireless Sensor Network-data-based services in an Internet of Things scenario from an economical point of view. The scenario has two competing service providers with their own private sensor networks, a network operator and final users. The scenario is analyzed as two games using game theory. In the first game, sensors decide to subscribe or not to the network operator to upload the collected sensing-data, based on a utility function related to the mean service time and the price charged by the operator. In the second game, users decide to subscribe or not to the sensor-data-based service of the service providers based on a Logit discrete choice model related to the quality of the data collected and the subscription price. The sinks and users subscription stages are analyzed using population games and discrete choice models, while network operator and service providers pricing stages are analyzed using optimization and Nash equilibrium concepts respectively. The model is shown feasible from an economic point of view for all the actors if there are enough interested final users and opens the possibility of developing more efficient models with different types of services. PMID:29186847
NASA Astrophysics Data System (ADS)
Seibert, S. P.; Skublics, D.; Ehret, U.
2014-09-01
The coordinated operation of reservoirs in large-scale river basins has great potential to improve flood mitigation. However, this requires large scale hydrological models to translate the effect of reservoir operation to downstream points of interest, in a quality sufficient for the iterative development of optimized operation strategies. And, of course, it requires reservoirs large enough to make a noticeable impact. In this paper, we present and discuss several methods dealing with these prerequisites for reservoir operation using the example of three major floods in the Bavarian Danube basin (45,000 km2) and nine reservoirs therein: We start by presenting an approach for multi-criteria evaluation of model performance during floods, including aspects of local sensitivity to simulation quality. Then we investigate the potential of joint hydrologic-2d-hydrodynamic modeling to improve model performance. Based on this, we evaluate upper limits of reservoir impact under idealized conditions (perfect knowledge of future rainfall) with two methods: Detailed simulations and statistical analysis of the reservoirs' specific retention volume. Finally, we investigate to what degree reservoir operation strategies optimized for local (downstream vicinity to the reservoir) and regional (at the Danube) points of interest are compatible. With respect to model evaluation, we found that the consideration of local sensitivities to simulation quality added valuable information not included in the other evaluation criteria (Nash-Sutcliffe efficiency and Peak timing). With respect to the second question, adding hydrodynamic models to the model chain did, contrary to our expectations, not improve simulations, despite the fact that under idealized conditions (using observed instead of simulated lateral inflow) the hydrodynamic models clearly outperformed the routing schemes of the hydrological models. 
Apparently, the advantages of hydrodynamic models could not be fully exploited when fed by output from hydrological models afflicted with systematic errors in volume and timing. This effect could potentially be reduced by joint calibration of the hydrological-hydrodynamic model chain. Finally, based on the combination of the simulation-based and statistical impact assessment, we identified one reservoir potentially useful for coordinated, regional flood mitigation for the Danube. While this finding is specific to our test basin, the more interesting and generally valid finding is that operation strategies optimized for local and regional flood mitigation are not necessarily mutually exclusive, sometimes they are identical, sometimes they can, due to temporal offsets, be pursued simultaneously.
Optimization of the Nano-Dust Analyzer (NDA) for operation under solar UV illumination
NASA Astrophysics Data System (ADS)
O'Brien, L.; Grün, E.; Sternovsky, Z.
2015-12-01
The performance of the Nano-Dust Analyzer (NDA) instrument is analyzed for close pointing to the Sun, finding the optimal field-of-view (FOV), arrangement of internal baffles and measurement requirements. The laboratory version of the NDA instrument was recently developed (O'Brien et al., 2014) for the detection and elemental composition analysis of nano-dust particles. These particles are generated near the Sun by the collisional breakup of interplanetary dust particles (IDP), and delivered to Earth's orbit through interaction with the magnetic field of the expanding solar wind plasma. NDA operates on the basis of impact ionization of the particles, collecting the generated ions in a time-of-flight fashion. The challenge in the measurement is that nano-dust particles arrive from a direction close to that of the Sun, and thus the instrument is exposed to intense ultraviolet (UV) radiation. The performed optical ray-tracing analysis shows that it is possible to suppress the number of UV photons scattering into NDA's ion detector to levels that allow both high signal-to-noise ratio measurements and long-term instrument operation. Analysis results show that by avoiding direct illumination of the target, the photon flux reaching the detector is reduced by a factor of about 10³. Furthermore, by avoiding the target and also implementing a low-reflectivity coating, as well as an optimized instrument geometry consisting of an internal baffle system and a conical detector housing, the photon flux can be reduced by a factor of 10⁶, bringing it well below the operation requirement. The instrument's FOV is optimized for the detection of nano-dust particles while excluding the Sun. With the Sun in the FOV, the instrument can operate with reduced sensitivity and for a limited duration.
The NDA instrument is suitable for future space missions to provide the unambiguous detection of nano-dust particles, to understand the conditions in the inner heliosphere and its temporal variability, and to constrain the chemical differentiation and processing of IDPs.
Pitot, Denis; Takieddine, Mazen; Abbassi, Ziad; Agrafiotis, Apostolos; Bruyns, Laurence; Ceuterick, Michel; Daoudi, Nabil; Dolimont, Amaury; Soulimani, Abdelak; Vaneukem, Pol
2014-10-01
Since Wittgrove introduced the laparoscopic version of the gastric bypass in 1994, interest has remained in decreasing abdominal wall trauma in order to optimize the benefits of laparoscopy on postoperative pain, cosmesis, hospital stay, and convalescence in bariatric patients. This work reports the feasibility of gastric bypass surgery by a pure transumbilical single-incision laparoscopic surgery (SILS) with a mechanical circular gastrojejunal anastomosis. Thirty-four patients (10 males and 24 females) were offered gastric bypass with circular mechanical gastrojejunal anastomosis by single-incision laparoscopic surgery (SILS) using pure transumbilical access. Anastomotic leak occurrence was the primary end-point. Patient demographics, operative time, additional trocars, hemorrhage, intra-abdominal abscess, length of post-operative stay, readmission, 30-day death, gastrojejunal anastomosis stricture, marginal ulcers, reflux complaints, seromas, incisional hernias, and % excess BMI loss were also recorded in a prospective database. The primary end-point showed no anastomotic leak occurrence during the hospital stay or during the first 30 post-operative days. SILS gastric bypass with a circular mechanical gastrojejunal anastomosis is feasible and seems to be safe.
Didier, Ryne A; Hopkins, Katharine L; Coakley, Fergus V; Krishnaswami, Sanjay; Spiro, David M; Foster, Bryan R
2017-09-01
Magnetic resonance imaging (MRI) has emerged as a promising modality for evaluating pediatric appendicitis. However optimal imaging protocols, including roles of contrast agents and sedation, have not been established and diagnostic criteria have not been fully evaluated. To investigate performance characteristics of rapid MRI without contrast agents or sedation in the diagnosis of pediatric appendicitis. We included patients ages 4-18 years with suspicion of appendicitis who underwent rapid MRI between October 2013 and March 2015 without contrast agent or sedation. After two-radiologist review, we determined performance characteristics of individual diagnostic criteria and aggregate diagnostic criteria by comparing MRI results to clinical outcomes. We used receiver operating characteristic (ROC) curves to determine cut-points for appendiceal diameter and wall thickness for optimization of predictive power, and we calculated area under the curve (AUC) as a measure of test accuracy. Ninety-eight MRI examinations were performed in 97 subjects. Overall, MRI had a 94% sensitivity, 95% specificity, 91% positive predictive value and 97% negative predictive value. Optimal cut-points for appendiceal diameter and wall thickness were ≥7 mm and ≥2 mm, respectively. Independently, those cut-points produced sensitivities of 91% and 84% and specificities of 84% and 43%. Presence of intraluminal fluid (30/33) or localized periappendiceal fluid (32/33) showed a significant association with acute appendicitis (P<0.01), with sensitivities of 91% and 97% and specificities of 60% and 50%. For examinations in which the appendix was not identified by one or both reviewers (23/98), the clinical outcome was negative. Rapid MRI without contrast agents or sedation is accurate for diagnosis of pediatric appendicitis when multiple diagnostic criteria are considered in aggregate. 
Individual diagnostic criteria including optimized cut-points of ≥7 mm for diameter and ≥2 mm for wall thickness demonstrate high sensitivities but relatively low specificities. Nonvisualization of the appendix favors a negative diagnosis.
Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2011-09-06
We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
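The core restructuring reported above, replacing per-timestep matrix-vector products with one batched matrix-matrix multiply, can be sketched as follows. The sizes are illustrative (146 sky patches matches the coarser sky vector mentioned in the abstract); the daylight coefficients are random stand-ins, not Radiance output.

```python
import numpy as np

# Annual daylighting as a matrix product: (sensor x sky-patch) daylight
# coefficients times per-hour sky vectors. Batching all hours into one
# matrix-matrix multiply is what maps efficiently onto a GPU via OpenCL.

rng = np.random.default_rng(0)
n_sensors, n_patches, n_hours = 100, 146, 8760
dc = rng.random((n_sensors, n_patches))    # daylight coefficients (toy values)
sky = rng.random((n_patches, n_hours))     # one sky vector per hour of the year

# unoptimized: one matrix-vector product per timestep
looped = np.stack([dc @ sky[:, t] for t in range(n_hours)], axis=1)

# optimized: a single matrix-matrix product over all timesteps
batched = dc @ sky
```

Both paths produce the same (sensor x hour) result matrix; the batched form eliminates redundant input/output and exposes the parallelism the paper exploits.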
A Long-Distance RF-Powered Sensor Node with Adaptive Power Management for IoT Applications.
Pizzotti, Matteo; Perilli, Luca; Del Prete, Massimo; Fabbri, Davide; Canegallo, Roberto; Dini, Michele; Masotti, Diego; Costanzo, Alessandra; Franchi Scarselli, Eleonora; Romani, Aldo
2017-07-28
We present a self-sustained battery-less multi-sensor platform with RF harvesting capability down to -17 dBm and implementing a standard DASH7 wireless communication interface. The node operates at distances up to 17 m from a 2 W UHF carrier. RF power transfer allows operation when common energy scavenging sources (e.g., sun, heat, etc.) are not available, while the DASH7 communication protocol makes it fully compatible with a standard IoT infrastructure. An optimized energy-harvesting module has been designed, including a rectifying antenna (rectenna) and an integrated nano-power DC/DC converter performing maximum-power-point-tracking (MPPT). A nonlinear/electromagnetic co-design procedure is adopted to design the rectenna, which is optimized to operate at ultra-low power levels. An ultra-low power microcontroller controls on-board sensors and wireless protocol, to adapt the power consumption to the available detected power by changing wake-up policies. As a result, adaptive behavior can be observed in the designed platform, to the extent that the transmission data rate is dynamically determined by RF power. Among the novel features of the system, we highlight the use of nano-power energy harvesting, the implementation of specific hardware/software wake-up policies, optimized algorithms for best sampling rate implementation, and adaptive behavior by the node based on the power received.
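The MPPT function mentioned above can be illustrated with perturb-and-observe, one common tracking strategy; the abstract does not specify the converter's exact algorithm, and the harvester power curve below is an invented concave stand-in with its peak at 1.2 V.

```python
# Perturb-and-observe MPPT sketch (assumed algorithm, toy P-V curve).

def harvester_power(v):
    return max(0.0, v * (2.0 - v / 1.2))   # toy source, peak power at v = 1.2

def perturb_and_observe(v0=0.4, step=0.05, iters=200):
    v, direction = v0, +1
    p_prev = harvester_power(v0)
    for _ in range(iters):
        v += direction * step              # perturb the operating voltage
        p = harvester_power(v)
        if p < p_prev:                     # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v                               # oscillates in a band around the MPP

v_mpp = perturb_and_observe()
```

The tracker settles into a small oscillation around the maximum-power voltage, which is the steady-state behavior a nano-power MPPT converter approximates in hardware.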
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil while achieving similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent at similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
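The dispersion objective described above can be sketched for a 1D antisymmetric first-derivative stencil, where the numerical wavenumber is k_num = (2/h) Σ_m c_m sin(m k h) and an optimized scheme minimizes the relative phase-velocity error k_num/k - 1 over a wavenumber band. The coefficients below are the standard 4th-order Taylor ones, for illustration only, not the paper's optimized stencil.

```python
import numpy as np

# Relative phase-velocity (dispersion) error of a centered FD stencil
# f'(x) ~ (1/h) * sum_m c_m * (f[x+m*h] - f[x-m*h]).
# Standard 4th-order coefficients; an optimized scheme would instead
# choose c_m to minimize max |error| over the band of interest.

h = 1.0
coeffs = {1: 2.0 / 3.0, 2: -1.0 / 12.0}

def relative_dispersion_error(k):
    k_num = (2.0 / h) * sum(c * np.sin(m * k * h) for m, c in coeffs.items())
    return k_num / k - 1.0

ks = np.linspace(0.01, 1.0, 50)          # wavenumber band of interest
max_err = np.max(np.abs(relative_dispersion_error(ks)))
```

Minimizing this maximum error over the band, rather than matching Taylor terms at k = 0, is what distinguishes an optimized scheme from a conventional high-order one.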
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. 
The final value of the cost function is reduced by up to 20%, with such reductions leading to small improvements in dosimetric parameters characterizing the treatments: slightly more homogeneous target doses and better sparing of the organs at risk.
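The two families of sampling sequences compared above can be sketched as follows, under the assumption that a doubling sequence roughly doubles the control-point count per multiresolution level while an equi-length sequence adds the same number per level; the starting and final counts are illustrative.

```python
# Control-point sampling sequences for progressive VMAT optimization
# (assumed construction; counts are illustrative).

def doubling_sequence(start, total):
    seq, n = [], start
    while n < total:                 # 12 -> 24 -> 48 -> ... capped at total
        seq.append(n)
        n *= 2
    seq.append(total)
    return seq

def equi_length_sequence(levels, total):
    # E20/E30-style: `levels` levels, each adding total/levels control points
    return [round(total * (i + 1) / levels) for i in range(levels)]

d = doubling_sequence(12, 192)       # -> [12, 24, 48, 96, 192]
e20 = equi_length_sequence(20, 192)  # 20 levels ending at 192 control points
```

The study's finding is that the finer-grained equi-length schedules (many small increments) can outperform the coarse doubling schedule, at the cost of more optimization stages.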
Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian
2003-01-01
The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation was analyzed resulting from using finite word length (FWL) block-floating-point representation scheme. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.
Microgrids and distributed generation systems: Control, operation, coordination and planning
NASA Astrophysics Data System (ADS)
Che, Liang
Distributed Energy Resources (DERs), which include distributed generations (DGs), distributed energy storage systems, and adjustable loads, are key components in microgrid operations. A microgrid is a small electric power system integrated with on-site DERs to serve all or some portion of the local load, connected to the utility grid through the point of common coupling (PCC). Microgrids can operate in both grid-connected mode and island mode. The structure and components of hierarchical control for a microgrid at Illinois Institute of Technology (IIT) are discussed and analyzed. Case studies address the reliable and economic operation of the IIT microgrid. The simulation results of IIT microgrid operation demonstrate that the hierarchical control and the coordination strategy of distributed energy resources (DERs) are an effective way of optimizing the economic operation and the reliability of microgrids. The benefits and challenges of DC microgrids are addressed with a DC model for the IIT microgrid. We present the hierarchical control strategy, including the primary, secondary, and tertiary controls, for the economic operation and resilience of a DC microgrid. The simulation results verify that the proposed coordinated strategy is an effective way of ensuring the resilient response of DC microgrids to emergencies and optimizing their economic operation at steady state. The concept and prototype of a community microgrid interconnecting multiple microgrids in a community are proposed, and two lines of work are conducted. For coordination, a novel three-level hierarchical coordination strategy to coordinate the optimal power exchanges among neighboring microgrids is proposed. For planning, a multi-microgrid interconnection planning framework using a probabilistic minimal cut-set (MCS) based iterative methodology is proposed for enhancing the economic, resilience, and reliability signals in multi-microgrid operations.
The implementation of high-reliability microgrids requires proper protection schemes that effectively function in both grid-connected and island modes. This chapter presents a communication-assisted four-level hierarchical protection strategy for high-reliability microgrids, and tests the proposed protection strategy based on a loop structured microgrid. The simulation results demonstrate the proposed strategy to be an effective and efficient option for microgrid protection. Additionally, microgrid topology ought to be optimally planned. To address the microgrid topology planning, a graph-partitioning and integer-programming integrated methodology is proposed. This work is not included in the dissertation. Interested readers can refer to our related publication.
NASA Astrophysics Data System (ADS)
Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad
2016-11-01
In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Due to the lack of a general mathematical model for exact analysis of PHPs, a method based on natural algorithms has been applied for simulation and optimization. In this approach, the simulator consists of a multilayer perceptron neural network trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR) and the inclination angle (IA), and its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W) and IA (55°) obtained from the GA are acceptable.
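The GA stage described above can be sketched as a search over the (heat flux, filling ratio, inclination angle) space for minimum thermal resistance. The smooth surrogate below is an invented stand-in for the trained neural network, with its minimum placed at the reported optimum (q ≈ 40 W, FR ≈ 38 %, IA ≈ 55°); the GA operators are standard choices, not necessarily the paper's.

```python
import random

# GA minimizing a toy surrogate of PHP thermal resistance.
# surrogate_resistance is an assumption standing in for the trained MLP.

def surrogate_resistance(q, fr, ia):
    return 1.0 + ((q - 40) / 40) ** 2 + ((fr - 38) / 38) ** 2 + ((ia - 55) / 55) ** 2

BOUNDS = [(10.0, 80.0), (10.0, 90.0), (0.0, 90.0)]   # q (W), FR (%), IA (deg)

def ga_minimize(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: surrogate_resistance(*ind))   # rank by fitness
        survivors = pop[: pop_size // 2]                       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # arithmetic crossover
            j = rng.randrange(len(BOUNDS))                     # mutate one gene
            lo, hi = BOUNDS[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: surrogate_resistance(*ind))

best = ga_minimize()
```

Because the best individual always survives, the search converges to the surrogate's minimum, mirroring how the paper extracts an optimum operating point from its trained simulator.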
NASA Astrophysics Data System (ADS)
Bonissone, Stefano R.
2001-11-01
There are many approaches to solving multi-objective optimization problems using evolutionary algorithms. We need to select methods for representing and aggregating preferences, as well as choosing strategies for searching in multi-dimensional objective spaces. First we suggest the use of linguistic variables to represent preferences and the use of fuzzy rule systems to implement tradeoff aggregations. After a review of alternative EA methods for multi-objective optimization, we explore the use of multi-sexual genetic algorithms (MSGA). In using an MSGA, we need to modify certain parts of the GA, namely the selection and crossover operations. The selection operator groups solutions according to their gender tag to prepare them for crossover. The crossover is modified by appending a gender tag at the end of the chromosome. We use single- and double-point crossovers. We determine the gender of the offspring by the amount of genetic material provided by each parent: the parent that contributed the most to the creation of a specific offspring determines the gender that the offspring will inherit. This is still a work in progress, and in the conclusion we examine many future extensions and experiments.
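The gender-tagged single-point crossover described above can be sketched as follows, assuming each chromosome carries its gender as a trailing tag and the offspring inherits the tag of the parent contributing more genetic material (ties resolved in favor of the first parent); the chromosomes are toy data.

```python
# MSGA-style single-point crossover with a trailing gender tag
# (assumed encoding consistent with the abstract's description).

def msga_crossover(parent_a, parent_b, cut):
    genes_a, tag_a = parent_a[:-1], parent_a[-1]   # split genes from gender tag
    genes_b, tag_b = parent_b[:-1], parent_b[-1]
    child_genes = genes_a[:cut] + genes_b[cut:]    # single-point crossover
    # gender comes from the parent that provided the larger share of genes
    tag = tag_a if cut >= len(child_genes) - cut else tag_b
    return child_genes + [tag]

child = msga_crossover([1, 1, 1, 1, 1, "M"], [0, 0, 0, 0, 0, "F"], cut=3)
```

Here parent A contributes three of five genes, so the offspring inherits A's gender tag.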
Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang
2018-01-01
Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and mutation operator, the genetic algorithm is used to solve the model, the results of which show that an effective UAV task allocation and path planning solution under steady wind can be provided.
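The flight-time computation outlined above can be sketched from the vector relationship between airspeed, wind, and ground speed: the UAV flies at a fixed airspeed magnitude, wind adds vectorially, and the achievable ground speed along a desired track follows from requiring the air-relative velocity to cancel the crosswind component. Speeds and coordinates below are illustrative, and straight-line legs stand in for the Dubins paths of the full model.

```python
import math

# Ground speed and leg flight time for a fixed-airspeed UAV in steady wind.

def ground_speed(airspeed, wind, track_dir):
    # decompose the wind into along-track and cross-track components
    along = wind[0] * track_dir[0] + wind[1] * track_dir[1]
    across = -wind[0] * track_dir[1] + wind[1] * track_dir[0]
    # heading must cancel the crosswind; the remainder projects on-track
    return along + math.sqrt(airspeed ** 2 - across ** 2)

def flight_time(p, q, airspeed, wind):
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = math.hypot(dx, dy)
    return dist / ground_speed(airspeed, wind, (dx / dist, dy / dist))

# 20 m/s airspeed over a 1000 m eastbound leg
t_tail = flight_time((0, 0), (1000, 0), 20.0, (5.0, 0.0))    # 5 m/s tailwind
t_cross = flight_time((0, 0), (1000, 0), 20.0, (0.0, 5.0))   # 5 m/s crosswind
```

A tailwind shortens the leg time while even a pure crosswind lengthens it (part of the airspeed is spent crabbing), which is why the optimization objective shifts from distance to time.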
An individual risk prediction model for lung cancer based on a study in a Chinese population.
Wang, Xu; Ma, Kewei; Cui, Jiuwei; Chen, Xiao; Jin, Lina; Li, Wei
2015-01-01
Early detection and diagnosis remains an effective yet challenging approach to improve the clinical outcome of patients with cancer. Low-dose computed tomography screening has been suggested to improve the diagnosis of lung cancer in high-risk individuals. To make screening more efficient, it is necessary to identify individuals who are at high risk. We conducted a case-control study to develop a predictive model for identification of such high-risk individuals. Clinical data from 705 lung cancer patients and 988 population-based controls were used for the development and evaluation of the model. Associations between environmental variants and lung cancer risk were analyzed with a logistic regression model. The predictive accuracy of the model was determined by calculating the area under the receiver operating characteristic curve and the optimal operating point. Our results indicate that lung cancer risk factors included older age, male gender, lower education level, family history of cancer, history of chronic obstructive pulmonary disease, lower body mass index, smoking cigarettes, a diet with less seafood, vegetables, fruits, dairy products, soybean products and nuts, a diet rich in meat, and exposure to pesticides and cooking emissions. The area under the curve was 0.8851 and the optimal operating point was obtained. With a cutoff of 0.35, the false positive rate, true positive rate, and Youden index were 0.21, 0.87, and 0.66, respectively. The risk prediction model for lung cancer developed in this study could discriminate high-risk from low-risk individuals.
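The optimal operating point reported above is consistent with the Youden index criterion (J = sensitivity + specificity - 1 = TPR - FPR; the reported cutoff of 0.35 gives J = 0.87 - 0.21 = 0.66). The sketch below shows how such a cut-point is picked from a ROC sweep; the scores and labels are toy data, not the study's cohort.

```python
# Choosing a ROC operating point by maximizing the Youden index
# (toy predicted-risk scores and binary outcome labels).

def youden_cutoff(scores, labels):
    best_cut, best_j = None, -1.0
    pos = sum(labels)
    neg = len(labels) - pos
    for cut in sorted(set(scores)):
        tpr = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cut) / pos
        fpr = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cut) / neg
        if tpr - fpr > best_j:
            best_cut, best_j = cut, tpr - fpr
    return best_cut, best_j

scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   1,   0,   1,   1,   1]
cut, j = youden_cutoff(scores, labels)
```

Each candidate threshold is scored by how far its (FPR, TPR) point sits above the chance diagonal; the threshold with the largest gap is the optimal operating point.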
[Interface interconnection and data integration in implementing of digital operating room].
Feng, Jingyi; Chen, Hua; Liu, Jiquan
2011-10-01
The digital operating room, which highly integrates clinical information, is very important for saving patients' lives and improving the quality of operations. Since the equipment in domestic operating rooms has diverse interfaces and nonstandard communication protocols, designing and implementing an integrated data-sharing program for different kinds of diagnostic, monitoring, and treatment equipment is a key point in the construction of a digital operating room. This paper addresses interface interconnection and data integration for commonly used clinical equipment from the aspects of hardware interface, interface connection, and communication protocol, and offers a solution for the interconnection and integration of clinical equipment in a heterogeneous environment. Based on this solution, a case of an optimized digital operating room is presented. Compared with international solutions for the digital operating room, the solution proposed in this paper is more economical and effective. Finally, the paper offers a proposal for the platform construction of the digital operating room as well as a viewpoint on the standardization of domestic clinical equipment.
Techatraisak, Kitirat; Wongmeerit, Krissanee; Dangrat, Chongdee; Wongwananuruk, Thanyarat; Indhavivadhana, Suchada
2016-01-01
To evaluate the relationship between measures of body adiposity and the visceral adiposity index (VAI) and the risk of metabolic syndrome (MS), and to identify the optimal cut-off points of each measurement in Thai women with polycystic ovary syndrome (PCOS). In a cross-sectional study, physical examination, fasting plasma glucose, and lipid profiles were completed for 399 women with PCOS and 42 age-matched normal controls. Body mass index (BMI), waist-to-hip ratio (WHR), waist-to-height ratio (WHtR), and VAI were calculated. Associations between the different measures and MS were evaluated, and receiver operating characteristic (ROC) curve analysis was performed to determine appropriate cut-off points for identifying MS. The prevalence of MS in PCOS was 24.6%, whereas none of the controls had MS. Previously recommended cut-off values for body adiposity and VAI were significantly associated with MS. ROC curve analysis of the PCOS group alone yielded new optimal cut-off points for BMI and VAI of ≥28 kg/m(2) (AUC = 0.90) and >5.6 (AUC = 0.94), respectively; these values were found to be more accurate than the original ones. VAI was the best predictor, followed by BMI and WHtR. All body adiposity and VAI parameters can predict the risk of MS. Optimal values for Thai PCOS were ≥28 kg/m(2) for BMI, ≥0.85 for WHR, ≥0.5 for WHtR, and >5.6 for VAI.
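The AUC values quoted above can be computed without drawing the ROC curve at all, via the rank (Mann-Whitney) formulation: the probability that a randomly chosen case scores higher than a randomly chosen control. A generic sketch, not the study's code:

```python
def auc_rank(scores, labels):
    """AUC as the normalized Mann-Whitney U statistic.

    Counts, over all case/control pairs, how often the case scores
    higher than the control; ties count one half.
    """
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if c > d else 0.5 if c == d else 0.0
               for c in cases for d in controls)
    return wins / (len(cases) * len(controls))
```

Perfect separation yields 1.0 and an uninformative marker yields 0.5, which is the scale on which the reported AUCs of 0.90 and 0.94 should be read.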
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers formed by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. The fractional order (FO) rate of the error signal and the FO integral of the control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF), along with the integro-differential operators, are tuned with a real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes, with various levels of relative dominance between time constant and time delay, have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection, and minimal variation of the manipulated variable (i.e., smaller actuator requirement). In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between set-point tracking and the control signal, and between set-point tracking and load disturbance performance, for each of the controller structures handling the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
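The fractional-order rate and integral operators mentioned above are commonly approximated numerically with the Grünwald-Letnikov series. This is a generic sketch of that approximation, not the paper's implementation; the recursive computation of the binomial coefficients is a standard trick.

```python
def gl_fractional_derivative(f, t, alpha, h=1e-3, n=2000):
    """Grünwald-Letnikov approximation of the fractional derivative
    D^alpha f at time t, using n history samples with step h:

        D^alpha f(t) ~ h**(-alpha) * sum_j c_j * f(t - j*h),

    where the c_j are signed generalized binomial coefficients,
    built by the recurrence c_j = c_{j-1} * (1 - (alpha + 1)/j).
    """
    coeff, acc = 1.0, f(t)
    for j in range(1, n + 1):
        coeff *= 1.0 - (alpha + 1.0) / j
        acc += coeff * f(t - j * h)
    return acc / h ** alpha
```

For alpha = 1 the series collapses to the backward difference, and for alpha = 0 it returns f(t) itself, so the operator interpolates smoothly between the integer-order actions the controllers decompose.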
Power management of remote microgrids considering battery lifetime
NASA Astrophysics Data System (ADS)
Chalise, Santosh
Currently, 20% (1.3 billion) of the world's population still lacks access to electricity, and many live in remote areas where connection to the grid is not economical or practical. Remote microgrids could be the solution to the problem because they are designed to provide power for small communities within clearly defined electrical boundaries. Reducing the cost of electricity for remote microgrids can help to increase access to electricity for populations in remote areas and developing countries. The integration of renewable energy and batteries in diesel-based microgrids has been shown to be effective in reducing fuel consumption. However, the operational cost remains high due to the low lifetime of batteries, which are heavily used to improve the system's efficiency. In microgrid operation, a battery can act as a source to augment the generator or as a load to ensure full-load generator operation; it also increases the utilization of PV by storing extra energy. However, a battery has a limited energy throughput, so fuel consumption and battery lifetime throughput must be balanced to lower the cost of operation. This work presents a two-layer power management system for remote microgrids. The first layer is day-ahead scheduling, in which the power set points of dispatchable resources are calculated. The second layer is real-time dispatch, which accepts the scheduled set points from the first layer and dispatches resources accordingly. A novel scheduling algorithm is proposed that considers battery lifetime in the optimization and is expected to reduce the operational cost of the microgrid. The method is based on a goal programming approach with fuel cost and battery wear cost as the two objectives. Its effectiveness was evaluated through a simulation study of a PV-diesel hybrid microgrid using deterministic and stochastic optimization approaches.
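A minimal sketch of one real-time dispatch step trading fuel cost against battery wear cost, in the spirit of the two-objective balance described above. The brute-force search, parameter names, and cost model are our illustrative assumptions, not the dissertation's goal-programming formulation.

```python
def dispatch_step(load_kw, pv_kw, fuel_cost, wear_cost,
                  batt_max_kw, gen_max_kw, step=0.1):
    """Split the net load between the diesel generator and the battery
    by minimizing fuel cost + battery wear cost over a grid of battery
    power set points. Positive battery power = discharge.
    Returns (gen_kw, batt_kw)."""
    net = max(load_kw - pv_kw, 0.0)             # PV serves load first
    best, best_cost = (net, 0.0), net * fuel_cost
    n_steps = int(round(2 * batt_max_kw / step))
    for k in range(n_steps + 1):
        p = -batt_max_kw + k * step             # candidate battery power
        gen = net - p                           # generator covers the rest
        if gen < -1e-9 or gen > gen_max_kw:
            continue
        gen = max(gen, 0.0)
        cost = gen * fuel_cost + abs(p) * wear_cost
        if cost < best_cost - 1e-12:
            best, best_cost = (gen, p), cost
    return best
```

When wear cost dominates, the battery sits idle and the generator carries the load; when fuel cost dominates, the battery discharges, which is exactly the trade-off that shortens battery life if left unpriced.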
Finding the optimal lengths for three branches at a junction.
Woldenberg, M J; Horsfield, K
1983-09-21
This paper presents an exact analytical solution to the problem of locating the junction point between three branches so that the sum of the total costs of the branches is minimized. When the cost per unit length of each branch is known, the angles between each pair of branches can be deduced following reasoning first introduced to biology by Murray. Assuming the outer ends of the branches are fixed, the location of the junction and the length of each branch are then deduced using plane geometry and trigonometry. The model has applications in determining the optimal cost of a branch or branches at a junction. Comparing the optimal to the actual cost of a junction is a new way to test cost models for goodness of fit to actual junction geometry; it is an unambiguous measure and is superior to comparing observed and optimal angles between each daughter and the parent branch. We present data for 199 junctions in the pulmonary arteries of two human lungs. For the branches at each junction we calculated the best-fitting value of x from the relationship flow ∝ (radius)^x. We found that the value of x determined whether a junction was best fitted by a surface, volume, drag or power minimization model. While economy of explanation casts doubt that four models operate simultaneously, we found that optimality may still operate, since the angle to the major daughter is less than the angle to the minor daughter. Perhaps optimality combined with a space-filling branching pattern governs the branching geometry of the pulmonary artery.
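The problem above, locating a junction that minimizes the weighted sum of branch lengths from three fixed endpoints, is the weighted Fermat point (weighted geometric median), which the classical Weiszfeld iteration solves numerically. This sketch is our illustration, not the paper's exact trigonometric solution.

```python
import math

def optimal_junction(points, weights, iters=200):
    """Weiszfeld iteration for the point minimizing the weighted sum of
    distances to three fixed branch endpoints, where each weight is the
    cost per unit length of that branch."""
    # start from the weighted centroid
    x = sum(w * p[0] for p, w in zip(points, weights)) / sum(weights)
    y = sum(w * p[1] for p, w in zip(points, weights)) / sum(weights)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < 1e-12:
                return (px, py)  # junction collapsed onto an endpoint
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return (x, y)
```

With equal weights on an equilateral triangle the junction is the Fermat point at the centroid; when one branch's cost dominates (exceeding the other two combined), the junction degenerates onto that branch's endpoint, mirroring the angle conditions derived in the paper.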
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Hany F; El Hariri, Mohamad; Elsayed, Ahmed
Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain stable system operation and also to limit the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter leads to a reduction of the supercapacitor's fault current contribution and an increase in that of the other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the supercapacitor size and the optimized controller parameters resulting from the proposed two-level optimization scheme fed sufficient fault current for different types of faults while minimizing the cost of the protection scheme.
NASA Astrophysics Data System (ADS)
Biswas, R.; Kuar, A. S.; Mitra, S.
2014-09-01
Nd:YAG laser-microdrilled holes in gamma-titanium aluminide, a newly developed alloy with wide applications in turbine blades, engine valves, cases, metal-cutting tools, missile components, nuclear fuel, and biomedical engineering, are important from the point of view of dimensional accuracy and hole quality. With this in mind, a central composite design (CCD) based on response surface methodology (RSM) is employed for multi-objective optimization of the pulsed Nd:YAG laser microdrilling of gamma-titanium aluminide alloy sheet to achieve optimum hole characteristics within the existing resources. Three characteristics, hole diameter at entry, hole diameter at exit, and hole taper, have been considered for simultaneous optimization; the individual optimization of all three responses has also been carried out. The input parameters considered are lamp current, pulse frequency, assist air pressure, and job thickness. The responses at the predicted optimum parameter levels are in good agreement with the results of confirmation experiments conducted for verification.
Optimization Strategies for Single-Stage, Multi-Stage and Continuous ADRs
NASA Technical Reports Server (NTRS)
Shirron, Peter J.
2014-01-01
Adiabatic Demagnetization Refrigerators (ADR) have many advantages that are prompting a resurgence in their use in spaceflight and laboratory applications. They are solid-state coolers capable of very high efficiency and very wide operating range. However, their low energy storage density translates to larger mass for a given cooling capacity than is possible with other refrigeration techniques. The interplay between refrigerant mass and other parameters such as magnetic field and heat transfer points in multi-stage ADRs gives rise to a wide parameter space for optimization. This paper first presents optimization strategies for single ADR stages, focusing primarily on obtaining the largest cooling capacity per stage mass, then discusses the optimization of multi-stage and continuous ADRs in the context of the coordinated heat transfer that must occur between stages. The goal for the latter is usually to obtain the largest cooling power per mass or volume, but there can also be many secondary objectives, such as limiting instantaneous heat rejection rates and producing intermediate temperatures for cooling of other instrument components.
Time-optimal thermalization of single-mode Gaussian states
NASA Astrophysics Data System (ADS)
Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio
2014-11-01
We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics, and we analytically derive the optimal relaxation time. Our model has potentially interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.
Mean Posterior Corneal Power and Astigmatism in Normal Versus Keratoconic Eyes.
Feizi, Sepehr; Delfazayebaher, Siamak; Javadi, Mohammad Ali; Karimian, Farid; Ownagh, Vahid; Sadeghpour, Fatemeh
2018-01-01
To compare mean posterior corneal power and astigmatism in normal versus keratoconus-affected eyes and to determine the optimal cut-off points that maximize sensitivity and specificity in discriminating keratoconus from normal corneas. A total of 204 normal eyes and 142 keratoconus-affected eyes were enrolled in this prospective comparative study. Mean posterior corneal power and astigmatism were measured using a dual Scheimpflug camera. Correlation coefficients were calculated to assess the relationship between the magnitudes of keratometric and posterior corneal astigmatism in the study groups. Receiver operating characteristic curves were used to compare the sensitivity and specificity of the measured parameters and to identify the optimal cut-off points for discriminating keratoconus from normal corneas. The mean posterior corneal power was -6.29 ± 0.20 D in the normal group and -7.77 ± 0.87 D in the keratoconus group (P < 0.001). The mean magnitudes of posterior corneal astigmatism were -0.32 ± 0.15 D and -0.94 ± 0.39 D in the normal and keratoconus groups, respectively (P < 0.001). Significant correlations were found between the magnitudes of keratometric and posterior corneal astigmatism in the normal (r = -0.76, P < 0.001) and keratoconus (r = -0.72, P < 0.001) groups. Mean posterior corneal power and astigmatism were highly reliable characteristics that distinguished keratoconus from normal corneas (area under the curve, 0.99 and 0.95, respectively). The optimal cut-off points of mean posterior corneal power and astigmatism were -6.70 D and -0.54 D, respectively. Mean posterior corneal power and astigmatism measured using the Galilei dual Scheimpflug analyzer might have potential in diagnosing keratoconus, and the cut-off points provided can be used for keratoconus screening.
Booth, Ronald A; Jiang, Ying; Morrison, Howard; Orpana, Heather; Rogers Van Katwyk, Susan; Lemieux, Chantal
2018-02-01
Previous studies have shown varying sensitivity and specificity of hemoglobin A1c (HbA1c) for identifying diabetes and prediabetes, compared to 2-h oral glucose tolerance testing (OGTT) and fasting plasma glucose (FPG), in different ethnic groups. Within the Canadian population, the ability of HbA1c to identify prediabetes and diabetes in First Nations, Métis, Inuit, East Asian and South Asian ethnic groups has yet to be determined. We collected demographic information, lifestyle information, and biochemical measures of glycemic status (FPG, OGTT, and HbA1c) from an ethnically diverse Canadian population sample, which included purposeful sampling of First Nations, Métis, Inuit, South Asian and East Asian participants. Sensitivity and specificity using the Canadian Diabetes Association (CDA) recommended cut-points varied between ethnic groups, with greater variability for identification of prediabetes than diabetes. Dysglycemia (prediabetes and diabetes) was identified with a sensitivity of 47.1% and specificity of 87.5% in Caucasians, versus 24.1% and 88.8% in Inuit. Optimal HbA1c ethnic-specific cut-points for dysglycemia and diabetes were determined by receiver operating characteristic (ROC) curve analysis. Our sample showed broad differences in the ability of HbA1c to identify dysglycemia or diabetes in different ethnic groups, and the optimal cut-points for dysglycemia or diabetes in all ethnic groups were substantially lower than the CDA recommendations. Utilization of HbA1c as the sole biochemical diagnostic marker may therefore produce varying degrees of false negative results depending on the ethnicity of the screened individuals. Further research is necessary to identify and validate optimal ethnic-specific cut-points for diabetes screening in the Canadian population. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Poppas, D P; Klioze, S D; Uzzo, R G; Schlossberg, S M
1995-02-01
Laser tissue welding in genitourinary reconstructive surgery has been shown in animal models to decrease operative time, improve healing, and decrease postoperative fistula formation compared with conventional suture controls. Although the absence of suture material is the ultimate goal, this has not been shown to be practical with current technology for larger repairs; therefore, suture-assisted laser tissue welding will likely be performed. This study sought to determine the optimal suture to be used during laser welding. The integrity of various organic and synthetic sutures exposed to laser irradiation was analyzed. The sutures studied included gut, clear Vicryl, clear polydioxanone suture (PDS), and violet PDS. Sutures were irradiated with a potassium titanyl phosphate (KTP)-532 laser or an 808-nm diode laser, with and without the addition of a light-absorbing chromophore (fluorescein or indocyanine green, respectively). A remote temperature-sensing device obtained real-time surface temperatures during lasing, and the average temperature, time, and total energy at break point were recorded. Overall, gut suture reached significantly higher temperatures and withstood higher average energy delivery at break point with both the KTP-532 and the 808-nm diode lasers compared with all other groups (P < 0.05). Both chromophore-treated groups had higher average temperatures at break point combined with lower average energy. The break-point temperature for all groups other than gut was 91 degrees C or less, while the optimal temperature range for tissue welding appears to be between 60 and 80 degrees C. Gut suture therefore offers the greatest margin of error for KTP and 808-nm diode laser welding, with or without the use of a chromophore.
Lau, Esther Yuet Ying; Harry Hui, C; Cheung, Shu-Fai; Lam, Jasmine
2015-11-01
Sleep and optimism are important psycho-biological and personality constructs, respectively. However, very little work has examined the causal relationship between them, and none has examined the potential mechanisms operating in the relationship. This study aimed to understand whether sleep quality was a cause or an effect of optimism, and whether depressive mood could explain the relationship. Internet survey data were collected from 987 Chinese working adults (63.4% female, 92.4% full-time workers, 27.0% married, 90.2% Hong Kong residents, mean age = 32.59) at three time points spanning about 19 months. Measures included a Chinese attributional style questionnaire, the Pittsburgh Sleep Quality Index, and the Depression Anxiety Stress Scale. Cross-sectional analyses revealed moderate correlations among sleep quality, depressive mood, and optimism. Cross-lagged analyses showed a bidirectional causality between optimism and sleep: optimism improves sleep, and poor sleep makes a pessimist. Path analysis demonstrated that depressive mood fully mediated the influence of optimism on sleep quality, and partially mediated the influence of sleep quality on optimism. Thus the effects of sleep quality on optimism could not be fully explained by depressive mood, highlighting the unique role of sleep on optimism. Understanding the mechanisms of the feedback loop of sleep quality, mood, and optimism may provide insights for clinical interventions for individuals presenting with mood-related problems. Copyright © 2015 Elsevier Inc. All rights reserved.
Luo, Xiongbiao; Wan, Ying; He, Xiangjian; Mori, Kensaku
2015-02-01
Registration of pre-clinical images to physical space is indispensable for computer-assisted endoscopic interventions in operating rooms. Electromagnetically navigated endoscopic interventions are increasingly performed in current diagnosis and treatment. Such interventions use an electromagnetic tracker with a miniature sensor, usually attached at the endoscope's distal tip, to track endoscope movements in real time in a pre-clinical image space. Spatial alignment between the electromagnetic tracker (or sensor) and the pre-clinical images must be performed to navigate the endoscope to target regions. This paper proposes an adaptive marker-free registration method that uses a multiple-point selection strategy. The method addresses the common assumption that the endoscope is operated along the centerline of an intraluminal organ, which is easily violated during interventions. We introduce an adaptive strategy that generates multiple points from the sensor measurements and the endoscope tip-center calibration. From these generated points, we adaptively choose the optimal point, the one closest to the centerline of the hollow organ, to perform registration. The experimental results demonstrate that our adaptive strategy significantly reduced the target registration error from 5.32 to 2.59 mm in static phantom validation, and from at least 7.58 mm to 4.71 mm in dynamic phantom validation, compared to currently available methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
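The selection rule described above, choosing from the generated candidate points the one closest to the organ centerline, can be sketched as follows. The point and polyline representations are our illustrative assumptions, not the paper's data structures.

```python
import math

def point_to_polyline(p, polyline):
    """Distance from a 3-D point to a piecewise-linear centerline,
    taking the minimum over point-to-segment distances."""
    best = float("inf")
    for (ax, ay, az), (bx, by, bz) in zip(polyline, polyline[1:]):
        abx, aby, abz = bx - ax, by - ay, bz - az
        apx, apy, apz = p[0] - ax, p[1] - ay, p[2] - az
        denom = abx * abx + aby * aby + abz * abz
        t = 0.0 if denom == 0 else max(
            0.0, min(1.0, (apx * abx + apy * aby + apz * abz) / denom))
        closest = (ax + t * abx, ay + t * aby, az + t * abz)
        best = min(best, math.dist(p, closest))
    return best

def select_registration_point(candidates, centerline):
    """Adaptive rule: pick the candidate tip-center point closest
    to the organ centerline and use it for registration."""
    return min(candidates, key=lambda p: point_to_polyline(p, centerline))
```

Feeding in one such selected point per sensor measurement, rather than the raw tip position, is what relaxes the on-centerline assumption.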
NASA Technical Reports Server (NTRS)
Holladay, Jon; Day, Greg; Gill, Larry
2004-01-01
Spacecraft are typically designed with a primary focus on weight in order to meet launch vehicle performance parameters. However, for pressurized and/or man-rated spacecraft, it is also necessary to understand the vehicle's operating environments to properly size the pressure vessel. Proper sizing requires an understanding of the space vehicle's life cycle and a comparison of the physical design optimization (weight and launch "cost") against downstream operational complexity and total life cycle cost. This paper provides an overview of some major environmental design drivers and gives examples for calculating the optimal design pressure against a selected set of design parameters from thermal and environmental perspectives. In addition, it provides a generic set of cracking pressures for both positive and negative pressure relief valves that encompasses worst-case environmental effects for a variety of launch/landing sites. Finally, several examples highlight pressure relief set points and vehicle weight impacts for a selected set of orbital missions.
Robust multi-model control of an autonomous wind power system
NASA Astrophysics Data System (ADS)
Cutululis, Nicolas Antonio; Ceanga, Emil; Hansen, Anca Daniela; Sørensen, Poul
2006-09-01
This article presents a robust multi-model control structure for a wind power system that uses a variable speed wind turbine (VSWT) driving a permanent magnet synchronous generator (PMSG) connected to a local grid. The control problem consists in maximizing the energy captured from the wind for varying wind speeds. The VSWT-PMSG linearized model analysis reveals the resonant nature of its dynamics at points on the optimal regimes characteristic (ORC): the natural frequency of the system and the damping factor are strongly dependent on the operating point on the ORC. Under these circumstances a robust multi-model control structure is designed, and the simulation results prove its viability.
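The operating points on the optimal regimes characteristic referenced above can be illustrated with the standard optimal-torque law that holds a turbine on the ORC in steady state. This is textbook maximum-power-point material with assumed symbols, not the article's multi-model controller.

```python
import math

def orc_torque_gain(rho, radius, cp_max, lambda_opt):
    """Gain k of the optimal-torque law T_ref = k * omega^2, derived from
    aerodynamic power P = 0.5*rho*pi*R^2*Cp*v^3 with v = omega*R/lambda_opt;
    tracking this torque keeps the turbine on the ORC in steady state."""
    return 0.5 * rho * math.pi * radius ** 5 * cp_max / lambda_opt ** 3

def orc_power(k, omega):
    """Mechanical power extracted on the ORC at rotor speed omega (rad/s)."""
    return k * omega ** 3
```

Because power on the ORC scales with omega cubed, the small-signal dynamics (natural frequency, damping) linearized around each operating point differ substantially along the curve, which is the motivation for the multi-model design.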
NASA Astrophysics Data System (ADS)
Santos, Sergio; Barcons, Victor; Christenson, Hugo K.; Billingsley, Daniel J.; Bonass, William A.; Font, Josep; Thomson, Neil H.
2013-08-01
A way to operate fundamental-mode amplitude-modulation atomic force microscopy is introduced that optimizes stability and resolution for a given tip size and shows negligible tip wear over extended time periods (~24 h). In small-amplitude small set-point (SASS) imaging, the cantilever oscillates with sub-nanometer amplitudes in the proximity of the sample, without requiring large drive forces, as the dynamics smoothly lead the tip to the surface through the water layer. SASS is demonstrated on single molecules of double-stranded DNA in ambient conditions, where sharp silicon tips (R ~ 2-5 nm) can resolve the right-handed double helix.
Assessing the system value of optimal load shifting
Merrick, James; Ye, Yinyu; Entriken, Bob
2017-04-30
We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.
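A stylized sketch of the load-shifting behavior the model captures: a shiftable slice of each hour's load migrates to the cheapest hour while total daily energy is conserved. This is a toy stand-in for the paper's equilibrium formulation; the single-destination rule and names are our assumptions.

```python
def shift_load(prices, load, shiftable_fraction=0.2):
    """Move the shiftable slice of each hour's load to the cheapest-priced
    hour, holding total energy constant. `prices` and `load` are per-hour."""
    cheapest = prices.index(min(prices))
    shifted = [l * (1.0 - shiftable_fraction) for l in load]
    shifted[cheapest] += shiftable_fraction * sum(load)
    return shifted

def energy_cost(prices, load):
    """Total energy cost at the given per-hour prices."""
    return sum(p * l for p, l in zip(prices, load))
```

The cost reduction achieved this way is the per-customer "energy arbitrage value" the abstract quantifies; in equilibrium the shifting itself flattens prices, which the toy version ignores.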
Optimal helicopter trajectory planning for terrain following flight
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1990-01-01
Helicopters operating in high-threat areas have to fly close to the earth's surface to minimize the risk of detection by adversaries. Techniques are presented for low-altitude helicopter trajectory planning. These methods are based on optimal control theory and appear to be implementable onboard in real time. Second-order necessary conditions are obtained to provide a criterion for finding the optimal trajectory when more than one extremal passes through a given point. A second trajectory planning method incorporating a quadratic performance index is also discussed. The trajectory planning problem is then formulated as a differential game, with the objective of synthesizing optimal trajectories in the presence of an actively maneuvering adversary. Numerical methods for obtaining solutions to these problems are outlined. As an alternative to numerical methods, feedback-linearizing transformations are combined with linear-quadratic game results to synthesize explicit nonlinear feedback strategies for helicopter pursuit-evasion. Some of the trajectories generated in this research are evaluated on a six-degree-of-freedom helicopter simulation incorporating an advanced autopilot. The optimal trajectory planning methods presented are also useful for autonomous land vehicle guidance.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To address the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm, for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The objective function is designed as a weighted sum of evaluation indices and optimized with GSDA so as to obtain a higher-resolution RS image. The main points of the work are as follows.
•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
•GSDA is presented for the self-adaptive adjustment of the fusion rules.
•The model operator and the observation operator form the fusion scheme of RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
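A minimal real-coded genetic search driven by a weighted-sum objective, in the spirit of the GSDA optimizer described above. This is a generic sketch: the toy objective, population sizes, and operators are our assumptions, not the paper's algorithm.

```python
import random

def weighted_objective(indices, weights):
    """Scalar fitness: weighted sum of image-quality evaluation indices."""
    return sum(w * v for w, v in zip(weights, indices))

def genetic_search(evaluate, bounds, pop=30, gens=60, seed=1):
    """Tiny real-coded GA: keep the best half, refill with averaged
    (crossover) parents plus Gaussian mutation. `evaluate` maps a
    2-parameter vector to the fitness to maximize."""
    rng = random.Random(seed)
    lo, hi = bounds
    popn = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=evaluate, reverse=True)
        elite = popn[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.05) for x, y in zip(a, b)]
            children.append([min(hi, max(lo, v)) for v in child])
        popn = elite + children
    return max(popn, key=evaluate)
```

In the fusion setting, the parameter vector would encode the fusion-rule settings, and the weighted sum would combine quality indices of the fused image; here a smooth toy objective stands in for that evaluation.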
Evaluation and correction of laser-scanned point clouds
NASA Astrophysics Data System (ADS)
Teutsch, Christian; Isenberg, Tobias; Trostmann, Erik; Weber, Michael; Berndt, Dirk; Strothotte, Thomas
2005-01-01
The digitalization of real-world objects is of great importance in various application domains; in industrial processes, for example, quality assurance requires the geometric properties of workpieces to be measured. Traditionally, this is done with gauges, which is somewhat subjective and time-consuming. We developed a robust optical laser scanner for the digitalization of arbitrary objects, primarily industrial workpieces. As the measuring principle we use triangulation with structured lighting and a multi-axis locomotor system. Measurements on the generated data lead to incorrect results if the contained error is too high. Therefore, processes for geometric inspection under non-laboratory conditions are needed that are robust in permanent use and provide high accuracy as well as high operation speed. The many existing methods for polygonal mesh optimization produce very esthetic 3D models but often require user interaction and are limited in processing speed and/or accuracy. Furthermore, operations on optimized meshes consider the entire model and pay only little attention to individual measurements; many measurements contribute to parts or single scans, and possibly strong differences between neighboring scans are lost during mesh construction. Also, most algorithms consider unsorted point clouds, although the scanned data is structured through device properties and measuring principles. We use this underlying structure to achieve high processing speeds, and we extract intrinsic system parameters and use them for fast pre-processing.
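The triangulation principle mentioned above reduces, in the simplest pinhole model, to depth from the observed displacement of the laser stripe on the sensor. This one-liner is a generic illustration with assumed symbols, not the scanner's actual calibration model:

```python
def triangulation_depth(baseline, focal_len, stripe_offset):
    """Depth z = f * b / d for a pinhole triangulation setup: baseline b
    between laser and camera, focal length f, and observed stripe
    displacement d on the sensor (all lengths in consistent units)."""
    if stripe_offset <= 0:
        raise ValueError("stripe displacement must be positive")
    return focal_len * baseline / stripe_offset
```

The inverse relation between depth and stripe displacement is also why the measurement error grows with distance, one of the device-structure facts the paper's pre-processing exploits.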
NASA Astrophysics Data System (ADS)
Kiran, B. S.; Singh, Satyendra; Negi, Kuldeep
The GSAT-12 spacecraft is providing communication services from the INSAT/GSAT system in the Indian region. The spacecraft carries 12 extended C-band transponders. GSAT-12 was launched by ISRO’s PSLV from Sriharikota into a sub-geosynchronous transfer orbit (sub-GTO) of 284 x 21000 km with an inclination of 18 deg. This mission successfully accomplished combined optimization of launch vehicle and satellite capabilities to maximize the operational life of the spacecraft. This paper describes the mission analysis carried out for GSAT-12, comprising the launch window, an orbital events study, and orbit-raising maneuver strategies under various mission operational constraints. GSAT-12 is equipped with two earth sensors (ES), three gyroscopes and a digital sun sensor. The launch window was generated considering the mission requirement of a minimum of 45 minutes of ES data for calibration of the gyros in a Roll-sun-pointing orientation in the transfer orbit. Since the transfer-orbit period was a rather short 6.1 hr, the required pitch biases were worked out to meet the gyro-calibration requirement. A 440 N Liquid Apogee Motor (LAM) is used for orbit raising. The objective of the maneuver strategy is to achieve the desired drift orbit while satisfying mission constraints and minimizing propellant expenditure. For a sub-GTO, the optimal strategy is to first perform an in-plane maneuver at perigee to raise the apogee to synchronous level and then perform combined maneuvers at the synchronous apogee to achieve the desired drift orbit. The perigee burn opportunities were examined considering the ground station visibility required for monitoring the burns. Two maneuver strategies were proposed: an optimal five-burn strategy with two perigee burns centered around perigee#5 and perigee#8 with partial ground station visibility and three apogee burns with dual station visibility; and a near-optimal five-burn strategy with two off-perigee burns at perigee#5 and perigee#8 with single ground station visibility and three apogee burns with dual station visibility.
The range vector profiles were studied in the spacecraft frame during the LAM burn phases, and accurate polarization predictions were provided to the supporting ground stations. The near-optimal strategy was selected for implementation in order to ensure full visibility during each LAM burn. Contingency maneuver plans, accounting for 3-sigma dispersions in the transfer orbit, were generated in preparation for specified propulsion-system contingencies. GSAT-12 is positioned at 83 deg East longitude. The estimated operational life is about 11 years, realized through the operationally optimal maneuver strategy selected from the detailed mission analysis.
Martínez-Sánchez, Jose M; Fu, Marcela; Ariza, Carles; López, María J; Saltó, Esteve; Pascual, José A; Schiaffino, Anna; Borràs, Josep M; Peris, Mercè; Agudo, Antonio; Nebot, Manel; Fernández, Esteve
2009-01-01
To assess the optimal cut-point for salivary cotinine concentration to identify smoking status in the adult population of Barcelona. We performed a cross-sectional study of a representative sample (n=1,117) of the adult population (>16 years) in Barcelona (2004-2005). This study gathered information on active and passive smoking by means of a questionnaire and a saliva sample for cotinine determination. We analyzed sensitivity and specificity according to sex, age, smoking status (daily and occasional), and exposure to second-hand smoke at home. ROC curves and the area under the curve were calculated. The prevalence of smokers (daily and occasional) was 27.8% (95% CI: 25.2-30.4%). The optimal cut-point to discriminate smoking status was 9.2 ng/ml (sensitivity=88.7% and specificity=89.0%). The area under the ROC curve was 0.952. The optimal cut-point was 12.2 ng/ml in men and 7.6 ng/ml in women. The optimal cut-point was higher at ages with a greater prevalence of smoking. Daily smokers had a higher cut-point than occasional smokers. The optimal cut-point to discriminate smoking status in the adult population is 9.2 ng/ml, with sensitivities and specificities around 90%. The cut-point was higher in men and in younger people. The cut-point increases with higher prevalence of daily smokers.
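The optimal-cutoff selection this abstract describes is commonly done by maximizing Youden's J (sensitivity + specificity - 1) along the ROC curve. A minimal Python sketch on invented toy data, not the Barcelona sample:

```python
def optimal_cutoff(values, is_smoker):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    values: biomarker concentrations (e.g. salivary cotinine, ng/ml)
    is_smoker: parallel list of booleans (reference smoking status)
    """
    best_cut, best_j = None, -1.0
    pos = sum(is_smoker)
    neg = len(is_smoker) - pos
    for cut in sorted(set(values)):
        tp = sum(v >= cut and s for v, s in zip(values, is_smoker))
        tn = sum(v < cut and not s for v, s in zip(values, is_smoker))
        sens, spec = tp / pos, tn / neg
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Toy data (invented): non-smokers cluster low, smokers high.
cotinine = [0.5, 1.2, 3.0, 4.1, 8.0, 15.0, 40.0, 120.0, 250.0, 310.0]
smoker   = [False, False, False, False, False, True, True, True, True, True]
cut, j = optimal_cutoff(cotinine, smoker)
```

On real data the classes overlap, J stays below 1, and the chosen cutoff shifts with prevalence, which is the effect the study reports across sex and age groups.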
Feedback dew-point sensor utilizing optimally cut plastic optical fibres
NASA Astrophysics Data System (ADS)
Hadjiloucas, S.; Irvine, J.; Keating, D. A.
2000-01-01
A plastic optical fibre reflectance sensor that makes full use of the critical angle of the fibres is implemented to monitor dew formation on a Peltier-cooled reflector surface. The optical configuration permits isolation of the optoelectronic components from the sensing head and better light coupling between the reflector and the detecting fibre, giving a clearer signal of the onset of dew formation on the reflector. Continuous monitoring of the rate of change in reflectance as well as the absolute reflectance signals, the use of a novel polymethyl-methacrylate-coated hydrophobic film reflector on the Peltier element, and the application of feedback around the point of dew formation further reduce the possibility of contamination of the sensor head. Under closed-loop operation, the sensor is capable of cycling around the point of dew formation at a frequency of 2.5 Hz.
A conceptual framework for economic optimization of an animal health surveillance portfolio.
Guo, X; Claassen, G D H; Oude Lansink, A G J M; Saatkamp, H W
2016-04-01
Decision making on hazard surveillance in livestock product chains is a multi-hazard, multi-stakeholder, and multi-criteria process that includes a variety of decision alternatives. The multi-hazard aspect means that the allocation of scarce surveillance resources should be optimized from the point of view of a surveillance portfolio (SP) rather than a single hazard. In this paper, we present a novel conceptual approach for the economic optimization of an SP to address the resource allocation problem of a surveillance organization from a theoretical perspective. This approach uses multi-criteria techniques to evaluate the performance of different settings of an SP, taking the cost-benefit aspects of surveillance and stakeholders' preferences into account. The credibility of the approach was checked for conceptual validity, data needs and operational validity; the application potential of the approach is also discussed.
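The portfolio idea, scoring alternative surveillance settings against weighted criteria under a budget, can be illustrated with a brute-force sketch. All component names, costs, scores, weights, and the budget below are invented; the paper's framework is conceptual and far richer than this toy.

```python
from itertools import combinations

# Hypothetical surveillance components: (name, cost, per-criterion scores).
# Criteria might be detection speed, coverage, stakeholder acceptance; the
# weights encode stakeholder preferences. All numbers are invented.
components = [
    ("avian-flu PCR",   40, (0.9, 0.6, 0.7)),
    ("abattoir checks", 25, (0.5, 0.8, 0.9)),
    ("farm visits",     35, (0.6, 0.9, 0.5)),
    ("bulk-milk ELISA", 20, (0.7, 0.5, 0.8)),
]
WEIGHTS, BUDGET = (0.5, 0.3, 0.2), 60

def portfolio_value(portfolio):
    # Weighted-sum multi-criteria score, summed over chosen components
    return sum(sum(w * s for w, s in zip(WEIGHTS, scores))
               for _, _, scores in portfolio)

# Enumerate every affordable subset and keep the best-scoring one
best = max(
    (p for r in range(len(components) + 1)
       for p in combinations(components, r)
       if sum(c for _, c, _ in p) <= BUDGET),
    key=portfolio_value,
)
chosen = {name for name, _, _ in best}
```

Exhaustive enumeration is only viable for a handful of components; a real SP optimization would use integer programming or a heuristic, but the objective structure is the same.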
Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.
Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus
2015-02-01
Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimized disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, pointing to a potential twofold gain in speed. In practice, however, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. It will thus be a valuable tool for electron tomography studies with increasing resolution needs. Copyright © 2014 Elsevier Inc. All rights reserved.
Gerbershagen, H J; Rothaug, J; Kalkman, C J; Meissner, W
2011-10-01
Cut-off points (CPs) of the numeric rating scale (NRS 0-10) are regularly used in postoperative pain treatment. However, there is insufficient evidence to identify the optimal CP between mild and moderate pain. A total of 435 patients undergoing general, trauma, or oral and maxillofacial surgery were studied. To determine the optimal CP for pain treatment, four approaches were used: first, patients estimated their tolerable postoperative pain intensity before operation; second, 24 h after surgery, they indicated if they would have preferred to receive more analgesics; third, satisfaction with pain treatment was analysed; and fourth, multivariate analysis was used to calculate the optimal CP for pain intensities in relation to pain-related interference with movement, breathing, sleep, and mood. The estimated tolerable postoperative pain before operation was median (range) NRS 4.0 (0-10). Patients who would have liked more analgesics reported significantly higher average pain since surgery [median NRS 5.0 (0-9)] compared with those without this request [NRS 3.0 (0-8)]. Patients satisfied with pain treatment reported an average pain intensity of median NRS 3.0 (0-8) compared with less satisfied patients with NRS 5.0 (2-9). Analysis of average postoperative pain in relation to pain-related interference with mood and activity indicated pain categories of NRS 0-2, mild; 3-4, moderate; and 5-10, severe pain. Three of the four methods identified a treatment threshold of average pain of NRS≥4. This was considered to identify patients with pain of moderate-to-severe intensity. This cut-off was identified as the tolerable pain threshold.
Initial eye movements during face identification are optimal and similar across cultures
Or, Charles C.-F.; Peterson, Matthew F.; Eckstein, Miguel P.
2015-01-01
Culture influences not only human high-level cognitive processes but also low-level perceptual operations. Some perceptual operations, such as initial eye movements to faces, are critical for extraction of information supporting evolutionarily important tasks such as face identification. The extent of cultural effects on these crucial perceptual processes is unknown. Here, we report that the first gaze location for face identification was similar across East Asian and Western Caucasian cultural groups: Both fixated a featureless point between the eyes and the nose, with smaller between-group than within-group differences and with a small horizontal difference across cultures (8% of the interocular distance). We also show that individuals of both cultural groups initially fixated at a slightly higher point on Asian faces than on Caucasian faces. The initial fixations were found to be both fundamental in acquiring the majority of information for face identification and optimal, as accuracy deteriorated when observers held their gaze away from their preferred fixations. An ideal observer that integrated facial information with the human visual system's varying spatial resolution across the visual field showed a similar information distribution across faces of both races and predicted initial human fixations. The model consistently replicated the small vertical difference between human fixations to Asian and Caucasian faces but did not predict the small horizontal leftward bias of Caucasian observers. Together, the results suggest that initial eye movements during face identification may be driven by brain mechanisms aimed at maximizing accuracy, and less influenced by culture. The findings increase our understanding of the interplay between the brain's aims to optimally accomplish basic perceptual functions and to respond to sociocultural influences. PMID:26382003
An approach to the parametric design of ion thrusters
NASA Technical Reports Server (NTRS)
Wilbur, Paul J.; Beattie, John R.; Hyman, Jay, Jr.
1988-01-01
A methodology that can be used to determine which of several physical constraints can limit ion thruster power and thrust, under various design and operating conditions, is presented. The methodology is exercised to demonstrate typical limitations imposed by grid system span-to-gap ratio, intragrid electric field, discharge chamber power per unit beam area, screen grid lifetime, and accelerator grid lifetime constraints. Limitations on power and thrust for a thruster defined by typical discharge chamber and grid system parameters when it is operated at maximum thrust-to-power are discussed. It is pointed out that other operational objectives such as optimization of payload fraction or mission duration can be substituted for the thrust-to-power objective and that the methodology can be used as a tool for mission analysis.
Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan
2013-01-01
Location-allocation is a combinatorial optimization problem and is NP-hard. Therefore, solving such a problem should shift from exact to heuristic or meta-heuristic methods due to its complexity. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, since a proper method will reduce the time of relief operations and consequently decrease the number of fatalities. This paper presents a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the capabilities of Geographic Information Systems (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm optimizes the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation, when injured people have to be taken to medical centers in a reasonable time.
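The structure of the capacitated location-allocation problem can be shown on a toy instance. The sketch below replaces the paper's nested genetic algorithms with exhaustive search over candidate sites plus a greedy nearest-center allocation, which suffices at this scale; all coordinates, demands, and capacities are invented.

```python
from itertools import combinations

# Toy capacitated location-allocation on a line (positions in km).
demand = {0: 30, 4: 20, 9: 25}            # demand point -> number of injured
sites, capacity, n_open = [1, 5, 8], 40, 2  # candidate sites, per-center cap

def allocate(open_sites):
    """Greedy inner step: send each point's injured to the nearest open
    centers that still have capacity (the role of the inner GA)."""
    left = {s: capacity for s in open_sites}
    total_cost, served = 0, 0
    for point, need in demand.items():
        for s in sorted(open_sites, key=lambda s: abs(s - point)):
            take = min(need, left[s])
            total_cost += take * abs(s - point)   # person-km travelled
            left[s] -= take
            need -= take
            served += take
            if need == 0:
                break
    return total_cost, served

# Outer step (the role of the outer GA): choose which sites to open
best_sites = min(combinations(sites, n_open), key=lambda c: allocate(c)[0])
```

With realistic road networks and hundreds of demand points neither loop stays tractable, which is why the paper resorts to nested genetic algorithms over a GIS.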
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy. Nowadays, several sensors operating in different frequency bands often become available on a sensor platform. It is an attractive goal to exploit advanced signal modelling and optimization procedures by making proper use of information from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between the measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes than single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization are also more accurate than those obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
Distress or no distress, that's the question: A cutoff point for distress in a working population
van Rhenen, Willem; van Dijk, Frank JH; Schaufeli, Wilmar B; Blonk, Roland WB
2008-01-01
Background The objective of the present study is to establish an optimal cutoff point for distress measured with the corresponding scale of the 4DSQ, using the prediction of sickness absence as a criterion. The cutoff point should result in a measure that can be used as a credible selection instrument for sickness absence in occupational health practice and in future studies on distress and mental disorders. Methods Distress is measured using the Four Dimensional Symptom Questionnaire (4DSQ), a 50-item self-report questionnaire, in a working population with and without sickness absence due to distress. Sensitivity and specificity were compared for various potential cutoff points, and a receiver operating characteristics analysis was conducted. Results and conclusion A distress cutoff point of ≥11 was defined. The choice was based on a challenging specificity and negative predictive value and indicates a distress level at which an employee is presumably at risk for subsequent sick leave on psychological grounds. The defined distress cutoff point is appropriate for use in occupational health practice and in studies of distress in working populations. PMID:18205912
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method for optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman (HJB) equation or the Euler-Lagrange (EL) equations. The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, the computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching for the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network.
As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
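The exact DP solution that NDP approximates can be shown on a tiny example. The value-iteration sketch below (standard tabular DP, not the dissertation's method) solves a five-state shortest-path problem; it is this kind of table, prohibitively large for high-order nonlinear systems, that NDP replaces with a neural network.

```python
# Tabular value iteration for a tiny deterministic shortest-path problem.
# States 0..4 lie on a line, the goal is state 4, actions move left/right.
N_STATES, GOAL = 5, 4
STEP_COST, GAMMA = 1.0, 1.0   # unit cost per step, undiscounted

def value_iteration(sweeps=20):
    V = [0.0] * N_STATES
    for _ in range(sweeps):
        for s in range(N_STATES):
            if s == GOAL:
                continue
            # Bellman backup: one-step cost plus best successor value
            V[s] = min(STEP_COST + GAMMA * V[max(s - 1, 0)],
                       STEP_COST + GAMMA * V[min(s + 1, N_STATES - 1)])
    return V

V = value_iteration()
```

The converged table V gives the optimal cost-to-go from each state, and the optimal feedback policy is recovered by acting greedily with respect to V, exactly the feedback property NDP seeks to retain at a fraction of the storage.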
Huang, Mingzhi; Wan, Jinquan; Hu, Kang; Ma, Yongwen; Wang, Yan
2013-12-01
An on-line hybrid fuzzy-neural soft-sensing model-based control system was developed to optimize the dissolved oxygen concentration in a bench-scale anaerobic/anoxic/oxic (A(2)/O) process. To improve the performance of the control system, a self-adapted fuzzy c-means clustering algorithm and adaptive network-based fuzzy inference system (ANFIS) models were employed. The proposed control system permits the on-line implementation of every operating strategy of the experimental system. A set of experiments involving variable hydraulic retention time (HRT), influent pH (pH), dissolved oxygen in the aerobic reactor (DO), and mixed-liquor return ratio (r) was carried out. Using the proposed system, the amount of COD in the effluent stabilized at or below the set-point. The improvement was achieved at the optimum dissolved oxygen concentration because the performance of the treatment process was optimized using operating rules implemented in real time. The system allows various expert operational approaches to be deployed with the goal of minimizing organic substances in the outlet while using the minimum amount of energy.
NASA Technical Reports Server (NTRS)
Duval, Walter M. B.; Batur, Celal; Bennett, Robert J.
1997-01-01
We present an innovative design of a vertical transparent multizone furnace which can operate in the temperature range of 25 C to 750 C and deliver thermal gradients of 2 C/cm to 45 C/cm for commercial crystal-growth applications. The operation of the eight-zone furnace is based on a self-tuning temperature control system with a DC power supply for optimal thermal stability. We show that the desired thermal profile over the entire length of the furnace consists of a functional combination of the fundamental thermal profiles for each individual zone obtained by setting the set-point temperature for that zone. The self-tuning system accounts for the zone-to-zone thermal interactions. The control system operates such that the thermal profile is maintained under thermal load, so boundary conditions on crystal growth ampoules can be predetermined prior to crystal growth. Temperature profiles for the growth of crystals via directional solidification, vapor transport techniques, and multiple gradient applications are shown to be easily implemented. The unique features of transparency and easily programmable thermal profiles make the furnace useful for scientific and commercial applications in determining process parameters to optimize crystal growth conditions.
NASA Technical Reports Server (NTRS)
Duval, Walter M. B.; Batur, Celal; Bennett, Robert J.
1998-01-01
We present an innovative design of a vertical transparent multizone furnace which can operate in the temperature range of 25 C to 750 C and deliver thermal gradients of 2 C/cm to 45 C/cm for commercial crystal-growth applications. The operation of the eight-zone furnace is based on a self-tuning temperature control system with a DC power supply for optimal thermal stability. We show that the desired thermal profile over the entire length of the furnace consists of a functional combination of the fundamental thermal profiles for each individual zone obtained by setting the set-point temperature for that zone. The self-tuning system accounts for the zone-to-zone thermal interactions. The control system operates such that the thermal profile is maintained under thermal load, so boundary conditions on crystal growth ampoules can be predetermined prior to crystal growth. Temperature profiles for the growth of crystals via directional solidification, vapor transport techniques, and multiple gradient applications are shown to be easily implemented. The unique features of transparency and easily programmable thermal profiles make the furnace useful in scientific and commercial applications for determining the optimized process parameters for crystal growth.
Modeling an enhanced ridesharing system with meet points and time windows
Li, Xin; Hu, Sangen; Deng, Kai
2018-01-01
With the rise of e-hailing services in urban areas, ride sharing is becoming a common mode of transportation. This paper presents a mathematical model to design an enhanced ridesharing system with meet points and users’ preferred time windows. The introduction of meet points allows ridesharing operators to trade off the benefit of saving en-route delays against the cost of additional walking for some passengers to be collectively picked up or dropped off. This extension to the traditional door-to-door ridesharing problem brings more operational flexibility in urban areas (where potential requests may be densely distributed in neighborhoods), and thus can achieve better system performance in terms of reducing total travel time and increasing the number of served passengers. We design and implement a Tabu-based meta-heuristic algorithm to solve the proposed mixed integer linear program (MILP). To evaluate the validity and effectiveness of the proposed model and solution algorithm, several scenarios are designed and also solved to optimality by CPLEX. Results demonstrate that (i) detailed route plans, with passengers assigned to meet points, can be obtained with en-route delay savings; (ii) compared with CPLEX, the meta-heuristic algorithm has the advantage of higher computational efficiency and produces good-quality solutions within 8%-15% of the global optima; and (iii) introducing meet points to the ridesharing system saves total travel time by 2.7%-3.8% for small-scale ridesharing systems. More benefits are expected for ridesharing systems with larger fleets. This study provides a new tool to efficiently operate ridesharing systems, particularly when ridesharing vehicles are in short supply during peak hours. Traffic congestion mitigation can also be expected. PMID:29715302
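The trade-off that meet points introduce, shared vehicle detour versus extra passenger walking, can be illustrated with a back-of-envelope calculation. The times and the walking weight below are invented for illustration and are not taken from the paper's MILP.

```python
# Two passengers near each other: serve each door separately, or have both
# walk to one shared meet point. All numbers (minutes) are invented.
def door_to_door_time(detours):
    # Vehicle detours to each passenger's door
    return sum(detours)

def meet_point_time(shared_detour, walks, walk_weight=0.5):
    # walk_weight < 1 assumes walking is perceived as less costly
    # than in-vehicle delay imposed on the other riders
    return shared_detour + walk_weight * sum(walks)

d2d = door_to_door_time([6.0, 5.0])        # separate door pickups
mp = meet_point_time(4.0, [2.0, 3.0])      # one shared meet point
saving = d2d - mp                          # positive -> meet point pays off
```

The MILP in the paper makes this comparison jointly over all requests, vehicles, and time windows; the sign of such a generalized saving is what decides whether a meet point is opened.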
Ramírez-Vélez, R; Correa-Bautista, J E; Martínez-Torres, J; Méneses-Echavez, J F; González-Ruiz, K; González-Jiménez, E; Schmidt-RioValle, J; Lobelo, F
2016-01-01
Background/Objectives: Indices predictive of central obesity include waist circumference (WC) and waist-to-height ratio (WHtR). These data are lacking for Colombian adults. This study aims at establishing smoothed centile charts and LMS tables for WC and WHtR; appropriate cutoffs were selected using receiver operating characteristic analysis based on data from the representative sample. Subjects/Methods: We used data from the cross-sectional, nationally representative nutrition survey (ENSIN, 2010). A total of 83 220 participants (aged 20–64) were enrolled. Weight, height, body mass index (BMI), WC and WHtR were measured, and percentiles were calculated using the LMS method (L (curve Box-Cox), M (curve median), and S (curve coefficient of variation)). Receiver operating characteristic curve analyses were used to evaluate the optimal cutoff points of WC and WHtR for overweight and obesity based on WHO definitions. Results: Reference values for WC and WHtR are presented. Mean WC and WHtR increased with age for both genders. We found a strong positive correlation between WC and BMI (r=0.847, P<0.01) and WHtR and BMI (r=0.878, P<0.01). In obese men, the cutoff point value is 96.6 cm for WC; in women, it is 91.0 cm. The receiver operating characteristic curve for WHtR was also obtained; the cutoff point value was 0.579 in men and 0.587 in women. High sensitivity and specificity were obtained. Conclusions: This study presents the first reference values of WC and WHtR for Colombians aged 20–64. Through LMS tables for adults, we hope to provide quantitative tools to study obesity and its complications. PMID:27026425
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y
2016-06-15
Purpose: To test the impact of the use of apex optimization points for new vaginal cylinder (VC) applicators. Methods: New “ClickFit” single channel VC applicators (Varian) that have different top thicknesses but the same diameters as the old VC applicators (2.3 cm diameter, 2.6 cm, 3.0 cm, and 3.5 cm) were compared using phantom studies. Old VC applicator plans without apex optimization points were also compared to the plans with the optimization points. The apex doses were monitored at 5 mm depth doses (8 points) where a prescription dose (Rx) of 6Gy was prescribed. VC surface doses (8 points) were also analyzed. Results: The new VC applicator plans without apex optimization points presented significantly lower 5mm depth doses than Rx (on average −31 ± 7%, p <0.00001) due to their thicker VC tops (3.4 ± 1.1 mm thicker, with a range of 1.2 to 4.4 mm) than the old VC applicators. Old VC applicator plans also showed a statistically significant reduction (p <0.00001) due to the Ir-192 source anisotropic effect at the apex region, but the % reduction over Rx was only −7 ± 9%. However, by adding apex optimization points to the new VC applicator plans, the plans improved 5 mm depth doses (−7 ± 9% over Rx) that were not statistically different from old VC plans (p = 0.923), along with apex VC surface doses (−22 ± 10% over old VC versus −46 ± 7% without using apex optimization points). Conclusion: The use of apex optimization points is important in order to avoid significant additional cold doses (−24 ± 2%) at the prescription depth (5 mm) of the apex, especially for the new VC applicators that have thicker tops.
Feasibility Studies for a Mediterranean Neutrino Observatory - The NEMO.RD Project
NASA Astrophysics Data System (ADS)
de Marzo, C.; Ambriola, M.; Bellotti, R.; Cafagna, F.; Calicchio, M.; Ciacio, F.; Circella, M.; de Marzo, C.; Montaruli, T.; Falchieri, D.; Gabrielli, A.; Gandolfi, E.; Masetti, M.; Vitullo, C.; Zanarini, G.; Habel, R.; Usai, I.; Aiello, S.; Burrafato, G.; Caponetto, L.; Costanzo, E.; Lopresti, D.; Pappalardo, L.; Petta, C.; Randazzo, N.; Russo, G. V.; Troia, O.; Barnà, R.; D'Amico, V.; de Domenico, E.; de Pasquale, D.; Giacobbe, S.; Italiano, A.; Migliardo, F.; Salvato, G.; Trafirò, A.; Trimarchi, M.; Ameli, F.; Bonori, M.; Bottai, S.; Capone, A.; Desiati, P.; Massa, F.; Masullo, R.; Salusti, E.; Vicini, M.; Coniglione, R.; Migneco, E.; Piattelli, P.; Riccobene, R.; Sapienza, P.; Cordelli, M.; Trasatti, L.; Valente, V.; de Marchis, G.; Piccari, L.; Accerboni, E.; Mosetti, R.; Astraldi, M.; Gasparini, G. P.; Ulzega, A.; Orrù, P.
2000-06-01
The NEMO.RD Project is a feasibility study of a km3 underwater telescope for high energy astrophysical neutrinos to be located in the Mediterranean Sea. At present this study concerns: i) a Monte Carlo simulation study of the capabilities of various arrays of phototubes in order to determine the detector geometry that can optimize performance and cost; ii) design of low-power-consumption electronic cards for data acquisition and transmission to shore; iii) a feasibility study of the mechanics, deployment, connection and maintenance of such a detector in collaboration with petroleum industries having experience of undersea operations; iv) oceanographic exploration of various sites in search of the optimal one. A brief report on the status of points i) and iv) is presented here.
A cellular glass substrate solar concentrator
NASA Technical Reports Server (NTRS)
Bedard, R.; Bell, D.
1980-01-01
The design of a second generation point-focusing solar concentrator is discussed. The design is based on reflective gores fabricated of thin glass mirror bonded continuously to a contoured substrate of cellular glass. The concentrator aperture and structural stiffness were optimized for minimum concentrator cost given the performance requirement of delivering 56 kWth to a 22 cm diameter receiver aperture with a direct normal insolation of 845 W/sq m and an operating wind of 50 km/h. The reflective panel, support structure, drives, foundation, and instrumentation and control subsystem designs, optimized for minimum cost, are summarized. The use of cellular glass as a reflective panel substrate material is shown to offer significant weight and cost advantages compared to existing technology materials.
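The stated performance requirement implies a rough aperture size. A back-of-envelope sketch, where the collection efficiency is an assumption (the abstract does not state one):

```python
# Sizing from the numbers in the abstract: 56 kWth delivered to the receiver
# from 845 W/m^2 direct normal insolation. The overall collection efficiency
# (reflectance x intercept factor) is invented here, not from the abstract.
POWER_TO_RECEIVER = 56_000.0   # W, delivered to the 22 cm receiver aperture
INSOLATION = 845.0             # W/m^2, direct normal
EFFICIENCY = 0.85              # assumed overall collection efficiency

aperture_area = POWER_TO_RECEIVER / (INSOLATION * EFFICIENCY)  # m^2, ~78 m^2
```

The result, on the order of 78 m^2 (a dish roughly 10 m across), shows why substrate weight per unit area, hence the cellular-glass choice, dominates the cost trade.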
Optimization of the NIF ignition point design hohlraum
NASA Astrophysics Data System (ADS)
Callahan, D. A.; Hinkel, D. E.; Berger, R. L.; Divol, L.; Dixit, S. N.; Edwards, M. J.; Haan, S. W.; Jones, O. S.; Lindl, J. D.; Meezan, N. B.; Michel, P. A.; Pollaine, S. M.; Suter, L. J.; Town, R. P. J.; Bradley, P. A.
2008-05-01
In preparation for the start of NIF ignition experiments, we have designed a portfolio of targets that span the temperature range that is consistent with initial NIF operations: 300 eV, 285 eV, and 270 eV. Because these targets are quite complicated, we have developed a plan for choosing the optimum hohlraum for the first ignition attempt that is based on this portfolio of designs coupled with early NIF experiments using 96 beams. These early experiments will measure the laser plasma instabilities of the candidate designs and will demonstrate our ability to tune symmetry in these designs. These experimental results, coupled with the theory and simulations that went into the designs, will allow us to choose the optimal hohlraum for the first NIF ignition attempt.
NASA Technical Reports Server (NTRS)
Lin, N. J.; Quinn, R. D.
1991-01-01
A locally-optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scaled sum. The effects of the modified functions are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance: the reactions do not fluctuate greatly, the joint velocities reach their objectives at the end of the maneuver, and the CPU time is slightly more than twice the manipulation time.
Additional studies for the spectrophotometric measurement of iodine in water
NASA Technical Reports Server (NTRS)
1972-01-01
Previous work in iodine spectroscopy is briefly reviewed. Continued studies of the direct spectrophotometric determination of aqueous iodine complexed with potassium iodide show that free iodine is optimally determined at the isosbestic point for these solutions. The effects on iodine determinations of turbidity and chemical substances (in trace amounts) are discussed and illustrated. At the levels tested, iodine measurements are not significantly altered by such substances. A preliminary design for an on-line, automated iodine monitor, with eventual capability of operating also as a controller, was analyzed and developed in detail with respect to a single-beam colorimeter operating at two wavelengths (using a rotating filter wheel). A flow-through sample cell allows the instrument to operate continuously, except for a momentary stop of flow when measurements are made. The timed automatic cycling of the system may be interrupted whenever desired, for manual operation. An analog output signal permits controlling an iodine generator.
Tele-Autonomous control involving contact. Final Report Thesis; [object localization]
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.
1990-01-01
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods of extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Feature points (point to point matching) and feature unit direction vectors (vector to vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves for the location estimate by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties that arise when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used, together with techniques such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.
Meta-heuristic algorithm to solve two-sided assembly line balancing problems
NASA Astrophysics Data System (ADS)
Wirawan, A. D.; Maruf, A.
2016-02-01
A two-sided assembly line is a set of sequential workstations where task operations can be performed at both sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for the given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the object of this paper. The result of the proposed algorithm shows a reduction in the number of workstations and indicates a negative correlation between the emergence point of the objective function value and the size of the population used.
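The paper's decoding algorithm is not given in the abstract; as a hedged illustration of the TLBO search itself, the sketch below runs plain teacher and learner phases on a toy continuous objective (a 2-D sphere function) rather than on a TALBP encoding. All parameter values are illustrative assumptions.

```python
import random

def tlbo_minimize(f, dim=2, pop_size=20, gens=100, lo=-5.0, hi=5.0, seed=0):
    """Plain TLBO sketch (teacher + learner phases) on a toy objective."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    clip = lambda v: min(max(v, lo), hi)
    for _ in range(gens):
        costs = [f(x) for x in pop]
        teacher = pop[costs.index(min(costs))]
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        for i in range(pop_size):
            x = pop[i]
            tf = rng.choice((1, 2))  # teaching factor
            cand = [clip(x[j] + rng.random() * (teacher[j] - tf * mean[j]))
                    for j in range(dim)]
            if f(cand) < f(x):       # teacher phase: move toward the best
                pop[i] = cand
            k = rng.randrange(pop_size)
            if k != i:               # learner phase: interact with a peer
                x, peer = pop[i], pop[k]
                sign = 1.0 if f(peer) < f(x) else -1.0
                cand = [clip(x[j] + sign * rng.random() * (peer[j] - x[j]))
                        for j in range(dim)]
                if f(cand) < f(x):
                    pop[i] = cand
    best = min(pop, key=f)
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, cost = tlbo_minimize(sphere)
```

For TALBP one would replace the continuous vectors with task-sequence encodings and the objective with the mated-workstation count under the cycle-time and synchronization constraints.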
Voltage oriented control of self-excited induction generator for wind energy system with MPPT
NASA Astrophysics Data System (ADS)
Amieur, Toufik; Taibi, Djamel; Amieur, Oualid
2018-05-01
This paper presents the study and simulation of a self-excited induction generator for wind power production in isolated sites. To this end, a model of the wind turbine is established. An extremum-seeking maximum power point tracking (MPPT) algorithm is proposed as the control solution; it aims at driving the average position of the operating point near to optimality. The reference turbine rotor speed is adjusted so that the turbine operates around maximum power for the current wind speed. After a brief review of the concepts of converting wind energy into electrical energy, modeling tools are developed to study the performance of standalone induction generators connected to a capacitor bank. The purpose of this technique is to maintain a constant voltage at the output of the rectifier regardless of load and speed. The system studied in this work is developed and tested in the MATLAB/Simulink environment. Simulation results validate the performance and effectiveness of the proposed control methods.
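As a hedged illustration of the maximum-power-point idea (a simple perturb-and-observe relative of the extremum-seeking scheme, not the paper's controller), the sketch below nudges a rotor-speed reference in whichever direction last increased captured power. The quadratic power curve and every constant are assumptions.

```python
def po_mppt_step(w_ref, p_now, p_prev, w_prev, step=0.1):
    """One perturb-and-observe update of the rotor-speed reference:
    keep stepping in the direction that last increased captured power."""
    moved_up = w_ref >= w_prev
    improved = p_now >= p_prev
    direction = 1.0 if moved_up == improved else -1.0
    return w_ref + direction * step

# Illustrative turbine power curve with its maximum at w = 5 (assumed).
power = lambda w: 25.0 - (w - 5.0) ** 2

w_prev, w_ref = 1.9, 2.0
p_prev = power(w_prev)
for _ in range(100):
    p_now = power(w_ref)
    w_next = po_mppt_step(w_ref, p_now, p_prev, w_prev)
    w_prev, w_ref, p_prev = w_ref, w_next, p_now
```

The reference climbs toward the optimum and then oscillates around it with an amplitude set by the perturbation step, which is the characteristic trade-off of this family of trackers.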
Cairo consensus on the IVF laboratory environment and air quality: report of an expert meeting.
Mortimer, D; Cohen, J; Mortimer, S T; Fawzy, M; McCulloh, D H; Morbeck, D E; Pollet-Villard, X; Mansour, R T; Brison, D R; Doshi, A; Harper, J C; Swain, J E; Gilligan, A V
2018-03-02
This proceedings report presents the outcomes from an international Expert Meeting to establish a consensus on the recommended technical and operational requirements for air quality within modern assisted reproduction technology (ART) laboratories. Topics considered included design and construction of the facility, as well as its heating, ventilation and air conditioning system; control of particulates, micro-organisms (bacteria, fungi and viruses) and volatile organic compounds (VOCs) within critical areas; safe cleaning practices; operational practices to optimize air quality while minimizing physicochemical risks to gametes and embryos (temperature control versus air flow); and appropriate infection-control practices that minimize exposure to VOCs. More than 50 consensus points were established under the general headings of assessing site suitability, basic design criteria for new construction, and laboratory commissioning and ongoing VOC management. These consensus points should be considered as aspirational benchmarks for existing ART laboratories, and as guidelines for the construction of new ART laboratories. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
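The improved SA algorithm itself is not reproduced in the abstract; as a hedged sketch of the underlying idea, the code below anneals a sample-network design on a 1-D transect, swapping retained points to minimize the worst distance from any candidate location to its nearest retained sample point. The objective, cooling schedule, and all constants are illustrative assumptions, not the paper's method.

```python
import math
import random

def coverage_cost(selected, points):
    # Worst distance from any candidate point to its nearest retained point.
    return max(min(abs(p - s) for s in selected) for p in points)

def anneal_sample_design(points, k, iters=3000, t0=1.0, cooling=0.999, seed=0):
    """Simulated annealing over k-subsets of candidate sample locations."""
    rng = random.Random(seed)
    selected = rng.sample(points, k)
    best, best_cost = list(selected), coverage_cost(selected, points)
    cost, t = best_cost, t0
    for _ in range(iters):
        # Neighbor move: swap one retained point for one unretained point.
        out_pool = [p for p in points if p not in selected]
        cand = list(selected)
        cand[rng.randrange(k)] = rng.choice(out_pool)
        c = coverage_cost(cand, points)
        if c < cost or rng.random() < math.exp(-(c - cost) / t):
            selected, cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
        t *= cooling
    return best, best_cost

points = [i * 0.25 for i in range(40)]
best, best_cost = anneal_sample_design(points, k=5)
```

In the paper's setting the objective would instead be a prediction-accuracy criterion over spatially partitioned soil properties, but the accept/cool/swap skeleton is the same.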
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1973-01-01
A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta V requirements. Curves showing the variations of delta V with respect to transfer time, and typical two and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel optimal two-impulse trajectories which originate at the earth or a point in space, and fuel optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.
Kutsumi, Osamu; Kato, Yushi; Matsui, Yuuki; Kitagawa, Atsushi; Muramatsu, Masayuki; Uchida, Takashi; Yoshida, Yoshikazu; Sato, Fuminobu; Iida, Toshiyuki
2010-02-01
The required multicharged ions are produced from solid pure materials with high melting points in an electron cyclotron resonance ion source. We develop an evaporator using induction heating (IH) with a multilayer induction coil, which is made from bare molybdenum or tungsten wire without water cooling and surrounds the pure vaporized material. We optimize the shapes of the induction coil and vaporized materials and the operation of the rf power supply. We conduct experiments to investigate the reproducibility, stability, and heating efficiency of the operation. The IH evaporator produces pure material vapor because the materials, directly heated by eddy currents, have no contact with insulating materials, which are usually sources of impurity gas. The power and the frequency of the induction currents range from 100 to 900 W and from 48 to 23 kHz, respectively. The working pressure is about 10(-4)-10(-3) Pa. We measure the temperature of the vaporized materials with different shapes and compare them with the results of modeling. We estimate the efficiency of the IH vapor source. We aim to evaporate materials with a melting point higher than that of iron.
Air Traffic Management Technology Demonstration-1 Concept of Operations (ATD-1 ConOps)
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Johnson, William C.; Swenson, Harry; Robinson, John E.; Prevot, Thomas; Callantine, Todd; Scardina, John; Greene, Michael
2012-01-01
The operational goal of the ATD-1 ConOps is to enable aircraft, using their onboard FMS capabilities, to fly Optimized Profile Descents (OPDs) from cruise to the runway threshold at a high-density airport, at a high throughput rate, using primarily speed control to maintain in-trail separation and the arrival schedule. The three technologies in the ATD-1 ConOps achieve this by calculating a precise arrival schedule, using controller decision support tools to provide terminal controllers with speeds for aircraft to fly to meet times at particular meter points, and using onboard software to provide flight crews with speeds to achieve a particular spacing behind preceding aircraft.
Speeding up adiabatic population transfer in a Josephson qutrit via counter-diabatic driving
NASA Astrophysics Data System (ADS)
Feng, Zhi-Bo; Lu, Xiao-Jing; Li, M.; Yan, Run-Ying; Zhou, Yun-Qing
2017-12-01
We propose a theoretical scheme to speed up adiabatic population transfer in a Josephson artificial qutrit by transitionless quantum driving. At a magic working point, an effective three-level subsystem can be chosen to constitute our qutrit. With Stokes and pump driving, adiabatic population transfer can be achieved in the qutrit by means of stimulated Raman adiabatic passage. Assisted by a counter-diabatic driving, the adiabatic population transfer can be sped up drastically with accessible parameters. Moreover, the accelerated operation is flexibly reversible and highly robust against decoherence effects. Thanks to these distinctive advantages, the present protocol could offer a promising avenue for optimal coherent operations in Josephson quantum circuits.
Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments
2015-01-01
each method on a 2.53 GHz Intel i5 laptop. All our algorithms are hand-optimized, implemented in Java and single threaded. To determine which algorithm...approach would be to label all the pixels in the image with an x, y, z point. However, the angular resolution of the camera is finer than that of the...edge criterion. That is, each edge is either present or absent. In [42], edge existence is further screened by a fixed threshold for angular
2013-05-29
not necessarily express the views of and should not be attributed to ESA. 1 and visual navigation to maneuver autonomously to reduce the size of the...successful orbit and three-dimensional imaging of an RSO, using passive visual-only navigation and real-time near-optimal guidance. The mission design...Kit (STK) in the Earth-centered Earth-fixed (ECF) coordinate system, loaded to Simulink and transformed to the BFF for calculation of the SRP
The use of experimental design to find the operating maximum power point of PEM fuel cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria
2015-03-10
Proton Exchange Membrane (PEM) fuel cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a PEM fuel cell mathematical model based on the Design of Experiments methodology is described. Design of Experiments provides a very efficient way to obtain a mathematical model of the studied multivariable system with only a few experiments. The obtained results can be used for the optimization and control of PEM fuel cell systems.
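As a hedged sketch of the Design of Experiments idea (not the paper's actual model), the code below fits a two-factor response surface to a small full-factorial design by least squares; the factor names, coded levels, and synthetic "measurements" are assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical two-factor experiment: cell temperature (x1) and membrane
# humidity (x2), each coded to the standard [-1, 0, +1] levels.
levels = [-1.0, 0.0, 1.0]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()

# Synthetic "measured" power for illustration only: a known linear model
# with an interaction term, so the fit can be checked exactly.
true = np.array([50.0, 4.0, 2.5, -1.2])   # b0, b1, b2, b12
y = true[0] + true[1] * x1 + true[2] * x2 + true[3] * x1 * x2

# Least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2 to the 9 runs.
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With real measurements the same fit yields a response surface whose stationary point approximates the operating maximum power point; adding squared terms to the model matrix handles curvature.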
NASA Astrophysics Data System (ADS)
Awad, M. M.
2017-03-01
The purpose of this discussion is to increase the awareness of the divergent views on the entransy concept among the readers of chemical physics. Comments are presented in particular on the paper by Ahmadi et al. (2016), where the authors used entransy dissipation in their analysis. Based on the viewpoints of independent groups of researchers worldwide, I draw the attention of readers to the reality that entransy has no physical meaning.
With the Mountain Men: Co-Operation and Competition within the Context of Cohort
1988-05-01
fireteam leader positions had been filled with selected, but brand new, OSUT graduates, PFCs. An hour’s discussion under an afternoon oak tree with a...every single trooper, six months out on the "real Army" side of OSUT, was the will, enthusiasm, and optimism of a brand-new soldier 2 days out of AIT or...ragged decline, never again reaching the high-point of a freshly-trained, brand-new soldier. The unusual phenomenon I was watching inside that
Design of a reversible single precision floating point subtractor.
Anantha Lakshmi, Av; Sudha, Gf
2014-01-04
In recent years, reversible logic has emerged as a major area of research due to its ability to reduce power dissipation, the main requirement in low-power digital circuit design. It has wide applications in low-power CMOS design, nanotechnology, digital signal processing, communication, DNA computing and optical computing. Floating-point operations are needed very frequently in nearly all computing disciplines, and studies have shown floating-point addition/subtraction to be the most used floating-point operation. However, while a few designs exist for efficient reversible BCD subtractors, there has been no work on a reversible floating-point subtractor. In this paper, an efficient reversible single precision floating-point subtractor is presented. The proposed design requires reversible designs of an 8-bit and a 24-bit comparator unit, an 8-bit and a 24-bit subtractor, and a normalization unit. For normalization, a 24-bit reversible leading zero detector and a 24-bit reversible shift register are implemented to shift the mantissas. To realize a reversible 1-bit comparator, two new 3x3 reversible gates are proposed. The proposed reversible 1-bit comparator is better and optimized in terms of the number of reversible gates used, the transistor count and the number of garbage outputs. The proposed work is analysed in terms of the number of reversible gates, garbage outputs, constant inputs and quantum cost. Using these modules, an efficient design of a reversible single precision floating-point subtractor is proposed. The proposed circuits have been simulated using ModelSim and synthesized using Xilinx Virtex5vlx30tff665-3. The total on-chip power consumed by the proposed 32-bit reversible floating-point subtractor is 0.410 W.
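The abstract does not specify the two new 3x3 gates, so as a hedged stand-in the sketch below uses the classic Fredkin (controlled-swap) gate to show what a 3x3 reversible gate is and how reversibility can be checked: the gate must be a bijection on all eight 3-bit inputs.

```python
from itertools import product

def fredkin(a, b, c):
    """Fredkin (controlled-swap) gate, a classic 3x3 reversible gate:
    swaps b and c when the control bit a is 1, else passes inputs through."""
    return (a, c, b) if a == 1 else (a, b, c)

# Reversibility check: the 8 possible 3-bit inputs must map to 8 distinct
# outputs (i.e. the gate computes a bijection, so no information is lost).
outputs = {fredkin(*bits) for bits in product((0, 1), repeat=3)}
```

The same bijection test applies to any proposed 3x3 gate; reversible arithmetic blocks such as the comparator and subtractor above are built by cascading such gates so that every intermediate signal remains recoverable.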
Gyrokinetic particle-in-cell optimization on emerging multi- and manycore platforms
Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.; ...
2011-03-02
The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC’s key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.
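The fixed-point scheme built from proximity/projection operators can be illustrated with a deliberately tiny alternating-projection (POCS) example. This is not the paper's EM-preconditioned PAPA; the two convex sets (a hyperplane and a box) and all numbers are illustrative assumptions.

```python
import numpy as np

def proj_hyperplane(x, a, b):
    # Projection onto the convex set {x : a.x = b}.
    return x - (a @ x - b) / (a @ a) * a

def proj_box(x, lo, hi):
    # Projection onto the box {x : lo <= x_i <= hi}.
    return np.clip(x, lo, hi)

# Toy feasibility problem: find a point in the intersection of the
# hyperplane x1 + x2 = 2 and the box [0, 1.5]^2 by alternating the two
# projections until the composite map reaches a fixed point.
a, b = np.array([1.0, 1.0]), 2.0
x = np.array([5.0, -3.0])
for _ in range(100):
    x = proj_box(proj_hyperplane(x, a, b), 0.0, 1.5)
```

Each projection here is the proximity operator of a set's indicator function; PAPA follows the same alternating fixed-point pattern but with the proximity operators arising from the TV-norm and the data constraint, plus the EM-preconditioner for the dense system matrix.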
NASA Astrophysics Data System (ADS)
Tang, Dunbing; Dai, Min
2015-09-01
The traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like the energy consumption of production have not been completely taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It can adjust the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution due to the fact that the problem is strongly NP-hard. Finally, the effectiveness of the approach is verified on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal solution of the makespan in small-size instances. In addition, the average maximum energy saving ratio can reach 13%. It can also save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting the near-optimal solution of the makespan in large-size instances. The proposed research provides an interesting point to explore an energy-aware schedule optimization for a traditional production planning and scheduling problem.
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André
2018-03-01
There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe the calm baryon significantly removes the excited state contamination from the two-nucleon correlation function to as early a time as the single-nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited state contamination in the region where the single calm baryon displays no excited state contamination.
Amand, L; Carlsson, B
2013-01-01
Ammonium feedback control is increasingly used to determine the dissolved oxygen (DO) set-point in aerated activated sludge processes for nitrogen removal. This study compares proportional-integral (PI) ammonium feedback control with a DO profile created from a mathematical minimisation of the daily air flow rate. All simulated scenarios are set to reach the same treatment level of ammonium, based on a daily average concentration. The influent includes daily variations only and the model has three aerated zones. Comparisons are made at different plant loads and DO concentrations, and the placement of the ammonium sensor is investigated. The results show that ammonium PI control can achieve the best performance if the DO set-point is limited at a maximum value and with little integral action in the controller. Compared with constant DO control the best-performing ammonium controller can achieve 1-3.5% savings in the air flow rate, while the optimal solution can achieve a 3-7% saving. Energy savings are larger when operating at higher DO concentrations.
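As a hedged sketch of the ammonium-to-DO cascade described above (not the paper's simulated controller), the closure below implements a PI master loop whose DO set-point output is clamped at a maximum, with integration suspended while saturated (a simple anti-windup). All gains and limits are illustrative assumptions.

```python
def make_ammonium_pi(kp=1.0, ki=0.1, do_min=0.5, do_max=2.0, dt=1.0):
    """PI master controller: ammonium error in, clamped DO set-point out."""
    integral = 0.0
    def step(nh4_set, nh4_meas):
        nonlocal integral
        # More ammonium than the set-point -> raise the DO set-point.
        error = nh4_meas - nh4_set
        u = kp * error + ki * integral
        do_setpoint = min(max(u, do_min), do_max)
        # Anti-windup: only accumulate integral action while unsaturated.
        if do_min < u < do_max:
            integral += error * dt
        return do_setpoint
    return step

ctrl = make_ammonium_pi(kp=1.0, ki=0.1)
do_setpoint = ctrl(1.0, 3.0)   # high ammonium -> DO set-point at its cap
```

Clamping the DO set-point and keeping the integral gain small mirrors the paper's finding that the best-performing ammonium PI controller uses a limited maximum DO set-point with little integral action.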
Mahmoodabadi, M. J.; Taherkhorsandi, M.; Bagheri, A.
2014-01-01
An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of the controller are usually determined by a tedious trial and error process. To eliminate this process and design the parameters of the proposed controller, the multiobjective evolutionary algorithms, that is, the proposed method, modified NSGAII, Sigma method, and MATLAB's Toolbox MOGA, are employed in this study. Among the used evolutionary optimization algorithms to design the controller for biped robots, the proposed method operates better in the aspect of designing the controller since it provides ample opportunities for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions, that is, the normalized summation of angle errors and normalized summation of control effort. Obtained results elucidate the efficiency of the proposed controller in order to control a biped robot. PMID:24616619
Earth to Moon Transfer: Direct vs Via Libration Points (L1, L2)
NASA Technical Reports Server (NTRS)
Condon, Gerald L.; Wilson, Samuel W.
2004-01-01
For some three decades, the Apollo-style mission has served as a proven baseline technique for transporting flight crews to the Moon and back with expendable hardware. This approach provides an optimal design for expeditionary missions, emphasizing operational flexibility in terms of safely returning the crew in the event of a hardware failure. However, its application is limited essentially to low-latitude lunar sites, and it leaves much to be desired as a model for exploratory and evolutionary programs that employ reusable space-based hardware. This study compares the performance requirements for a lunar orbit rendezvous mission type with one using the cislunar libration point (L1) as a stopover and staging point for access to arbitrary sites on the lunar surface. For selected constraints and mission objectives, it contrasts the relative uniformity of performance cost when the L1 staging point is used with the wide variation of cost for the Apollo-style lunar orbit rendezvous.
Alternative Methods for Estimating Plane Parameters Based on a Point Cloud
NASA Astrophysics Data System (ADS)
Stryczek, Roman
2017-12-01
Non-contact measurement techniques using triangulation optical sensors are increasingly popular in measurements performed by industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measurement noise, the presence of points that deviate from the reference model, and gross errors that must be eliminated from the analysis. To extract from the point cloud the vector information that describes the reference model, the measurement data must be subjected to appropriate processing operations. This paper analyses the suitability of methods known as RANdom SAmple Consensus (RANSAC), the Monte Carlo method (MCM), and Particle Swarm Optimization (PSO) for extracting the reference model. The effectiveness of the tested methods is illustrated by measurements of the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
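Of the three methods compared, RANSAC is the easiest to sketch compactly. The following is a minimal plane-extraction sketch; the iteration count and inlier tolerance are arbitrary assumptions, and a production version would refit the plane to all inliers afterwards.

```python
import random

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Fit a plane to noisy 3-D points by RANSAC (minimal sketch).

    Returns (point_on_plane, unit_normal, inlier_count) for the best
    candidate plane found, or None if every sample was degenerate.
    """
    rng = random.Random(seed)

    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])

    best = None
    for _ in range(n_iter):
        p1, p2, p3 = rng.sample(points, 3)       # minimal sample: 3 points
        n = cross(sub(p2, p1), sub(p3, p1))
        norm = dot(n, n) ** 0.5
        if norm < 1e-12:                         # degenerate (collinear) sample
            continue
        n = (n[0] / norm, n[1] / norm, n[2] / norm)
        # Count points within tol of the candidate plane.
        inliers = sum(1 for p in points if abs(dot(n, sub(p, p1))) < tol)
        if best is None or inliers > best[2]:
            best = (p1, n, inliers)
    return best
```

The consensus step is what gives RANSAC its robustness to the gross errors the abstract mentions: outliers simply fail the distance test and never dominate the vote.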
Encapsulation of Capacitive Micromachined Ultrasonic Transducers Using Viscoelastic Polymer
Lin, Der-Song; Zhuang, Xuefeng; Wong, Serena H.; Kupnik, Mario; Khuri-Yakub, Butrus Thomas
2010-01-01
The packaging of a medical imaging or therapeutic ultrasound transducer should provide protective insulation while maintaining high performance. For a capacitive micromachined ultrasonic transducer (CMUT), an ideal encapsulation coating would therefore cause a limited and predictable change in the static operating point and the dynamic performance, while insulating the high dc bias and ac actuation voltages from the environment. To fulfill these requirements, viscoelastic materials such as polydimethylsiloxane (PDMS) were investigated as encapsulation materials. PDMS, with a glass-transition temperature below room temperature, provides a low Young's modulus that preserves the static behavior; at the higher frequencies of ultrasonic operation, the material becomes stiffer and acoustically matches to water. In this paper, we demonstrate the modeling and implementation of a viscoelastic polymer as the encapsulation material. We introduce a finite element model (FEM) that addresses viscoelasticity, which enables us to correctly calculate both the static operating point and the dynamic behavior of the CMUT. CMUTs designed for medical imaging and therapeutic ultrasound were fabricated and encapsulated. Static and dynamic measurements were used to verify the FEM and show excellent agreement. This paper will help in the design process for optimizing the static and dynamic behavior of viscoelastic-polymer-coated CMUTs. PMID:21170294
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy three different objectives. The complex nonlinear optimization problem was simulated using the Nelder-Mead Simplex optimization algorithm. A sensitivity analysis was performed to identify influential operational parameters on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Estimating Phenomenological Parameters in Multi-Assets Markets
NASA Astrophysics Data System (ADS)
Raffaelli, Giacomo; Marsili, Matteo
Financial correlations exhibit non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market that takes into account the impact of portfolio investment on price dynamics. The model captures the fact that correlations determine the optimal portfolio but are in turn affected by investment based on it. Such feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.
Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah
2014-10-16
The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components (pitaya seed oil, 10%-25% w/w; virgin coconut oil, 25%-45% w/w; beeswax, 5%-25% w/w; candelilla wax, 1%-5% w/w; carnauba wax, 1%-5% w/w) was investigated with respect to the melting point of the lipstick formulation. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be described as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factors, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response owing to its role in heat endurance. The quadratic polynomial model fit the experimental data sufficiently well.
Lee, Seung-Mok; Kim, Young-Gyu; Cho, Il-Hyoung
2005-01-01
Optimal operating conditions for treating dyeing wastewater were investigated using factorial design and response surface methodology (RSM). The experiment was statistically designed and carried out according to a 2² full factorial design with four factorial points, three center points, and four axial points. Linear and nonlinear regression were then applied to the data using the SAS software package. The independent variables were TiO2 dosage and H2O2 concentration; the total organic carbon (TOC) removal efficiency of the dyeing wastewater was the dependent variable. From the factorial design and RSM, a maximum removal efficiency of 85% was obtained at a TiO2 dosage of 1.82 g L(-1) and an H2O2 concentration of 980 mg L(-1), with an oxidation reaction time of 20 min.
The optimization problems of CP operation
NASA Astrophysics Data System (ADS)
Kler, A. M.; Stepanova, E. L.; Maximov, A. S.
2017-11-01
Enhancing the energy and economic efficiency of CPs is an urgent problem, and one of the main methods for solving it is optimization of CP operation. To this end, Energy Systems Institute, SB of RAS, has developed software for optimization calculations of CP operation, based on techniques and software tools for mathematical modeling and optimization of heat and power installations. Detailed mathematical models of new equipment have been developed that describe with sufficient accuracy the processes occurring in the installations. They include steam turbine models (based on the checking calculation) that account for all steam turbine compartments and the regeneration system, and that also allow calculations with regenerative heaters disconnected. The software implements the technique for optimizing CP operating conditions and integrates the models in a common user interface. Optimization of CP operation often requires determining the minimum and maximum possible total useful electric capacity of the plant at the set heat loads of consumers, i.e., the interval over which CP capacity may vary. The software has been applied to optimize the operating conditions of the Novo-Irkutskaya CP of JSC "Irkutskenergo". The efficiency of operating-condition optimization and the possibility of determining the CP energy characteristics needed for optimizing power system operation are shown.
Innovative phase shifter for pulse tube operating below 10 K
NASA Astrophysics Data System (ADS)
Duval, Jean-Marc; Charles, Ivan; Daniel, Christophe; André, Jérôme
2016-09-01
Stirling-type pulse tubes are classically based on the use of an inertance phase shifter to optimize their cooling power. The limitations of the phase-shifting capability of these inertances have been pointed out in various studies and are particularly critical for low-temperature operation, typically below about 50 K. An innovative phase shifter has been conceived and tested that uses an inertance tube filled with a liquid, or a fluid with high density or low viscosity, separated by a sealed metallic diaphragm. This device has been characterized and validated on a dedicated test bench. Operation on a 50-80 K pulse tube cooler and on a low-temperature (below 8 K) pulse tube cooler has been demonstrated, validating the device in operation. These developments open the door to efficient and compact low-temperature Stirling-type pulse tube coolers. The possibility of long-life operation has been experimentally verified and a design for space applications is proposed.
Computing single step operators of logic programming in radial basis function neural networks
NASA Astrophysics Data System (ADS)
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong
2014-07-01
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of a logic program is defined as a function Tp: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programs in radial basis function neural networks. To do so, we proposed a new technique to generate the training data sets of single-step operators; these data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operator). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
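The single-step (immediate consequence) operator Tp and its fixed-point iteration can be illustrated symbolically for a small propositional normal logic program. This toy sketch computes Tp directly; the paper instead learns it with a radial basis function network.

```python
def tp(program, interpretation):
    """One application of the single-step operator T_P.

    `program` is a list of clauses (head, positive_body, negative_body);
    `interpretation` is the set of atoms currently valued true.
    A head is derived when all positive body atoms are true and no
    negated body atom is true.
    """
    return {head for head, pos, neg in program
            if set(pos) <= interpretation and not set(neg) & interpretation}

def fixed_point(program, start=frozenset(), max_steps=100):
    """Iterate T_P from `start` until a fixed point (the 'steady state')."""
    current = set(start)
    for _ in range(max_steps):
        nxt = tp(program, current)
        if nxt == current:
            return current
        current = nxt
    return current
```

For example, for the program { p. q :- p. r :- q, not s. }, iterating Tp from the empty interpretation reaches the fixed point {p, q, r}, which is the steady state the recurrent network is trained to converge to.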
Thickness effects of yttria-doped ceria interlayers on solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Fan, Zeng; An, Jihwan; Iancu, Andrei; Prinz, Fritz B.
2012-11-01
Determining the optimal thickness range of interlayered yttria-doped ceria (YDC) films promises to further enhance the performance of solid oxide fuel cells (SOFCs) at low operating temperatures. The YDC interlayers are fabricated by atomic layer deposition (ALD), with one super cycle of YDC deposition consisting of six ceria deposition cycles and one yttria deposition cycle. YDC films of various numbers of ALD super cycles, ranging from 2 to 35, are interlayered into bulk fuel cells with a 200 µm-thick yttria-stabilized zirconia (YSZ) electrolyte. Measurements and analysis of the linear sweep voltammetry of these fuel cells reveal that performance is maximized at 10 super cycles. Auger elemental mapping and X-ray photoelectron spectroscopy (XPS) are employed to determine film completeness, and they verify 10 super cycles of YDC to be the critical thickness point. This optimal YDC interlayer condition (6Ce1Y × 10 super cycles) is applied to micro fuel cells as well, and the average performance enhancement factor is 1.4 at operating temperatures of 400 and 450 °C. A power density of 1.04 W cm-2 at 500 °C is also achieved with the optimal YDC recipe.
NASA Astrophysics Data System (ADS)
Curletti, F.; Gandiglio, M.; Lanzini, A.; Santarelli, M.; Maréchal, F.
2015-10-01
This article investigates the techno-economic performance of large integrated biogas Solid Oxide Fuel Cell (SOFC) power plants. Both atmospheric and pressurized operation are analysed, with CO2 vented or captured. The SOFC module produces a constant electrical power of 1 MWe. Sensitivity analysis and multi-objective optimization are the mathematical tools used to investigate the effects of Fuel Utilization (FU), SOFC operating temperature and pressure on the plant's energy and economic performance. FU is the design variable that most affects plant performance. A pressurized SOFC hybridized with a gas turbine provides a notable boost in electrical efficiency. For most of the proposed plant configurations, the electrical efficiency lies in the interval 50-62% (LHV biogas) when a trade-off between energy and economic performance is applied based on Pareto charts obtained from multi-objective plant optimization. The hybrid SOFC can potentially reach an efficiency above 70% when FU is 90%. Carbon capture entails a penalty of more than 10 percentage points in pressurized configurations, mainly due to the extra energy burdens of pressurizing the captured CO2, producing oxygen, and separately handling the anode and cathode exhausts and recovering power from them.
Tertiary recycling of PVC-containing plastic waste by copyrolysis with cattle manure.
Duangchan, Apinya; Samart, Chanatip
2008-11-01
The corrosion from pyrolysis of PVC in plastic waste was reduced by copyrolysis of PVC with cattle manure. The pyrolysis conditions for PVC and cattle manure were optimized via a statistical method, the Box-Behnken model. The pyrolysis reaction was operated in a tubular reactor. The heating rate, reaction temperature and PVC:cattle manure ratio were optimized over the ranges 1-5 °C/min, 250-450 °C and 1:1-1:5, respectively. The conditions providing the highest HCl reduction efficiency were the lowest heating rate of 1 °C/min, the highest reaction temperature of 450 °C, and a PVC:cattle manure ratio of 1:5, with a reliability of more than 90%. The copyrolysis of the mixture of PVC-containing plastic and cattle manure was operated at the optimized conditions and the synergistic effect on product yields was studied. The presence of manure decreased the oil yield by about 17%. The distillation fractions of the oil at various boiling points were comparable with and without manure. The BTX concentration decreased rapidly when manure was present, and the chlorinated hydrocarbon content was reduced by 45%. However, the octane number of the gasoline fraction was not affected by manure and remained in the range of 99-100.
NASA Astrophysics Data System (ADS)
Geressu, Robel T.; Harou, Julien J.
2015-12-01
Multi-reservoir system planners should consider how new dams impact downstream reservoirs and the potential contribution of each component to coordinated management. We propose an optimized multi-criteria screening approach to identify the best-performing designs, i.e., the selection, size and operating rules of new reservoirs within multi-reservoir systems. Reservoir release operating rules and storage sizes are optimized concurrently for each infrastructure design under consideration. Outputs reveal system trade-offs using multi-dimensional scatter plots where each point represents an approximately Pareto-optimal design. The method is applied to proposed Blue Nile River reservoirs in Ethiopia, where trade-offs between total and firm energy output, aggregate storage, and downstream irrigation and energy provision are evaluated for the best-performing designs. This proof-of-concept study shows that the recommended Blue Nile system designs depend on whether monthly firm energy or annual energy is prioritized. 39 TWh/yr of energy potential is available from the proposed Blue Nile reservoirs. The results show that, depending on the amount of energy deemed sufficient, the current maximum capacities of the planned reservoirs could be larger than they need to be. The method can also be used to identify which of the proposed reservoirs, and which storage sizes, would provide the highest downstream benefits to Sudan under different upstream operating objectives (i.e., maximizing either average annual energy or firm energy). The proposed approach identifies the most promising system designs, reveals how they imply different trade-offs between metrics of system performance, and helps system planners assess the sensitivity of overall performance to the design parameters of component reservoirs.
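The screening step that keeps only the approximately Pareto-optimal designs for the trade-off scatter plots can be sketched as a simple non-dominated filter; the design labels and objective values below are invented for illustration.

```python
def pareto_front(designs):
    """Keep only non-dominated designs.

    Each design is (label, objectives); every objective is to be
    maximized (negate any cost-type objective before calling).
    """
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    return [d for d in designs
            if not any(dominates(other[1], d[1]) for other in designs)]
```

Plotting the objective tuples of the surviving designs gives the multi-dimensional scatter plot of trade-offs the abstract describes, e.g. annual energy versus firm energy versus aggregate storage.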
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
NASA Technical Reports Server (NTRS)
Shay, Rick; Swieringa, Kurt A.; Baxley, Brian T.
2012-01-01
Flight-deck-based Interval Management (FIM) applications using ADS-B are being developed to improve both the safety and the capacity of the National Airspace System (NAS). FIM is expected to improve the safety and efficiency of the NAS by giving pilots the technology and procedures to precisely achieve an interval behind the preceding aircraft by a specific point. Concurrently but independently, Optimized Profile Descents (OPD) are being developed to help reduce fuel consumption and noise; however, the range of speeds available when flying an OPD decreases the delivery precision of aircraft to the runway. This requires the addition of a spacing buffer between aircraft, reducing system throughput. FIM addresses this problem by providing pilots with speed guidance to achieve a precise interval behind another aircraft, even while flying optimized descents. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) human-in-the-loop experiment employed 24 commercial pilots to explore the use of FIM equipment to conduct spacing operations behind two aircraft arriving to parallel runways while flying an OPD during high-density operations. This paper describes the impact of variations in pilot operations, in particular how pilots configured the aircraft, their compliance with FIM operating procedures, and their responses to changes in the FIM speed. An example of a pilot using the displayed FIM speeds incorrectly is also discussed. Finally, this paper examines the relationship between achieving airline operational goals for individual aircraft and the need for ATC to deliver aircraft to the runway with greater precision. The results show that aircraft can fly an OPD and conduct FIM operations to dependent parallel runways, enabling operational goals to be achieved efficiently while maintaining system throughput.
New knowledge-based genetic algorithm for excavator boom structural optimization
NASA Astrophysics Data System (ADS)
Hua, Haiyan; Lin, Shuwen
2014-03-01
Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to solve the excavator boom structural optimization problem effectively. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize shallow and deep implicit constraint knowledge to cyclically guide the optimal search of the genetic algorithm. Based on this dual evolution mechanism, knowledge evolution and population evolution are connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are used to solve the structural optimization of a medium-sized excavator boom. Comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and search ability more markedly than the other testing algorithms, demonstrating the effectiveness of knowledge in guiding the optimal search. The proposed knowledge-based genetic algorithm, which combines multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
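As background to the knowledge-based operators, a minimal real-coded GA baseline (tournament selection, blend crossover, uniform mutation, elitism) might look like the sketch below. It deliberately does not reproduce the paper's knowledge evolution mechanism; all parameter values are illustrative.

```python
import random

def real_coded_ga(fitness, bounds, pop_size=30, generations=60,
                  pc=0.9, pm=0.1, seed=1):
    """Minimize `fitness` over a box defined by `bounds` = [(lo, hi), ...].

    Baseline real-coded GA sketch: binary tournament selection, blend
    (convex-combination) crossover, uniform single-gene mutation, elitism.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        nxt = [list(best)]                        # elitism: keep best-so-far
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 2), key=fitness)   # tournament of 2
            b = min(rng.sample(pop, 2), key=fitness)
            child = list(a)
            if rng.random() < pc:                 # blend crossover
                w = rng.random()
                child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < pm:                 # uniform mutation of one gene
                i = rng.randrange(dim)
                child[i] = rng.uniform(*bounds[i])
            nxt.append(child)
        pop = nxt
        best = min(pop, key=fitness)
    return best
```

The paper's contribution is, in effect, replacing the purely random selection, crossover and mutation choices above with knowledge-guided ones.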
Feasibility study of patient-specific surgical templates for the fixation of pedicle screws.
Salako, F; Aubin, C-E; Fortin, C; Labelle, H
2002-01-01
Surgery for scoliosis, like other posterior spinal surgeries, frequently uses pedicle screws to fix instrumentation to the spine. Misplacement of a screw can lead to intra- and post-operative complications. The objective of this study is to design patient-specific surgical templates to guide the drilling operation. From the CT scan of a vertebra, the optimal drilling direction and limit angles are computed from an inverse projection of the pedicle limits. The first template design uses a surface-to-surface registration method and was constructed in a CAD system by subtracting the vertebra from a rectangular prism and a cylinder with the optimal orientation; this template and the vertebra were built using rapid prototyping. The second design uses a point-to-surface registration method and has six adjustable screws to set the orientation and length of the drilling support device; a mechanism was designed to hold it in place on the spinous process, and a virtual prototype was built with CATIA software. During the operation, the surgeon places either template on the patient's vertebra until a perfect match is obtained before drilling. The second design seems better than the first because it can be reused on different vertebrae and is less sensitive to registration errors. The next step is to build the second design and perform experimental and simulation tests to evaluate the benefits of this template during a scoliosis operation.
Shunt-Enhanced, Lead-Driven Bifurcation of Epilayer GaAs based EEC Sensor Responsivity
NASA Astrophysics Data System (ADS)
Solin, Stuart; Werner, Fletcher
2015-03-01
The results reported here address the geometric optimization of room-temperature EEC sensor responsivity to applied bias by exploring contact geometry and location. The EEC sensor structure resembles that of a MESFET, but the measurement technique and operation distinguish the EEC sensor significantly: it employs a four-point resistance measurement rather than a two-point source-drain measurement and is operated under both forward and reverse bias. Under direct forward bias, the sensor distinguishes itself from a traditional FET by allowing current to be injected from the gate, referred to as a shunt, into the active layer. We show that the observed bifurcation in EEC sensor response to direct reverse bias depends critically on measurement lead location. A dramatic enhancement in responsivity is achieved via a modification of the shunt geometry. A maximum percent change of 130,856% in the four-point resistance was achieved under a direct reverse bias of -1 V using an enhanced shunt design, a 325-fold increase over the conventional EEC square shunt design. This result was accompanied by an observed bifurcation in sensor response, driven by a rotation of the four-point measurement leads. S.A.S. is a co-founder of and has a financial interest in PixelEXX, a start-up company whose mission is to market imaging arrays.
Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen
In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point using the real load data measured at that time. Load forecasting techniques can provide an accurate prediction of the load power at a future time and more information about load changes. With load forecasting included, the optimal topology can be determined from the predicted load conditions over a longer time period instead of a snapshot of the load at the time the reconfiguration happens; the distribution system operator can thus use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term, high-resolution distribution system load forecasting approach is proposed, with a forecaster based on support vector regression and parallel parameter optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.
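The core idea, choosing a topology against a forecast horizon rather than a single load snapshot, can be sketched as follows. The candidate set and per-step loss model are placeholders; the paper's actual pipeline uses an SVR forecaster and a power-flow loss computation.

```python
def choose_topology(forecast, topologies, loss_fn):
    """Pick the switch configuration minimizing total predicted loss over
    the whole forecast horizon, instead of using one load snapshot.

    `forecast`   : sequence of predicted load values (or load vectors)
    `topologies` : candidate network configurations (any hashable labels)
    `loss_fn`    : loss_fn(topology, load) -> network loss for one step
                   (a placeholder for a real power-flow calculation)
    """
    def horizon_loss(topology):
        return sum(loss_fn(topology, load) for load in forecast)

    return min(topologies, key=horizon_loss)
```

A snapshot-based scheme corresponds to calling this with a one-element forecast; the benefit claimed by the paper comes from letting the whole predicted horizon vote on the topology.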
Optimal pressure regulation of the pneumatic ventricular assist device with bellows-type driver.
Lee, Jung Joo; Kim, Bum Soo; Choi, Jaesoon; Choi, Hyuk; Ahn, Chi Bum; Nam, Kyoung Won; Jeong, Gi Seok; Lim, Choon Hak; Son, Ho Sung; Sun, Kyung
2009-08-01
The bellows-type pneumatic ventricular assist device (VAD) generates pneumatic pressure with compression of bellows instead of using an air compressor. This VAD driver has a small volume that is suitable for portable devices. However, improper pneumatic pressure setup can not only cause a lack of adequate flow generation, but also cause durability problems. In this study, a pneumatic pressure regulation system for optimal operation of the bellows-type VAD has been developed. The optimal pneumatic pressure conditions according to various afterload conditions aiming for optimal flow rates were investigated, and an afterload estimation algorithm was developed. The developed regulation system, which consists of a pressure sensor and a two-way solenoid valve, estimates the current afterload and regulates the pneumatic pressure to the optimal point for the current afterload condition. Experiments were performed in a mock circulation system. The afterload estimation algorithm showed sufficient performance with the standard deviation of error, 8.8 mm Hg. The flow rate could be stably regulated with a developed system under various afterload conditions. The shortcoming of a bellows-type VAD could be handled with this simple pressure regulation system.
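One simple way to realize such a regulator is linear interpolation in a pre-computed afterload-to-pressure table, mapping the estimated afterload to the optimal drive pressure. The table values below are invented placeholders, not the study's measured optima.

```python
def regulate_pneumatic_pressure(estimated_afterload, table):
    """Map an estimated afterload (mm Hg) to a drive pressure set-point
    by linear interpolation in a pre-computed optimum table.

    `table` maps afterload -> optimal pneumatic pressure; values outside
    the table range are clamped to the nearest entry.
    """
    pts = sorted(table.items())
    if estimated_afterload <= pts[0][0]:
        return pts[0][1]
    if estimated_afterload >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= estimated_afterload <= x1:
            t = (estimated_afterload - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```

In the study's setup, the afterload estimate itself comes from the pressure sensor and estimation algorithm (standard deviation of error 8.8 mm Hg), so the regulator's accuracy is bounded by that estimate.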
Economic optimization of operations for hybrid energy systems under variable markets
Chen, Jen; Garcia, Humberto E.
2016-05-21
Hybrid energy systems (HES) are an important element in enabling increased penetration of clean energy. This paper investigates the operations flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. To compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results for two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of the optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
Economic optimization of operations for hybrid energy systems under variable markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jen; Garcia, Humberto E.
We prosed a hybrid energy systems (HES) which is an important element to enable increasing penetration of clean energy. Our paper investigates the operations flexibility of HES, and develops a methodology for operations optimization for maximizing economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulationmore » results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under proposed operations optimizer, suggesting the diversion of energy for alternative energy output while participating in the ancillary service market. Economic advantages of such operations optimizer and associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analysis with respect to market variability and prediction error, are also performed.« less
NASA Astrophysics Data System (ADS)
Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi
2010-01-01
In case of an emergency like the Wenchuan earthquake, it is impossible to observe a given target on Earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time period, so we must find a way to reasonably utilize the existing satellites to rapidly image the affected area during a short time period. Generally, the main consideration in orbit design is satellite coverage, with the subsatellite nadir point as a standard of reference. Two factors must be taken into consideration simultaneously in orbit design: the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. Since this paper considers an operational orbit for observing the disaster area with impulsive maneuvers, when calculating the operational orbit elements as the parameters to be evaluated, we obtain the minimum objective function by comparing the results derived from primer vector theory with those derived from the Hohmann transfer. Primer vector theory is utilized to optimize the transfer trajectory with three impulses, and the Hohmann transfer is used for coplanar cases and non-coplanar cases with small inclination. Finally, we applied this method in a simulation of the rescue mission at Wenchuan city. The results of optimizing the orbit design with a hybrid PSO and DE algorithm show that primer vector theory and Hohmann transfer theory are effective methods for multi-objective orbit optimization.
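For the coplanar case, the Hohmann transfer delta-v mentioned above has a closed form; a minimal sketch, with illustrative orbit radii that are not taken from the paper:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v [m/s] for a coplanar Hohmann transfer between
    circular orbits of radii r1 and r2 [m]."""
    a_t = 0.5 * (r1 + r2)                             # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                           # circular speed at r1
    v2 = math.sqrt(mu / r2)                           # circular speed at r2
    v_dep = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))    # transfer speed at departure
    v_arr = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))    # transfer speed at arrival
    return abs(v_dep - v1) + abs(v2 - v_arr)

# hypothetical transfer: 7000 km radius orbit up to geostationary radius
dv = hohmann_delta_v(7.0e6, 4.2164e7)
```

In an optimizer such as the hybrid PSO/DE search described above, a function like this would serve as the fuel-cost term of the objective for the coplanar candidates.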
ERIC Educational Resources Information Center
Sobh, Tarek M.; Tibrewal, Abhilasha
2006-01-01
Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…
One size fits all? An assessment tool for solid waste management at local and national levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broitman, Dani, E-mail: danib@techunix.technion.ac.il; Ayalon, Ofira; Kan, Iddo
2012-10-15
Highlights: Waste management schemes are generally implemented at the national or regional level. Local characteristics and constraints are often neglected. We developed an economic model able to compare multi-level waste management options. A detailed test case with real economic data and a best-fit scenario is described. The most efficient schemes combine clear national directives with local-level flexibility. - Abstract: As environmental awareness rises, integrated solid waste management (WM) schemes are increasingly being implemented all over the world. The different WM schemes usually address issues such as landfilling restrictions (mainly due to methane emissions and competing land use), packaging directives and compulsory recycling goals. These schemes are, in general, designed at a national or regional level, whereas local conditions and constraints are sometimes neglected. When national top-down WM policies, in addition to setting goals, also dictate the methods by which they are to be achieved, local authorities lose their freedom to optimize their operational WM schemes according to their specific characteristics. There are a myriad of implementation options at the local level, and by carrying out a bottom-up approach the overall national WM system can be made optimal on economic and environmental scales. This paper presents a model for optimizing waste strategies at a local level and evaluates the effect at a national level. This is achieved by using a waste assessment model which enables us to compare both the economic viability of several WM options at the local (single municipal authority) level, and aggregated results for regional or national levels.
A test case based on various WM approaches in Israel (several implementations of mixed and separated waste) shows that local characteristics significantly influence WM costs, and therefore the optimal scheme is one under which each local authority is able to implement its best-fitting mechanism, given that national guidelines are kept. The main result is that strict national/regional WM policies may be less efficient, unless some type of local flexibility is implemented. Our model is designed both for top-down and bottom-up assessment, and can be easily adapted for a wide range of WM option comparisons at different levels.
Evaluation of solar thermal power plants using economic and performance simulations
NASA Technical Reports Server (NTRS)
El-Gabawali, N.
1980-01-01
An energy cost analysis is presented for central receiver power plants with thermal storage and point-focusing power plants with electrical storage. The present approach is based on optimizing the size of the plant to give the minimum energy cost (in mills/kWe hr) of annual plant energy production. The optimization is done by considering the trade-off between the collector field size and the storage capacity for a given engine size. The energy cost is determined by the plant cost and performance. The performance is estimated by simulating the behavior of the plant under typical weather conditions. Plant capital and operational costs are estimated based on the size and performance of the different components. This methodology is translated into computer programs for automatic and consistent evaluation.
Mulier, Michiel; Pastrav, Cesar; Van der Perre, Georges
2008-01-01
Defining the stem insertion end point during total hip replacement still relies on the surgeon's feeling. When a custom-made stem prosthesis with an optimal fit into the femoral canal is used, the risk of per-operative fractures is even greater than with standard prostheses. Vibration analysis is used in other clinical settings and has been tested as a means to detect optimal stem insertion in the laboratory. The first per-operative use of vibration analysis during non-cemented custom-made stem insertion in 30 patients is reported here. Thirty patients eligible for total hip replacement with uncemented stem prosthesis were included. The neck of the stem was connected with a shaker that emitted white noise as excitation signal and an impedance head that measured the frequency response. The response signal was sent to a computer that analyzed the frequency response function after each insertion phase. A technician present in the operating theatre but outside the laminated airflow provided feed-back to the surgeon. The correlation index between the frequency response function measured during the last two insertion hammering sessions was >0.99 in 86.7% of the cases. In four cases the surgeon stopped the insertion procedure because of a perceived risk of fracture. Two special cases illustrating the potential benefit of per-operative vibration analysis are described. The results of intra-operative vibration analysis indicate that this technique may be a useful tool assisting the orthopaedic surgeon in defining the insertion endpoint of the stem. The development of a more user-friendly device is therefore warranted.
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan of a part is built on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing, and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, a genetic algorithm that prunes the initial feasible sequences yields the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation, based on optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the operation sequence and of the machine, cutting tool and TAD selection for each operation, using the intelligent search and the genetic algorithm together.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we select the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level is located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
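The backward recursion of such a stochastic dynamic program can be sketched as follows; the cost structure here (linear per-unit purchase cost plus a penalty on deviation from the reference level) and all the numbers are illustrative assumptions, not the authors' exact model:

```python
import itertools

def plan_inventory(T, levels, suppliers, demand_pmf, ref, penalty=1.0):
    """Finite-horizon stochastic DP. For each period t and inventory level s,
    choose a (supplier unit cost, order quantity) pair minimizing expected
    purchase cost plus a penalty on deviation from the reference level."""
    lo, hi = min(levels), max(levels)
    V = {(T, s): 0.0 for s in levels}        # terminal value: zero cost-to-go
    policy = {}
    for t in range(T - 1, -1, -1):
        for s in levels:
            best = None
            for unit_cost, q in itertools.product(suppliers, range(hi - s + 1)):
                expected = 0.0
                for d, p in demand_pmf.items():
                    nxt = min(max(s + q - d, lo), hi)   # clamp next inventory
                    expected += p * (penalty * abs(nxt - ref) + V[(t + 1, nxt)])
                cost = unit_cost * q + expected
                if best is None or cost < best[0]:
                    best = (cost, unit_cost, q)
            V[(t, s)] = best[0]
            policy[(t, s)] = (best[1], best[2])          # (chosen supplier, qty)
    return V, policy

# toy instance: two suppliers (unit costs 2 and 3), random demand of 1 or 2
V, policy = plan_inventory(
    T=2, levels=range(6), suppliers=[2.0, 3.0],
    demand_pmf={1: 0.5, 2: 0.5}, ref=2)
```

With equal product quality assumed, the cheaper supplier is never worse for a given order quantity, which the toy policy reflects.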
NASA Astrophysics Data System (ADS)
Ambarita, H.; Siahaan, A. S.; Kawai, H.; Daimaruya, M.
2018-02-01
In the last decade, the demand for delayed coking capacity has been steadily increasing. The trend in the past 15 to 20 years has been for operators to try to maximize the output of their units by reducing cycle times. This mode of operation can result in very large temperature gradients within the drums during the preheating stage, and even more so during the quench cycle. This research provides an estimation of the fatigue life penalty for omitting the preheating stage and for omitting the cutting stage. In the absence of the preheating stage, the fatigue life decreases by around 19% and the maximum stress at point 5 of the shell-to-skirt junction increases by around 97 MPa. The absence of the cutting stage was found to be more severe than the normal cycle: the fatigue life is reduced by around 39% and the maximum stress increases by around 154 MPa. It can be concluded that, for cycle optimization, eliminating the preheating stage can be an option given the increasing demand for the delayed coking process.
A High-Efficiency Wind Energy Harvester for Autonomous Embedded Systems
Brunelli, Davide
2016-01-01
Energy harvesting is currently a hot research topic, mainly as a consequence of the increasing attractiveness of computing and sensing solutions based on small, low-power distributed embedded systems. Harvesting may enable systems to operate in a deploy-and-forget mode, particularly when power grid is absent and the use of rechargeable batteries is unattractive due to their limited lifetime and maintenance requirements. This paper focuses on wind flow as an energy source feasible to meet the energy needs of a small autonomous embedded system. In particular the contribution is on the electrical converter and system integration. We characterize the micro-wind turbine, we define a detailed model of its behaviour, and then we focused on a highly efficient circuit to convert wind energy into electrical energy. The optimized design features an overall volume smaller than 64 cm3. The core of the harvester is a high efficiency buck-boost converter which performs an optimal power point tracking. Experimental results show that the wind generator boosts efficiency over a wide range of operating conditions. PMID:26959018
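Optimal power point tracking of the kind performed by the buck-boost converter above is often implemented as a perturb-and-observe hill climb; a minimal sketch against a toy power curve, not the authors' converter:

```python
def perturb_and_observe(power_at, v0, step=0.1, iters=200):
    """Hill-climb to the maximum power point: perturb the operating
    voltage and keep the direction that increased extracted power."""
    v = v0
    direction = 1.0
    p_prev = power_at(v)
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# toy power-vs-voltage curve with its maximum at v = 5.0 (illustrative)
mpp = perturb_and_observe(lambda v: 25.0 - (v - 5.0) ** 2, v0=1.0)
```

At steady state the operating point oscillates within one perturbation step of the true maximum, which is why the step size trades tracking speed against ripple.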
NASA Astrophysics Data System (ADS)
Battistelli, E. S.; Amiri, M.; Burger, B.; Halpern, M.; Knotek, S.; Ellis, M.; Gao, X.; Kelly, D.; Macintosh, M.; Irwin, K.; Reintsema, C.
2008-05-01
We have developed multi-channel electronics (MCE) which work in concert with time-domain multiplexers developed at NIST to control and read signals from large-format bolometer arrays of superconducting transition edge sensors (TESs). These electronics were developed as part of the Submillimeter Common-User Bolometer Array-2 (SCUBA-2) camera, but are now used in several other instruments. The main advantage of these electronics compared to earlier versions is that they are multi-channel, fully programmable, suited for remote operations, and provide a clean geometry, with no electrical cabling outside of the Faraday cage formed by the cryostat and the electronics chassis. The MCE is used to determine the optimal operating points for the TESs and the superconducting quantum interference device (SQUID) amplifiers autonomously. During observation, the MCE executes a running PID servo and applies to each first-stage SQUID the feedback signal necessary to keep the system in a linear regime at optimal gain. The feedback and error signals from a ~1000-pixel array can be written to hard drive at up to 2 kHz.
Low-discrepancy sampling of parametric surface using adaptive space-filling curves (SFC)
NASA Astrophysics Data System (ADS)
Hsu, Charles; Szu, Harold
2014-05-01
Space-Filling Curves (SFCs) are encountered in different fields of engineering and computer science, especially where it is important to linearize multidimensional data for effective and robust interpretation of the information. Examples of multidimensional data are matrices, images, tables, computational grids, and Electroencephalography (EEG) sensor data resulting from the discretization of partial differential equations (PDEs). Data operations like matrix multiplications, load/store operations, and updating and partitioning of data sets can be simplified when we choose an efficient way of going through the data. In many applications SFCs present just this optimal manner of mapping multidimensional data onto a one-dimensional sequence. In this report, we begin with an example of a space-filling curve and demonstrate how it can be used, together with the fast Fourier transform (FFT), to find the greatest similarity through a set of points. Next we give a general introduction to space-filling curves and discuss their properties. Finally, we consider a discrete version of space-filling curves and present experimental results on discrete space-filling curves optimized for special tasks.
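A concrete discrete space-filling index is the Morton (Z-order) curve, built by interleaving coordinate bits; a minimal sketch (the Hilbert curve often used in practice has better locality, but the linearization idea is the same):

```python
def interleave(x, y, bits=16):
    """Morton (Z-order) index: interleave the bits of x and y so that
    points close in 2-D tend to stay close on the 1-D curve."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)        # x occupies the even bit slots
        z |= ((y >> i) & 1) << (2 * i + 1)    # y occupies the odd bit slots
    return z

def deinterleave(z, bits=16):
    """Inverse mapping: recover (x, y) from the Morton index."""
    x = y = 0
    for i in range(bits):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y

z = interleave(3, 5)  # -> 39
```

Sorting multidimensional records by this key is one simple way to obtain the "efficient way of going through the data" described above.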
A nonlinear H-infinity approach to optimal control of the depth of anaesthesia
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Rigatou, Efthymia; Zervos, Nikolaos
2016-12-01
Controlling the level of anaesthesia is important for improving the success rate of surgeries and for reducing the risks to which operated patients are exposed. This paper proposes a nonlinear H-infinity approach to optimal control of the level of anaesthesia. The dynamic model of the anaesthesia, which describes the concentration of the anaesthetic drug in different parts of the body, is subjected to linearization at local operating points. These are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and of the last control input that was exerted on it. For this linearization Taylor series expansion is performed and the system's Jacobian matrices are computed. For the linearized model an H-infinity controller is designed. The feedback control gains are found by solving at each iteration of the control algorithm an algebraic Riccati equation. The modelling errors due to this approximate linearization are considered as disturbances which are compensated by the robustness of the control loop. The stability of the control loop is confirmed through Lyapunov analysis.
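The per-iteration Riccati solve described above can be illustrated in the scalar case; this sketch uses the standard LQR Riccati equation as a simplified stand-in for the H-infinity one (the plant numbers a, b, q, r are assumptions, not from the paper):

```python
import math

def scalar_lqr_gain(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation
        2*a*P - (b**2 / r) * P**2 + q = 0
    for the stabilizing P > 0 and return the state-feedback gain K = b*P/r."""
    P = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    K = b * P / r
    return P, K

# linearized plant x' = a*x + b*u around the current operating point
P, K = scalar_lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
# closed loop x' = (a - b*K)*x; the gain must make a - b*K negative
```

In the scheme described above, this solve would be repeated at every iteration as the Jacobian-based linearization point moves with the state.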
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management.
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the bicriteria network optimization problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach using a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow with minimum cost. The paper also incorporates an Adaptive Weight Approach (AWA) that utilizes information from the current population to readjust weights, obtaining search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
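A core step in such bicriteria search is filtering candidate solutions down to the non-dominated (Pareto) set; a minimal sketch with made-up (flow, cost) pairs:

```python
def pareto_front(solutions):
    """Keep the non-dominated (flow, cost) pairs: flow is maximized,
    cost is minimized.  A solution dominates another if it has >= flow
    and <= cost, with at least one strict inequality."""
    front = []
    for f1, c1 in solutions:
        dominated = any(
            (f2 >= f1 and c2 <= c1) and (f2 > f1 or c2 < c1)
            for f2, c2 in solutions
        )
        if not dominated and (f1, c1) not in front:
            front.append((f1, c1))
    return sorted(front)

# hypothetical population of (max flow, total cost) evaluations
front = pareto_front([(10, 5), (8, 3), (10, 7), (6, 3), (8, 6)])
```

In a GA like the one described, a filter of this kind is typically applied to each generation to maintain the archive of Pareto-optimal solutions.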
Optimization of Nanowire-Resistance Load Logic Inverter.
Hashim, Yasir; Sidek, Othman
2015-09-01
This study is the first to demonstrate characteristics optimization of a nanowire resistance-load inverter. Noise margins and the inflection voltage of the transfer characteristics are used as limiting factors in this optimization. Results indicate that the optimization depends on the resistance value: increasing the load resistance tends to increase the noise margins up to a saturation point, beyond which further increases do not improve the noise margins significantly.
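The noise margins used as limiting factors are conventionally computed from the unity-gain points of the voltage transfer curve; a minimal sketch, with voltage levels that are illustrative rather than the paper's measurements:

```python
def noise_margins(vol, voh, vil, vih):
    """Static noise margins from the inverter's voltage transfer curve:
    NM_L = VIL - VOL (low level), NM_H = VOH - VIH (high level),
    where VIL/VIH are the unity-gain input voltages and VOL/VOH the
    corresponding output levels."""
    return vil - vol, voh - vih

# illustrative levels in volts (assumed, not measurements from the paper)
nm_l, nm_h = noise_margins(vol=0.2, voh=1.0, vil=0.45, vih=0.65)
```

Sweeping the load resistance and recomputing these two margins at each point is enough to reproduce the saturation behavior the abstract describes.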
Mathematical model of highways network optimization
NASA Astrophysics Data System (ADS)
Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.
2017-12-01
The article deals with the issue of highway network design. Studies show that the main requirement of road transport from the road network is to realize all the transport links it serves at the least possible cost. The goal of optimizing a network of highways is to increase the efficiency of transport. A large number of factors must be taken into account, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose building an optimal variant for locating the road network on the basis of a mathematical model. The article defines the optimality criteria and objective functions that reflect the requirements for the road network. The condition that most fully satisfies optimality is the minimization of combined road and transport costs; we adopted this indicator as the optimality criterion in the economic-mathematical model of a network of highways. Studies have shown that each point in the optimal road network is connected to all other corresponding points along the directions providing the least financial cost necessary to move passengers and cargo between them. The article presents general principles for constructing an optimal network of roads.
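Connecting each point to the others along least-cost directions, as described above, is in essence a shortest-path computation; a minimal sketch on a hypothetical four-town network:

```python
import heapq

def cheapest_routes(graph, source):
    """Dijkstra's algorithm: least-cost route from `source` to every
    reachable node, where edge weights stand in for the combined road
    and transport costs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# hypothetical network: node -> list of (neighbor, cost) edges
graph = {
    "A": [("B", 4.0), ("C", 1.0)],
    "C": [("B", 2.0), ("D", 5.0)],
    "B": [("D", 1.0)],
}
dist = cheapest_routes(graph, "A")
```

The route A-C-B-D (cost 4.0) beats the direct A-B-D route (cost 5.0), illustrating why least-cost connectivity rather than direct links drives the network layout.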
Wiese, J; König, R
2009-01-01
Biogas plants are gaining increasing importance worldwide due to several advantages. However, concerning the equipment, most existing biogas plants are low-tech plants; for example, from the point of view of instrumentation, control and automation (ICA), most plants are black-box systems. Consequently, practice shows that many biogas plants are operated sub-optimally and/or in critical (load) ranges. To solve these problems, some new biogas plants have been equipped with modern machinery and ICA equipment. In this paper, the authors show details and discuss operational results of a modern agricultural biogas plant and the resulting opportunities for the implementation of plant-wide automation.
NASA Technical Reports Server (NTRS)
1983-01-01
A profile of altitude, airspeed, and flight path angle as a function of range between a given set of origin and destination points for particular models of transport aircraft provided by NASA is generated. Inputs to the program include the vertical wind profile, the aircraft takeoff weight, the costs of time and fuel, certain constraint parameters and control flags. The profile can be near optimum in the sense of minimizing: (1) fuel, (2) time, or (3) a combination of fuel and time (direct operating cost (DOC)). The user can also, as an option, specify the length of time the flight is to span. The theory behind the technical details of this program is also presented.
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Fotia, Matthew L.; Hoke, John; Schauer, Fred
2015-01-01
A quasi-two-dimensional, computational fluid dynamic (CFD) simulation of a rotating detonation engine (RDE) is described. The simulation operates in the detonation frame of reference and utilizes a relatively coarse grid such that only the essential primary flow field structure is captured. This construction and other simplifications yield rapidly converging, steady solutions. Viscous and heat transfer effects are modeled using source terms. The effects of potential inlet flow reversals are modeled using boundary conditions. Results from the simulation are compared to measured data from an experimental RDE rig with a converging-diverging nozzle added. The comparison is favorable for the two operating points examined. The utility of the code as a performance optimization tool and a diagnostic tool is discussed.
Turbulence management: Application aspects
NASA Astrophysics Data System (ADS)
Hirschel, E. H.; Thiede, P.; Monnoyer, F.
1989-04-01
Turbulence management for the reduction of turbulent friction drag is an important topic. Numerous research programs in this field have demonstrated that valuable net drag reduction is obtainable by techniques which do not involve substantial, expensive modifications or redesign of existing aircraft. Hence, large projects aiming at short term introduction of turbulence management technology into airline service are presently under development. The various points that have to be investigated for this purpose are presented. Both design and operational aspects are considered, the first dealing with optimizing of turbulence management techniques at operating conditions, and the latter defining the technical problems involved by application of turbulence management to in-service aircraft. The cooperative activities of Airbus Industrie and its partners are cited as an example.
Application of a stochastic inverse to the geophysical inverse problem
NASA Technical Reports Server (NTRS)
Jordan, T. H.; Minster, J. B.
1972-01-01
The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
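A common concrete form of such a regularized generalized inverse is damped least squares, where the damping parameter plays the role of the noise level in the stochastic inverse; a minimal sketch on a toy system, not data from the paper:

```python
import numpy as np

def damped_inverse(G, d, eps=1e-2):
    """Damped least-squares (Tikhonov) solution of the underdetermined
    system G m = d:  m = G^T (G G^T + eps*I)^(-1) d.  As eps -> 0 this
    approaches the Moore-Penrose minimum-norm solution; a nonzero eps
    trades data fit for solution smoothness, as on a tradeoff curve."""
    n = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + eps * np.eye(n), d)

# toy underdetermined system: 2 equations, 3 unknowns
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([2.0, 2.0])
m = damped_inverse(G, d, eps=1e-8)
```

With eps effectively zero, the result is the minimum-norm solution [2/3, 4/3, 2/3], which satisfies both equations exactly.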
Space Station tethered elevator system
NASA Technical Reports Server (NTRS)
Haddock, Michael H.; Anderson, Loren A.; Hosterman, K.; Decresie, E.; Miranda, P.; Hamilton, R.
1989-01-01
The optimized conceptual engineering design of a space station tethered elevator is presented. The tethered elevator is an unmanned, mobile structure which operates on a ten-kilometer tether spanning the distance between Space Station Freedom and a platform. Its capabilities include providing access to residual gravity levels, remote servicing, and transportation to any point along a tether. The report discusses the potential uses, parameters, and evolution of the spacecraft design. Emphasis is placed on the elevator's structural configuration and three major subsystem designs. First, the design of elevator robotics used to aid in elevator operations and tethered experimentation is presented. Second, the design of drive mechanisms used to propel the vehicle is discussed. Third, the design of an onboard self-sufficient power generation and transmission system is addressed.
Brown, Jordan; Hanson, Jan E; Schmotzer, Brian; Webel, Allison R
2014-10-01
For people living with HIV (PLWH), spirituality and optimism have a positive influence on their health, can slow HIV disease progression, and can improve quality of life. Our aim was to describe longitudinal changes in spirituality and optimism after participation in the SystemCHANGE™-HIV intervention. Upon completion of the intervention, participants experienced an 11.5 point increase in overall spiritual well-being (p = 0.036), a 6.3 point increase in religious well-being (p = 0.030), a 4.8 point increase in existential well-being (p = 0.125), and a 0.8 point increase in total optimism (p = 0.268) relative to controls. Our data suggest a group-based self-management intervention increases spiritual well-being in PLWH.
Shakeri, Habibesadat; Pournaghi, Seyed-Javad; Hashemi, Javad; Mohammad-Zadeh, Mohammad; Akaberi, Arash
2017-10-26
The changes in serum 25-hydroxyvitamin D (25(OH)D) in adolescents from summer to winter and the optimal serum vitamin D levels in the summer to ensure adequate vitamin D levels at the end of winter are currently unknown. This study was conducted as a cohort study to address this knowledge gap. Sixty-eight participants aged 7-18 years who had sufficient vitamin D levels at the end of the summer in 2011 were selected using stratified random sampling. Subsequently, the participants' vitamin D levels were measured at the end of the winter in 2012. A receiver operating characteristic (ROC) curve was used to determine optimal cutoff points for vitamin D at the end of the summer to predict sufficient vitamin D levels at the end of the winter. The results indicated that 89.7% of all the participants had a decrease in vitamin D levels from summer to winter: 14.7% of them were vitamin D-deficient, 36.8% had insufficient vitamin D concentrations and only 48.5% were able to maintain sufficient vitamin D. The optimal cutoff point to provide assurance of sufficient serum vitamin D at the end of the winter was 40 ng/mL at the end of the summer. Sex, age and vitamin D levels at the end of the summer were significant predictors of non-sufficient vitamin D at the end of the winter. In this age group, a dramatic reduction in vitamin D was observed over the follow-up period. Sufficient vitamin D at the end of the summer did not guarantee vitamin D sufficiency at the end of the winter. We found 40 ng/mL to be an optimal cutoff point.
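As a hedged illustration of the cutoff-selection step above (the abstract does not state the exact ROC criterion; Youden's J and all data below are assumptions, not the study's measurements), a minimal sketch:

```python
# Sketch: pick the summer 25(OH)D cut-point that best predicts winter
# sufficiency by maximizing Youden's J = sensitivity + specificity - 1,
# one common way to choose an "optimal" point on a ROC curve.

def youden_cutoff(summer_levels, winter_sufficient, candidates):
    """Return the candidate cut-point maximizing Youden's J for
    predicting winter sufficiency from the summer level."""
    best_cut, best_j = None, float("-inf")
    positives = sum(winter_sufficient)              # sufficient in winter
    negatives = len(winter_sufficient) - positives
    for cut in candidates:
        tp = sum(1 for s, ok in zip(summer_levels, winter_sufficient)
                 if s >= cut and ok)
        fp = sum(1 for s, ok in zip(summer_levels, winter_sufficient)
                 if s >= cut and not ok)
        sens = tp / positives
        spec = 1 - fp / negatives
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

# Illustrative cohort: summer 25(OH)D (ng/mL) and winter sufficiency (0/1).
summer = [22, 28, 31, 35, 38, 41, 44, 47, 52, 60]
winter_ok = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(youden_cutoff(summer, winter_ok, candidates=range(20, 61, 5)))  # → 40
```

With these invented data the maximal J happens to fall at 40 ng/mL, the same cutoff the study reports; real analyses would also sweep finer candidate grids and inspect the full ROC curve.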
Mobile Wireless Sensor Networks for Advanced Soil Sensing and Ecosystem Monitoring
NASA Astrophysics Data System (ADS)
Mollenhauer, Hannes; Schima, Robert; Remmler, Paul; Mollenhauer, Olaf; Hutschenreuther, Tino; Toepfer, Hannes; Dietrich, Peter; Bumberger, Jan
2015-04-01
For an adequate characterization of ecosystems it is necessary to detect individual processes with suitable monitoring strategies and methods. Due to the natural complexity of all environmental compartments, single point or temporally and spatially fixed measurements are mostly insufficient for an adequate representation. The application of mobile wireless sensor networks for soil and atmosphere sensing offers significant benefits, due to the simple adjustment of the sensor distribution, the sensor types and the sample rate (e.g. by using optimization approaches or event triggering modes) to the local test conditions. This can be essential for the monitoring of heterogeneous and dynamic environmental systems and processes. One significant advantage in the application of mobile ad-hoc wireless sensor networks is their self-organizing behavior. Thus, the network autonomously initializes and optimizes itself. Localization via satellite generates a major reduction in installation and operation costs and time. In addition, single point measurements with a sensor are significantly improved by measuring continuously at several optimized points. Since analog and digital signal processing and computation are performed in the sensor nodes close to the sensors, a significant reduction of the data to be transmitted can be achieved, which leads to better energy management of the nodes. Furthermore, the miniaturization of the nodes and energy harvesting are current topics under investigation. First results of field measurements are given to present the potentials and limitations of this application in environmental science. In particular, in-situ data with numerous specific soil and atmosphere parameters per sensor node (more than 25), recorded over several days, illustrate the high performance of this system for advanced soil sensing and soil-atmosphere interaction monitoring. Moreover, investigations of biotic and abiotic process interactions and the optimization of sensor positioning for measuring soil moisture are within the scope of this work, and initial results on these issues are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.
The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC’s key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
NASA Astrophysics Data System (ADS)
Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui
2018-02-01
Due to its weakness in maintaining diversity and reaching the global optimum, the standard particle swarm optimization algorithm has not performed well in optimal reservoir operation. To solve this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to the optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability with a small additional investment.
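A minimal sketch of this kind of hybrid (an assumption about the coupling, not the paper's implementation): after each standard PSO update, the worst particle is reflected through the centroid of the others, a downhill-simplex-style move that helps the swarm escape premature convergence. The sphere function stands in for a reservoir-operation cost.

```python
import random

def sphere(x):                      # stand-in for a release-schedule cost
    return sum(v * v for v in x)

def hybrid_pso(f, dim=2, n=12, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):    # standard inertia + cognitive + social terms
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        # downhill-simplex-style step: reflect the worst particle
        worst = max(range(n), key=lambda i: f(pos[i]))
        centroid = [sum(pos[i][d] for i in range(n) if i != worst) / (n - 1)
                    for d in range(dim)]
        reflected = [2 * centroid[d] - pos[worst][d] for d in range(dim)]
        if f(reflected) < f(pos[worst]):
            pos[worst] = reflected
            if f(reflected) < f(pbest[worst]):
                pbest[worst] = reflected[:]
        gbest = min(pbest, key=f)
    return gbest

best = hybrid_pso(sphere)
print(sphere(best))                 # near 0 for this convex test function
```

Real reservoir operation would replace `sphere` with a constrained release-benefit objective; the reflection step is one plausible reading of "working together" with the simplex method.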
Design and development of bio-inspired framework for reservoir operation optimization
NASA Astrophysics Data System (ADS)
Asvini, M. Sakthi; Amudha, T.
2017-12-01
Frameworks for optimal reservoir operation play an important role in the management of water resources and delivery of economic benefits. Effective utilization and conservation of water from reservoirs helps to manage water deficit periods. The main challenge in reservoir optimization is to design operating rules that can be used to inform real-time decisions on reservoir release. We develop a bio-inspired framework for the optimization of reservoir release to satisfy the diverse needs of various stakeholders. In this work, single-objective optimization and multiobjective optimization problems are formulated using an algorithm known as "strawberry optimization" and tested with actual reservoir data. Results indicate that well planned reservoir operations lead to efficient deployment of the reservoir water with the help of optimal release patterns.
Póvoa, P; Oehmen, A; Inocêncio, P; Matos, J S; Frazão, A
2017-05-01
The main objective of this paper is to demonstrate the importance of applying dynamic modelling and real energy prices on a full scale water resource recovery facility (WRRF) for the evaluation of control strategies in terms of energy costs with aeration. The Activated Sludge Model No. 1 (ASM1) was coupled with real energy pricing and a power consumption model and applied as a dynamic simulation case study. The model calibration is based on the STOWA protocol. The case study investigates the importance of real energy pricing by comparing (i) real energy pricing, (ii) weighted arithmetic mean energy pricing and (iii) arithmetic mean energy pricing. The operational strategies evaluated were (i) old versus new air diffusers, (ii) different DO set-points and (iii) implementation of a carbon removal controller based on nitrate sensor readings. The application in a full scale WRRF of the ASM1 model coupled with real energy costs was successful. Dynamic modelling with real energy pricing instead of constant energy pricing enables the wastewater utility to optimize energy consumption according to the real energy price structure. Specific energy cost allows the identification of time periods with potential for linking the WRRF with the electric grid to optimize the treatment costs while satisfying operational goals.
Multi-objective Optimization of Departure Procedures at Gimpo International Airport
NASA Astrophysics Data System (ADS)
Kim, Junghyun; Lim, Dongwook; Monteiro, Dylan Jonathan; Kirby, Michelle; Mavris, Dimitri
2018-04-01
Most aviation communities have increasing concerns about environmental impacts, which are directly linked to health issues for local residents near the airport. In this study, the environmental impact of different departure procedures was analyzed using the Aviation Environmental Design Tool (AEDT). First, actual operational data were compiled at Gimpo International Airport (March 20, 2017) from an open source. Two modifications were made in the AEDT to model the operational circumstances better, and preliminary AEDT simulations were performed according to the acquired operational procedures. Simulated noise results showed good agreement with noise measurement data at specific locations. Second, a multi-objective optimization of departure procedures was performed for the Boeing 737-800. Four design variables were selected and AEDT was linked to a variety of advanced design methods. The results showed that takeoff thrust had the greatest influence and that fuel burn and noise had an inverse relationship. Two points, representing the fuel burn and noise optima on the Pareto front, were parsed and run in AEDT to compare with the baseline. The results showed that the noise optimum case reduced the Sound Exposure Level 80-dB noise exposure area by approximately 5%, while the fuel burn optimum case reduced total fuel burn by 1% relative to the baseline for aircraft-level analysis.
High-Z plasma facing components in fusion devices: boundary conditions and operational experiences
NASA Astrophysics Data System (ADS)
Neu, R.
2006-04-01
In present-day fusion devices, optimization of the performance and experimental freedom motivates the use of low-Z plasma facing materials (PFMs). However, in a future fusion reactor, for economic reasons, a sufficient lifetime of the first wall components is essential. Additionally, tritium retention has to be small to meet safety requirements. Tungsten appears to be the most realistic material choice for reactor plasma facing components (PFCs) because it exhibits the lowest erosion. Besides this, many criteria have to be fulfilled simultaneously in a reactor. Results from present-day devices and from laboratory experiments confirm the advantages of high-Z PFMs but also point to operational restrictions when using them as PFCs. These are associated with the central impurity concentration, which is determined by the sputtering yield, the penetration of the impurities and their transport within the confined plasma. The restrictions could exclude successful operation of a reactor, but concomitantly there exist remedies to ameliorate their impact. Obviously some price has to be paid in terms of reduced performance but, lacking materials or concepts that could substitute for high-Z PFCs, emphasis has to be put on the development and optimization of reactor-relevant scenarios which incorporate these experiences and measures.
Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points
Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.
2015-01-01
Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
Metrics for Assessment of Smart Grid Data Integrity Attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annarita Giani; Miles McQueen; Russell Bent
2012-07-01
There is an emerging consensus that the nation’s electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on remote measurements, transmitted over legacy data networks to system operators who make critical decisions based on available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into taking inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.
Schwarz, Patric; Pannes, Klaus Dieter; Nathan, Michel; Reimer, Hans Jorg; Kleespies, Axel; Kuhn, Nicole; Rupp, Anne; Zügel, Nikolaus Peter
2011-10-01
The decision to optimize the processes in the operating tract was based on two factors: competition among clinics and a desire to optimize the use of available resources. The aim of the project was to improve operating room (OR) capacity utilization by reduction of change and throughput time per patient. The study was conducted at Centre Hospitalier Emil Mayrisch Clinic for specialized care (n = 618 beds) Luxembourg (South). A prospective analysis was performed before and after the implementation of optimized processes. Value stream analysis and design (value stream mapping, VSM) were used as tools. VSM depicts patient throughput and the corresponding information flows. Furthermore it is used to identify process waste (e.g. time, human resources, materials, etc.). For this purpose, change times per patient (extubation of patient 1 until intubation of patient 2) and throughput times (inward transfer until outward transfer) were measured. VSM, change and throughput times for 48 patient flows (VSM A(1), actual state = initial situation) served as the starting point. Interdisciplinary development of an optimized VSM (VSM-O) was evaluated. Prospective analysis of 42 patients (VSM-A(2)) without and 75 patients (VSM-O) with an optimized process in place were conducted. The prospective analysis resulted in a mean change time of (mean ± SEM) VSM-A(2) 1,507 ± 100 s versus VSM-O 933 ± 66 s (p < 0.001). The mean throughput time VSM-A(2) (mean ± SEM) was 151 min (±8) versus VSM-O 120 min (±10) (p < 0.05). This corresponds to a 23% decrease in waiting time per patient in total. Efficient OR capacity utilization and the optimized use of human resources allowed an additional 1820 interventions to be carried out per year without any increase in human resources. In addition, perioperative patient monitoring was increased up to 100%.
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints.
This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
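The hit-and-run iteration described above can be sketched for a toy convex near-optimal region (the LP, tolerance, and constraints below are invented for illustration, and the slice-sampling step for non-linear constraints is omitted): maximize x+y on the unit box, and sample the region where x+y stays within tolerance of the optimum.

```python
import math, random

# Near-optimal region of  max x+y  s.t. 0 <= x, y <= 1, expressed as
# linear constraints a·p <= b; the last row enforces x + y >= 1.5,
# i.e. objective value within 25% of the optimum (2.0).
CONS = [((-1, 0), 0.0), ((1, 0), 1.0),
        ((0, -1), 0.0), ((0, 1), 1.0),
        ((-1, -1), -1.5)]

def chord(p, d):
    """Feasible t-interval of the line p + t*d against all constraints."""
    tmin, tmax = -math.inf, math.inf
    for a, b in CONS:
        ad = a[0] * d[0] + a[1] * d[1]
        ap = a[0] * p[0] + a[1] * p[1]
        if abs(ad) < 1e-12:
            continue                 # line parallel to this constraint
        t = (b - ap) / ad
        if ad > 0:
            tmax = min(tmax, t)
        else:
            tmin = max(tmin, t)
    return tmin, tmax

def hit_and_run(start=(0.9, 0.9), n=500, seed=7):
    rng = random.Random(seed)
    p, out = start, []
    for _ in range(n):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        d = (math.cos(ang), math.sin(ang))   # random direction
        tmin, tmax = chord(p, d)             # convex bounds on the hit line
        t = rng.uniform(tmin, tmax)          # run a random distance
        p = (p[0] + t * d[0], p[1] + t * d[1])
        out.append(p)
    return out

pts = hit_and_run()
print(all(p[0] + p[1] >= 1.5 - 1e-6 for p in pts))  # every iterate feasible
```

Every iterate is a new feasible alternative, and a single step can reach any point of the region along the current line, which is the efficiency argument the abstract makes.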
NASA Astrophysics Data System (ADS)
DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.
2012-06-01
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and sensor asset control can relieve the burden of human operators to support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented.
The algorithm to jointly optimize sensor schedules against search, track, and classify is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and account for both tracking accuracy and object characterization, jointly, in computing reward and cost for optimizing tasking decisions.
A New Distributed Optimization for Community Microgrids Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, Michael R; Tomsovic, Kevin
This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
NASA Astrophysics Data System (ADS)
Zhou, Sheng; Han, Yanling; Li, Bincheng
2018-02-01
Nitric oxide (NO) in exhaled breath has gained increasing interest in recent years mainly driven by the clinical need to monitor inflammatory status in respiratory disorders, such as asthma and other pulmonary conditions. Mid-infrared cavity ring-down spectroscopy (CRDS) using an external cavity, widely tunable continuous-wave quantum cascade laser operating at 5.3 µm was employed for NO detection. The detection pressure was reduced in steps to improve the sensitivity, and the optimal pressure was determined to be 15 kPa based on the fitting residual analysis of measured absorption spectra. A detection limit (1σ, or one time of standard deviation) of 0.41 ppb was experimentally achieved for NO detection in human breath under the optimized condition in a total of 60 s acquisition time (2 s per data point). Diurnal measurement session was conducted for exhaled NO. The experimental results indicated that mid-infrared CRDS technique has great potential for various applications in health diagnosis.
A feedback control for the advanced launch system
NASA Technical Reports Server (NTRS)
Seywald, Hans; Cliff, Eugene M.
1991-01-01
A robust feedback algorithm is presented for a near-minimum-fuel ascent of a two-stage launch vehicle operating in the equatorial plane. The development of the algorithm is based on the ideas of neighboring optimal control and can be divided into three phases. In phase 1, the formalism of optimal control is employed to calculate fuel-optimal ascent trajectories for a simple point-mass model. In phase 2, these trajectories are used to numerically calculate gain functions of time for the control(s), the total flight time, and possibly, for other variables of interest. In phase 3, these gains are used to determine feedback expressions for the controls associated with a more realistic model of a launch vehicle. With the Advanced Launch System in mind, all calculations are performed on a two-stage vehicle with fixed thrust history, but this restriction is by no means important for the approach taken. Performance and robustness of the algorithm are found to be excellent.
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1974-01-01
A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel-optimal two-impulse trajectories which may originate at the earth or a point in space and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraint. Davidon's variance method is used both in the accelerated gradient projection method and the outer loop of a 3-impulse trajectory optimization problem.
The effect of dropout on the efficiency of D-optimal designs of linear mixed models.
Ortega-Azurduy, S A; Tan, F E S; Berger, M P F
2008-06-30
Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
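As a hedged numerical illustration of the reported shift (the model, retention function, and designs below are invented, not the paper's): for a straight-line model, weighting each time point's information by a monotonically decreasing retention probability can move the better later time points earlier, exactly the kind of displacement the study describes.

```python
# Information matrix for y = b0 + b1*t with regressor x(t) = (1, t),
# each time point weighted by the probability retain(t) of the subject
# still being in the study; the D-criterion is its determinant.

def det_info(times, retain):
    """det of sum_t retain(t) * x(t) x(t)^T for x(t) = (1, t)."""
    s00 = s01 = s11 = 0.0
    for t in times:
        w = retain(t)
        s00 += w
        s01 += w * t
        s11 += w * t * t
    return s00 * s11 - s01 * s01

no_dropout = lambda t: 1.0          # complete data
dropout = lambda t: 0.7 ** t        # monotone decreasing retention (assumed)

end_heavy = [0, 10, 10]             # end-of-study replication, complete-data style
shifted = [0, 7, 7]                 # same pattern shifted earlier

print(det_info(end_heavy, no_dropout) > det_info(shifted, no_dropout))  # later wins, complete data
print(det_info(shifted, dropout) > det_info(end_heavy, dropout))        # dropout shifts optimum earlier
```

With complete data the information grows with the time spread, so the late design wins; under the assumed retention curve the late points carry little weight and the shifted design has the larger determinant.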
Homeostasis of exercise hyperpnea and optimal sensorimotor integration: the internal model paradigm.
Poon, Chi-Sang; Tin, Chung; Yu, Yunguo
2007-10-15
Homeostasis is a basic tenet of biomedicine and an open problem for many physiological control systems. Among them, none has been more extensively studied and intensely debated than the dilemma of exercise hyperpnea - a paradoxical homeostatic increase of respiratory ventilation that is geared to metabolic demands instead of the normal chemoreflex mechanism. Classical control theory has led to a plethora of "feedback/feedforward control" or "set point" hypotheses for homeostatic regulation, yet so far none of them has proved satisfactory in explaining exercise hyperpnea and its interactions with other respiratory inputs. Instead, the available evidence points to a far more sophisticated respiratory controller capable of integrating multiple afferent and efferent signals in adapting the ventilatory pattern toward optimality relative to conflicting homeostatic, energetic and other objectives. This optimality principle parsimoniously mimics exercise hyperpnea, chemoreflex and a host of characteristic respiratory responses to abnormal gas exchange or mechanical loading/unloading in health and in cardiopulmonary diseases - all without resorting to a feedforward "exercise stimulus". Rather, an emergent controller signal encoding the projected metabolic level is predicted by the principle as an exercise-induced 'mental percept' or 'internal model', presumably engendered by associative learning (operant conditioning or classical conditioning) which achieves optimality through continuous identification of, and adaptation to, the causal relationship between respiratory motor output and resultant chemical-mechanical afferent feedbacks. This internal model self-tuning adaptive control paradigm opens a new challenge and exciting opportunity for experimental and theoretical elucidations of the mechanisms of respiratory control - and of homeostatic regulation and sensorimotor integration in general.
Multi-objective optimization of chromatographic rare earth element separation.
Knutson, Hans-Kristian; Holmqvist, Anders; Nilsson, Bernt
2015-10-16
The importance of rare earth elements in modern technological industry grows, and as a result the interest in developing separation processes increases. This work is a part of developing chromatography as a rare earth element processing method. Process optimization is an important step in process development, and there are several competing objectives that need to be considered in a chromatographic separation process. Most studies are limited to evaluating the two competing objectives of productivity and yield, and studies of scenarios with tri-objective optimizations are scarce. Tri-objective optimizations are much needed when evaluating the chromatographic separation of rare earth elements due to the importance of product pool concentration along with productivity and yield as process objectives. In this work, a multi-objective optimization strategy considering productivity, yield and pool concentration is proposed. This was carried out in the frame of a model based optimization study on a batch chromatography separation of the rare earth elements samarium, europium and gadolinium. The findings from the multi-objective optimization were used to provide a general strategy for achieving desirable operation points, resulting in a productivity ranging between 0.61 and 0.75 kg Eu per m3 column per hour and a pool concentration between 0.52 and 0.79 kg Eu per m3, while maintaining a purity above 99% and never falling below an 80% yield for the main target component europium.
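The tri-objective screening described can be sketched as a non-dominated (Pareto) filter over candidate operating points in (productivity, yield, pool concentration), all to be maximized; the numbers below are illustrative, not the study's results.

```python
# Keep the operating points that no other point dominates in all three
# objectives simultaneously: the Pareto front of the trade-off.

def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Illustrative candidates: (productivity, yield, pool concentration).
ops = [
    (0.75, 0.80, 0.52),   # highest productivity, dilute pool
    (0.61, 0.92, 0.79),   # highest yield and concentration
    (0.55, 0.85, 0.60),   # dominated by the point below
    (0.70, 0.88, 0.65),   # balanced compromise
]
print(pareto_front(ops))
```

Three of the four candidates survive; the dominated one is worse in every objective than the balanced compromise, so a decision-maker would never pick it, which is the rationale for screening before choosing an operating point.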
Application of particle swarm optimization in path planning of mobile robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Cai, Feng; Wang, Ying
2017-08-01
In order to realize optimal path planning for a mobile robot in an unknown environment, a particle swarm optimization algorithm that uses path length as the fitness function is proposed. The position of the global best particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particle to the target position. The robot's motion toward the target point is simulated in MATLAB R2014a. Compared with the standard particle swarm optimization algorithm, the simulation results show that this method can effectively avoid all obstacles and obtain the optimal path.
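The abstract gives no implementation detail, so the following is only a minimal sketch of the idea in Python (the paper used MATLAB R2014a): each particle encodes candidate waypoints, the fitness is path length plus a penalty for segment samples that fall inside circular obstacles, and the swarm converges toward the shortest feasible route. The start, goal, obstacles, and PSO coefficients are all invented for illustration.

```python
import math, random

random.seed(0)

START, GOAL = (0.0, 0.0), (10.0, 10.0)
OBSTACLES = [(5.0, 5.0, 1.5), (3.0, 7.0, 1.0)]   # (x, y, radius), hypothetical
N_WAY, N_PART, ITERS = 3, 30, 200                 # waypoints per path, particles, iterations

def fitness(flat):
    # Path length through the waypoints, plus a large penalty for every
    # sampled point along a segment that falls inside an obstacle.
    pts = [START] + [(flat[2*i], flat[2*i+1]) for i in range(N_WAY)] + [GOAL]
    length, penalty = 0.0, 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        length += math.hypot(x2 - x1, y2 - y1)
        for t in (0.0, 0.25, 0.5, 0.75, 1.0):     # coarse samples along the segment
            px, py = x1 + t*(x2 - x1), y1 + t*(y2 - y1)
            for ox, oy, r in OBSTACLES:
                if math.hypot(px - ox, py - oy) < r:
                    penalty += 100.0
    return length + penalty

dim = 2 * N_WAY
pos = [[random.uniform(0.0, 10.0) for _ in range(dim)] for _ in range(N_PART)]
vel = [[0.0] * dim for _ in range(N_PART)]
pbest = [p[:] for p in pos]
pbest_f = [fitness(p) for p in pos]
g = pbest[min(range(N_PART), key=lambda i: pbest_f[i])][:]   # global best

for _ in range(ITERS):
    for i in range(N_PART):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                      # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (g[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        f = fitness(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], f
            if f < fitness(g):
                g = pos[i][:]

best = fitness(g)   # length of the best penalty-free path found
```

The best fitness can never be shorter than the straight-line start-goal distance (about 14.14 here), and a feasible detour around the obstacles stays close to it.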
NASA Technical Reports Server (NTRS)
Dunbar, D. N.; Tunnah, B. G.
1978-01-01
A FORTRAN computer program is described for predicting the flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on production of aviation turbine fuel of varying end point and hydrogen content specifications. The program has provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables which are changed from the base case.
Multimode resistive switching in nanoscale hafnium oxide stack as studied by atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Y. (E-mail: houyi@pku.edu.cn; lfliu@pku.edu.cn); IMEC, Kapeldreef 75, B-3001 Heverlee; Department of Physics and Astronomy, KU Leuven, Celestijnenlaan 200D, B-3001 Heverlee
2016-07-11
Nanoscale resistive switching in a hafnium oxide stack is investigated by conductive atomic force microscopy (C-AFM). The initial oxide stack is insulating, and electrical stress from the C-AFM tip induces nanometric conductive filaments. Multimode resistive switching can be observed in consecutive operation cycles at one spot. The different modes are interpreted in the framework of a low-defect quantum point contact theory. The model implies that optimization of the conductive filament active region is crucial for future applications of nanoscale resistive switching devices.
Determination Of The Activity Space By The Stereometric Method
NASA Astrophysics Data System (ADS)
Deloison, Y.; Crete, N.; Mollard, R.
1980-07-01
To determine the activity space of a seated subject, it is necessary to go beyond a mere statistical description of morphology and knowledge of the displacement volume. An analysis of the positions, or variations of the positions, of the various segmental elements (arms, hands, lower limbs, etc.) in the course of a given activity is required. Of the various methods used to locate the spatial positions of anatomical points quickly and accurately, stereometry makes it possible to plot the three-dimensional coordinates of any point in space in relation to a fixed trirectangular frame of reference determined by the stereometric measuring device. Thus, regardless of the orientation and posture of the subject, his segmental elements can be easily pinpointed, throughout the experiment, within the space they occupy. Using this method, for a sample of operators seated at an operating station and applying either manual controls or pedals, belonging to a population statistically defined from the data collected and the analyses produced by the anthropometric study, it is possible to determine a contour line of reach capability marking out the usable working space and, within this working space, a contour line of preferential activity that is limited in space by the whole range of optimal reach capability of all the subjects.
Developing a weather observation routine during ICARUS
NASA Astrophysics Data System (ADS)
Mei, F.; Hubbe, J. M.; de Boer, G.; Lawrence, D.; Shupe, M.; Ivey, M.; Dexheimer, D.; Schmid, B.
2016-12-01
Starting in 2014, the Atmospheric Radiation Measurement (ARM) program began a major reconfiguration to more tightly link measurements and atmospheric models. As part of this reconfiguration, ARM's North Slope of Alaska (NSA) site is being upgraded to include additional observations to support modeling and process studies. The Inaugural Campaigns for ARM Research using Unmanned Systems (ICARUS) were launched in 2016. This internal initiative at Oliktok Point, Alaska focuses on developing routine operations of Unmanned Aerial Systems (UAS) and Tethered Balloon Systems (TBS). The main purpose of ICARUS is to collect spatial data on surface radiation, heat fluxes, and vertical profiles of the basic atmospheric state (temperature, humidity, and horizontal wind). Based on the data collected during ICARUS, we will develop operation routines for each atmospheric state measurement and then optimize the operation schedule to maximize data collection capacity. The statistical representation of important atmospheric state parameters will be discussed.
NASA Astrophysics Data System (ADS)
Chen, B.; Su, J. H.; Guo, L.; Chen, J.
2017-06-01
This paper puts forward a maximum power estimation method based on a photovoltaic array (PVA) model to solve optimization problems in the group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). The method uses an improved genetic algorithm (GA) for model parameter estimation and identification over multiple P-V characteristic curves of the PVA model, and then refines the identification results with a least-squares correction. On this basis, the irradiation level and operating temperature under any condition can be estimated, so an accurate PVA model is established and a disturbance-free MPP estimate is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.
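Neither the improved GA nor the PVA model equations appear in the abstract. The sketch below substitutes a deliberately simple single-diode-style curve I(V) = Iph − I0(e^(V/a) − 1) and a bare-bones elitist GA, then reads the MPP off the fitted model rather than perturbing the plant. All numerical values and GA settings are invented for illustration.

```python
import math, random

random.seed(1)

# "True" parameters of a simplified single-diode-style PV model (invented values)
IPH_TRUE, A_TRUE, I0 = 5.0, 1.5, 1e-6    # photocurrent [A], slope factor [V], sat. current [A]

def model(v, iph, a):
    return iph - I0 * math.expm1(v / a)   # I(V) = Iph - I0*(exp(V/a) - 1)

volts = [0.5 * k for k in range(45)]      # sampled 0 .. 22 V characteristic
meas = [model(v, IPH_TRUE, A_TRUE) for v in volts]

def sse(ind):
    iph, a = ind
    return sum((model(v, iph, a) - m) ** 2 for v, m in zip(volts, meas))

# Bare-bones elitist GA: keep the 8 best individuals, breed the rest with
# blend crossover and Gaussian mutation (a stand-in for the paper's improved GA).
pop = [[random.uniform(1.0, 10.0), random.uniform(0.5, 3.0)] for _ in range(40)]
for _ in range(80):
    elite = sorted(pop, key=sse)[:8]
    pop = [e[:] for e in elite]                       # elites survive unchanged
    while len(pop) < 40:
        p1, p2 = random.sample(elite, 2)
        w = random.random()
        child = [w * x + (1.0 - w) * y for x, y in zip(p1, p2)]
        pop.append([max(0.3, gn + random.gauss(0.0, 0.05)) for gn in child])

iph_fit, a_fit = min(pop, key=sse)
# Disturbance-free MPP: scan P = V*I on the fitted model instead of perturbing the array
v_mpp = max(volts, key=lambda v: v * model(v, iph_fit, a_fit))
```

With the seed fixed, the fit recovers parameters close to the "true" ones, and the estimated MPP voltage lies a little below the open-circuit voltage, as expected for this model shape.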
Branch-pipe-routing approach for ships using improved genetic algorithm
NASA Astrophysics Data System (ADS)
Sui, Haiteng; Niu, Wentie
2016-09-01
Branch-pipe routing plays a fundamental and critical role in ship-pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve when depending only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, especially the complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
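The paper's improved maze algorithm is not specified in the abstract; a plain breadth-first "maze router" between two connection points on a 3D grid illustrates the underlying idea (the improved variant adds heuristics on top of this). The grid size and obstacle wall below are hypothetical.

```python
from collections import deque

def route(start, goal, size, blocked):
    """Shortest axis-aligned route between two connection points on a 3D grid,
    found by BFS; `blocked` is the set of cells occupied by equipment/structure."""
    prev = {start: None}            # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:            # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y, z = cell
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nxt = (x + dx, y + dy, z + dz)
            if (all(0 <= c < s for c, s in zip(nxt, size))
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None                     # no feasible route

# Hypothetical 6x6x3 layout space with a wall at z=0 that forces a detour
wall = {(3, y, 0) for y in range(5)}
path = route((0, 0, 0), (5, 0, 0), (6, 6, 3), wall)
```

Because BFS explores cells in order of distance, the returned route hops one level up (z = 1), crosses the wall, and comes back down: 8 cells instead of the 16-cell detour around the wall in the z = 0 plane.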
The Structure of Reclaiming Warehouse of Minerals at Open-Cut Mines with the Use Combined Transport
NASA Astrophysics Data System (ADS)
Ikonnikov, D. A.; Kovshov, S. V.
2017-07-01
This article presents an analysis of the characteristics of ore reclaiming and overloading points at modern opencast mines. Ore reclaiming is the most effective way to stabilize the power-intensive and expensive technological dressing process and, consequently, to maintain optimal production and set-up parameters for extraction and the quality of the finished product. The paper proposes a warehouse design and describes the technology of its creation. The equipment used for the warehouse is described in detail, and all stages of development and operation are shown. The advantages and disadvantages of using a mechanical shovel excavator and a hydraulic backhoe ("backdigger") excavator as reloading and reclaiming equipment are compared. A design for an ore reclaiming and overloading point under cyclical and continuous mining methods using a hydraulic backhoe excavator is proposed.
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
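The patent abstract describes the region of optimization only qualitatively. A minimal sketch of one plausible realization: bin the logged operating points, keep the most frequently visited bins, and gate the optimization routine on membership in their bounding box. The bin sizes, the coverage fraction, and the (engine speed, load) parameterization are assumptions for illustration, not taken from the patent.

```python
from collections import Counter

def region_of_optimization(samples, bin_size=(500, 10), top_fraction=0.5):
    """Bin logged (engine_speed_rpm, load_pct) samples, keep the most-visited
    bins until they cover `top_fraction` of all samples, and return the
    bounding box of those bins as ((speed_lo, speed_hi), (load_lo, load_hi))."""
    counts = Counter((int(s // bin_size[0]), int(l // bin_size[1]))
                     for s, l in samples)
    total, acc, keep = sum(counts.values()), 0, []
    for b, c in counts.most_common():
        keep.append(b)
        acc += c
        if acc >= top_fraction * total:
            break
    xs = [b[0] for b in keep]
    ys = [b[1] for b in keep]
    return ((min(xs) * bin_size[0], (max(xs) + 1) * bin_size[0]),
            (min(ys) * bin_size[1], (max(ys) + 1) * bin_size[1]))

def in_region(sample, region):
    """Gate: run the optimization routine only when the current operating
    point falls inside the defined region."""
    (s_lo, s_hi), (l_lo, l_hi) = region
    s, l = sample
    return s_lo <= s < s_hi and l_lo <= l < l_hi

# Invented usage log: mostly highway cruise, with idle and full-load outliers
samples = [(2100, 32)] * 80 + [(900, 5)] * 10 + [(5500, 90)] * 10
region = region_of_optimization(samples)
```

With this log, the region collapses onto the cruise bin, so idle and full-load excursions never trigger the optimization routine.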
Multi-objective optimization of energy systems (Optimisation multi-objectif des systèmes énergétiques)
NASA Astrophysics Data System (ADS)
Dipama, Jean
The increasing demand for energy and environmental concerns over greenhouse gas emissions are leading more and more private and public utilities to turn to nuclear energy as an alternative for the future. Nuclear power plants are therefore expected to undergo a large expansion in the coming years, and improved technologies will be put in place to support their development. This thesis considers the optimization of the thermodynamic cycle of the secondary loop of the Gentilly-2 nuclear power plant in terms of output power and thermal efficiency. Investigations are carried out to determine the optimal operating conditions of steam power cycles through judicious combinations of steam extraction at the different stages of the turbines. Whether for superheating or regeneration, we are confronted in all cases with an optimization problem involving two conflicting objectives, since increasing the efficiency implies a decrease in mechanical work and vice versa. Solving this kind of problem does not lead to a unique solution, but to a set of solutions that are trade-offs between the conflicting objectives. To find all of these solutions, called Pareto-optimal solutions, an appropriate optimization algorithm is required. Before starting the optimization of the secondary loop, we developed a thermodynamic model of the loop that includes models of the main thermal components (e.g., turbine, moisture separator-superheater, condenser, feedwater heater and deaerator). This model is used to calculate the thermodynamic state of the steam and water at the different points of the installation. It was developed in Matlab and validated by comparing its predictions with operating data provided by the engineers of the power plant.
The optimizer, developed in VBA (Visual Basic for Applications), uses an optimization algorithm based on the principle of genetic algorithms, a stochastic optimization method that is very robust and widely used to solve problems usually difficult to handle by traditional methods. Genetic algorithms (GAs) were used in previous research and proved efficient in optimizing heat exchanger networks (HEN) (Dipama et al., 2008), where HEN were synthesized to recover the maximum heat in an industrial process; the optimization problem formulated in that work had a single objective, namely the maximization of energy recovery. The optimization algorithm developed in this thesis extends the ability of GAs by taking several objectives into account simultaneously. It introduces an innovation in the search for optimal solutions: a technique that partitions the solution space into parallel grids called "observation corridors". These corridors make it possible to specify the areas in which the most promising feasible solutions are found, which are then used to guide the search toward optimal solutions. A measure of search progress is incorporated into the algorithm to make it self-adaptive, through the use of appropriate genetic operators at each stage of the optimization process. The proposed method allows fast convergence and ensures a diversity of solutions. Moreover, it gives the algorithm the ability to overcome difficulties associated with optimization problems that have complex Pareto front landscapes (e.g., discontinuity, disjunction, etc.). The multi-objective optimization algorithm was first validated on numerical test problems from the literature as well as on energy system optimization problems.
Finally, the proposed optimization algorithm was applied to the optimization of the secondary loop of the Gentilly-2 nuclear power plant, and a set of solutions was found that allows the power plant to operate under optimal conditions. (Abstract shortened by UMI.)
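The "observation corridor" GA itself is not reproduced here, but the Pareto concept at the heart of the thesis, the trade-off set between conflicting objectives such as output power and thermal efficiency, reduces to a simple dominance test. A minimal sketch with invented candidate operating points:

```python
def dominates(a, b):
    """a dominates b when a is at least as good in every objective
    (all objectives maximized here) and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points: the trade-off set a multi-objective
    optimizer such as the thesis' GA approximates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (output power [MW], thermal efficiency [%]) candidates
candidates = [(660, 31.0), (650, 32.5), (640, 33.0), (655, 30.0), (640, 32.0)]
front = pareto_front(candidates)
```

Here (655, 30.0) is dominated by (660, 31.0) and (640, 32.0) by (650, 32.5), so the front keeps the three mutually incomparable trade-offs.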
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space, enabling prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
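As a toy illustration of the D-criterion (not one of the paper's actual designs): for Scheffé's linear model in three components, the model matrix rows are the mixture proportions themselves, the minimal design is the three simplex vertices, and augmenting it with the centroid adds an interior point, and a degree of freedom usable for Lack of Fit, while det(X'X) is easily tracked.

```python
def det(m):
    """Determinant by Gaussian elimination with partial pivoting (no libraries)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def d_criterion(design):
    """det(X'X) for Scheffe's linear mixture model, where each design point's
    proportions are themselves the model terms."""
    k = len(design[0])
    xtx = [[sum(p[i] * p[j] for p in design) for j in range(k)] for i in range(k)]
    return det(xtx)

vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]     # minimally supported 3-point design
augmented = vertices + [(1/3, 1/3, 1/3)]         # add the interior centroid
```

For the vertex design X is the identity, so det(X'X) = 1; adding the centroid gives X'X = I + J/9 with determinant 4/3, while the extra run supplies the degree of freedom the minimal design lacks.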
A novel method for vaginal cylinder treatment planning: a seamless transition to 3D brachytherapy
Wu, Vincent; Wang, Zhou; Patil, Sachin
2012-01-01
Purpose: Standard treatment plan libraries are often used to ensure a quick turn-around time for vaginal cylinder treatments. Recently there is increasing interest in transitioning from conventional 2D radiograph based brachytherapy to 3D image based brachytherapy, which has resulted in a substantial increase in treatment planning time and decrease in patient through-put. We describe a novel technique that significantly reduces the treatment planning time for CT-based vaginal cylinder brachytherapy. Material and methods: Oncentra MasterPlan TPS allows multiple sets of data points to be classified as applicator points, which has been harnessed in this method. The method relies on two hard anchor points: the first dwell position in a catheter and an applicator configuration specific dwell position as the plan origin, and a soft anchor point beyond the last active dwell position to define the axis of the catheter. The spatial locations of various data points on the applicator's surface and at 5 mm depth are stored in an Excel file that can easily be transferred into a patient CT data set using window operations and then used for treatment planning. The remainder of the treatment planning process remains unaffected. Results: The treatment plans generated on the Oncentra MasterPlan TPS using this novel method yielded results comparable to those generated on the Plato TPS using a standard treatment plan library in terms of treatment times, dwell weights and dwell times for a given optimization method and normalization points. Less than 2% difference was noticed between the treatment times generated by the two systems. Using the above method, the entire planning process, including CT importing, catheter reconstruction, multiple data point definition, optimization and dose prescription, can be completed in ~5-10 minutes. Conclusion: The proposed method allows a smooth and efficient transition to 3D CT based vaginal cylinder brachytherapy planning. PMID:23349650
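Oncentra MasterPlan's actual data format is not described in the abstract; the sketch below only illustrates the geometry of the method: from the plan-origin dwell position and a second point defining the catheter axis, generate paired dose points on the applicator surface and at 5 mm depth at stations along the axis. The function name, inputs, and step sizes are hypothetical.

```python
import math

def cylinder_dose_points(origin, axis_point, radius_mm, step_mm=5.0, n_steps=5):
    """Generate paired dose points on the applicator surface (radius_mm) and at
    5 mm depth (radius_mm + 5) on both lateral sides of the catheter axis.
    origin: the plan-origin dwell position; axis_point: a point farther along
    the catheter defining the axis direction. Assumes the axis is not parallel
    to the x-axis (the perpendicular below is built in the y-z plane)."""
    ox, oy, oz = origin
    ax, ay, az = axis_point
    dx, dy, dz = ax - ox, ay - oy, az - oz
    norm = math.sqrt(dx*dx + dy*dy + dz*dz)
    ux, uy, uz = dx / norm, dy / norm, dz / norm          # unit axis vector
    px, py, pz = 0.0, -uz, uy                             # perpendicular to the axis
    pn = math.sqrt(py*py + pz*pz) or 1.0
    py, pz = py / pn, pz / pn
    points = []
    for k in range(n_steps):                              # stations along the axis
        cx = ox + k * step_mm * ux
        cy = oy + k * step_mm * uy
        cz = oz + k * step_mm * uz
        for sign in (+1.0, -1.0):                         # both lateral sides
            for r in (radius_mm, radius_mm + 5.0):        # surface and 5 mm depth
                points.append((cx + sign*r*px, cy + sign*r*py, cz + sign*r*pz))
    return points

# Catheter along z, 3 cm diameter cylinder: 5 stations x 2 sides x 2 depths
pts = cylinder_dose_points((0.0, 0.0, 0.0), (0.0, 0.0, 10.0), 15.0)
```

In a spreadsheet-driven workflow like the paper's, a table of such coordinates is what gets pasted into the planning system as applicator points.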
von Glischinski, M; Willutzki, U; Stangier, U; Hiller, W; Hoyer, J; Leibing, E; Leichsenring, F; Hirschfeld, G
2018-02-11
The Liebowitz Social Anxiety Scale (LSAS) is the most frequently used instrument to assess social anxiety disorder (SAD) in clinical research and practice. Both a self-reported (LSAS-SR) and a clinician-administered (LSAS-CA) version are available. The aim of the present study was to define optimal cut-off (OC) scores for remission and response to treatment for the LSAS in a German sample. Data of N = 311 patients with SAD were used who had completed psychotherapeutic treatment within a multicentre randomized controlled trial. Diagnosis of SAD and reduction in symptom severity according to the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, 4th edition, served as gold standard. OCs yielding the best balance between sensitivity and specificity were determined using receiver operating characteristics. The variability of the resulting OCs was estimated by nonparametric bootstrapping. Using diagnosis of SAD (present vs. absent) as a criterion, results for remission indicated cut-off values of 35 for the LSAS-SR and 30 for the LSAS-CA, with acceptable sensitivity (LSAS-SR: .83, LSAS-CA: .88) and specificity (LSAS-SR: .82, LSAS-CA: .87). For detection of response to treatment, assessed by a 1-point reduction in the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, 4th edition, rating, a reduction of 28% for the LSAS-SR and 29% for the LSAS-CA yielded the best balance between sensitivity (LSAS-SR: .75, LSAS-CA: .83) and specificity (LSAS-SR: .76, LSAS-CA: .80). To our knowledge, we are the first to define cut points for the LSAS in a German sample. Overall, the cut points for remission and response corroborate previously reported cut points, now building on a broader data basis. Copyright © 2018 John Wiley & Sons, Ltd.
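The study's patient data are of course not reproduced here; a minimal sketch of the underlying ROC step, choosing the cut-off that maximizes Youden's J (the best balance of sensitivity and specificity) on invented scores, with the bootstrap variability estimate omitted:

```python
def optimal_cutoff(scores_pos, scores_neg):
    """Return (cut-off, Youden's J) where J = sensitivity + specificity - 1.
    'Positive' means diagnosis present; higher scores indicate more severe
    symptoms, so a score >= cut-off is classified as positive and a score
    below it as remitted."""
    best_c, best_j = None, -1.0
    for c in sorted(set(scores_pos) | set(scores_neg)):   # candidate thresholds
        sens = sum(s >= c for s in scores_pos) / len(scores_pos)
        spec = sum(s < c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Invented toy scores: SAD present vs. absent after treatment
sad_present = [40, 45, 50, 60, 38]
sad_absent = [20, 25, 30, 34, 36]
cut, j = optimal_cutoff(sad_present, sad_absent)
```

On these perfectly separated toy scores the optimal cut-off is 38 with J = 1; on real data, as in the study, sensitivity and specificity trade off and bootstrapping is used to gauge the cut-off's variability.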
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the Root Mean Square Error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operating characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
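The paper works with 3D TLS point clouds; the sketch below illustrates only the circular-transformation error check, reduced to 2D rigid transforms: composing the pairwise registrations around the loop A → C → B → A should give the identity, so the residual RMSE on held-out tie points serves as an upper estimate of the registration error. All transforms and points are invented.

```python
import math

def compose(t1, t2):
    """Compose 2D rigid transforms (theta, tx, ty): apply t2 first, then t1."""
    th1, x1, y1 = t1
    th2, x2, y2 = t2
    c, s = math.cos(th1), math.sin(th1)
    return (th1 + th2, x1 + c*x2 - s*y2, y1 + s*x2 + c*y2)

def invert(t):
    """Inverse transform: q = R^T (p - t)."""
    th, tx, ty = t
    c, s = math.cos(th), math.sin(th)
    return (-th, -(c*tx + s*ty), -(-s*tx + c*ty))

def apply(t, p):
    th, tx, ty = t
    c, s = math.cos(th), math.sin(th)
    return (c*p[0] - s*p[1] + tx, s*p[0] + c*p[1] + ty)

def circular_rmse(t_ab, t_bc, t_ca, check_points):
    """Traverse the loop: t_ca maps A->C, t_bc maps C->B, t_ab maps B->A.
    For consistent registrations the composition is the identity, so the
    residual RMSE of the check points bounds the transformation error."""
    loop = compose(t_ab, compose(t_bc, t_ca))
    res = [(apply(loop, p)[0] - p[0]) ** 2 + (apply(loop, p)[1] - p[1]) ** 2
           for p in check_points]
    return math.sqrt(sum(res) / len(res))

# Consistent pairwise registrations (t_ab is the exact loop closure) ...
t_ca = (0.3, 1.0, -2.0)
t_bc = (0.5, 0.7, 0.2)
t_ab = invert(compose(t_bc, t_ca))
tie_points = [(0.0, 0.0), (10.0, 5.0), (-3.0, 7.0)]
rmse_ok = circular_rmse(t_ab, t_bc, t_ca, tie_points)

# ... versus a registration with a 0.5-unit translation error in one link
t_bad = (t_ab[0], t_ab[1] + 0.5, t_ab[2])
rmse_bad = circular_rmse(t_bad, t_bc, t_ca, tie_points)
```

The consistent loop closes to machine precision, while the 0.5-unit error in one pairwise link shows up directly in the loop residual.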