Engineering to Control Noise, Loading, and Optimal Operating Points
Mitchell R. Swartz
2000-11-12
Successful engineering of low-energy nuclear systems requires control of noise, loading, and optimum operating point (OOP) manifolds. The latter result from the biphasic system response of low-energy nuclear reaction (LENR)/cold fusion systems, and their ash production rate, to input electrical power. Knowledge of the optimal operating point manifold can improve the reproducibility and efficacy of these systems in several ways. Improved control of noise, loading, and peak production rates is available through the study, and use, of OOP manifolds. Engineering of systems toward the OOP-manifold drive-point peak may, with inclusion of geometric factors, permit more accurate uniform determinations of the calibrated activity of these materials/systems.
Nonlinear Burn Control and Operating Point Optimization in ITER
NASA Astrophysics Data System (ADS)
Boyer, Mark; Schuster, Eugenio
2013-10-01
Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).
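As a toy illustration of regulating the burn through auxiliary heating (a zero-dimensional caricature, not the paper's volume-averaged multi-species model or its adaptive controller), with all constants invented:

```python
# Minimal zero-dimensional energy-balance sketch, not the paper's full
# multi-species burn model: dE/dt = P_alpha(E) + P_aux - E / tau_E, with
# auxiliary heating P_aux adjusted by proportional feedback to hold a
# desired stored energy E_ref. All constants are invented for illustration.
tau_E, k_p, E_ref = 3.0, 2.0, 1.0

def p_alpha(E):
    return 0.15 * E ** 2          # crude stand-in for alpha-particle heating

E, dt = 0.2, 0.01
for _ in range(5000):             # forward-Euler integration over 50 time units
    p_aux = max(k_p * (E_ref - E), 0.0)   # actuator saturates: heating cannot cool
    E += dt * (p_alpha(E) + p_aux - E / tau_E)
print(E)
```

With a fixed proportional gain the loop settles below E_ref (a steady-state offset), which hints at why the paper turns to model-based nonlinear control with adaptation rather than a simple fixed-gain law.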
Optimal choice of cupola furnace nominal operating point
Abdelrahman, M.A.; Moore, K.L.
1998-08-01
One of the main goals in the operation of a cupola furnace is to keep the molten iron properties within prescribed bounds while maintaining the most economical operation for the cupola. In this paper the authors present a procedure to obtain the nominal values for the manipulated process variables. The nominal values are calculated by solving a constrained nonlinear programming optimization problem. Two different optimization problems are discussed and examples for using the procedure are presented.
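The nominal-point selection described above can be sketched as a small constrained nonlinear program; the cost and iron-temperature models below are invented placeholders, not the paper's cupola models:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical decision variables: blast air rate and coke charge rate.
# Both models below are illustrative stand-ins, not the paper's.
def operating_cost(x):
    air, coke = x
    return 2.0 * coke + 0.05 * air            # coke dominates the cost

def iron_temperature(x):
    air, coke = x
    return 1300.0 + 40.0 * coke + 0.5 * air   # degrees C, toy model

# Keep molten-iron temperature within prescribed bounds [1450, 1550] C.
cons = [
    {"type": "ineq", "fun": lambda x: iron_temperature(x) - 1450.0},
    {"type": "ineq", "fun": lambda x: 1550.0 - iron_temperature(x)},
]
res = minimize(operating_cost, x0=[100.0, 5.0],
               bounds=[(50, 300), (1, 10)], constraints=cons)
print(res.x, iron_temperature(res.x))
```

The solver pushes the operating point onto the cheaper side of the temperature band, which is exactly the "most economical operation subject to iron-property bounds" trade-off the entry describes.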
Prediction of optimal operation point existence and parameters in lossy compression of noisy images
NASA Astrophysics Data System (ADS)
Zemliachenko, Alexander N.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2014-10-01
This paper deals with lossy compression of images corrupted by additive white Gaussian noise. For such images, compression can be characterized by the existence of an optimal operation point (OOP). At the OOP, the MSE or another metric computed between the compressed and noise-free images reaches an optimum, i.e., the maximal noise-removal effect takes place. If an OOP exists, it is reasonable to compress an image in its neighbourhood; if not, more "careful" compression is reasonable. In this paper, we demonstrate that the existence of an OOP can be predicted by a very simple and fast analysis of discrete cosine transform (DCT) statistics in 8x8 blocks. Moreover, the OOP can be predicted not only for conventional metrics such as MSE or PSNR but also for visual quality metrics. Such prediction can be useful in automatic compression of multi- and hyperspectral remote sensing images.
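The block-DCT statistic idea can be illustrated with a toy predictor; the decision rule and threshold here are assumptions made for the sketch, not the paper's actual criterion:

```python
import numpy as np
from scipy.fft import dctn

def predict_oop(image, noise_std, threshold=2.0):
    """Toy predictor: if the mean AC energy of 8x8 DCT blocks is small
    relative to the noise energy, most coefficients are noise-dominated and
    an optimal operation point (OOP) is likely to exist. The rule and the
    threshold are illustrative, not those of the paper."""
    h, w = image.shape
    ac_energy = []
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            block = dctn(image[i:i + 8, j:j + 8], norm="ortho")
            e = (block ** 2).sum() - block[0, 0] ** 2   # drop the DC term
            ac_energy.append(e / 63.0)                   # mean AC-coefficient energy
    return np.mean(ac_energy) < threshold * noise_std ** 2

rng = np.random.default_rng(0)
flat = rng.normal(0.0, 5.0, (64, 64))           # pure noise: OOP expected
textured = flat + 50.0 * rng.random((64, 64))   # strong texture: no OOP expected
print(predict_oop(flat, 5.0), predict_oop(textured, 5.0))
```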
Patnaik, Lalit; Umanand, Loganathan
2015-12-01
The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity. PMID:26502096
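A minimal sketch of the contour-tangency search, using invented stand-in models for speed and power rather than the paper's walker mechanics:

```python
import numpy as np

# Toy stand-in models (not those of the paper): average forward speed and
# mechanical power as functions of mid-stance velocity v0 and step angle phi.
def vx_avg(v0, phi):
    return v0 * np.cos(phi)                   # stance geometry slows the gait

def power(v0, phi):
    return 0.5 * v0 ** 3 * phi + 0.2 * v0 / np.tan(phi)  # push-off + support costs

def optimal_point(target_vx, phis=np.linspace(0.05, 0.6, 500)):
    """Scan the velocity contour vx_avg == target_vx and return the (v0, phi)
    pair of minimum power: the operating point where the power contour is
    tangent to the velocity contour."""
    v0s = target_vx / np.cos(phis)            # enforce the contour exactly
    p = power(v0s, phis)
    k = int(np.argmin(p))
    return v0s[k], phis[k], p[k]

v0, phi, p = optimal_point(0.4)
print(v0, phi, p)
```

Repeating the scan over a range of target speeds traces out the optimal locus described in the abstract.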
LST data management and mission operations concept. [pointing control optimization for maximum data]
NASA Technical Reports Server (NTRS)
Walker, R.; Hudson, F.; Murphy, L.
1977-01-01
A candidate design concept for an LST ground facility is described. The design objectives were to use NASA institutional hardware, software and facilities wherever practical, and to maximize efficiency of telescope use. The pointing control performance requirements of LST are summarized, and the major data interfaces of the candidate ground system are diagrammed.
ATLAS solar pointing operations
NASA Technical Reports Server (NTRS)
Tyler, C. A.; Zimmerman, C. J.
1994-01-01
The ATLAS series of Spacelab missions comprises a diverse group of scientific instruments, including instruments for studying the sun and how the sun's energy changes across an eleven-year solar cycle. The ATLAS solar instruments are located on one or more pallets in the Orbiter payload bay and use the Orbiter as a pointing platform for their examinations of the sun. One of the ATLAS instruments contained a sun sensor which allowed scientists and engineers on the ground to see the pointing error with respect to the sun and to correct for it. This paper presents pointing information from the ATLAS 1 and ATLAS 2 missions, with particular attention given to identifying the sources of pointing discrepancies of the solar instruments and to describing the crew and ground controller procedures that were developed to correct for these discrepancies. The Orbiter pointing behavior from the ATLAS 1 and ATLAS 2 flights presented in this paper can be applied to future flights which use the Orbiter as a pointing platform.
Optimizing Operating Room Scheduling.
Levine, Wilton C; Dunn, Peter F
2015-12-01
This article reviews the management of an operating room (OR) schedule and use of the schedule to add value to an organization. We review the methodology of an OR block schedule, daily OR schedule management, and post anesthesia care unit patient flow. We discuss the importance of a well-managed OR schedule to ensure smooth patient care, not only in the OR, but throughout the entire hospital. PMID:26610624
Characterizations of fixed points of quantum operations
Li Yuan
2011-05-15
Let Φ_A be a general quantum operation. An operator B is said to be a fixed point of Φ_A if Φ_A(B) = B. In this note, we show conditions under which B being a fixed point of Φ_A implies that B is compatible with the operation element of Φ_A. In particular, we offer an extension of the generalized Lüders theorem.
Optimal rate filters for biomedical point processes.
McNames, James
2005-01-01
Rate filters are used to estimate the mean event rate of many biomedical signals that can be modeled as point processes. Historically these filters have been designed using principles from two distinct fields. Signal processing principles are used to optimize the filter's frequency response. Kernel estimation principles are typically used to optimize the asymptotic statistical properties. This paper describes a design methodology that combines these principles from both fields to optimize the frequency response subject to constraints on the filter's order, symmetry, time-domain ripple, DC gain, and minimum impulse response. Initial results suggest that time-domain ripple and a negative impulse response are necessary to design a filter with a reasonable frequency response. This suggests that some of the common assumptions about the properties of rate filters should be reconsidered. PMID:17282132
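For contrast with the paper's constrained frequency-domain designs, the plain kernel-estimation view of a rate filter looks like this; all settings are illustrative:

```python
import numpy as np

def rate_estimate(event_times, fs=100.0, duration=10.0, width=0.5):
    """Kernel-estimation view of a rate filter: bin the point process at fs Hz
    and convolve with a unit-area Gaussian kernel. A strictly positive kernel
    like this has no time-domain ripple and no negative impulse response, at
    the cost of a poorer frequency response, which is the trade-off the
    paper's constrained design method navigates. All settings illustrative."""
    n = int(duration * fs)
    train = np.zeros(n)
    idx = (np.asarray(event_times) * fs).astype(int)
    np.add.at(train, idx[idx < n], fs)        # impulses scaled so units are events/s
    t = np.arange(-3 * width, 3 * width, 1.0 / fs)
    kernel = np.exp(-0.5 * (t / width) ** 2)
    kernel /= kernel.sum()                    # unit DC gain
    return np.convolve(train, kernel, mode="same")

rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0.0, 10.0, 50))  # roughly 5 events/s on average
rate = rate_estimate(events)
print(rate.mean())
```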
Automated design of image operators that detect interest points.
Trujillo, Leonardo; Olague, Gustavo
2008-01-01
This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research. PMID:19053496
Linearization: Students Forget the Operating Point
ERIC Educational Resources Information Center
Roubal, J.; Husek, P.; Stecha, J.
2010-01-01
Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…
Multi-Point Combinatorial Optimization Method with Distance Based Interaction
NASA Astrophysics Data System (ADS)
Yasuda, Keiichiro; Jinnai, Hiroyuki; Ishigame, Atsushi
This paper proposes a multi-point combinatorial optimization method based on the Proximate Optimality Principle (POP), a method that has several advantages for solving large-scale combinatorial optimization problems. The proposed algorithm uses not only the distance between search points but also the interaction among search points in order to exploit POP in several types of combinatorial optimization problems. The algorithm is applied to several typical combinatorial optimization problems, namely a knapsack problem, a traveling salesman problem, and a flow shop scheduling problem, in order to verify its performance. The simulation results indicate that the proposed method attains higher optimality than conventional combinatorial optimization methods.
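A toy version of multi-point search with a distance-reducing interaction, applied to a random knapsack instance; the move and interaction rules here are stand-ins, not the paper's exact mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)
values = rng.integers(1, 100, 30)
weights = rng.integers(1, 50, 30)
capacity = 300

def fitness(x):
    return int(values @ x) if int(weights @ x) <= capacity else 0

def multi_point_search(n_points=8, iters=400):
    """Illustrative multi-point search in the spirit of POP (good solutions
    cluster near other good solutions): each point makes bit-flip local moves
    and is occasionally pulled toward the current best point by copying one
    of its bits. This interaction scheme is a stand-in, not the paper's."""
    pts = [rng.integers(0, 2, 30) for _ in range(n_points)]
    for _ in range(iters):
        best = max(pts, key=fitness)
        for p in pts:
            q = p.copy()
            q[rng.integers(30)] ^= 1                 # local bit-flip move
            if fitness(q) >= fitness(p):
                p[:] = q
            if rng.random() < 0.1:                   # distance-reducing interaction
                k = rng.integers(30)
                p[k] = best[k]
    return max(fitness(p) for p in pts)

best_value = multi_point_search()
print(best_value)
```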
Sensors operating at exceptional points: General theory
NASA Astrophysics Data System (ADS)
Wiersig, Jan
2016-03-01
A general theory of sensors based on the detection of splittings of resonant frequencies or energy levels operating at so-called exceptional points is presented. Exploiting the complex-square-root topology near such non-Hermitian degeneracies has great potential for enhanced sensitivity. Passive and active systems are discussed. The theory is specified for whispering-gallery microcavity sensors for particle detection. As an example, a microdisk with two holes is studied numerically. The theory and numerical simulations demonstrate a sevenfold enhancement of the sensitivity.
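The square-root scaling behind this sensitivity enhancement can be made explicit with a minimal two-mode model (a standard textbook form; the symbols and normalization here are illustrative):

```latex
% Two coupled modes at a second-order exceptional point: the effective
% Hamiltonian is defective (only one eigenvector). A perturbation of
% strength \epsilon restores the lower coupling:
\[
H(\epsilon) =
\begin{pmatrix} \omega_0 & A \\ \epsilon & \omega_0 \end{pmatrix},
\qquad A \neq 0,
\qquad
\omega_\pm(\epsilon) = \omega_0 \pm \sqrt{A\epsilon}.
\]
% The induced frequency splitting
\[
\Delta\omega = 2\sqrt{A\epsilon} \;\propto\; \sqrt{\epsilon}
\]
% grows much faster, for small \epsilon, than the linear splitting
% \Delta\omega \propto \epsilon of a conventional (diabolic) degeneracy.
```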
47 CFR 22.591 - Channels for point-to-point operation.
Code of Federal Regulations, 2010 and 2011 CFR
2010-10-01; 2011-10-01
Title 47 (Telecommunication), PUBLIC MOBILE SERVICES, Paging and Radiotelephone Service, Point-To-Point Operation, § 22.591 Channels for point-to-point operation: The following channels are allocated for assignment to fixed transmitters...
OPTIMIZATION OF TREATMENT PLANT OPERATION
A review of the literature on upgrading the operation of wastewater treatment plants covers 61 citations concerning management, operation, maintenance, and training; process control and modelling; instrumentation and automation; and energy savings.
Universally optimal distribution of points on spheres
NASA Astrophysics Data System (ADS)
Cohn, Henry; Kumar, Abhinav
2007-01-01
We study configurations of points on the unit sphere that minimize potential energy for a broad class of potential functions (viewed as functions of the squared Euclidean distance between points). Call a configuration sharp if there are m distances between distinct points in it and it is a spherical (2m-1) -design. We prove that every sharp configuration minimizes potential energy for all completely monotonic potential functions. Examples include the minimal vectors of the E_8 and Leech lattices. We also prove the same result for the vertices of the 600 -cell, which do not form a sharp configuration. For most known cases, we prove that they are the unique global minima for energy, as long as the potential function is strictly completely monotonic. For certain potential functions, some of these configurations were previously analyzed by Yudin, Kolushov, and Andreev; we build on their techniques. We also generalize our results to other compact two-point homogeneous spaces, and we conclude with an extension to Euclidean space.
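The energy framework is easy to experiment with numerically; the snippet below checks that the octahedron (the 6-point cross-polytope, a sharp configuration) beats random 6-point configurations for the Coulomb potential, one instance of the completely monotonic class considered in the paper:

```python
import numpy as np

def energy(points, f=lambda t: 1.0 / np.sqrt(t)):
    """Potential energy of a configuration for a potential f of the squared
    Euclidean distance; here the Coulomb case f(t) = t**-0.5, which is
    completely monotonic."""
    e = 0.0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            e += f(((points[i] - points[j]) ** 2).sum())
    return e

octahedron = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)

rng = np.random.default_rng(3)
best_random = min(
    energy(x / np.linalg.norm(x, axis=1, keepdims=True))
    for x in (rng.normal(size=(6, 3)) for _ in range(200))
)
print(energy(octahedron), best_random)
```

Random search never beats the sharp configuration, consistent with the universal optimality result.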
A Study on Optimal Operation of Power Generation by Waste
NASA Astrophysics Data System (ADS)
Sugahara, Hideo; Aoyagi, Yoshihiro; Kato, Masakazu
This paper proposes the optimal operation of power generation by waste. Refuse is treated as a new biomass energy resource. Although some refuse of fossil-fuel origin, such as plastic, may be mixed in, under the Kyoto Protocol CO2 emissions are counted only for that fossil-origin fraction. Incineration is indispensable for refuse disposal, and power generation by waste is both environment-friendly and power-system-friendly because it uses synchronous generators. Optimal planning is the key to realizing this merit. The optimal plan comprises a refuse incinerator operation plan coupled with refuse collection, and a maintenance schedule for the refuse incinerator plant. Numerical simulations show that the former plan increases the generated energy. For the latter plan, a method to determine the maintenance schedule using a genetic algorithm has been established. In addition, taking the environmental load of CO2 emissions into account, larger merits are expected from the environmental and energy-resource points of view.
Operation Fair Share Points the Way.
ERIC Educational Resources Information Center
Rodgers, Curtis E.
1982-01-01
Through "Operation Fair Share," the NAACP aims at (1) expanded Black access to entry level corporate jobs; (2) establishment of minority vendor procurement programs; (3) appointment of Blacks to the boards of directors of corporations; (4) more Black senior level corporate managers; and (5) legislation permitting contracts to be set aside for…
On the operating point of cortical computation
NASA Astrophysics Data System (ADS)
Martin, Robert; Stimberg, Marcel; Wimmer, Klaus; Obermayer, Klaus
2010-06-01
In this paper, we consider a class of network models of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map, as found in primary visual cortex (V1). We systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input in order to characterize different operating regimes of the network. We then compare the map-location dependence of the tuning in the networks with different parametrizations with the neuronal tuning measured in cat V1 in vivo. By considering the tuning of neuronal dynamic and state variables, conductances and membrane potential respectively, our quantitative analysis is able to constrain the operating regime of V1: The data provide strong evidence for a network, in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition, operating in vivo. Interestingly, this recurrent regime is close to a regime of "instability", characterized by strong, self-sustained activity. The firing rate of neurons in the best-fitting model network is therefore particularly sensitive to small modulations of model parameters, possibly one of the functional benefits of this particular operating regime.
Evaluation of stochastic reservoir operation optimization models
NASA Astrophysics Data System (ADS)
Celeste, Alcigeimes B.; Billib, Max
2009-09-01
This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.
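The SOP benchmark mentioned above is simple enough to state in a few lines; the inflow trace and reservoir parameters below are invented for illustration:

```python
def sop_release(storage, inflow, demand, capacity):
    """Standard operating policy (SOP): release the demand if water is
    available, otherwise release whatever is there; anything above capacity
    spills. A common benchmark rule, as in the paper's comparison."""
    available = storage + inflow
    release = min(demand, available)
    new_storage = min(available - release, capacity)
    spill = max(available - release - capacity, 0.0)
    return release + spill, new_storage

# Toy intermittent-river inflow trace and constant demand (illustrative).
inflows = [0.0, 5.0, 20.0, 0.0, 0.0, 15.0, 0.0, 2.0]
storage, capacity, demand = 10.0, 25.0, 6.0
releases = []
for q in inflows:
    r, storage = sop_release(storage, q, demand, capacity)
    releases.append(r)
print(releases, storage)
```

Hedging rules, by contrast, would deliberately release less than the demand when storage runs low, which is what the paper's two-dimensional hedging policy refines.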
Optimal PGU operation strategy in CHP systems
NASA Astrophysics Data System (ADS)
Yun, Kyungtae
Traditional power plants only utilize about 30 percent of the primary energy that they consume, and the rest of the energy is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emission achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.
Optimization of the bank's operating portfolio
NASA Astrophysics Data System (ADS)
Borodachev, S. M.; Medvedev, M. A.
2016-06-01
The theory of efficient portfolios developed by Markowitz is used to optimize the structure of the types of financial operations of a bank (bank portfolio) in order to increase the profit and reduce the risk. The focus of this paper is to check the stability of the model to errors in the original data.
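The Markowitz machinery behind this can be illustrated with the closed-form global minimum-variance portfolio; the covariance numbers below are made up for the sketch:

```python
import numpy as np

# Illustrative covariance of returns for four types of bank operations
# (all numbers invented; the paper estimates these from the bank's data).
cov = np.array([[0.10, 0.02, 0.01, 0.00],
                [0.02, 0.08, 0.02, 0.01],
                [0.01, 0.02, 0.12, 0.03],
                [0.00, 0.01, 0.03, 0.09]])

# Global minimum-variance weights: w = C^{-1} 1 / (1^T C^{-1} 1).
ones = np.ones(4)
w = np.linalg.solve(cov, ones)
w /= w.sum()
print(w, w @ cov @ w)
```

Adding a target-return constraint turns this into the full efficient-frontier problem; checking how w shifts when entries of cov are perturbed is one way to probe the stability question the abstract raises.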
Optimization of ejector design and operation
NASA Astrophysics Data System (ADS)
Kuzmenko, Konstantin; Yurchenko, Nina; Vynogradskyy, Pavlo; Paramonov, Yuriy
2016-03-01
The investigation aims at optimizing gas ejector operation. The goal is to improve the inflator design so as to enable inflation of 50 liters of gas within ~30 milliseconds. For that purpose, an experimental facility and a measurement system were developed and fabricated to study pressure patterns in the inflator path.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
From the Federal Register Online via the Government Publishing Office. NUCLEAR REGULATORY COMMISSION: Entergy Nuclear Indian Point 3, LLC; Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3; Exemption. 1.0 Background: Entergy Nuclear Operations, Inc. (Entergy or the licensee) is the holder of Facility Operating License No....
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2010-2014 CFR
2010-10-01 through 2014-10-01
Title 47 (Telecommunication), FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), SAFETY AND SPECIAL RADIO SERVICES, FIXED MICROWAVE SERVICES, Technical Standards, § 101.137 Interconnection of private operational fixed point-to-point microwave stations....
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-27
From the Federal Register Online via the Government Publishing Office. NUCLEAR REGULATORY COMMISSION: Entergy Nuclear Indian Point 2, LLC; Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit No. 2, Request for Action. AGENCY: Nuclear Regulatory Commission. ACTION: Request for...
Optimizing robot placement for visit-point tasks
Hwang, Y.K.; Watterberg, P.A.
1996-06-01
We present a manipulator placement algorithm for minimizing the length of the manipulator motion performing a visit-point task such as spot welding. Given a set of points for the tool of a manipulator to visit, our algorithm finds the shortest robot motion required to visit the points from each possible base configuration. The base configuration resulting in the shortest motion is selected as the optimal robot placement. The shortest robot motion required for visiting multiple points from a given base configuration is computed using a variant of the traveling salesman algorithm in the robot joint space and a point-to-point path planner that plans collision-free robot paths between two configurations. Our robot placement algorithm is expected to reduce the robot cycle time during visit-point tasks, as well as to speed up the robot set-up process when building a manufacturing line.
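A straight-line caricature of the placement search, solving the TSP exactly by enumeration over a few visit points; the paper instead works in joint space with a collision-free path planner, and all coordinates below are invented:

```python
import numpy as np
from itertools import permutations

visit_points = np.array([[1.0, 2.0], [2.5, 0.5], [3.0, 2.5], [1.5, 3.5]])
candidate_bases = np.array([[0.0, 0.0], [2.0, 2.0], [5.0, 5.0]])

def tour_length(base, order):
    path = np.vstack([base, visit_points[list(order)], base])
    return float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())

def best_placement():
    """Pick the base minimizing the shortest visit tour. With four points the
    TSP is solved exactly by enumeration; the paper uses a TSP variant in
    joint space plus a collision-free point-to-point planner, so the
    straight-line distances here are only a stand-in."""
    best = None
    for base in candidate_bases:
        length = min(tour_length(base, p) for p in permutations(range(4)))
        if best is None or length < best[0]:
            best = (length, base)
    return best

best_len, best_base = best_placement()
print(best_len, best_base)
```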
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
Radar antenna pointing for optimized signal to noise ratio.
Doerry, Armin Walter; Marquette, Brandeis
2013-01-01
The Signal-to-Noise Ratio (SNR) of a radar echo signal will vary across a range swath, due to spherical wavefront spreading, atmospheric attenuation, and antenna beam illumination. The antenna beam illumination will depend on antenna pointing. Calculations of geometry are complicated by the curved earth, and atmospheric refraction. This report investigates optimizing antenna pointing to maximize the minimum SNR across the range swath.
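The min-SNR pointing optimization can be sketched with a toy SNR model, a Gaussian beam roll-off plus a spreading-loss term; all constants are invented, and the report's curved-earth and atmospheric effects are omitted:

```python
import numpy as np

def snr_db(look_angles_deg, point_deg, beamwidth_deg=3.0):
    """Toy SNR model across a range swath: a loss term that grows with look
    angle (standing in for spherical spreading over longer slant ranges) and
    a Gaussian antenna pattern rolling off away from the pointing direction.
    Constants are illustrative only."""
    rng_loss = 30.0 * np.log10(1.0 / np.cos(np.radians(look_angles_deg)))
    beam = -12.0 * ((look_angles_deg - point_deg) / beamwidth_deg) ** 2
    return 40.0 - rng_loss + beam

swath = np.linspace(30.0, 40.0, 201)          # look angles across the swath
candidates = np.linspace(30.0, 40.0, 401)     # candidate pointing directions
worst = [snr_db(swath, p).min() for p in candidates]
best_point = candidates[int(np.argmax(worst))]
print(best_point, max(worst))
```

The optimizer biases the pointing slightly past mid-swath toward the far edge, compensating the extra spreading loss there, which is the qualitative behavior the report quantifies.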
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance are used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
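The metric is straightforward to compute once the data are split; the sketch below uses an ordinary linear model and invented data, with PRESS residuals obtained from the hat-matrix identity:

```python
import numpy as np

def search_metric(X, y, Xc, yc):
    """Search metric as described above: the larger of (i) the standard
    deviation of the PRESS (leave-one-out) residuals of the fitting points
    and (ii) the standard deviation of the response residuals at the
    confirmation points. Data and model here are illustrative."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    H = X @ np.linalg.pinv(X.T @ X) @ X.T            # hat matrix
    press = (y - X @ beta) / (1.0 - np.diag(H))      # leave-one-out residuals
    conf = yc - Xc @ beta                            # confirmation residuals
    return max(press.std(ddof=0), conf.std(ddof=0))

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 40)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, 40)
X = np.column_stack([np.ones(40), x])                # candidate terms: 1, x
metric = search_metric(X[:30], y[:30], X[30:], y[30:])
print(metric)
```

During term selection, the combination with the smallest metric value would be preferred, penalizing models that fit the data points but miss the confirmation points.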
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
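The sample-average-approximation idea, optimizing over a finite sample of scenarios and then checking candidates on fresh scenarios, can be sketched on a one-variable toy scheduling decision; this is not the paper's job-shop model, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def cost(buffer, delays):
    """One-variable surrogate: schedule `buffer` minutes between two
    operations; delays beyond the buffer incur a heavy penalty, while idle
    buffer carries a small cost. Purely illustrative of sample average
    approximation (SAA), not the paper's formulation."""
    late = np.maximum(delays - buffer, 0.0)
    return buffer * 1.0 + np.mean(10.0 * late)

candidates = np.arange(0.0, 30.0, 0.5)
train = rng.gamma(shape=2.0, scale=3.0, size=200)     # sampled delay scenarios
saa_values = [cost(b, train) for b in candidates]
best = candidates[int(np.argmin(saa_values))]

# Out-of-sample check on fresh scenarios, in the spirit of the paper's
# statistical bounds: the in-sample SAA value is optimistically biased.
test_scen = rng.gamma(shape=2.0, scale=3.0, size=10000)
print(best, min(saa_values), cost(best, test_scen))
```

The gap between the in-sample and out-of-sample values shrinks as the scenario count grows, which is what the paper's analysis of sample size quantifies.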
Planning time-optimal robotic manipulator motions and work places for point-to-point tasks
NASA Technical Reports Server (NTRS)
Dubowsky, S.; Blubaugh, T. D.
1989-01-01
A method is presented which combines simple time-optimal motions in an optimal manner to yield the minimum-time motions for an important class of complex manipulator tasks composed of point-to-point moves such as assembly, electronic component insertion, and spot welding. This method can also be used to design manipulator actions and work places so that tasks can be completed in minimum time. The method has been implemented in a computer-aided design software system. Several examples are presented. Experimental results show the method's validity and utility.
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
An Optimization Study of Hot Stamping Operation
NASA Astrophysics Data System (ADS)
Ghoo, Bonyoung; Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu; Averill, Ron
2010-06-01
In the present study, three-dimensional finite element analyses of hot-stamping processes for an Audi B-pillar product are conducted using JSTAMP/NV and HEEDS. Special attention is paid to optimizing the simulation technology coupled with thermal-mechanical formulations. Numerical simulation based on FEM and design optimization using the hybrid adaptive SHERPA algorithm are applied to the hot stamping operation to improve productivity. The robustness of the SHERPA algorithm is demonstrated by the results of the benchmark example. The SHERPA algorithm proves far superior to the genetic algorithm (GA) in terms of efficiency: its calculation time is about 7 times shorter than that of the GA. The SHERPA algorithm thus shows high performance on a large-scale problem with a complicated design space and long calculation time.
Optimal entanglement generation from quantum operations
Leifer, M.S.; Henderson, L.; Linden, N.
2003-01-01
We consider how much entanglement can be produced by a nonlocal two-qubit unitary operation U_AB, i.e., the entangling capacity of U_AB. For a single application of U_AB, with no ancillas, we find the entangling capacity and show that it generally helps to act with U_AB on an entangled state. Allowing ancillas, we present numerical results from which we can conclude, quite generally, that allowing initial entanglement typically increases the optimal capacity in this case as well. Next, we show that allowing collective processing does not increase the entangling capacity if initial entanglement is allowed.
On Motivating Operations at the Point of Online Purchase Setting
ERIC Educational Resources Information Center
Fagerstrom, Asle; Arntzen, Erik
2013-01-01
Consumer behavior analysis can be applied over a wide range of economic topics in which the main focus is the contingencies that influence the behavior of the economic agent. This paper provides an overview of the work that has been done on the impact of motivating operations at the point of online purchase. Motivating operations, a…
Optimal Hedging Rule for Reservoir Refill Operation
NASA Astrophysics Data System (ADS)
Wan, W.; Zhao, J.; Lund, J. R.; Zhao, T.; Lei, X.; Wang, H.
2015-12-01
This paper develops an optimal reservoir Refill Hedging Rule (RHR) for combined water supply and flood operation using mathematical analysis. A two-stage model is developed to formulate the trade-off between operations for conservation benefit and flood damage in the reservoir refill season. Based on the probability distribution of the maximum refill water availability at the end of the second stage, three zones are characterized according to the relationship among storage capacity, expected storage buffer (ESB), and maximum safety excess discharge (MSED). The Karush-Kuhn-Tucker conditions of the model show that optimal refill operation makes the expected marginal loss of conservation benefit from unfilling (i.e., ending storage of the refill period less than storage capacity) as nearly equal to the expected marginal flood damage from levee overtopping downstream as possible while maintaining all constraints. This principle follows and combines the hedging rules for water supply and flood management. A RHR curve is drawn analogously to the water supply and flood hedging rules, showing the trade-off between the two objectives. The release decision has a linear relationship with the current water availability, implying the linearity of the RHR for a wide range of water conservation functions (linear, concave, or convex). A demonstration case shows the impacts of the relevant factors. Larger downstream flood conveyance capacity and empty reservoir capacity allow a smaller current release, so more water can be conserved. The economic indicators of conservation benefit and flood damage compete with each other on release: the greater the economic importance of flood damage, the more water should be released in the current stage, and vice versa. Below a critical value, improving forecasts yields less water release, but an opposing effect occurs beyond this critical value. Finally, the Danjiangkou Reservoir case study shows that the RHR together with a rolling
Multiple tipping points and optimal repairing in interacting networks
Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo
2016-01-01
Systems composed of many interacting dynamical networks—such as the human body with its biological networks or the global economic network consisting of regional clusters—often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two ‘forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803
CMB Polarization Detector Operating Parameter Optimization
NASA Astrophysics Data System (ADS)
Randle, Kirsten; Chuss, David; Rostem, Karwan; Wollack, Ed
2015-04-01
Examining the polarization of the Cosmic Microwave Background (CMB) provides the only known way to probe the physics of inflation in the early universe. Gravitational waves produced during inflation are posited to produce a telltale pattern of polarization on the CMB which, if measured, would provide both tangible evidence for inflation and a measurement of inflation's energy scale. Leading the effort to detect and measure this phenomenon, Goddard Space Flight Center has been developing high-efficiency detectors. In order to optimize signal-to-noise ratios, noise sources such as the atmosphere and the instrumentation must be considered. In this work we examine operating parameters of these detectors such as optical power loading and photon noise. SPS Summer Internship at NASA Goddard Space Flight Center.
Detector characterization, optimization, and operation for ACTPol
NASA Astrophysics Data System (ADS)
Grace, Emily Ann
2016-01-01
Measurements of the temperature anisotropies of the Cosmic Microwave Background (CMB) have provided the foundation for much of our current knowledge of cosmology. Observations of the polarization of the CMB have already begun to build on this foundation and promise to illuminate open cosmological questions regarding the first moments of the universe and the properties of dark energy. The primary CMB polarization signal contains the signature of early universe physics including the possible imprint of inflationary gravitational waves, while a secondary signal arises due to late-time interactions of CMB photons which encode information about the formation and evolution of structure in the universe. The Atacama Cosmology Telescope Polarimeter (ACTPol), located at an elevation of 5200 meters in Chile and currently in its third season of observing, is designed to probe these signals with measurements of the CMB in both temperature and polarization from arcminute to degree scales. To measure the faint CMB polarization signal, ACTPol employs large, kilo-pixel detector arrays of transition edge sensor (TES) bolometers, which are cooled to a 100 mK operating temperature with a dilution refrigerator. Three such arrays are currently deployed, two with sensitivity to 150 GHz radiation and one dichroic array with 90 GHz and 150 GHz sensitivity. The operation of these large, monolithic detector arrays presents a number of challenges for both assembly and characterization. This thesis describes the design and assembly of the ACTPol polarimeter arrays and outlines techniques for their rapid characterization. These methods are employed to optimize the design and operating conditions of the detectors, select wafers for deployment, and evaluate the baseline array performance. The results of the application of these techniques to wafers from all three ACTPol arrays are described, including discussion of the measured thermal properties and time constants. Finally, aspects of the
Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm
Terzic, Balsa; Kramer, Matthew; Jarvis, Colin
2011-03-01
The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point: the betatron and synchrotron tunes of the two colliding beams. Therefore, careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.
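The abstract does not spell out the algorithm's details; a generic evolutionary search over a two-dimensional tune space might look like the sketch below, where the objective function is purely illustrative (it penalizes proximity to two low-order resonance conditions) and is not the MEIC dynamics model:

```python
import random

# Generic evolutionary search for a 2-D "working point" (nu_x, nu_y).
# Illustrative objective: penalize proximity to the coupling resonance
# nu_x = nu_y and to the half-integer sum resonance nu_x + nu_y = k + 1/2.
def objective(nu):
    nu_x, nu_y = nu
    return (1.0 / (1e-6 + abs(nu_x - nu_y))
            + 1.0 / (1e-6 + abs((nu_x + nu_y) % 1.0 - 0.5)))

def evolve(pop_size=20, generations=100, sigma=0.02, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = [(p[0] + rng.gauss(0, sigma),
                     p[1] + rng.gauss(0, sigma)) for p in parents]
        children = [(x % 1.0, y % 1.0) for x, y in children]  # keep fractional tunes
        pop = parents + children
    return min(pop, key=objective)

best = evolve()
```

A real tune search would replace `objective` with a tracking- or model-based figure of merit (dynamic aperture, beam-beam stability), but the selection/mutation loop has the same shape.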
Automatic parameter optimizer (APO) for multiple-point statistics
NASA Astrophysics Data System (ADS)
Bani Najar, Ehsanollah; Sharghi, Yousef; Mariethoz, Gregoire
2016-04-01
Multiple-point statistics (MPS) has gained popularity in recent years for generating stochastic realizations of complex natural processes. The main principle is that a training image (TI) is used to represent the spatial patterns to be modeled. One important feature of MPS is that the spatial model of the generated fields is made of (1) the chosen TI and (2) a set of algorithmic parameters that are specific to each MPS algorithm. While the choice of a training image can be guided by expert knowledge (e.g., for geological modeling) or by data acquisition methods (e.g., remote sensing), determining the algorithmic parameters can be more challenging. To date, only specific guidelines have been proposed for some simulation methods, and a general parameter inference methodology is still lacking, in particular for complex modeling settings such as when using multivariate training images. The common practice consists of carrying out an extensive parameter sensitivity analysis, which can be cumbersome. An additional complexity is that the algorithmic parameters do influence CPU cost, so finding optimal parameters is not only a modeling question but also a computational challenge. To overcome these issues, we propose the automatic parameter optimizer (MPS-APO), a generic method based on stochastic optimization to rapidly determine acceptable parameters, in different settings and for any MPS method. The MPS automatic parameter optimizer proceeds in a two-step approach. In the first step, it considers the set of input parameters of a given MPS algorithm and formulates an objective function that quantifies the reproduction of spatial patterns. The Simultaneous Perturbation Stochastic Approximation (SPSA) optimization method is used to minimize the objective function. SPSA is chosen because it is able to deal with the stochastic nature of the objective function and for its computational efficiency. At each iteration, small gaps are randomly placed in the input image
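SPSA's appeal for this task is that its gradient estimate needs only two objective evaluations per iteration, regardless of the number of parameters. A minimal sketch on a stand-in quadratic objective (the gain constants and the objective are illustrative, not MPS-APO's actual settings):

```python
import random

# Minimal SPSA sketch: perturb all parameters simultaneously with a
# random +-1 (Rademacher) vector, estimate the gradient from just two
# objective evaluations, and take a decaying gradient step.
def spsa(f, theta, iters=300, a=0.1, c=0.1, seed=0):
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                       # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (f(plus) - f(minus)) / (2.0 * ck)     # directional difference
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Stand-in for a noisy pattern-reproduction score; minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
theta = spsa(f, [5.0, 5.0])
```

In MPS-APO the objective would instead score how well simulated patterns reproduce the training image, which is stochastic and expensive, exactly the regime SPSA targets.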
Optimal periodic controller for formation flying on libration point orbits
NASA Astrophysics Data System (ADS)
Peng, Haijun; Zhao, Jun; Wu, Zhigang; Zhong, Wanxie
2011-09-01
An optimal periodic controller based on continuous low thrust is proposed for the station-keeping and formation-keeping of spacecraft along periodic libration point orbits of the Sun-Earth system. Additionally, a new numerical algorithm is proposed for solving the periodic Riccati differential equation in the design of the optimal periodic controller. Mission studies show that the optimal periodic controller (which is designed with the linear periodic time-varying equation of the relative dynamical model) overcomes the problems and limitations of the time-variant LQR controller. Furthermore, nonlinear numerical simulations are presented for the missions of one leader spacecraft station-keeping and three follower spacecraft formation-keeping. Numerical simulations show that the velocity increments for spacecraft control and the relative position errors vary little with changes in the altitude of the periodic orbits. In addition, the actual trajectories of the leader and follower spacecraft track the periodic reference orbit with high accuracy under the perturbation of the eccentric nature of the Earth's orbit and initial injection errors. In particular, the relative position errors obtained by the optimal periodic controller for spacecraft formation-keeping are all in the range of millimeters.
Optimal periodic control for spacecraft pointing and attitude determination
NASA Technical Reports Server (NTRS)
Pittelkau, Mark E.
1993-01-01
A new approach to autonomous magnetic roll/yaw control of polar-orbiting, nadir-pointing momentum bias spacecraft is considered as the baseline attitude control system for the next Tiros series. It is shown that the roll/yaw dynamics with magnetic control are periodically time varying. An optimal periodic control law is then developed. The control design features a state estimator that estimates attitude, attitude rate, and environmental torque disturbances from Earth sensor and sun sensor measurements; no gyros are needed. The state estimator doubles as a dynamic attitude determination and prediction function. In addition to improved performance, the optimal controller allows a much smaller momentum bias than would otherwise be necessary. Simulation results are given.
Improving Small Signal Stability through Operating Point Adjustment
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Mittelstadt, William; Hauer, John F.; Dagle, Jeffery E.
2010-09-30
ModeMeter techniques for real-time small signal stability monitoring continue to mature, and more and more phasor measurements are available in power systems. The time has come to bring modal information into real-time power system operation. This paper proposes to establish a procedure for Modal Analysis for Grid Operations (MANGO). Complementary to PSSs and other traditional modulation-based controls, MANGO aims to provide suggestions, such as increasing generation or decreasing load, for operators to mitigate low-frequency oscillations. Unlike modulation-based control, the MANGO procedure proactively maintains adequate damping at all times, instead of reacting to disturbances when they occur. The effect of operating points on small signal stability is presented in this paper. Implementation alongside existing operating procedures is discussed. Several approaches for modal sensitivity estimation are investigated to associate modal damping with operating parameters. The effectiveness of the MANGO procedure is confirmed through simulation studies of several test systems.
Stress-Based Crossover Operator for Structural Topology Optimization
NASA Astrophysics Data System (ADS)
Li, Cuimin; Hiroyasu, Tomoyuki; Miki, Mitsunori
In this paper, we propose a stress-based crossover (SX) operator to address the checkerboard-like material distribution and disconnected topologies that are common when a simple genetic algorithm (SGA) is applied to structural topology optimization problems (STOPs). A penalty function is defined to evaluate the fitness of each individual. A number of constrained problems are adopted to test the effectiveness of SX for STOPs. Comparison of two-point crossover (2X) with SX indicates that SX can markedly suppress the checkerboard-like material distribution phenomenon. Comparison of evolutionary structural optimization (ESO) and SX demonstrates the global search ability and flexibility of SX. An experiment on a Michell-type problem verifies the effectiveness of SX for STOPs. For a multi-loaded problem, SX finds alternative solutions with the same parameters, which shows the global search ability of the GA.
Attitude Control Optimization for ROCSAT-2 Operation
NASA Astrophysics Data System (ADS)
Chern, Jeng-Shing; Wu, A.-M.
The second satellite of the Republic of China is named ROCSAT-2. It is a small satellite with a total mass of 750 kg for remote sensing and scientific purposes. The Remote Sensing Instrument (RSI) has resolutions of 2 m for the panchromatic band and 8 m for the multi-spectral bands. It is mainly designed for disaster monitoring and rescue, environment and pollution monitoring, forest and agriculture planning, city and country planning, etc., for Taiwan and its surrounding islands and oceans. In order to monitor the Taiwan area constantly for a long time, the orbit is designed to be sun-synchronous with 14 revolutions per day. The scientific payload is the Imager of Sprites and Upper Atmospheric Lightning (ISUAL). Since ROCSAT-2 is a small satellite, the RSI, ISUAL, and solar panel are all body-fixed. Consequently, the satellite has to maneuver as a whole body so that the RSI, ISUAL, or solar panel can point in the desired direction. When ROCSAT-2 rises above the horizon and catches the sunlight, it has to maneuver to face the sun so the battery can be charged. As soon as it flies over the Taiwan area, several maneuvers must be made to cover the whole area for the remote sensing mission. Since the swath of ROCSAT-2 is 24 km, four stripes are needed to form the mosaic of the Taiwan area. Usually, four maneuvers are required to fulfill the mission in one flight path. The sequence is very important from the point of view of saving energy. However, in some cases, we may need to sacrifice energy in order to obtain good remote sensing data over a particularly specified ground region. After that mission, the solar panel has to face the sun again. Then, when ROCSAT-2 sets below the horizon, it has to maneuver to point the ISUAL in the specified direction for the sprite imaging mission: the direction where scientists predict sprites are most probable to exist. A further maneuver may be required for downloading onboard data. When ROCSAT-2 rises above the horizon again, it completes
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-01
We propose an approach, of interest in imaging and synthetic aperture radar (SAR) tomography, for the optimal determination of the scanning region dimension, the number of sampling points therein, and their spatial distribution, in the case of single-frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region as a finite-dimensional algebraic linear inverse problem. The dimension of the scanning region and the number and positions of the sampling points are determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution, and dynamic multi-resolution can all be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via numerical and experimental analysis. PMID:24921717
Silver, Gary L
2009-01-01
Equations for interpolating five data points in a rectangular array are seldom encountered in textbooks. This paper describes a new method that renders polynomial and exponential equations for the design. Operational center-point estimators are often more resistant to the effects of an outlying datum than the mean.
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes the three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy the three different objectives. The complex nonlinear optimization problem was solved using the Nelder-Mead simplex optimization algorithm. A sensitivity analysis was performed to identify influential operational parameters on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. PMID:26292772
Fixed-Point Optimization of Atoms and Density in DFT.
Marks, L D
2013-06-11
I describe an algorithm for simultaneous fixed-point optimization (mixing) of the density and atomic positions in Density Functional Theory calculations which is approximately twice as fast as conventional methods, is robust, and requires minimal to no user intervention or input. The underlying numerical algorithm differs from ones previously proposed in a number of aspects and is an autoadaptive hybrid of standard Broyden methods. To understand how the algorithm works in terms of the underlying quantum mechanics, the concept of algorithmic greed for different Broyden methods is introduced, leading to the conclusion that the first Broyden method is optimal if a linear model holds, and the second if a linear model is a poor approximation. How this relates to the algorithm is discussed in terms of electronic phase transitions during a self-consistent run, which result in discontinuous changes in the Jacobian. This leads to the need for a nongreedy algorithm when the charge density crosses phase boundaries, as well as a greedy algorithm within a given phase. An ansatz for selecting the algorithm structure is introduced based upon requiring the extrapolated component of the curvature condition to have projected positive eigenvalues. The general convergence of the fixed-point methods is briefly discussed in terms of the dielectric response and elastic waves using known results for quasi-Newton methods. The analysis indicates that both should show sublinear dependence on system size, depending more upon the number of different chemical environments than upon the number of atoms, consistent with the performance of the algorithm and prior literature. This is followed by details of the algorithm, ranging from preconditioning to trust region control. A number of results are shown, finishing up with a discussion of some of the many open questions. PMID:26583869
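Density mixing is a fixed-point iteration x = F(x); before any Broyden acceleration, the baseline is plain linear (damped) mixing. A sketch on a toy contraction map (the map and mixing parameter are illustrative, not a real Kohn-Sham calculation):

```python
# Fixed-point (mixing) iteration sketch. DFT self-consistency solves
# x = F(x) for the density x; the simplest scheme is linear mixing,
#   x_{k+1} = x_k + alpha * (F(x_k) - x_k).
# Broyden methods accelerate this by building up an approximate Jacobian
# of the residual from iteration history; only the baseline is shown here.

def linear_mixing(F, x0, alpha=0.5, tol=1e-10, max_iter=500):
    x = list(x0)
    for k in range(max_iter):
        r = [fi - xi for fi, xi in zip(F(x), x)]        # residual F(x) - x
        if max(abs(ri) for ri in r) < tol:
            return x, k
        x = [xi + alpha * ri for xi, ri in zip(x, r)]   # damped update
    return x, max_iter

# Toy contraction standing in for the Kohn-Sham map; fixed point (2, 1).
F = lambda x: [0.5 * x[0] + 1.0, 0.25 * x[1] + 0.75]
x, iters = linear_mixing(F, [0.0, 0.0])
```

The slow geometric convergence of linear mixing on ill-conditioned maps is precisely what the paper's autoadaptive Broyden hybrid is designed to overcome.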
Process Parameters Optimization in Single Point Incremental Forming
NASA Astrophysics Data System (ADS)
Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh
2016-04-01
This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of the degrees of freedom (DOF). The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The influential process parameters for formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter that has the utmost influence on both formability and surface roughness is lubrication. For formability, lubrication is followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius in descending order of influence, whereas for surface roughness, lubrication is followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm, respectively.
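Taguchi main-effect analysis reduces to averaging the response over the runs at each factor level and picking the best level per factor. A sketch with a toy L4(2^3) array (the study itself uses an L18 array; the response values below are invented for illustration):

```python
# Main-effect analysis on an orthogonal array: for each factor, average
# the measured response over the runs at each level; the level with the
# best average is the Taguchi optimum. Toy L4(2^3) array; values invented.

runs = [  # (levels of factors A, B, C) -> measured surface roughness (um)
    ((1, 1, 1), 1.40),
    ((1, 2, 2), 1.10),
    ((2, 1, 2), 1.25),
    ((2, 2, 1), 1.05),
]

def main_effects(runs, n_factors=3):
    effects = []
    for f in range(n_factors):
        levels = {}
        for setting, y in runs:
            levels.setdefault(setting[f], []).append(y)
        effects.append({lvl: sum(ys) / len(ys) for lvl, ys in levels.items()})
    return effects

effects = main_effects(runs)
best = [min(e, key=e.get) for e in effects]   # lowest mean roughness per factor
```

The spread between level averages (the "delta" in a response table) ranks factor influence, which is how the paper orders lubrication, feed rate, and the other parameters.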
Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors
NASA Astrophysics Data System (ADS)
Tun, Min Thaw; Sakaguchi, Daisaku
2016-06-01
A high pressure ratio and a wide operating range are required of turbochargers for diesel engines. A recirculation flow type casing treatment is effective for enhancing the flow range of centrifugal compressors. Two ring grooves, on a suction pipe and on the shroud casing wall, are connected by an annular passage, and a stable recirculation flow forms at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design with improved adiabatic efficiency over a wide range of operating flow rates. A sensitivity analysis of the design parameters with respect to efficiency has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, for which the entropy rise is minimized at the grooves and in the passages of the rotating impeller.
24 CFR 902.47 - Management operations portion of total PHAS points.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Operations § 902.47 Management operations portion of total PHAS points. Of the total 100 points available for a PHAS score, a PHA may receive up to 30 points based on the Management Operations Indicator....
Hill, R.C.
1998-07-01
Precise orientation control of the International Space Station (ISS) Electrical Power System (EPS) photovoltaic (PV) solar arrays is required for a number of reasons, including the optimization of power delivery to ISS system loads and payloads. To maximize power generation and delivery in general, the PV arrays are pointed directly at the sun with some allowance for inaccuracies in determination of where to point and in the actuation of pointing the PV arrays. Control of PV array orientation in this sun pointing mode is performed automatically by onboard hardware and software. During certain conditions, maximum power cannot be generated in automatic sun tracking mode due to shadowing of the PV arrays cast by other ISS structures, primarily adjacent PV arrays. In order to maximize the power generated, the PV arrays must be pointed away from the ideal sun pointing targets to reduce the amount of shadowing. The amount of off-pointing to maximize power is a function of many parameters such as the physical configuration of the ISS structures during the assembly timeframe, the solar beta angle and vehicle attitude. Thus the off-pointing cannot be controlled automatically and must be determined by ground operators. This paper presents an overview of ISS PV array orientation control, PV array power performance under shadowed and off-pointing conditions, and a methodology to maximize power under those same conditions.
Charcoal bed operation for optimal organic carbon removal
Merritt, C.M.; Scala, F.R.
1995-05-01
Historically, evaporation, reverse osmosis, or charcoal-demineralizer systems have been used to remove impurities in liquid radwaste processing systems. At Nine Mile Point, we recently replaced our evaporators with charcoal-demineralizer systems to purify floor drain water. A comparison of the evaporator with the charcoal-demineralizer system has shown that the charcoal-demineralizer system is more effective at organic carbon removal. We also show performance data for the Granulated Activated Charcoal (GAC) vessel as a mechanical filter. Actual data show that frequent backflushing and controlled flow rates through the GAC vessel dramatically increase Total Organic Carbon (TOC) removal efficiency. Recommendations are provided for operating the GAC vessel to ensure optimal performance.
Point-of-care testing in the cardiovascular operating theatre.
Nydegger, Urs E; Gygax, Erich; Carrel, Thierry
2006-01-01
Point-of-care testing (POCT) remains under scrutiny by healthcare professionals because of its short, ill-tried history. POCT methods are being developed by a few major equipment companies, based on rapid progress in informatics and nanotechnology. Issues such as POCT quality control, comparability with standard laboratory procedures, standardisation, traceability and round-robin testing are being left to hospitals. As a result, the clinical and operational benefits of POCT were first evident for patients on the operating table. For the management of cardiovascular surgery patients, POCT technology is an indispensable aid. Improvement of the technology has meant that clinical laboratory pathologists now recognise the need for POCT beyond their high-throughput areas. PMID:16958595
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-25
Department of Homeland Security, Coast Guard, 33 CFR Part 117, RIN 1625-AA09: Drawbridge Operation Regulations; Great Egg Harbor Bay. Concerns the bridge over Great Egg Harbor Bay, at mile 3.5, between Beesleys Point and Somers Point, NJ; an earlier notice appeared in the Federal Register (74 FR 30031), and two comments were received.
NASA Technical Reports Server (NTRS)
Brown, Jonathan M.; Petersen, Jeremy D.
2014-01-01
NASA's WIND mission has been operating in a large-amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically, these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy, which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed, and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.
Optimizing Synchronization Operations for Remote Memory Communication Systems
Buntinas, Darius; Saify, Amina; Panda, Dhabaleswar K.; Nieplocha, Jarek; Bob Werner
2003-04-22
Synchronization operations, such as fence and locking, are used in many parallel operations accessing shared memory. However, a process which is blocked waiting for a fence operation to complete, or for a lock to be acquired, cannot perform useful computation. It is therefore critical that these operations be implemented as efficiently as possible to reduce the time a process waits idle. These operations also impact the scalability of the overall system. As system sizes get larger, the number of processes potentially requesting a lock increases. In this paper we describe the design and implementation of an optimized operation which combines a global fence operation and a barrier synchronization operation. We also describe our implementation of an optimized lock algorithm. The optimizations have been incorporated into the ARMCI communication library. The combined global fence and barrier operation gives a factor of improvement of up to 9 over the current implementation on a 16-node system, while the optimized lock implementation gives up to a factor of 1.25 improvement. These optimizations allow for more efficient and scalable applications.
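The fence-plus-barrier combination can be sketched as a threading analogy. This is an illustrative sketch, not the ARMCI implementation: each worker first drains its own queue of pending remote operations (the fence), then meets the others at a barrier, so one combined call provides both completion and synchronization semantics.

```python
import threading

# Illustrative analogy (not ARMCI code): each worker drains its pending
# remote operations -- the "fence" -- then meets the others at a barrier,
# so a single combined call gives both completion and synchronization.
N = 4
results = []
barrier = threading.Barrier(N)

def fence_and_barrier(worker_id, pending_ops):
    for op in pending_ops:   # fence: wait until outstanding ops complete
        op()
    barrier.wait()           # barrier: no worker proceeds until all have fenced
    results.append(worker_id)

threads = [threading.Thread(target=fence_and_barrier,
                            args=(i, [lambda: None] * 3))
           for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))       # -> [0, 1, 2, 3]: all workers passed the combined call
```

Fusing the two steps into one call is what saves latency: the barrier's communication round can be overlapped with fence completion instead of paid separately.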
FCCU operating changes optimize octane catalyst use
Desai, P.H.
1986-09-01
The use of octane-enhancing catalysts in a fluid catalytic cracking unit (FCCU) requires changes in the operation of the unit to derive maximum benefits from the octane catalyst. In addition to the impressive octane gain achieved by the octane catalyst, the catalyst also affects the yield structure, the unit heat balance, and the product slate by reducing hydrogen transfer reactions. Catalyst manufacturers have introduced new product lines based upon ultrastable Y type (USY) zeolites which can result in 2 to 3 research octane number (RON) gains over the more traditional rare earth exchanged Y type (REY) zeolites. Here are some operating techniques for the FCCU and associated processes that will allow maximum benefits from octane catalyst use.
How beam driven operations optimize ALICE efficiency and safety
NASA Astrophysics Data System (ADS)
Pinazza, Ombretta; Augustinus, André; Bond, Peter M.; Chochula, Peter C.; Kurepin, Alexander N.; Lechman, Mateusz; Rosinsky, Peter
2012-12-01
ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE DCS is responsible for the coordination and monitoring of the various detectors and of central systems, and for collecting and managing alarms, data, and commands. Furthermore, it is the central tool for monitoring and verifying the beam status, with special emphasis on safety. In particular, it is important to ensure that the experiment's detectors are brought to and stay in a safe state, e.g. reduced voltages during the injection, acceleration, and adjust phases of the LHC beams. Thanks to its central role, it is the appropriate system in which to implement automatic actions that were previously left to the initiative of the shift leader; decisions are derived from knowledge of the detectors' statuses and of the beam, combined to fulfil the scientific requirements while keeping safety as a priority in all cases. This paper shows how the central DCS interprets daily operations from a beam-driven point of view. A tool is being implemented in which automatic actions can be set and monitored through expert panels, with a configurable level of automation. Some routine operations are already automated when the LHC declares a particular beam mode that could represent a safety concern. This beam-driven approach is proving to be a tool for the shift crew to optimize the efficiency of data taking while improving the safety of the experiment.
Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry
NASA Astrophysics Data System (ADS)
Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun
2015-10-01
Beam pointing angle (BPA) is one of the key parameters affecting the operational performance of a laser Doppler velocimetry (LDV) system. By considering velocity sensitivity and echo power together, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming a mounting error within ±1.0 deg, and with reflectivity and roughness varying across scenarios, the optimized BPA is obtained in the range from 29 to 43 deg. Velocity sensitivity is then in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA is greater than 53.49% of that at 0 deg. Laboratory experiments on a rotating table were performed at BPAs of 10, 35, and 66 deg, and the results agree with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg was conducted against a microwave radar reference (accuracy of ±0.5% full-scale output). The root-mean-square errors were 0.0202 m/s for the LDV and 0.1495 m/s for the Microstar II radar, and the mean velocity discrepancy was 0.032 m/s. It is thus shown that, with the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.
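The sensitivity-versus-echo trade-off behind an optimized BPA can be illustrated with a toy scan. The sin/cos dependences and the 1.55 µm wavelength below are simplifying assumptions for illustration only, not the paper's full reflectivity/roughness model:

```python
import numpy as np

# Toy scan of the trade-off: Doppler sensitivity grows as sin(theta) while
# the collected echo power falls roughly as cos(theta). Both dependences
# and the wavelength are assumptions, not the paper's model.
wavelength = 1.55e-6                              # m (assumed)
theta = np.radians(np.arange(1, 90))              # candidate BPAs, 1..89 deg
sensitivity = 2.0 * np.sin(theta) / wavelength    # Hz per (m/s)
echo = np.cos(theta)                              # normalized echo power
score = (sensitivity / sensitivity.max()) * echo  # joint figure of merit
best_deg = float(np.degrees(theta[score.argmax()]))
print(round(best_deg))                            # -> 45 under these toy assumptions
```

A weighted or differently shaped echo model shifts the optimum away from 45 deg, which is why the paper's detailed model lands in the 29 to 43 deg range rather than at the naive sin·cos peak.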
Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations
NASA Technical Reports Server (NTRS)
Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.
2013-01-01
Collinear Earth-Moon libration points have emerged as locations with immediate applications. These libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to the relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission orbit as a platform, the fundamental behavior of the trajectories is explored using Poincare maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.
Constrained genetic algorithms for optimizing multi-use reservoir operation
NASA Astrophysics Data System (ADS)
Chang, Li-Chiu; Chang, Fi-John; Wang, Kuo-Wei; Dai, Shin-Yi
2010-08-01
To derive an optimal strategy for reservoir operations to assist the decision-making process, we propose a methodology that incorporates the constrained genetic algorithm (CGA) where the ecological base flow requirements are considered as constraints to water release of reservoir operation when optimizing the 10-day reservoir storage. Furthermore, a number of penalty functions designed for different types of constraints are integrated into reservoir operational objectives to form the fitness function. To validate the applicability of this proposed methodology for reservoir operations, the Shih-Men Reservoir and its downstream water demands are used as a case study. By implementing the proposed CGA in optimizing the operational performance of the Shih-Men Reservoir for the last 20 years, we find this method provides much better performance in terms of a small generalized shortage index (GSI) for human water demands and greater ecological base flows for most of the years than historical operations do. We demonstrate the CGA approach can significantly improve the efficiency and effectiveness of water supply capability to both human and ecological base flow requirements and thus optimize reservoir operations for multiple water users. The CGA can be a powerful tool in searching for the optimal strategy for multi-use reservoir operations in water resources management.
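The penalty-function idea in the constrained GA can be sketched with a toy release-scheduling problem. Everything below (the demand and base-flow numbers, the quadratic penalty, the simple mutate-and-select loop) is illustrative only, not the Shih-Men Reservoir model:

```python
import random

# Toy constrained GA: choose releases r_t to track a demand, while a large
# quadratic penalty enforces an ecological base flow r_t >= base.
random.seed(0)
demand, base, horizon = 5.0, 6.0, 6   # toy numbers: base flow exceeds demand

def fitness(r):
    shortage = sum((demand - x) ** 2 for x in r)            # supply objective
    penalty = sum(1e3 * (base - x) ** 2 for x in r if x < base)
    return shortage + penalty                               # lower is better

def mutate(r):
    return [max(0.0, x + random.gauss(0, 0.3)) for x in r]

# tiny elitist GA: keep the 10 best schedules, refill by mutating them
pop = [[random.uniform(0, 8) for _ in range(horizon)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
best = min(pop, key=fitness)
print(round(fitness(best), 2))
```

With the penalty weight at 1e3, the evolved releases settle near the base flow of 6 rather than the unconstrained optimum of 5, which is exactly the trade the CGA formalizes between human demands and ecological flows.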
Synergy optimization and operation management on syndicate complementary knowledge cooperation
NASA Astrophysics Data System (ADS)
Tu, Kai-Jan
2014-10-01
The number of multi-enterprise knowledge cooperations has grown steadily as a result of global innovation competition. Based on optimization and operation studies, this article concludes that synergy management is an effective means to break through various management barriers and to resolve the chaotic dynamics of cooperation. Enterprises must communicate a system vision and access complementary knowledge. These are crucial considerations for enterprises seeking to exert the synergy of optimized knowledge-cooperation operations to meet global marketing challenges.
Wang, Jian; Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping
2014-01-01
A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find out a scheduling of operation to minimize the total running cost when the change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180
NASA Astrophysics Data System (ADS)
Bhole, Gaurav; Anjusha, V. S.; Mahesh, T. S.
2016-04-01
A robust control over quantum dynamics is of paramount importance for quantum technologies. Many of the existing control techniques are based on smooth Hamiltonian modulations involving repeated calculations of basic unitaries resulting in time complexities scaling rapidly with the length of the control sequence. Here we show that bang-bang controls need one-time calculation of basic unitaries and hence scale much more efficiently. By employing a global optimization routine such as the genetic algorithm, it is possible to synthesize not only highly intricate unitaries, but also certain nonunitary operations. We demonstrate the unitary control through the implementation of the optimal fixed-point quantum search algorithm in a three-qubit nuclear magnetic resonance (NMR) system. Moreover, by combining the bang-bang pulses with the crusher gradients, we also demonstrate nonunitary transformations of thermal equilibrium states into effective pure states in three- as well as five-qubit NMR systems.
Implementation of an Interior-Point Algorithm for Real-Time Convex Optimization
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Motaghedi, Shui; Carson, John
2007-01-01
The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.
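A minimal log-barrier iteration conveys the interior-point idea. This sketch is not the G-OPT flight code (and is a simple barrier method rather than a true primal-dual scheme); it solves min 0.5·||x − a||² subject to x ≥ 0 with Newton steps on the barrier-augmented objective:

```python
import numpy as np

# Illustrative log-barrier method (not G-OPT): minimize 0.5*||x - a||^2
# subject to x >= 0 via Newton steps on
#   f_t(x) = 0.5*||x - a||^2 - (1/t) * sum(log x),
# increasing the barrier weight t each outer pass.
a = np.array([1.0, -2.0, 0.5])
x = np.ones_like(a)                      # strictly feasible starting point
t = 1.0
for _ in range(30):                      # outer loop: sharpen the barrier
    for _ in range(50):                  # inner loop: Newton iterations
        grad = (x - a) - 1.0 / (t * x)
        hess = 1.0 + 1.0 / (t * x ** 2)  # Hessian is diagonal for this objective
        x = np.maximum(x - grad / hess, 1e-9)  # clamp to stay strictly positive
    t *= 2.0
print(np.round(x, 3))                    # approaches the projection max(a, 0)
```

Each doubling of t roughly halves the remaining gap to the constrained optimum, which is the path-following picture behind the guaranteed, fixed iteration count the abstract mentions.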
NASA Astrophysics Data System (ADS)
Sato, Yuki; Izui, Kazuhiro; Yamada, Takayuki; Nishiwaki, Shinji
2016-07-01
This paper proposes techniques to improve the diversity of the searching points during the optimization process in an Aggregative Gradient-based Multiobjective Optimization (AGMO) method, so that well-distributed Pareto solutions are obtained. First to be discussed is a distance constraint technique, applied among searching points in the objective space when updating design variables, that maintains a minimum distance between the points. Next, a scheme is introduced that deals with updated points that violate the distance constraint, by deleting the offending points and introducing new points in areas of the objective space where searching points are sparsely distributed. Finally, the proposed method is applied to example problems to illustrate its effectiveness.
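The distance-constraint and re-seeding steps can be sketched in a few lines. The greedy filter and the Gaussian re-seeding rule below are simplifications assumed for illustration, not the AGMO update itself:

```python
import numpy as np

# Sketch of the diversity-maintenance idea: drop searching points in the
# objective space that crowd an already-kept point, then re-seed near the
# most sparsely surrounded survivor. Toy random points, not an AGMO run.
rng = np.random.default_rng(1)
points = rng.random((40, 2))             # current searching points
d_min = 0.15                             # required minimum spacing

kept = []
for p in points:                         # greedy minimum-distance filter
    if all(np.linalg.norm(p - q) >= d_min for q in kept):
        kept.append(p)
kept = np.array(kept)

# re-seed one new point near the kept point whose nearest neighbour is farthest
dists = np.linalg.norm(kept[:, None] - kept[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)
sparsest = kept[dists.min(axis=1).argmax()]
new_point = sparsest + rng.normal(0.0, 0.05, 2)
print(len(kept), "well-spaced points kept")
```

The filter guarantees every surviving pair is at least d_min apart, and the re-seed targets the sparse region, the two mechanisms the paper combines to spread points along the Pareto front.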
Nickel-Cadmium Battery Operation Management Optimization Using Robust Design
NASA Technical Reports Server (NTRS)
Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador
1996-01-01
In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries was very important for maintaining nominal space flight battery performance. The optimization of existing flight battery operational performance was a new application area for Taguchi Methods.
Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Chen, Robert T. N.
1996-01-01
This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.
Improvements in floating point addition/subtraction operations
Farmwald, P.M.
1984-02-24
Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
Decomposition and coordination of large-scale operations optimization
NASA Astrophysics Data System (ADS)
Cheng, Ruoyu
Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to take on special structure. Some decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of structural complexity analysis for investigating the scaling behavior and computational efficiency of decomposition strategies, novel coordination techniques and algorithms to improve the convergence behavior of decomposition and coordination methods, and the development of a decentralized optimization framework which embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines for practical applications of the decomposition and coordination methods. In this thesis, several case studies imply the viability of the proposed decentralized optimization techniques for real industrial applications. A pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.
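One classic coordination scheme in this problem class is dual decomposition, which can be sketched on a two-subproblem toy (the costs, coupling constraint, and step size below are invented, and the thesis covers several such methods, not only this one):

```python
# Dual-decomposition sketch: two subproblems min 0.5*(x_i - a_i)^2 are tied
# by the coupling constraint x1 + x2 = c. A master loop coordinates them
# through a single price lam; each subproblem is then solvable independently.
a1, a2, c = 3.0, 1.0, 2.0
lam = 0.0
for _ in range(200):
    x1 = a1 - lam                # subproblem 1: argmin 0.5*(x - a1)^2 + lam*x
    x2 = a2 - lam                # subproblem 2, solved independently (in parallel)
    lam += 0.1 * (x1 + x2 - c)   # subgradient step: raise the price if over budget
print(round(x1 + x2, 4))         # -> 2.0, the coupling constraint is met
```

The subproblems never see each other, only the shared price, which is what makes the approach embeddable in a distributed computing environment.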
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed-integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit-commitment formulation along with water quality constraints used to evaluate reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model through the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and the combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
Point-process principal components analysis via geometric optimization.
Solo, Victor; Pasha, Syed Ahmed
2013-01-01
There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach. PMID:23020106
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
Mode-tracking based stationary-point optimization.
Bergeler, Maike; Herrmann, Carmen; Reiher, Markus
2015-07-15
In this work, we present a transition-state optimization protocol based on the Mode-Tracking algorithm [Reiher and Neugebauer, J. Chem. Phys., 2003, 118, 1634]. By calculating only the eigenvector of interest instead of diagonalizing the full Hessian matrix and performing an eigenvector following search based on the selectively calculated vector, we can efficiently optimize transition-state structures. The initial guess structures and eigenvectors are either chosen from a linear interpolation between the reactant and product structures, from a nudged-elastic band search, from a constrained-optimization scan, or from the minimum-energy structures. Alternatively, initial guess vectors based on chemical intuition may be defined. We then iteratively refine the selected vectors by the Davidson subspace iteration technique. This procedure accelerates finding transition states for large molecules of a few hundred atoms. It is also beneficial in cases where the starting structure is very different from the transition-state structure or where the desired vector to follow is not the one with lowest eigenvalue. Explorative studies of reaction pathways are feasible by following manually constructed molecular distortions. PMID:26073318
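The "compute only the eigenvector of interest" strategy can be illustrated with Rayleigh-quotient iteration on a toy 3×3 Hessian. The actual protocol uses the Davidson subspace technique, and the matrix and guess vector below are made up for illustration:

```python
import numpy as np

# Toy mode-tracking illustration: refine a guess mode toward the nearby
# Hessian eigenpair without diagonalizing the full matrix. The real
# protocol uses Davidson subspace iteration; this Hessian is invented.
H = np.array([[2.0, 0.3, 0.0],
              [0.3, -1.0, 0.2],   # one negative mode: the transition-state mode
              [0.0, 0.2, 4.0]])
v = np.array([0.1, 1.0, 0.0])     # "chemical intuition" guess for the mode
v /= np.linalg.norm(v)
for _ in range(10):
    rho = v @ H @ v               # current eigenvalue estimate (Rayleigh quotient)
    try:
        w = np.linalg.solve(H - rho * np.eye(3), v)
    except np.linalg.LinAlgError:
        break                     # shift hit the eigenvalue exactly: converged
    v = w / np.linalg.norm(w)
print(round(float(v @ H @ v), 4)) # eigenvalue of the tracked (negative) mode
```

Only linear solves against the selected direction are needed, which is why such schemes scale to molecules of a few hundred atoms where building and diagonalizing the full Hessian is prohibitive.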
47 CFR 22.621 - Channels for point-to-multipoint operation.
Code of Federal Regulations, 2011 CFR
2011-10-01
Title 47 (Telecommunication), Public Mobile Services, Paging and Radiotelephone Service, § 22.621 Channels for point-to-multipoint operation: The following channels are allocated for assignment for point-to-multipoint operation.
Application of trajectory optimization principles to minimize aircraft operating costs
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Morello, S. A.; Erzberger, H.
1979-01-01
This paper summarizes various applications of trajectory optimization principles that have been or are being devised by both government and industrial researchers to minimize aircraft direct operating costs (DOC). These costs (time and fuel) are computed for aircraft constrained to fly over a fixed range. Optimization theory is briefly outlined, and specific algorithms which have resulted from application of this theory are described. Typical results which demonstrate use of these algorithms and the potential savings which they can produce are given. Finally, need for further trajectory optimization research is presented.
Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization
NASA Astrophysics Data System (ADS)
Civit Sabate, Carles
In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The optimization is based on the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks provide the ability to decouple, over the course of a day, the demand for chilled water from its generation by the refrigeration and air-conditioning plants. They can be used to perform demand-side management, and optimization techniques can help approach their optimal use. The proposed approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs, and provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied through simulation with historical data.
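The kind of linear program involved can be sketched with toy numbers. The prices, loads, and capacities below are invented and far simpler than the thesis's plant model, and SciPy's `linprog` stands in for whatever solver was actually used:

```python
import numpy as np
from scipy.optimize import linprog

# Toy TES scheduling LP (invented numbers): choose chiller output u_t >= 0
# each hour to meet a cooling demand d_t, storing surplus in the tank,
# at minimum electricity cost.
price = np.array([0.08, 0.08, 0.20, 0.20])   # $/kWh: off-peak, then on-peak
demand = np.array([1.0, 1.0, 3.0, 3.0])      # cooling load per hour
cap, u_max = 6.0, 2.5                        # tank capacity, chiller limit

# tank state s_t = cumsum(u - d) must stay within 0 <= s_t <= cap
T = len(price)
L = np.tril(np.ones((T, T)))                 # cumulative-sum operator
A_ub = np.vstack([L, -L])
b_ub = np.concatenate([cap + np.cumsum(demand), -np.cumsum(demand)])

res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, u_max)] * T)
print(res.x, round(res.fun, 4))              # chiller runs flat out off-peak
```

The optimal schedule charges the tank fully during the cheap hours and draws it down on-peak, which is exactly the demand-side-management behavior the thesis targets.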
Optimizing Reservoir Operation to Adapt to the Climate Change
NASA Astrophysics Data System (ADS)
Madadgar, S.; Jung, I.; Moradkhani, H.
2010-12-01
Climate change and upcoming variation in flood timing necessitate the adaptation of the current rule curves developed for the operation of water reservoirs, so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing several possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model, including the standard operation policy and a global optimization method, tunes the current rule curve for 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project streamflow for the period 2000-2100, using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all scenarios and for the expected value of the ensemble, is conducted, and performance is assessed using statistical indices including reliability, resilience, vulnerability, and sustainability.
Bilinear quark operator renormalization at generalized symmetric point
NASA Astrophysics Data System (ADS)
Bell, J. M.; Gracey, J. A.
2016-03-01
We compute Green's functions with a bilinear quark operator inserted at nonzero momentum for a generalized momentum configuration to two loops. These are required to assist lattice gauge theory measurements of the same quantity in matching to the high energy behavior. The flavor nonsinglet operators considered are the scalar, vector and tensor currents as well as the second moment of the twist-2 Wilson operator used in deep inelastic scattering for the measurement of nucleon structure functions.
76 FR 60733 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-30
Department of Homeland Security, Coast Guard, 33 CFR Part 117: Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY. Concerns the Smith Point Bridge across Narrow Bay, mile 6.1, between Smith Point and Fire Island, New York.
SVOM pointing strategy: how to optimize the redshift measurements?
Cordier, B.; Schanne, S.
2008-05-22
The Sino-French SVOM mission (Space-based multi-band astronomical Variable Objects Monitor) has been designed to detect all known types of gamma-ray bursts (GRBs) and to provide fast and reliable GRB positions. In this study we present the SVOM pointing strategy, which should ensure the largest number of localized bursts allowing a redshift measurement. The redshift measurement can only be performed by large telescopes located on Earth. The best scientific return will be achieved if we are able to combine constraints from both the space segment (platform and payload) and the ground telescopes (visibility).
Optimizing and controlling the operation of heat-exchanger networks
Aguilera, N.; Marchetti, J.L.
1998-05-01
A procedure was developed for on-line optimization and control of heat-exchanger networks, which features a two-level control structure: one level for a constant-configuration control system and the other for a supervisory on-line optimizer. The coordination between levels is achieved by adjusting the formulation of the optimization problem to meet the requirements of the adopted control system. The general goal is always to operate without losing stream temperature targets while keeping the highest energy integration. The operation constraints used for heat-exchanger and utility units emphasize the computation of heat-exchanger duties rather than intermediate stream temperatures. This simplifies the modeling task and provides clear links with the limits of the manipulated variables. The optimal condition is determined using LP or NLP, depending on the final problem formulation. Degrees of freedom for optimization and equation constraints for considering simple and multiple bypasses are rigorously discussed. A worked example shows how the optimization problem can be adjusted to a specific network design, its expected operating space, and the control configuration. Dynamic simulations also show the benefits and limitations of this procedure.
Driving external chemistry optimization via operations management principles.
Bi, F Christopher; Frost, Heather N; Ling, Xiaolan; Perry, David A; Sakata, Sylvie K; Bailey, Simon; Fobian, Yvette M; Sloan, Leslie; Wood, Anthony
2014-03-01
Confronted with the need to significantly raise the productivity of remotely located chemistry CROs, Pfizer embraced a commitment to continuous improvement that leveraged tools from both Lean Six Sigma and queue-management theory to deliver positive, measurable outcomes. During 2012, cycle times were reduced by 48% through optimization of the work in progress and a detailed workflow analysis to identify and address pinch points. Compound flow was increased by 29% by optimizing the request process and de-risking the chemistry. Underpinning both achievements was the development of close working relationships and productive communication between Pfizer and CRO chemists. PMID:23973340
Trajectory optimization for intra-operative nuclear tomographic imaging.
Vogel, Jakob; Lasser, Tobias; Gardiazabal, José; Navab, Nassir
2013-10-01
Diagnostic nuclear imaging modalities like SPECT typically employ gantries to ensure a densely sampled geometry of detectors in order to keep the inverse problem of tomographic reconstruction as well-posed as possible. In an intra-operative setting with mobile freehand detectors the situation changes significantly, and having an optimal detector trajectory during acquisition becomes critical. In this paper we propose an incremental optimization method based on the numerical condition of the system matrix of the underlying iterative reconstruction method to calculate optimal detector positions during acquisition in real-time. The performance of this approach is evaluated using simulations. A first experiment on a phantom using a robot-controlled intra-operative SPECT-like setup demonstrates the feasibility of the approach. PMID:23706624
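The incremental strategy described above can be illustrated with a toy sketch: given a pool of candidate detector poses, each contributing one row to the system matrix, greedily pick the pose that keeps the matrix best conditioned. The matrix sizes and random rows below are invented for illustration and are not from the paper's reconstruction setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate detector poses, each contributing one row to the system matrix.
candidates = rng.normal(size=(40, 8))      # 40 candidate rows, 8 voxels
selected = [candidates[0]]                 # start from an arbitrary first pose

# Greedily add the candidate that keeps the system matrix best conditioned.
for _ in range(7):
    best_row, best_cond = None, np.inf
    for row in candidates:
        trial = np.vstack(selected + [row])
        c = np.linalg.cond(trial)
        if c < best_cond:
            best_cond, best_row = c, row
    selected.append(best_row)

A = np.vstack(selected)
print(A.shape, np.linalg.cond(A))
```

In a real-time setting the loop body would run between acquisitions, scoring only poses reachable from the current detector position.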
Near-Optimal Operation of Dual-Fuel Launch Vehicles
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Chou, H. C.; Bowles, J. V.
1996-01-01
A near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, sensitivity to perturbations in volume needs to be taken into consideration, as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.
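The weighted mass/volume cost functional can be illustrated with a minimal sketch; the mode names, fuel figures and weighting factor below are hypothetical, not taken from the paper.

```python
# Hypothetical engine modes: (name, fuel mass burned [kg], fuel volume used [m^3])
modes = [
    ("dual-fuel", 1200.0, 4.0),     # dense propellant, low volume
    ("LH2-only", 900.0, 13.0),      # light but bulky liquid hydrogen
]

k = 40.0  # weighting factor [kg per m^3], tuned to minimize vehicle empty weight

def cost(mass, volume, k):
    """Weighted sum of fuel mass and fuel volume."""
    return mass + k * volume

best = min(modes, key=lambda m: cost(m[1], m[2], k))
print(best[0])   # which mode the cost functional prefers at this k
```

Sweeping `k` shifts the preferred mode, which is exactly the trade the weighting factor is meant to resolve.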
A Transmittance-optimized, Point-focus Fresnel Lens Solar Concentrator
NASA Technical Reports Server (NTRS)
Oneill, M. J.
1984-01-01
The development of a point-focus Fresnel lens solar concentrator for high-temperature solar thermal energy system applications is discussed. The concentrator utilizes a transmittance-optimized, short-focal-length, dome-shaped refractive Fresnel lens as the optical element. This concentrator combines both good optical performance and a large tolerance for manufacturing, deflection, and tracking errors. The conceptual design of an 11-meter diameter concentrator which should provide an overall collector efficiency of about 70% at an 815 C (1500 F) receiver operating temperature and a 1500X geometric concentration ratio (lens aperture area/receiver aperture area) was completed. Results of optical and thermal analyses of the collector, a discussion of manufacturing methods for making the large lens, and an update on the current status and future plans of the development program are included.
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Tumer, Irem
2005-01-01
In this paper, we present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLVs that has been partially developed at the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be formulated as a constrained optimization problem in which the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
Na-Faraday rotation filtering: The optimal point
Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja
2014-01-01
Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These, so-called, Faraday anomalous dispersion optical filters (FADOFs) can be by far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim to find the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251
A fixed point theorem for certain operator valued maps
NASA Technical Reports Server (NTRS)
Brown, D. R.; Omalley, M. J.
1978-01-01
In this paper, we develop a family of Neuberger-like results to find points z ∈ H satisfying L(z)z = z and P(z) = z. This family includes Neuberger's theorem and has the additional property that most of the sequences {q_n} converge to idempotent elements of B_1(H).
On point spread function modelling: towards optimal interpolation
NASA Astrophysics Data System (ADS)
Bergé, Joel; Price, Sedona; Amara, Adam; Rhodes, Jason
2012-01-01
Point spread function (PSF) modelling is a central part of any astronomy data analysis relying on measuring the shapes of objects. It is especially crucial for weak gravitational lensing, in order to beat down systematics and allow one to reach the full potential of weak lensing in measuring dark energy. A PSF modelling pipeline is made of two main steps: the first one is to assess its shape on stars, and the second is to interpolate it at any desired position (usually galaxies). We focus on the second part, and compare different interpolation schemes, including polynomial interpolation, radial basis functions, Delaunay triangulation and Kriging. For that purpose, we develop simulations of PSF fields, in which stars are built from a set of basis functions defined from a principal components analysis of a real ground-based image. We find that Kriging gives the most reliable interpolation, significantly better than the traditionally used polynomial interpolation. We also note that although a Kriging interpolation on individual images is enough to control systematics at the level necessary for current weak lensing surveys, more elaborate techniques will have to be developed to reach future ambitious surveys' requirements.
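As a rough illustration of why flexible interpolators can beat low-order polynomials for PSF fields, the sketch below fits a synthetic smooth "ellipticity" field measured at star positions and evaluates both schemes at galaxy positions. It substitutes a thin-plate-spline RBF (a close relative of Kriging) for full Kriging, and the field and positions are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

def psf_e(xy):                      # smooth synthetic PSF ellipticity field
    x, y = xy[:, 0], xy[:, 1]
    return np.sin(2 * x) * np.cos(3 * y)

stars = rng.uniform(0, 1, size=(200, 2))     # star positions (PSF measured here)
gals = rng.uniform(0, 1, size=(500, 2))      # galaxy positions (PSF needed here)

# 2nd-order polynomial fit, as traditionally used for PSF interpolation.
def design(xy):
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

coef, *_ = np.linalg.lstsq(design(stars), psf_e(stars), rcond=None)
poly_pred = design(gals) @ coef

# Radial-basis-function interpolation, standing in for Kriging.
rbf = RBFInterpolator(stars, psf_e(stars), kernel="thin_plate_spline")
rbf_pred = rbf(gals)

truth = psf_e(gals)
poly_rms = np.sqrt(np.mean((poly_pred - truth) ** 2))
rbf_rms = np.sqrt(np.mean((rbf_pred - truth) ** 2))
print("poly RMS:", poly_rms)
print("RBF  RMS:", rbf_rms)
```

On this smooth field the RBF error is far below the quadratic-polynomial error, mirroring the paper's finding that Kriging outperforms polynomial interpolation.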
Optimal Operation of Energy Storage in Power Transmission and Distribution
NASA Astrophysics Data System (ADS)
Akhavan Hejazi, Seyed Hossein
In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At transmission level, we investigate the problem where an investor-owned independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with a focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at distribution level, where the distribution of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters, hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider
NASA Astrophysics Data System (ADS)
Yang, Dong; Ren, Wei-Xin; Hu, Yi-Ding; Li, Dan
2015-08-01
Structural health monitoring (SHM) involves sampling operational vibration measurements over time so that structural features can be extracted accordingly. The recurrence plot (RP) and the corresponding recurrence quantification analysis (RQA) have become useful tools in various fields due to their efficiency. Threshold selection is one of the key issues in ensuring that the constructed recurrence plot contains enough recurrence points, and different signals by nature have different threshold values. This paper aims to present an approach to determine the optimal threshold for operational vibration measurements of civil engineering structures. The surrogate technique and the Taguchi loss function are proposed to generate reliable data and to locate the point of optimal discrimination power, where the threshold is optimal. The impact of the selected recurrence threshold on different signals is discussed. It is demonstrated that the proposed method for identifying the optimal threshold is applicable to operational vibration measurements, and it provides a way to find the optimal threshold for the best RP construction of structural vibration measurements under operational conditions.
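The role of the recurrence threshold can be sketched as follows: the recurrence rate (fraction of recurrence points) grows with the threshold, and threshold selection amounts to picking a point on this curve. The signal and the fixed-recurrence-rate heuristic below are illustrative stand-ins for the paper's surrogate/Taguchi procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.1 * rng.normal(size=400)

# Pairwise distance matrix between (scalar) samples; in practice one would
# use delay-embedded state vectors.
d = np.abs(signal[:, None] - signal[None, :])

def recurrence_rate(threshold):
    """Fraction of point pairs closer than the threshold (recurrence points)."""
    return float(np.mean(d < threshold))

# Scan candidate thresholds; a common heuristic targets a fixed recurrence
# rate (e.g. ~10%), standing in here for the optimal-threshold search.
for eps in (0.05, 0.1, 0.2, 0.4):
    print(eps, recurrence_rate(eps))
```

The recurrence matrix `d < threshold` is exactly the RP; RQA measures are then statistics of its diagonal and vertical line structures.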
Optimality conditions for a two-stage reservoir operation problem
NASA Astrophysics Data System (ADS)
Zhao, Jianshi; Cai, Ximing; Wang, Zhongjing
2011-08-01
This paper discusses the optimality conditions for the standard operation policy (SOP) and hedging rule (HR) for a two-stage reservoir operation problem using a consistent theoretical framework. The effects of three typical constraints, i.e., mass balance, nonnegative release, and storage constraints, under both certain and uncertain conditions are analyzed. When all nonnegative constraints and storage constraints are unbinding, HR results in optimal reservoir operation following the marginal benefit (MB) principle (the MB is equal over current and future stages). However, if any of those constraints are binding, SOP results in the optimal solution, except in some special cases in which water needs to be carried over from the current stage to the future stage, when extreme drought is certain and a higher marginal utility exists for the second stage. Furthermore, uncertainty complicates the effects of the various constraints. A higher uncertainty level in the future makes HR more favorable, as water needs to be reserved to defend against the risk caused by uncertainty. Using the derived optimality conditions, an algorithm for solving a numerical model is developed and tested with the Miyun Reservoir in China.
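The marginal benefit principle behind HR can be checked on a toy two-stage problem with a concave benefit per stage and no binding constraints: at the optimum, the marginal benefits of the two stages equalize. The benefit function sqrt(r) and water volume are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Concave stage benefits: B(r) = sqrt(r). Total water W is split over two stages.
W = 10.0

def neg_total(r1):
    r2 = W - r1
    return -(np.sqrt(r1) + np.sqrt(r2))

res = minimize_scalar(neg_total, bounds=(1e-6, W - 1e-6), method="bounded")
r1 = res.x
# Marginal benefits 0.5/sqrt(r) equalize at the optimum (the MB principle).
print(r1, 0.5 / np.sqrt(r1), 0.5 / np.sqrt(W - r1))
```

With identical concave benefits the optimum splits the water evenly; asymmetric benefits or binding storage/nonnegativity constraints break this equality, which is where SOP takes over.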
Optimality Conditions for A Two-Stage Reservoir Operation Problem
NASA Astrophysics Data System (ADS)
Zhao, J.; Cai, X.; Wang, Z.
2010-12-01
This paper discusses the optimality conditions for the standard operation policy (SOP) and hedging rule (HR) for a two-stage reservoir operation problem within a consistent theoretical framework. The effects of three typical constraints, namely mass balance, non-negative release and storage constraints, under both certain and uncertain conditions have been analyzed. When all non-negative constraints and storage constraints are non-binding, HR results in optimal reservoir operation following the marginal benefit (MB) principle (the MB is equal over the two stages); if any of the non-negative release or storage constraints is binding, SOP generally results in the optimal solution, except in two special cases. One is a complement of the traditional SOP/HR curve, which arises when the capacity constraint is binding; the other is a special hedging rule, which should be employed to carry all water in the current stage over to the future when extreme drought is certain and a higher marginal utility exists for the second stage. Furthermore, uncertainty complicates the effects of the various constraints, but in general a higher uncertainty level in the future makes HR more favorable, since water needs to be reserved to defend against the risk caused by the uncertainty. Using the derived optimality conditions, an algorithm for solving the model numerically has been developed and tested with hypothetical examples.
AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM
NASA Technical Reports Server (NTRS)
Miko, J.
1994-01-01
Scientists at Goddard have developed an efficient and powerful program, An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform, which combines real- and complex-valued one-dimensional Fast Fourier Transforms (FFTs) to compute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real-valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex-valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex-valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing time to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 ms. Documentation is included.
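The row-column decomposition and power-spectrum step described above can be reproduced in a few lines, using a generic complex FFT in place of the program's radix-4 DSP routine:

```python
import numpy as np

rng = np.random.default_rng(3)
frame = rng.normal(size=(64, 64))          # 64x64 real-valued input frame

# Row-column decomposition: 1-D FFT on each row, then on each column.
rows = np.fft.fft(frame, axis=1)           # complex row transforms
full = np.fft.fft(rows, axis=0)            # column transforms -> 2-D FFT

# Power spectrum coefficients: sum of squared real and imaginary parts.
power = full.real ** 2 + full.imag ** 2

# The decomposition must agree with the direct 2-D FFT.
assert np.allclose(full, np.fft.fft2(frame))
print(power.shape)
```

Real-valued input also permits a real-to-complex row transform for roughly half the row-stage work, which is the kind of saving the original program exploits.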
Break-Even Point for a Proof Slip Operation
ERIC Educational Resources Information Center
Anderson, James F.
1972-01-01
Break-even analysis is applied to determine what number of titles added per year is sufficient to make economical use of Library of Congress proof slips and a Xerox 914 copying machine in the cataloging operation of a library. A formula is derived, and an example of its use is given. (1 reference) (Author/SJ)
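The break-even logic is the standard fixed-cost over unit-saving ratio. A sketch with invented figures (the article's actual costs are not reproduced here):

```python
# Hypothetical figures; the article derives the formula, these numbers are not from it.
fixed_cost_per_year = 1800.0      # copier rental + proof-slip subscription [$]
cost_per_title_slips = 0.30       # marginal cost per title via slips + Xerox [$]
cost_per_title_manual = 1.50      # marginal cost per title via original cataloging [$]

# Break-even: fixed costs must be recovered by the per-title saving.
break_even_titles = fixed_cost_per_year / (cost_per_title_manual - cost_per_title_slips)
print(break_even_titles)   # titles added per year needed to justify the operation
```

Below this volume the manual process is cheaper; above it, the slip-and-copier operation pays for itself.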
Robust optimal sun-pointing control of a large solar power satellite
NASA Astrophysics Data System (ADS)
Wu, Shunan; Zhang, Kaiming; Peng, Haijun; Wu, Zhigang; Radice, Gianmarco
2016-10-01
The robust optimal sun-pointing control strategy for a large geostationary solar power satellite (SPS) is addressed in this paper. The SPS is considered as a huge rigid body, and the sun-pointing dynamics are first formulated in state-space representation. The perturbation effects caused by gravity gradient, solar radiation pressure and microwave reaction are investigated. To perform sun-pointing maneuvers, a periodically time-varying robust optimal LQR controller is designed to assess the pointing accuracy and the control inputs. Notably, a disturbance-rejection technique is incorporated into the proposed LQR controller to reduce the pointing errors. A recursive algorithm is then proposed to solve for the optimal LQR control gain. Simulation results are finally provided to illustrate the performance of the proposed closed-loop system.
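A time-varying LQR gain sequence can be obtained from a backward Riccati recursion. The minimal single-axis sketch below uses an invented double-integrator pointing model and weights, and omits the paper's disturbance-rejection term.

```python
import numpy as np

# Toy 1-axis pointing model: state [angle error, rate], one torque input.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])          # penalize pointing error strongly
R = np.array([[0.1]])

# Backward Riccati recursion over a finite horizon (time-varying gains).
N = 200
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Closed-loop simulation from an initial pointing error.
x = np.array([[0.1], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
resid = float(abs(x[0]))
print(resid)                       # residual pointing error after N steps
```

For a periodically time-varying plant, the same recursion is run over one period with periodic boundary conditions instead of a fixed terminal cost.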
78 FR 52987 - Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-27
...The U.S. Nuclear Regulatory Commission (NRC) has concluded that existing exemptions from its regulations, ``Fire Protection Program for Nuclear Power Facilities Operating Prior to January 1, 1979,'' for Fire Areas ETN-4 and PAB-2, issued to Entergy Nuclear Operations, Inc. (the licensee), for operation of Indian Point Nuclear Generating Unit 3 (Indian Point 3), located in Westchester County,......
78 FR 23845 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-23
... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY AGENCY... the Smith Point Bridge, mile 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. The deviation is necessary to facilitate the Smith Point Triathlon. This deviation allows the...
CFD-based simulation of operational point influences on product changing processes
NASA Astrophysics Data System (ADS)
Szöke, L.; Wortberg, J.
2014-05-01
Saving resources is becoming an increasingly important priority in optimizing plastics extrusion processes. The analysis of color and material changes has therefore become very interesting as a way to prevent unnecessary material loss, especially given the increasing number of changing processes, a reaction to more individual product specifications and thus decreasing lot sizes over the last decades. It can be shown that commercial numerical tools are capable of plausible calculations of changing processes, enabling process observations and making it possible to predict different influences. Due to the highly dynamic character of the flow behavior in the control volume, a transient approach is necessary to show the effects of different operational points on product changes. To determine the progress of the product change, the volume-of-fluid (VOF) multiphase model is used. The influence of the operational point on the changing process is analyzed by separate observation of two important input parameters: on the one hand, the effect of varied mass flow rates as inlet boundary conditions, and on the other hand, different mass temperatures. To check the plausibility of the calculation method, the results are discussed with reference to exemplary experimental data in a qualitative comparison. The experimental data are obtained using special laboratory equipment, neglecting influences from the extruder and taking only the die as the control volume.
Optimization of operating conditions in tunnel drying of food
Dong Sun Lee (Dept. of Food Engineering); Yu Ryang Pyun (Dept. of Food Engineering)
1993-01-01
A food drying process in a tunnel dryer was modeled from Keey's drying model and an experimental drying curve, and optimized with respect to operating conditions consisting of inlet air temperature, air recycle ratio and air flow rate. Radish was chosen as a typical food material to be dried, because it has typical food drying characteristics and the quality indexes of ascorbic acid destruction and browning during drying. Optimization results for cocurrent and countercurrent tunnel drying showed higher inlet air temperature, lower recycle ratio and higher air flow rate with shorter total drying time. Compared with cocurrent operation, countercurrent drying used lower air temperature, lower recycle ratio and lower air flow rate, and appeared to be more efficient in energy usage. Most of the consumed energy was shown to be used for air heating and then escaped from the dryer in the form of exhaust air.
Physics-Based Prognostics for Optimizing Plant Operation
Leonard J. Bond; Don B. Jarrell
2005-03-01
Scientists at the Pacific Northwest National Laboratory (PNNL) have examined the necessity of optimizing energy plant operation using DSOM® (Decision Support Operation and Maintenance), which has been deployed at several sites. This approach has been expanded to include a prognostics component and tested on a pilot-scale service water system, modeled on the design employed in a nuclear power plant. A key element in plant optimization is understanding and controlling the aging process of safety-specific nuclear plant components. This paper reports the development and demonstration of a physics-based approach to prognostic analysis that combines distributed computing, RF data links, the measurement of aging precursor metrics and their correlation with degradation rate and projected machine failure.
The optimization of operating parameters on microalgae upscaling process planning.
Ma, Yu-An; Huang, Hsin-Fu; Yu, Chung-Chyi
2016-03-01
The upscaling process planning developed in this study primarily involved optimizing operating parameters, i.e., dilution ratios, during process designs. Minimal variable cost was used as an indicator for selecting the optimal combination of dilution ratios. The upper and lower mean confidence intervals obtained from the actual cultured cell density data were used as the final cell density stability indicator after the operating parameters or dilution ratios were selected. The process planning method and results were demonstrated through three case studies of batch culture simulation. They are (1) final objective cell densities were adjusted, (2) high and low light intensities were used for intermediate-scale cultures, and (3) the number of culture days was expressed as integers for the intermediate-scale culture. PMID:26739144
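Selecting dilution ratios by minimal variable cost can be sketched as a small combinatorial search; the candidate ratios, required scale-up fold, and cost model below are assumptions for illustration only, not the study's actual values.

```python
import itertools
import math

# Hypothetical upscaling chain: three intermediate stages, each scaled up by
# a chosen dilution ratio (fresh medium added per stage).
ratios = [2, 5, 10]            # candidate dilution ratios per stage
required_fold = 50             # overall scale-up, e.g. 2 L seed -> 100 L culture

def variable_cost(combo):
    """Assumed cost model: each fold of dilution consumes one unit of medium."""
    return sum(combo)

feasible = [
    c for c in itertools.product(ratios, repeat=3)
    if math.prod(c) >= required_fold
]
best = min(feasible, key=variable_cost)
print(best, variable_cost(best))
```

In the study's planning method, the confidence intervals from measured cell densities would then screen the selected combination for final-density stability.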
ERIC Educational Resources Information Center
Ben-Yashar, Ruth; Nitzan, Shmuel; Vos, Hans J.
This paper compares the determination of optimal cutoff points for single and multiple tests in the field of personnel selection. Decisional skills of predictor tests composing the multiple test are assumed to be endogenous variables that depend on the cutting points to be set. The main result specifies the condition that determines the…
Optimal recovery of linear operators in non-Euclidean metrics
Osipenko, K Yu
2014-10-31
The paper looks at problems concerning the recovery of operators from noisy information in non-Euclidean metrics. A number of general theorems are proved and applied to recovery problems for functions and their derivatives from the noisy Fourier transform. In some cases, a family of optimal methods is found, from which the methods requiring the least amount of original information are singled out. Bibliography: 25 titles.
Optimizing integrated airport surface and terminal airspace operations under uncertainty
NASA Astrophysics Data System (ADS)
Bosson, Christabelle S.
In airports and surrounding terminal airspaces, the integration of surface, arrival, and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is
Multi-objective nested algorithms for optimal reservoir operation
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Solomatine, Dimitri
2016-04-01
The optimal reservoir operation problem is in general multi-objective, meaning that multiple objectives are to be considered at the same time. For solving multi-objective optimization problems there exists a large number of optimization algorithms, which generate a Pareto set of optimal solutions (typically containing a large number of them), or more precisely, its approximation. At the same time, due to the complexity and computational cost of solving full-fledged multi-objective optimization problems, some authors use a simplified approach generically called "scalarization". Scalarization transforms the multi-objective optimization problem into a single-objective optimization problem (or several of them), for example by (a) single-objective aggregated weighted functions, or (b) formulating some objectives as constraints. We use approach (a). A user can decide how many single-objective search solutions will be generated, depending on the practical problem at hand, by choosing a particular number of weight vectors used to weigh the objectives. It is not guaranteed that these solutions are Pareto optimal, but they can be treated as a reasonably good and practically useful, albeit small, approximation of a Pareto set. It has to be mentioned that the weighted-sum approach has known shortcomings, because linear scalar weights will fail to find Pareto-optimal policies that lie in the concave region of the Pareto front. In this context the considered approach is implemented as follows: there are m sets of weights {w1i, ..., wni} (i from 1 to m), and n objectives applied to single-objective aggregated weighted-sum functions of nested dynamic programming (nDP), nested stochastic dynamic programming (nSDP) and nested reinforcement learning (nRL). By employing this multi-objective optimization via a sequence of single-objective optimization searches, these algorithms acquire multi-objective properties
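The weighted-sum scalarization of approach (a) can be sketched for two toy objectives: each weight vector yields one single-objective search, and the solutions together approximate the Pareto front (away from its concave regions). The objective functions below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Two competing objectives over a release decision r in [0, 1]:
# f1 = supply deficit, f2 = hydropower shortfall (both to be minimized).
def f1(r): return (1.0 - r) ** 2
def f2(r): return (r - 0.2) ** 2

pareto = []
for w in np.linspace(0.05, 0.95, 10):        # m weight vectors {w, 1 - w}
    res = minimize(lambda r: w * f1(r[0]) + (1 - w) * f2(r[0]),
                   x0=[0.5], bounds=[(0.0, 1.0)])
    r = res.x[0]
    pareto.append((f1(r), f2(r)))

for p in pareto:                             # trade-off curve between f1 and f2
    print(p)
```

Sweeping the weight trades one objective against the other; with convex objectives like these, every weight vector lands on the Pareto front.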
NASA Astrophysics Data System (ADS)
Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu
2016-02-01
We describe an operating-point stabilization technique based on a tunable distributed feedback (DFB) laser for quadrature demodulation of interferometric sensors. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into an interferometric system, the operating point is stabilized even when the system suffers various environmental perturbations. To demonstrate the feasibility of this technique, experiments were performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that good tracking of the Q-point was effectively realized.
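The quadrature point itself is easy to visualize on the ideal two-beam transfer curve I(phi) = A + B cos(phi): it is the phase where the fringe slope, and hence the sensitivity to small phase shifts, is maximal. The sketch below locates it numerically (A and B are arbitrary):

```python
import numpy as np

# Two-beam interferometer transfer curve: I(phi) = A + B*cos(phi).
A, B = 1.0, 0.8
phi = np.linspace(0, 2 * np.pi, 10001)
intensity = A + B * np.cos(phi)

# The quadrature (Q) point sits where the fringe slope |dI/dphi| is maximal,
# i.e. at phi = pi/2 or 3*pi/2, giving maximum sensitivity to phase shifts.
slope = np.gradient(intensity, phi)
q_point = phi[np.argmax(np.abs(slope))]
print(q_point)
```

Locking the laser wavelength so the cavity stays at this phase is what the automatic Q-point lock maintains against drift.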
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-10
... implement emergency operating procedure (EOP) 2-FR-H.1, ``Response To Loss Of Secondary Heat Sink.'' The NRC does not consider implementing 2-FR-H.1 an OMA, as actions to establish reactor coolant system... [flattened table fragment: OMA origin / area name / actions; row 1C: Auxiliary Boiler Feed Pump Room, implement EOP FR-H.1 as...]
47 CFR 90.471 - Points of operation in internal transmitter control systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Transmitter Control Internal Transmitter Control Systems § 90.471 Points of operation in internal transmitter control systems. The... licensee for internal communications and transmitter control purposes. Operating positions in...
Optimal reservoir operation policies using novel nested algorithms
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri
2015-04-01
Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", denoting an exponential growth of computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels, and 3) hydropower production, which is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which, in combination with the release discretization required for meeting the demands of downstream users, leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic knapsack method in the case of nonlinear problems. The novel idea is to include the nested
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods, i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
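The MPP search this abstract refers to is commonly carried out with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration; a minimal sketch follows. The limit-state function used here is a hypothetical example (not one from the paper), chosen so the exact reliability index is known in closed form.

```python
# HL-RF iteration in standard normal space: repeatedly project onto the
# linearized limit-state surface g(u) + grad_g(u).(u_new - u) = 0.
# The MPP is the point on g(u) = 0 closest to the origin; beta = ||u*||.
import numpy as np

def hlrf_mpp(g, grad_g, u0, tol=1e-8, max_iter=100):
    """Return (u*, beta) for limit state g via the standard HL-RF recursion."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gv, dg = g(u), grad_g(u)
        u_new = (dg @ u - gv) / (dg @ dg) * dg
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)

# Hypothetical linear limit state g(u) = u1 + u2 + 3; exact beta = 3/sqrt(2).
g = lambda u: u[0] + u[1] + 3.0
grad = lambda u: np.array([1.0, 1.0])
mpp, beta = hlrf_mpp(g, grad, [1.0, 1.0])
```

For a linear limit state the iteration converges in one step; the RIA and PMA of the abstract differ in which quantity (the index or the performance measure) is held fixed during this search.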
Optimization of Maneuver Execution for Landsat-7 Routine Operations
NASA Technical Reports Server (NTRS)
Cox, E. Lucien, Jr.; Bauer, Frank H. (Technical Monitor)
2000-01-01
Multiple mission constraints were satisfied during a lengthy, strategic ascent phase. Once routine operations begin, the ongoing concern of maintaining mission requirements becomes an immediate priority. The Landsat-7 mission has a tight longitude control box, and Earth imaging requires sub-satellite descending nodal equator crossing times to occur in a narrow 30-minute range fifteen (15) times daily. Operationally, spacecraft maneuvers must be executed properly to maintain mission requirements. The paper will discuss the importance of optimizing the altitude raising and plane change maneuvers, amidst known constraints, to satisfy requirements throughout mission lifetime. Emphasis will be placed not only on maneuver size and frequency but also on changes in orbital elements that impact maneuver execution decisions. Any associated trade-offs arising from operations contingencies will be discussed as well. Results of actual altitude and plane change maneuvers are presented to clarify actions taken.
2016-01-01
Several published studies have reported the need to change the cutoff points of anthropometric indices for obesity. We therefore conducted a cross-sectional study to estimate anthropometric cutoff points predicting high coronary heart disease (CHD) risk in Korean adults. We analyzed the Korean National Health and Nutrition Examination Survey data from 2007 to 2010. A total of 21,399 subjects aged 20 to 79 yr were included in this study (9,204 men and 12,195 women). We calculated the 10-yr Framingham coronary heart disease risk score for all individuals. We then estimated receiver-operating characteristic (ROC) curves for body mass index (BMI), waist circumference, and waist-to-height ratio to predict a 10-yr CHD risk of 20% or more. For sensitivity analysis, we conducted the same analysis for a 10-yr CHD risk of 10% or more. For a CHD risk of 20% or more, the area under the curve of waist-to-height ratio was the highest, followed by waist circumference and BMI. The optimal cutoff points in men and women were 22.7 kg/m2 and 23.3 kg/m2 for BMI, 83.2 cm and 79.7 cm for waist circumference, and 0.50 and 0.52 for waist-to-height ratio, respectively. In sensitivity analysis, the results were the same as those reported above except for BMI in women. Our results support the re-classification of anthropometric indices and suggest the clinical use of waist-to-height ratio as a marker for obesity in Korean adults. PMID:26770039
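The cutoff-point estimation described in this abstract is typically done by scanning the ROC curve for the threshold maximizing Youden's J (sensitivity + specificity - 1); a minimal sketch follows. The data below are synthetic, not the KNHANES values.

```python
# Pick the cutoff maximizing Youden's index for the rule "value >= cutoff => positive".
import numpy as np

def optimal_cutoff(values, labels):
    """Return (cutoff, J) over all observed thresholds; labels are 0/1."""
    values, labels = np.asarray(values, float), np.asarray(labels, int)
    best = (None, -1.0)
    for c in np.unique(values):
        pred = values >= c
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0  # sensitivity + specificity - 1
        if j > best[1]:
            best = (float(c), float(j))
    return best
```

On a perfectly separable toy sample the optimal cutoff sits at the first positive-class value, with J = 1; on real survey data the maximizing threshold plays the role of the reported BMI, waist-circumference, or waist-to-height cutoffs.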
Joe D. Wilson, Jr.
2003-04-01
The technology of Jefferson Laboratory's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) and Free Electron Laser (FEL) requires cooling from one of the world's largest 2K helium refrigerators known as the Central Helium Liquefier (CHL). The key characteristic of CHL is the ability to maintain a constant low vapor pressure over the large liquid helium inventory using a series of five cold compressors. The cold compressor system operates with a constrained discharge pressure over a range of suction pressures and mass flows to meet the operational requirements of CEBAF and FEL. The research topic is the prediction of the most thermodynamically efficient conditions for the system over its operating range of mass flows and vapor pressures with minimum disruption to JLab operations. The research goal is to find the operating points for each cold compressor for optimizing the overall system at any given flow and vapor pressure.
ERIC Educational Resources Information Center
Sobh, Tarek M.; Tibrewal, Abhilasha
2006-01-01
Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…
78 FR 58570 - Environmental Assessment; Entergy Nuclear Operations, Inc., Big Rock Point
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-24
...) requirements in Sec. Sec. 50.47 and 50.54, and Appendix E of 10 CFR part 50 (76 FR 72560; November 23, 2011... COMMISSION Environmental Assessment; Entergy Nuclear Operations, Inc., Big Rock Point AGENCY: Nuclear... Nuclear Operations, Inc. (ENO) (the applicant or the licensee), for the Big Rock Point (BRP)...
76 FR 79066 - Drawbridge Operation Regulation; Escatawpa River, Moss Point, MS
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulation; Escatawpa River, Moss Point, MS... of the Mississippi Export Railroad Company swing bridge across the Escatawpa River, mile 3.0, at Moss... operating schedule for the swing span bridge across Escatawpa River, mile 3.0, at Moss Point, Jackson...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-28
...The Commander, Fifth Coast Guard District, has issued a temporary deviation from the regulations governing the operation of the Route 88/Veterans Memorial Bridge across Point Pleasant Canal, at NJICW mile 3.0, in Point Pleasant, NJ. This closure is necessary to facilitate extensive mechanical rehabilitation and to maintain the bridge's operational...
Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.
2006-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.
NASA Technical Reports Server (NTRS)
Williams, Daniel M.
2006-01-01
Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four phase building-block validation and verification process included multiple elements ranging from formal analysis of HVO procedures to flight test, to full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).
Optimizing Watershed Management by Coordinated Operation of Storing Facilities
NASA Astrophysics Data System (ADS)
Anghileri, Daniela; Castelletti, Andrea; Pianosi, Francesca; Soncini-Sessa, Rodolfo; Weber, Enrico
2013-04-01
Water storing facilities in a watershed are very often operated independently of one another to meet specific operating objectives, with no information sharing among the operators. This uncoordinated approach might result in upstream-downstream disputes and conflicts among different water users, or in inefficiencies in the watershed management, when looked at from the viewpoint of an ideal central decision-maker. In this study, we propose a two-step approach to design coordination mechanisms at the watershed scale, with the ultimate goal of enlarging the space for negotiated agreements between competing uses and improving the overall system efficiency. First, we compute the multi-objective centralized solution to assess the maximum potential benefits of a shift from a sector-by-sector to an ideal fully coordinated perspective. Then, we analyze the Pareto-optimal operating policies to gain insight into suitable strategies to foster cooperation or impose coordination among the involved agents. The approach is demonstrated on an Alpine watershed in Italy where a long-lasting conflict exists between upstream hydropower production and downstream irrigation water users. Results show that a coordination mechanism can be designed that drives the current uncoordinated structure towards the performance of the ideal centralized operation.
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and the most sensitive parameters were those related to biofilm, OHOs, and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operation condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operation condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10. PMID:25826934
Optimal operation of a potable water distribution network.
Biscos, C; Mulholland, M; Le Lann, M V; Brouckaert, C J; Bailey, R; Roustan, M
2002-01-01
This paper presents an approach to an optimal operation of a potable water distribution network. The main control objective defined during the preliminary steps was to maximise the use of low-cost power, maintaining at the same time minimum emergency levels in all reservoirs. The combination of dynamic elements (e.g. reservoirs) and discrete elements (pumps, valves, routing) makes this a challenging predictive control and constrained optimisation problem, which is being solved by MINLP (Mixed Integer Non-linear Programming). Initial experimental results show the performance of this algorithm and its ability to control the water distribution process. PMID:12448464
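The control objective in this abstract (pump on cheap power, keep reservoirs above emergency levels) can be illustrated with a toy stand-in for the MINLP: an exhaustive search over binary pump schedules under a time-of-use tariff. All figures below are hypothetical, and a real network would use a MINLP solver rather than enumeration.

```python
# Toy pump-scheduling problem: choose on/off pump states over a short horizon
# to minimize energy cost while keeping reservoir storage within bounds.
from itertools import product

def schedule_pumps(tariff, demand, s0, s_min, s_max, pump_rate):
    """Exhaustive search over binary schedules (tractable for short horizons).
    Returns (cost, plan) for the cheapest feasible schedule, or None."""
    T = len(tariff)
    best = None
    for plan in product((0, 1), repeat=T):
        s, cost, ok = s0, 0.0, True
        for t in range(T):
            s = s + plan[t] * pump_rate - demand[t]   # reservoir mass balance
            cost += plan[t] * pump_rate * tariff[t]   # energy cost this hour
            if not (s_min <= s <= s_max):             # emergency/overflow levels
                ok = False
                break
        if ok and (best is None or cost < best[0]):
            best = (cost, plan)
    return best
```

With a tariff alternating cheap/expensive, the optimum concentrates pumping in the low-cost hours exactly as the abstract's objective prescribes; the MINLP formulation generalizes this to continuous flows, valves, and routing decisions.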
Johnson, Gary E.; Khan, Fenton; Ploskey, Gene R.; Hughes, James S.; Fischer, Eric S.
2010-08-18
The goal of the study was to optimize performance of the fixed-location hydroacoustic systems at Lookout Point Dam (LOP) and the acoustic imaging system at Cougar Dam (CGR) by determining deployment and data acquisition methods that minimized structural, electrical, and acoustic interference. The general approach was a multi-step process from mount design to final system configuration. The optimization effort resulted in successful deployments of hydroacoustic equipment at LOP and CGR.
Optimization of shared autonomy vehicle control architectures for swarm operations.
Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R
2010-08-01
The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations. PMID:19963700
Excited meson radiative transitions from lattice QCD using variationally optimized operators
Shultz, Christian J.; Dudek, Jozef J.; Edwards, Robert G.
2015-06-02
We explore the use of 'optimized' operators, designed to interpolate only a single meson eigenstate, in three-point correlation functions with a vector-current insertion. These operators are constructed as linear combinations in a large basis of meson interpolating fields using a variational analysis of matrices of two-point correlation functions. After performing such a determination at both zero and non-zero momentum, we compute three-point functions and are able to study radiative transition matrix elements featuring excited state mesons. The required two- and three-point correlation functions are efficiently computed using the distillation framework in which there is a factorization between quark propagation and operator construction, allowing for a large number of meson operators of definite momentum to be considered. We illustrate the method with a calculation using anisotropic lattices having three flavors of dynamical quark all tuned to the physical strange quark mass, considering form-factors and transitions of pseudoscalar and vector meson excitations. In conclusion, the dependence on photon virtuality for a number of form-factors and transitions is extracted and some discussion of excited-state phenomenology is presented.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.
Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin
2016-01-01
Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, due to its high energy content, environmentally friendly technology, and zero carbon emissions. In particular, conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production by an anaerobic mixed culture using different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H(2)/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H(2)/mol glucose) was obtained at an initial pH value of 6.4, T=37 °C, and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield at all other conditions was associated with redirection of metabolic pathways from butyric and acetic (accompanied by H(2) production) to lactic (simultaneous H(2) production is not mandatory) acid production. These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates. PMID:26970800
Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas
2010-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
Applications of Optimal Building Energy System Selection and Operation
Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy
2011-04-01
Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimization problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
NASA Astrophysics Data System (ADS)
Balian, S. J.; Liu, Ren-Bao; Monteiro, T. S.
2015-06-01
There are two distinct techniques of proven effectiveness for extending the coherence lifetime of spin qubits in environments of other spins. One is dynamical decoupling, whereby the qubit is subjected to a carefully timed sequence of control pulses; the other is tuning the qubit towards "optimal working points" (OWPs), which are sweet spots for reduced decoherence in magnetic fields. By means of quantum many-body calculations, we investigate the effects of dynamical decoupling pulse sequences far from and near OWPs for a central donor qubit subject to decoherence from a nuclear spin bath. Key to understanding the behavior is to analyze the degree of suppression of the usually dominant contribution from independent pairs of flip-flopping spins within the many-body quantum bath. We find that to simulate recently measured Hahn echo decays at OWPs (lowest-order dynamical decoupling), one must consider clusters of three interacting spins since independent pairs do not even give finite-T2 decay times. We show that while operating near OWPs, dynamical decoupling sequences require hundreds of pulses for a single order of magnitude enhancement of T2, in contrast to regimes far from OWPs, where only about 10 pulses are required.
Optimal Spectral Regions For Laser Excited Fluorescence Diagnostics For Point Of Care Application
NASA Astrophysics Data System (ADS)
Vaitkuviene, A.; Gėgžna, V.; Varanius, D.; Vaitkus, J.
2011-09-01
The tissue fluorescence gives the response of the light-emitting molecule signature and characterizes the cell composition and peculiarities of metabolism. Both are useful for biomedical diagnostics, as reported in our own and others' previous works. The present work demonstrates the results of applying laser-excited autofluorescence to diagnostics of pathology in genital tissues, and the feasibility of bedside "point of care, off lab" application. A portable device using a USB spectrophotometer, a micro laser (355 nm Nd:YAG, 0.5 ns pulse, repetition rate 10 kHz, output power 15 mW), a three-channel optical fiber, and a computer with a diagnostic program was designed and is ready for clinical trial, to be used for on-site diagnostics of cytology and biopsy specimens and for endoscopy/puncture procedures. The biopsy and cytology samples, as well as intervertebral disc specimens, were evaluated by pathology experts, and the fluorescence spectra were investigated in the fresh and preserved specimens. The spectra were recorded in the spectral range 350-900 nm. At the initial stage, the Gaussian components of the spectra were found, the Mann-Whitney test was used to differentiate the groups, and the spectral regions optimal for diagnostic purposes were identified. Then a formal division of the spectra into components, or into bands of definite width where the main difference between the group spectra was observed, was used to compare these groups. ROC-analysis-based diagnostic algorithms were created for medical prognosis. The positive and negative predictive values were determined for diagnosing cervical liquid PAP smear supernatant sediment as cervicitis and normal versus CIN2+. In the case of the intervertebral disc, the analysis provides additional information about the disc degeneration status. All these results demonstrated the efficiency of the proposed procedure, and the designed device could be tested at the point-of-care site or for
Optimizing and controlling earthmoving operations using spatial technologies
NASA Astrophysics Data System (ADS)
Alshibani, Adel
This thesis presents a model designed for optimizing, tracking, and controlling earthmoving operations. The proposed model utilizes Genetic Algorithms (GA), Linear Programming (LP), and spatial technologies, including Global Positioning Systems (GPS) and Geographic Information Systems (GIS), to support the management functions of the developed model. The model assists engineers and contractors in selecting near-optimum crew formations in the planning phase and during construction, using GA and LP supported by the Pathfinder Algorithm developed in a GIS environment. GA is used in conjunction with a set of rules developed to accelerate the optimization process and to avoid generating and evaluating hypothetical and unrealistic crew formations. LP is used to determine the quantities of earth to be moved from different borrow pits and placed at different landfill sites, to meet project constraints and to minimize the cost of these earthmoving operations. GPS is used for on-site data collection and for tracking construction equipment in near real-time, while GIS is employed to automate data acquisition and to analyze the collected spatial data. The model is also capable of reconfiguring crew formations dynamically during the construction phase while site operations are in progress. The optimization of the crew formation considers: (1) construction time, (2) construction direct cost, or (3) construction total cost. The model is also capable of generating crew formations to meet, as closely as possible, specified time and/or cost constraints. In addition, the model supports tracking and reporting of project progress utilizing the earned-value concept and the project ratio method, with modifications that allow for more accurate forecasting of project time and cost at set future dates and at completion. The model is capable of generating graphical and tabular reports. The developed model has been implemented in prototype software, using Object
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
ERIC Educational Resources Information Center
Wang, Jun; Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of…
Mechanical optimization of superconducting cavities in continuous wave operation
NASA Astrophysics Data System (ADS)
Posen, Sam; Liepe, Matthias
2012-02-01
Several planned accelerator facilities call for hundreds of elliptical cavities operating cw with low effective beam loading, and therefore require cavities that have been mechanically optimized to operate at high QL by minimizing df/dp, the sensitivity to microphonics detuning from fluctuations in helium pressure. Without such an optimization, the facilities would suffer either power costs driven up by millions of dollars or an extremely high per-cavity trip rate. ANSYS simulations used to predict df/dp are presented, as well as a model that illustrates the factors that contribute to this parameter in elliptical cavities. For the Cornell Energy Recovery Linac (ERL) main linac cavity, df/dp is found to range from 2.5 to 17.4 Hz/mbar, depending on the radius of the stiffening rings, with minimal df/dp for very small or very large radii. For the Cornell ERL injector cavity, simulations predict a df/dp of 124 Hz/mbar, which fits well within the range of measurements performed with the injector cryomodule. Several methods for reducing df/dp are proposed, including decreasing the diameter of the tuner bellows and increasing the stiffness of the end dishes and the tuner. Using measurements from a Tesla Test Facility cavity as the baseline, if both of these measures were implemented and the stiffening rings were optimized, simulations indicate that df/dp would be reduced from ~30 Hz/mbar to just 2.9 Hz/mbar, and the power required to maintain the accelerating field would be reduced by an order of magnitude. Finally, other consequences of optimizing the stiffening ring radius are investigated. It is found that stiffening rings larger than 70% of the iris-equator distance make the cavity impossible to tune. Small rings, on the other hand, leave the cavity susceptible to plastic deformation during handling and have lower-frequency mechanical resonances, which is undesirable for active compensation of microphonics. Additional simulations of Lorentz force detuning are discussed, and
Strategies for optimal operation of the tellurium electrowinning process
Broderick, G.; Handle, B.; Paschen, P.
1999-02-01
Empirical models predicting the purity of electrowon tellurium have been developed using data from 36 pilot-plant trials. Based on these models, a numerical optimization of the process was performed to identify conditions which minimize the total contamination in Pb and Se while reducing electrical consumption per kilogram of electrowon tellurium. Results indicate that product quality can be maintained and even improved while operating at the much higher electroplating production rates obtained at high current densities. Using these same process settings, the electrical consumption of the process can be reduced by up to 10 pct by operating at midrange temperatures of close to 50 °C. This is particularly attractive when waste heat is available at the plant to help preheat the electrolyte feed. When both Pb and Se are present as contaminants, the most energy-efficient strategy involves the use of a high current density, at a moderate temperature with high flow, for low concentrations of TeO{sub 2}. If Pb is removed prior to the electrowinning process, the use of a low current density and low electrolyte feed concentration, while operating at a low temperature and moderate flow rates, provides the most significant reduction in Se codeposition.
NASA Astrophysics Data System (ADS)
Ghorbani, Mehrdad; Assadian, Nima
2013-12-01
In this study the gravitational perturbations of the Sun and other planets are modeled on the dynamics near the Earth-Moon Lagrange points, and optimal continuous and discrete station-keeping maneuvers are found to maintain spacecraft about these points. The most critical perturbation effects near the L1 and L2 Lagrange points of the Earth-Moon system are the ellipticity of the Moon's orbit and the Sun's gravity, respectively. These perturbations deviate the spacecraft from its nominal orbit and have been modeled through a restricted five-body problem (R5BP) formulation compatible with the circular restricted three-body problem (CR3BP). The continuous control or impulsive maneuvers can compensate for the deviation and keep the spacecraft on the closed orbit about the Lagrange point. The continuous control has been computed using a linear quadratic regulator (LQR) and is compared with nonlinear programming (NP). Multiple shooting (MS) has been used for the computation of impulsive maneuvers to keep the trajectory closed, and subsequently an optimized MS (OMS) method and a multiple impulses optimization (MIO) method have been introduced, which minimize the summation of multiple impulses. In these two methods the spacecraft is allowed to deviate from the nominal orbit; however, the spacecraft trajectory should close itself. In this manner, some closed or nearly closed trajectories around the Earth-Moon Lagrange points are found that need almost zero station-keeping maneuvers.
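The LQR station-keeping control mentioned in the abstract amounts to computing a state-feedback gain from a Riccati equation for the linearized dynamics about the nominal orbit. The sketch below uses a generic discrete-time double integrator as a stand-in for the R5BP linearization, and solves the Riccati equation by plain fixed-point iteration; both choices are illustrative assumptions, not the paper's formulation.

```python
# Discrete-time LQR gain via fixed-point iteration of the Riccati equation:
#   P <- Q + A'P(A - BK),  K = (R + B'PB)^{-1} B'PA
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Iterate the discrete algebraic Riccati equation; return the gain K
    for the control law u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical stand-in dynamics: a position/velocity double integrator.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))
eigs = np.linalg.eigvals(A - B @ K)  # closed-loop poles
```

A stabilizing gain places all closed-loop eigenvalues inside the unit circle, so deviations from the nominal orbit decay; the impulsive MS/OMS/MIO methods of the abstract replace this continuous feedback with a small number of optimized delta-v's.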
Optimization of a point-focusing, distributed receiver solar thermal electric system
NASA Technical Reports Server (NTRS)
Pons, R. L.
1979-01-01
This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.
Optimization of Insertion Cost for Transfer Trajectories to Libration Point Orbits
NASA Technical Reports Server (NTRS)
Howell, K. C.; Wilson, R. S.; Lo, M. W.
1999-01-01
The objective of this work is the development of efficient techniques to optimize the cost associated with transfer trajectories to libration point orbits in the Sun-Earth-Moon four body problem, that may include lunar gravity assists. Initially, dynamical systems theory is used to determine invariant manifolds associated with the desired libration point orbit. These manifolds are employed to produce an initial approximation to the transfer trajectory. Specific trajectory requirements such as, transfer injection constraints, inclusion of phasing loops, and targeting of a specified state on the manifold are then incorporated into the design of the transfer trajectory. A two level differential corrections process is used to produce a fully continuous trajectory that satisfies the design constraints, and includes appropriate lunar and solar gravitational models. Based on this methodology, and using the manifold structure from dynamical systems theory, a technique is presented to optimize the cost associated with insertion onto a specified libration point orbit.
An Efficient Operator for the Change Point Estimation in Partial Spline Model
Han, Sung Won; Zhong, Hua; Putt, Mary
2015-01-01
In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, the minimum operator with respect to a smoothing parameter has been widely used, but we showed that the minimum operator causes a large MSE of the change point estimates. In this paper, we proposed the summation operator with respect to a smoothing parameter, and our simulation study showed that the summation operator gives a smaller MSE for estimated change points than the minimum one. We also applied the proposed approach to experimental data on blood flow during photodynamic cancer therapy. PMID:25705072
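The basic idea of profiling a fit criterion over candidate change points can be illustrated with a much-simplified stand-in: a broken-stick least-squares fit whose residual sum of squares is minimized over candidate change points. This sketch omits the partial spline model and the smoothing-parameter operators entirely, and all data are synthetic:

```python
import numpy as np

def fit_rss(x, y, cp):
    """RSS of a broken-stick fit with a slope change at x[cp] (plain least squares)."""
    basis = np.column_stack([np.ones_like(x), x, np.maximum(x - x[cp], 0.0)])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    resid = y - basis @ coef
    return float(resid @ resid)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
true_cp = 6.0  # the signal starts dropping here
y = 1.0 + 0.2 * x - 1.5 * np.maximum(x - true_cp, 0.0) + rng.normal(0, 0.1, x.size)

# Profile the RSS over interior candidate change points and take the minimizer.
candidates = range(10, x.size - 10)
best = min(candidates, key=lambda cp: fit_rss(x, y, cp))
print(x[best])  # estimated change point, close to the true value 6.0
```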
78 FR 39018 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-28
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3 AGENCY: Nuclear Regulatory Commission. ACTION: Supplement to Final Supplement 38 to the Generic...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-03
...The U.S. Nuclear Regulatory Commission (NRC) is issuing an exemption in response to a request submitted by Entergy Nuclear Operations, Inc. (ENO) on June 20, 2012, for the Big Rock Point (BRP) Independent Spent Fuel Storage Installation...
Optimal design of river nutrient monitoring points based on an export coefficient model
NASA Astrophysics Data System (ADS)
Do, Huu Tuan; Lo, Shang-Lien; Chiueh, Pei-Te; Thi, Lan Anh Phan; Shang, Wei-Ting
2011-08-01
Nutrient concentration is an important factor in identifying the quality of water sources and the likelihood of eutrophication. A nutrient monitoring network is an important information source that provides data on the nutrient pollution status of rivers. Export coefficient models have been widely used to study non-point source pollution. However, there has been little discussion about applying non-point source pollution and export coefficient modeling to design sampling points for monitoring. In this study, a new procedure providing a comprehensive solution was proposed to design nutrient monitoring points, from identifying pollution sources to designing sampling points and frequencies. Application of this procedure to design nutrient monitoring points upstream from the Feitsui reservoirs, Taipei, Taiwan, indicated that agriculture occupied only 7.24% of the area, but it released 45,795 kg/yr, or 41%, of the total nutrient load from non-point sources. Additionally, the optimization conditions defined four sampling points as well as the frequency of sampling at those points in the study area.
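An export coefficient model of this kind computes each land use's load as an export coefficient times its area and then attributes shares of the total. A minimal sketch with invented coefficients and areas (not those of the Feitsui study):

```python
# Hypothetical export coefficients (kg/ha/yr) and land-use areas (ha);
# the values are illustrative, not those reported for the Feitsui catchment.
export_coeff = {"agriculture": 29.0, "forest": 2.4, "urban": 11.0}
area_ha = {"agriculture": 1580.0, "forest": 18200.0, "urban": 950.0}

# Load per land use: coefficient * area; then each use's share of the total.
loads = {use: export_coeff[use] * area_ha[use] for use in export_coeff}
total = sum(loads.values())
shares = {use: load / total for use, load in loads.items()}

for use in loads:
    print(f"{use}: {loads[use]:.0f} kg/yr ({100 * shares[use]:.0f}%)")
```

Even with a small area share, a land use with a high export coefficient (agriculture here) can dominate the non-point load, which is exactly the pattern the study reports.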
Listak, J.M.; Goodman, G.V.R.; Jankowski, R.A.
1999-07-01
Respirable dust studies were conducted at several underground coal mining operations to evaluate and compare the dust measurements of fixed-point machine-mounted samples on a continuous miner and personal samples of the remote miner operator. Fixed-point sampling was conducted at the right rear corner of the continuous miner which corresponded to the traditional location of the operator's cab. Although it has been documented that higher concentrations of dust are present at the machine-mounted position, this work sought to determine whether a relationship exists between the concentrations at the fixed-point position and the dust levels experienced at the remote operator position and whether this relationship could be applied on an industry-wide basis. To achieve this objective, gravimetric samplers were used to collect respirable dust data on continuous miner sections. These samplers were placed at a fixed position at the cab location of the continuous mining machine and on or near the remote miner operator during the 1 shift/day sampling periods. Dust sampling took place at mines with a variety of geographic locations and in-mine conditions. The dust concentration data collected at each site and for each sampling period were reduced to ratios of fixed-point to operator concentration. The ratios were calculated to determine similarities, differences, and/or variability at the two positions. The data show that dust concentrations at the remote operator position were always lower than dust concentrations measured at the fixed-point continuous miner location. However, the ratios of fixed-point to remote operator dust levels showed little consistency from shift to shift or from operation to operation. The fact that these ratios are so variable may introduce some uncertainty into attempting to correlate dust exposures of the remote operator to dust levels measured on the continuous mining machine.
NASA Astrophysics Data System (ADS)
Parkinson, S.; Morehead, M. D.; Conner, J. T.; Frye, C.
2012-12-01
Increasing demand for water and electricity, increasing variability in weather and climate, and stricter requirements for riverine ecosystem health have placed ever more stringent demands on hydropower operations. Dam operators are being impacted by these constraints and are looking for methods to meet these requirements while retaining the benefits hydropower offers. Idaho Power owns and operates 17 hydroelectric plants in Idaho and Oregon which have both Federal and State compliance requirements. Idaho Power has started building Decision Support Systems (DSS) to aid the hydroelectric plant operators in maximizing hydropower operational efficiency, while meeting regulatory compliance constraints. Regulatory constraints on dam operations include: minimum in-stream flows, maximum ramp rate of river stage, reservoir volumes, and reservoir ramp rate for draft and fill. From the hydroelectric standpoint, the desire is to vary the plant discharge (ramping) such that generation matches electricity demand (load-following), but ramping is limited by the regulatory requirements. Idaho Power desires DSS that integrate real-time and historic data, simulate the river's behavior from the hydroelectric plants downstream to the compliance measurement point, and present the information in an easily understandable display that allows the operators to make informed decisions. Creating DSS like these poses a number of scientific and technical challenges. Real-time data are inherently noisy, and automated data cleaning routines are required to filter the data. The DSS must inform the operators when incoming data are outside of predefined bounds. Complex river morphologies can make the timing and shape of a discharge change traveling downstream from a power plant nearly impossible to represent with a predefined lookup table. These complexities require very fast hydrodynamic models of the river system that simulate river characteristics (e.g., stage, discharge) at the downstream compliance point.
Applications of operational calculus: trigonometric interpolating equation for the eight-point cube
Silver, Gary L
2009-01-01
A general method for obtaining a trigonometric-type interpolating equation for the eight-point cubical array is illustrated. It can often be used to reproduce a ninth datum at an arbitrary point near the center of the array by adjusting a variable exponent. The new method complements operational polynomial and exponential methods for the same design.
Operational optimization and real-time control of fuel-cell systems
NASA Astrophysics Data System (ADS)
Hasikos, J.; Sarimveis, H.; Zervas, P. L.; Markatos, N. C.
Fuel cells are a rapidly evolving technology with applications in many industries, including transportation and both portable and stationary power generation. The viability, efficiency and robustness of fuel-cell systems depend strongly on optimization and control of their operation. This paper presents the development of an integrated optimization and control tool for Proton Exchange Membrane Fuel-Cell (PEMFC) systems. Using a detailed simulation model, a database is generated first, which contains steady-state values of the manipulated and controlled variables over the full operational range of the fuel-cell system. In a second step, the database is utilized for producing Radial Basis Function (RBF) neural network "meta-models". In the third step, a Non-Linear Programming (NLP) problem is formulated, which takes into account the constraints and limitations of the system and minimizes the consumption of hydrogen for a given value of power demand. Based on the formulation and solution of the NLP problem, a look-up table is developed, containing the optimal values of the system variables for any possible value of power demand. In the last step, a Model Predictive Control (MPC) methodology is designed for the optimal control of the system response to successive set-point changes of power demand. The efficiency of the produced MPC system is illustrated through a number of simulations, which show that a successful dynamic closed-loop behaviour can be achieved, while at the same time the consumption of hydrogen is minimized.
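Steps two and three, an RBF meta-model queried to build a look-up table of optimal operating points, can be sketched as follows. A made-up quadratic hydrogen-consumption map stands in for the detailed PEMFC simulation model, and the single operating variable `u` is a hypothetical placeholder:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic steady-state map: hydrogen use vs (power demand P, operating variable u).
# The quadratic bowl is a made-up stand-in for the detailed PEMFC simulation model;
# its minimum over u sits at u = 0.01 * P by construction.
def h2_rate(P, u):
    return 1.0 + 0.02 * P + 0.5 * (u - 0.01 * P) ** 2

P_grid, u_grid = np.meshgrid(np.linspace(10, 100, 19), np.linspace(0.0, 1.5, 31))
X = np.column_stack([P_grid.ravel(), u_grid.ravel()])
y = h2_rate(X[:, 0], X[:, 1])

meta = RBFInterpolator(X, y, kernel="thin_plate_spline")  # RBF meta-model

# Build a look-up table: for each power demand, the u minimizing predicted H2 use.
u_cand = np.linspace(0.0, 1.5, 301)
lookup = {}
for P in (20.0, 50.0, 80.0):
    pred = meta(np.column_stack([np.full_like(u_cand, P), u_cand]))
    lookup[P] = u_cand[np.argmin(pred)]
print(lookup)  # optima should sit near u = 0.01 * P
```

An MPC layer would then track these table entries as set points when the power demand changes.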
The influence of transducer operating point on distortion generation in the cochlea
NASA Astrophysics Data System (ADS)
Sirjani, Davud B.; Salt, Alec N.; Gill, Ruth M.; Hale, Shane A.
2004-03-01
Distortion generated by the cochlea can provide a valuable indicator of its functional state. In the present study, the dependence of distortion on the operating point of the cochlear transducer and its relevance to endolymph volume disturbances has been investigated. Calculations have suggested that as the operating point moves away from zero, second harmonic distortion would increase. Cochlear microphonic waveforms were analyzed to derive the cochlear transducer operating point and to quantify harmonic distortions. Changes in operating point and distortion were measured during endolymph manipulations that included 200-Hz tone exposures at 115-dB SPL, injections of artificial endolymph into scala media at 80, 200, or 400 nl/min, and treatment with furosemide given intravenously or locally into the cochlea. Results were compared with other functional changes that included action potential thresholds at 2.8 or 8 kHz, summating potential, endocochlear potential, and the 2 f1-f2 and f2-f1 acoustic emissions. The results demonstrated that volume disturbances caused changes in the operating point that resulted in predictable changes in distortion. Understanding the factors influencing operating point is important in the interpretation of distortion measurements and may lead to tests that can detect abnormal endolymph volume states.
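The central observation, that displacing the operating point of a saturating transducer away from zero raises second-harmonic distortion, can be reproduced with a toy sigmoidal transducer and an FFT. The logistic sigmoid here is an illustrative stand-in for the cochlear transducer function:

```python
import numpy as np

def harmonics(op, n=4096):
    """Fundamental and 2nd-harmonic magnitudes of a sigmoidal transducer
    driven by a pure tone, with operating point offset `op`."""
    t = np.arange(n) / n
    x = np.sin(2 * np.pi * 8 * t)             # exactly 8 stimulus cycles
    out = 1.0 / (1.0 + np.exp(-(x + op)))     # op shifts the operating point
    spec = np.abs(np.fft.rfft(out)) / n
    return spec[8], spec[16]                  # bins of f and 2f

f1_0, f2_0 = harmonics(0.0)   # symmetric operating point: even harmonics vanish
f1_s, f2_s = harmonics(0.5)   # displaced operating point: 2nd harmonic appears
print(f2_0 / f1_0, f2_s / f1_s)
```

At `op = 0` the transfer curve is odd-symmetric about its midpoint, so the second harmonic is numerically zero; shifting the operating point breaks that symmetry and the second harmonic grows, which is the effect the endolymph manipulations exploit.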
Implementation of a near-optimal global set point control method in a DDC controller
Cascia, M.A.
2000-07-01
A near-optimal global set point control method that can be implemented in an energy management system's (EMS) DDC controller is described in this paper. Mathematical models are presented for the power consumption of electric chillers, hot water boilers, chilled and hot water pumps, and air handler fans, which allow the calculation of near-optimal chilled water, hot water, and coil discharge air set points to minimize power consumption, based on data collected by the EMS. Also optimized are the differential and static pressure set points for the variable speed pumps and fans. A pilot test of this control methodology was implemented for a cooling plant at a pharmaceutical manufacturing facility near Dallas, Texas. Data collected at this site showed good agreement between the actual power consumed by the chillers, chilled water pumps, and air handlers and that predicted by the models. An approximate model was developed to calculate real-time power savings in the DDC controller. A third-party energy accounting program was used to track savings due to the near-optimal control, and results show a monthly KWH reduction ranging from 3% to 14%.
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyers. All these parameters depend upon the understanding of the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
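A minimal particle swarm optimizer of the kind proposed can be sketched as follows; the quadratic "profit" objective is an invented stand-in for the TSVMBSC channel-profit model:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimizer: inertia + cognitive + social pulls."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)           # keep particles inside the bounds
        val = np.array([f(p) for p in x])
        better = val < pbest_val             # update personal bests
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, f(gbest)

# Illustrative objective: negative profit with a known optimum at q = (3, 5).
profit = lambda q: -(10 - (q[0] - 3) ** 2 - (q[1] - 5) ** 2)
best_q, best_val = pso(profit, (np.array([0.0, 0.0]), np.array([10.0, 10.0])))
print(best_q)
```

In the paper's setting the decision vector would hold the sales quantities per buyer and the objective the (negated) channel profit under the revenue-sharing contract.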
Friedrich, Tobias; Neumann, Frank; Thyssen, Christian
2015-01-01
Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing [Formula: see text] points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations. PMID:24654679
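For the bi-objective case, the hypervolume that such indicator-based algorithms maximize reduces to a sum of rectangle areas swept from the reference point; a minimal sketch for a minimization front:

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area) dominated by a bi-objective minimization front
    with respect to the reference point `ref`."""
    pts = sorted(points)          # ascending in f1; f2 descends along a front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # strip added by this point
        prev_f2 = f2
    return hv

# Three points on the linear front f1 + f2 = 1, reference point (1, 1).
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(hypervolume_2d(front, (1.0, 1.0)))  # → 0.25
```

Moving the reference point outward changes which point distributions maximize the indicator, which is exactly the dependence the paper investigates.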
NASA Astrophysics Data System (ADS)
Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean
2016-05-01
We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
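Christianson-style reverse accumulation can be illustrated on a scalar fixed-point problem: only the converged state is differentiated, and the adjoint is itself iterated to convergence rather than taping every forward step. The cosine map below is an arbitrary contraction standing in for the nonlinear stress balance:

```python
import math

def f(x, theta):
    return math.cos(theta * x)            # fixed-point map x = f(x, theta)

def solve(theta, tol=1e-14):
    """Plain fixed-point iteration to the solution x*."""
    x = 0.5
    while abs(f(x, theta) - x) > tol:
        x = f(x, theta)
    return x

def adjoint_sensitivity(theta):
    """d(x*)/d(theta) by reverse accumulation at the converged state only."""
    x = solve(theta)
    dfdx = -theta * math.sin(theta * x)   # Jacobian of the map at the solution
    dfdth = -x * math.sin(theta * x)
    # Iterate the adjoint to convergence instead of taping every forward
    # iteration: w -> 1 / (1 - dfdx), so memory stays O(1) in iteration count.
    w = 0.0
    for _ in range(200):
        w = w * dfdx + 1.0
    return w * dfdth

print(adjoint_sensitivity(1.0))
```

This matches the implicit-function result dx*/dθ = (∂f/∂θ) / (1 − ∂f/∂x), which is why the adjoint needs only the converged state, not the history of forward iterates.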
Data Mining Method for Battery Operation Optimization in Photovoltaics
NASA Astrophysics Data System (ADS)
Sato, Katsunori; Wakao, Shinji
Recently, photovoltaic (PV) systems have attracted attention because of serious environmental and energy problems. In the near future, PV systems intensively connected to the grid will bring about difficulties in power system operation. As a countermeasure, this paper deals with the introduction of a storage battery for making the unstable PV power controllable. In this regard, when we introduce a storage battery into a PV system, we have to weigh its advantages and disadvantages. In order to evaluate the system from various perspectives, we have carried out multi-objective optimization of battery operation in PV system design. However, as the number of objective functions increases, it becomes difficult to appropriately interpret the correlations among objective functions and design variables. With this background, in this paper, a novel computational method is proposed for data mining of PV system design, in which we attempt to effectively extract the design information of the battery system with the use of a Self-Organizing Map (SOM).
NASA Astrophysics Data System (ADS)
Hui, Zhenyang; Hu, Youjian; Jin, Shuanggen; Yevenyo, Yao Ziggah
2016-08-01
Road information acquisition is an important part of city informatization construction. Airborne LiDAR provides a new means of acquiring road information. However, the existing road extraction methods using LiDAR point clouds always decide the road intensity threshold based on experience and so cannot obtain the optimal threshold for extracting a road point cloud. Moreover, these existing methods are deficient in removing the interference of narrow roads and several attached areas (e.g., parking lots and bare ground) with main-road extraction, thereby imparting low completeness and correctness to the city road network extraction result. Aiming at resolving these key technical issues, this paper proposes a novel method to extract road centerlines from airborne LiDAR point clouds. The proposed approach is mainly composed of three key algorithms, namely, Skewness balancing, Rotating neighborhood, and Hierarchical fusion and optimization (SRH). The skewness balancing algorithm, originally used for filtering, was adopted as a new method for obtaining an optimal intensity threshold such that a "pure" road point cloud can be obtained. The rotating neighborhood algorithm, on the other hand, was developed to remove narrow roads (corridors leading to parking lots or sidewalks), which are not the main roads to be extracted. The proposed hierarchical fusion and optimization algorithm keeps the road centerlines unaffected by certain attached areas and preserves road integrity as much as possible. The proposed method was tested using the Vaihingen dataset. The results demonstrated that the proposed method can effectively extract road centerlines in a complex urban environment with 91.4% correctness and 80.4% completeness.
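The skewness-balancing idea of peeling off high intensities until the remaining sample is no longer right-skewed can be sketched as follows. This is a simplified illustration on synthetic intensities, not the published SRH implementation:

```python
import numpy as np
from scipy.stats import skew

def skewness_balancing_threshold(intensity):
    """Remove the highest intensities one by one until the sample skewness
    drops to <= 0; the last surviving maximum serves as the threshold.
    (A simplified sketch of skewness balancing, not the SRH code.)"""
    vals = np.sort(intensity)
    while vals.size > 3 and skew(vals) > 0:
        vals = vals[:-1]                 # drop the current maximum
    return vals[-1]

rng = np.random.default_rng(1)
# Bulk of returns is roughly symmetric; a high-intensity tail plays the
# role of the class to be separated (purely synthetic values).
bulk = rng.normal(50, 5, 5000)
tail = rng.normal(90, 3, 500)
thr = skewness_balancing_threshold(np.concatenate([bulk, tail]))
print(thr)
```

The returned threshold lands between the symmetric bulk and the high-intensity tail, which is the property the filtering step relies on.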
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.
2010-10-18
Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy
NASA Astrophysics Data System (ADS)
Magee, T. M.; Clement, M. A.; Zagona, E. A.
2012-12-01
Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example target environmental flow levels may not be satisfied if it would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before optimizing power generation. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, it also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent with increased wind penetration levels requires an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location. Wind factors include wind penetration level
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; Luskin, Mitchell
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Operation of internal transmitter control... Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter control systems through licensed fixed control points. An internal transmitter control system may be...
An invariance principle for maintaining the operating point of a neuron.
Elliott, Terry; Kuang, Xutao; Shadbolt, Nigel R; Zauner, Klaus-Peter
2008-01-01
Sensory neurons adapt to changes in the natural statistics of their environments through processes such as gain control and firing threshold adjustment. It has been argued that neurons early in sensory pathways adapt according to information-theoretic criteria, perhaps maximising their coding efficiency or information rate. Here, we draw a distinction between how a neuron's preferred operating point is determined and how its preferred operating point is maintained through adaptation. We propose that a neuron's preferred operating point can be characterised by the probability density function (PDF) of its output spike rate, and that adaptation maintains an invariant output PDF, regardless of how this output PDF is initially set. Considering a sigmoidal transfer function for simplicity, we derive simple adaptation rules for a neuron with one sensory input that permit adaptation to the lower-order statistics of the input, independent of how the preferred operating point of the neuron is set. Thus, if the preferred operating point is, in fact, set according to information-theoretic criteria, then these rules nonetheless maintain a neuron at that point. Our approach generalises from the unimodal case to the multimodal case, for a neuron with inputs from distinct sensory channels, and we briefly consider this case too. PMID:18946837
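A one-input version of such an adaptation rule can be sketched as follows: a sigmoidal neuron nudges its threshold so that its mean output stays at a target value even after the input statistics shift. The first-order rule and all constants here are illustrative, not those derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
eta, target_mean = 0.02, 0.5
theta, gain = 0.0, 1.0          # threshold adapts; gain held fixed for simplicity

def mean_output(mu, sigma, theta, gain, n=20000):
    """Monte Carlo estimate of the neuron's mean output for input ~ N(mu, sigma)."""
    x = rng.normal(mu, sigma, n)
    return float(np.mean(1.0 / (1.0 + np.exp(-(gain * x - theta)))))

# Online threshold adaptation: nudge theta so the mean firing rate stays at target.
for step in range(3000):
    mu = 0.0 if step < 1500 else 2.0     # input statistics shift mid-run
    x = rng.normal(mu, 1.0)
    y = 1.0 / (1.0 + np.exp(-(gain * x - theta)))
    theta += eta * (y - target_mean)      # simple first-order rule (illustrative)

print(mean_output(2.0, 1.0, theta, gain))
```

After the shift, the threshold drifts to track the new input mean, so the output statistics (here just the mean) are restored regardless of how the operating point was originally set.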
Optimal feature point selection and automatic initialization in active shape model search.
Lekadir, Karim; Yang, Guang-Zhong
2008-01-01
This paper presents a novel approach for robust and fully automatic segmentation with active shape model search. The proposed method incorporates global geometric constraints during feature point search by using interlandmark conditional probabilities. The A* graph search algorithm is adapted to identify in the image the optimal set of valid feature points. The technique is extended to enable reliable and fast automatic initialization of the ASM search. Validation with 2-D and 3-D MR segmentation of the left ventricular epicardial border demonstrates significant improvement in robustness and overall accuracy, while eliminating the need for manual initialization. PMID:18979776
Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.
López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio
2016-01-01
Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550
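The symmetry point itself is the cutoff at which sensitivity equals specificity; an empirical sketch on synthetic marker data (the paper's confidence-interval constructions are not reproduced here):

```python
import numpy as np

def symmetry_point(healthy, diseased):
    """Empirical symmetry point: the cutoff where sensitivity is closest to
    specificity (diseased assumed to have higher marker values)."""
    cands = np.sort(np.concatenate([healthy, diseased]))
    def gap(c):
        sens = np.mean(diseased >= c)   # true positive rate at cutoff c
        spec = np.mean(healthy < c)     # true negative rate at cutoff c
        return abs(sens - spec)
    return min(cands, key=gap)

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 4000)    # synthetic marker values
diseased = rng.normal(2.0, 1.0, 4000)
c = symmetry_point(healthy, diseased)
print(c)  # for these equal-variance normals the symmetry point is near 1.0
```

Maximizing both correct classification rates simultaneously is what distinguishes this criterion from, e.g., the Youden index, which maximizes their sum.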
Performing a scatterv operation on a hierarchical tree network optimized for collective operations
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2013-10-22
Performing a scatterv operation on a hierarchical tree network optimized for collective operations including receiving, by the scatterv module installed on the node, from a nearest neighbor parent above the node a chunk of data having at least a portion of data for the node; maintaining, by the scatterv module installed on the node, the portion of the data for the node; determining, by the scatterv module installed on the node, whether any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child; and sending, by the scatterv module installed on the node, those portions of data to the nearest neighbor child if any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child.
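The forwarding logic of the claim can be sketched in miniature: each node keeps its own portion and sends each child only the portions destined for that child's subtree. This is a toy in-process model with a hypothetical tree, not an actual collective-operations implementation:

```python
# Toy model of a tree-based scatterv. `tree` maps each node to its nearest
# neighbor children; node 0 is the root that initially holds all the data.
tree = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}

def subtree(node):
    """All nodes at or below `node`."""
    nodes = [node]
    for child in tree[node]:
        nodes += subtree(child)
    return nodes

received = {}

def scatterv(node, chunk):
    received[node] = chunk[node]          # maintain this node's own portion
    for child in tree[node]:
        # Send the child only the portions for nodes in its subtree.
        scatterv(child, {n: chunk[n] for n in subtree(child)})

data = {n: f"portion-{n}" for n in tree}  # root holds every portion initially
scatterv(0, data)
print(received)
```

Each message shrinks as it descends, mirroring the claim's "chunk of data having at least a portion of data for the node" phrasing.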
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte-Carlo-techniques. The warping function is a distance weighted exponential function with a landmark- specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
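The warping function, a distance-weighted exponential sum of landmark displacement vectors with landmark-specific weighting factors, can be sketched as follows; the factors here are fixed by hand rather than optimized by an evolution strategy:

```python
import numpy as np

def warp(points, landmarks, displacements, s):
    """Distance-weighted warp: each point moves by an exponentially weighted
    average of the landmark displacement vectors. `s` holds the
    landmark-specific weighting factors (hand-picked here; the paper tunes
    them with a computational evolution strategy)."""
    out = []
    for p in points:
        d = np.linalg.norm(landmarks - p, axis=1)   # distance to each landmark
        w = np.exp(-d / s)                          # exponential distance weights
        out.append(p + (w[:, None] * displacements).sum(0) / w.sum())
    return np.array(out)

landmarks = np.array([[0.0, 0.0], [10.0, 0.0]])
displacements = np.array([[1.0, 0.0], [0.0, 1.0]])  # how each landmark moves
s = np.array([2.0, 2.0])

moved = warp(np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 0.0]]), landmarks, displacements, s)
print(moved)
```

Near a landmark the warp approximately reproduces that landmark's displacement, while at the midpoint the two displacement vectors are blended equally.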
Sensitivity analysis and optimization of nodal point placement for vibration reduction
NASA Technical Reports Server (NTRS)
Pritchard, J. I.; Adelman, H. M.; Haftka, R. T.
1987-01-01
A method is developed for sensitivity analysis and optimization of nodal point locations in connection with vibration reduction. A straightforward derivation of the expression for the derivative of nodal locations is given, and the role of the derivative in assessing design trends is demonstrated. An optimization process is developed which uses added lumped masses on the structure as design variables to move the node to a preselected location - for example, where low response amplitude is required or to a point which makes the mode shape nearly orthogonal to the force distribution, thereby minimizing the generalized force. The optimization formulation leads to values for added masses that adjust a nodal location while minimizing the total amount of added mass required to do so. As an example, the node of the second mode of a cantilever box beam is relocated to coincide with the centroid of a prescribed force distribution, thereby reducing the generalized force substantially without adding excessive mass. A comparison with an optimization formulation that directly minimizes the generalized force indicates that nodal placement gives essentially a minimum generalized force when the node is appropriately placed.
NASA Astrophysics Data System (ADS)
Guo, Jie; Zhu, Dalin; Tang, Shengjing
2012-11-01
The initial trajectory design of a missile is an important part of the overall design, but it is often a tedious calculation and analysis process because of the high-dimensional nonlinear differential equations and the traditional statistical analysis methods involved. To improve on the traditional design methods, a robust optimization concept and method are introduced in this paper to deal with the determination of the initial control point. First, a Gaussian radial basis function network is adopted to establish an approximate model of the missile's disturbance motion, based on an analysis of the disturbance motion and disturbance factors. Then, a direct analytical relationship between the disturbance input and the statistical results is deduced on the basis of this network model. Subsequently, a robust optimization model is established for the initial control point design problem, and a niche Pareto genetic algorithm for multi-objective optimization is adopted to solve it. An integrated design example is given at the end, and the simulation results verify the validity of the method.
Optimizing Wellfield Operation in a Variable Power Price Regime.
Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus
2016-01-01
Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets, and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated the EFP of pumped water as a function of wellfield pumping rate (the EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a Stochastic Dynamic Programming (SDP) framework to minimize the total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period, based on a time series of observed prices on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between the scenario and a constant-rate pumping benchmark. Minor savings of up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings of up to 40%. Key factors determining the potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate, and the capacity of available storage facilities. PMID:25964991
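The storage-constrained cost minimization lends itself to a compact dynamic program. The sketch below is a deliberately simplified, deterministic stand-in for the paper's SDP (known hourly prices and demand, storage and pumping discretized to integer units, constant EFP); the name `plan_pumping` and all data are illustrative assumptions.

```python
def plan_pumping(prices, demand, s_max, q_max, efp=1.0):
    """Backward dynamic program over discretized storage levels.
    cost[s] is the minimal cost-to-go from the current hour with storage s;
    policy[t][s] is the cost-minimizing pumping rate at hour t, storage s."""
    T = len(prices)
    INF = float("inf")
    cost = [0.0] * (s_max + 1)               # terminal condition: no future cost
    policy = [[0] * (s_max + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        new = [INF] * (s_max + 1)
        for s in range(s_max + 1):
            for q in range(q_max + 1):
                s2 = s + q - demand[t]       # storage balance; demand must be met
                if 0 <= s2 <= s_max and cost[s2] < INF:
                    c = prices[t] * efp * q + cost[s2]
                    if c < new[s]:
                        new[s], policy[t][s] = c, q
        cost = new
    return cost, policy
```

With prices [1, 10] and unit demand each hour, the program pumps everything in the cheap hour and coasts on storage through the expensive one, which is exactly the load-shifting behaviour the paper monetizes.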
Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology
NASA Astrophysics Data System (ADS)
Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu
2013-08-01
From the perspective of suppressing global warming and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are attracting attention in distribution systems. Additionally, all-electric apartment houses and residences such as the DC smart house have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing fluctuations of the power system become problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for optimal operation of a smart grid that minimizes the interconnection-point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as batteries and heat pumps. By minimizing the interconnection-point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, heat pump, battery, solar collector, and load. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.
A perfect match condition for point-set matching problems using the optimal mass transport approach
Chen, Pengwen; Lin, Ching-Long; Chern, I-Liang
2013-01-01
We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.
2012-10-23
Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
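The designation step above amounts to an edge-colouring constraint: no compute node may see the same class routing identifier on two of its links. A greedy sketch of that constraint (the helper name `assign_class_ids` is hypothetical; this is not the patented implementation) might look like:

```python
def assign_class_ids(links):
    """Greedy edge colouring: give each link the smallest routing identifier
    not already used on a link incident to either endpoint. This enforces the
    property that no node is connected to two adjacent nodes with links
    carrying the same class routing identifier."""
    used = {}   # node -> set of identifiers on its incident links
    ids = {}    # link -> assigned identifier
    for a, b in links:
        taken = used.setdefault(a, set()) | used.setdefault(b, set())
        cid = 0
        while cid in taken:
            cid += 1
        ids[(a, b)] = cid
        used[a].add(cid)
        used[b].add(cid)
    return ids
```

Greedy assignment of this kind needs at most 2·maxdegree − 1 identifiers, so a small fixed pool of class routing identifiers suffices on bounded-degree networks like a global combining tree.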
NASA Astrophysics Data System (ADS)
Kiran, B. S.; Singh, Satyendra; Negi, Kuldeep
The GSAT-12 spacecraft provides communication services from the INSAT/GSAT system in the Indian region. The spacecraft carries 12 extended C-band transponders. GSAT-12 was launched by ISRO's PSLV from Sriharikota into a sub-geosynchronous transfer orbit (sub-GTO) of 284 x 21000 km with inclination 18 deg. This mission successfully accomplished combined optimization of launch vehicle and satellite capabilities to maximize the operational life of the spacecraft. This paper describes the mission analysis carried out for GSAT-12, comprising the launch window, an orbital events study, and orbit-raising maneuver strategies under various mission operational constraints. GSAT-12 is equipped with two earth sensors (ES), three gyroscopes, and a digital sun sensor. The launch window was generated considering the mission requirement of a minimum of 45 minutes of ES data for calibration of the gyros in a roll-sun-pointing orientation in the transfer orbit. Since the transfer-orbit period was a rather short 6.1 hr, the required pitch biases were worked out to meet the gyro-calibration requirement. A 440 N liquid apogee motor (LAM) is used for orbit raising. The objective of the maneuver strategy is to achieve the desired drift orbit while satisfying mission constraints and minimizing propellant expenditure. For a sub-GTO, the optimal strategy is to first perform an in-plane maneuver at perigee to raise the apogee to the synchronous level and then perform combined maneuvers at the synchronous apogee to achieve the desired drift orbit. The perigee burn opportunities were examined considering the ground station visibility required for monitoring the burns. Two maneuver strategies were proposed: an optimal five-burn strategy with two perigee burns centered around perigee #5 and perigee #8 with partial ground station visibility and three apogee burns with dual-station visibility; and a near-optimal five-burn strategy with two off-perigee burns at perigee #5 and perigee #8 with single ground station visibility and three apogee burns with dual-station visibility.
Phase-operation for conduction electron by atomic-scale scattering via single point-defect
Nagaoka, Katsumi; Yaginuma, Shin; Nakayama, Tomonobu
2014-03-17
In order to propose a phase-operation technique for conduction electrons in solids, we have investigated, using scanning tunneling microscopy, an atomic-scale electron-scattering phenomenon on a 2D subband state formed in Si. In particular, we have examined a single surface point defect around which a standing-wave pattern is created, and the dispersion of the scattering phase shifts induced by the defect potential has been measured as a function of electron energy. The behavior is well explained with appropriate scattering parameters: the potential height and radius. This result experimentally proves that atomic-scale potential scattering via the point defect enables phase operation for conduction electrons.
Li/CFx Cells Optimized for Low-Temperature Operation
NASA Technical Reports Server (NTRS)
Smart, Marshall C.; Whitacre, Jay F.; Bugga, Ratnakumar V.; Prakash, G. K. Surya; Bhalla, Pooja; Smith, Kiah
2009-01-01
Some developments reported in prior NASA Tech Briefs articles on primary electrochemical power cells containing lithium anodes and fluorinated carbonaceous (CFx) cathodes have been combined to yield a product line of cells optimized for relatively high-current operation at low temperatures, at which commercial lithium-based cells become useless. These developments have involved modifications of the chemistry of commercial Li/CFx cells and batteries, which are not suitable for high-current and low-temperature applications because they are current-limited and their maximum discharge rates decrease with decreasing temperature. One of the two developments that constitute the present combination is, itself, a combination of developments: (1) the use of sub-fluorinated carbonaceous (CFx wherein x<1) cathode material, (2) making the cathodes thinner than in most commercial units, and (3) using non-aqueous electrolytes formulated especially to enhance low-temperature performance. This combination of developments was described in more detail in "High-Energy-Density, Low-Temperature Li/CFx Primary Cells" (NPO-43219), NASA Tech Briefs, Vol. 31, No. 7 (July 2007), page 43. The other development included in the present combination is the use of an anion receptor as an electrolyte additive, as described in the immediately preceding article, "Additive for Low-Temperature Operation of Li-(CF)n Cells" (NPO-43579). A typical cell according to the present combination of developments contains an anion-receptor additive solvated in an electrolyte that comprises LiBF4 dissolved at a concentration of 0.5 M in a mixture of four volume parts of 1,2-dimethoxyethane with one volume part of propylene carbonate. The proportion, x, of fluorine in the cathode in such a cell lies between 0.5 and 0.9. The best of such cells fabricated to date have exhibited discharge capacities as large as 0.6 A·h per gram at a temperature of 50 C when discharged at a rate of C/5 (where C is the magnitude of the
Johnson, David K; Lewis, Matthew J; Pavlich, Jane C; Wright, Alan D; Johnson, Kathryn E; Pace, Andrew M
2013-02-01
The goal of this Department of Energy (DOE) project is to increase wind turbine efficiency and reliability with the use of a Light Detection and Ranging (LIDAR) system. The LIDAR provides wind speed and direction data that can be used to help mitigate the fatigue stress on the turbine blades and internal components caused by wind gusts, sub-optimal pointing and reactionary speed or RPM changes. This effort will have a significant impact on the operation and maintenance costs of turbines across the industry. During the course of the project, Michigan Aerospace Corporation (MAC) modified and tested a prototype direct detection wind LIDAR instrument; the resulting LIDAR design considered all aspects of wind turbine LIDAR operation from mounting, assembly, and environmental operating conditions to laser safety. Additionally, in co-operation with our partners, the National Renewable Energy Lab and the Colorado School of Mines, progress was made in LIDAR performance modeling as well as LIDAR feed forward control system modeling and simulation. The results of this investigation showed that using LIDAR measurements to change between baseline and extreme event controllers in a switching architecture can reduce damage equivalent loads on blades and tower, and produce higher mean power output due to fewer overspeed events. This DOE project has led to continued venture capital investment and engagement with leading turbine OEMs, wind farm developers, and wind farm owner/operators.
NASA Astrophysics Data System (ADS)
Stuparu, A.; Susan-Resiga, R.; Anton, L. E.; Muntean, S.
2010-08-01
The paper presents a new method for the analysis of the cavitational behaviour of hydraulic turbomachines. This new method allows determining the coefficient of cavitation inception and the cavitation sensitivity of turbomachines. We apply this method to study the cavitational behaviour of a large storage pump. By plotting the vapour volume versus the cavitation coefficient in semi-logarithmic coordinates, we show that all numerical data collapse in an exponential manner. This storage pump is located in a power plant, and operating without developed cavitation is vital. We investigate the behaviour of the pump from the cavitational point of view while the pump is operating at variable discharge. The distribution of the vapour volume on the impeller blade is presented for all four operating points, showing how the volume of vapour evolves from one operating point to another. In order to study the influence of cavitation on the pump, the evolution of the pumping head against the cavitation coefficient is presented, showing how the pumping head drops as the cavitation coefficient decreases. From the analysis of the numerical simulation data, it follows that cavitation is present at all the investigated operating points. By analysing the slope of the curve describing the evolution of the vapour volume against the cavitation coefficient, we determine the cavitation sensitivity of the pump for each operating point. It is shown that the cavitation sensitivity of the investigated storage pump increases as the flow rate decreases.
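Reading a sensitivity off the semi-logarithmic plot reduces to a log-linear least-squares slope of ln(V) against the cavitation coefficient. A minimal sketch on synthetic data (the function name `semilog_slope` and the data are assumptions, not the paper's values):

```python
import math

def semilog_slope(sigma, vapour_volume):
    """Least-squares slope of ln(V) versus the cavitation coefficient sigma.
    If the data collapse exponentially, V ~ exp(m * sigma), this recovers m,
    whose magnitude plays the role of the cavitation sensitivity."""
    ys = [math.log(v) for v in vapour_volume]
    n = len(sigma)
    mx = sum(sigma) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(sigma, ys))
    den = sum((x - mx) ** 2 for x in sigma)
    return num / den
```

On data generated from V = exp(2 − 5σ) the fitted slope is −5, so steeper (more negative) slopes flag operating points where vapour volume grows fastest as the cavitation coefficient drops.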
Building Restoration Operations Optimization Model Beta Version 1.0
Energy Science and Technology Software Center (ESTSC)
2007-05-31
The Building Restoration Operations Optimization Model (BROOM), developed by Sandia National Laboratories, is a software product designed to aid in the restoration of large facilities contaminated by a biological material. BROOM's integrated data collection, data management, and visualization software improves the efficiency of cleanup operations, minimizes facility downtime, and provides a transparent basis for reopening the facility. Secure remote access to building floor plans: Floor plan drawings and knowledge of the HVAC system are critical to the design and implementation of effective sampling plans. In large facilities, access to these data may be complicated by their sheer abundance and the disorganized state they are often stored in. BROOM avoids potentially costly delays by providing a means of organizing and storing mechanical and floor plan drawings in a secure remote database that is easily accessed. Sampling design tools: BROOM provides an array of tools to answer the question of where to sample and how many samples to take. In addition to simple judgmental and random sampling plans, the software includes two sophisticated methods of adaptively developing a sampling strategy. Both tools strive to choose sampling locations that best satisfy a specified objective (e.g., minimizing kriging variance) but use numerically different strategies to do so. Surface samples are collected early in the restoration process to characterize the extent of contamination and then again later to verify that the facility is safe to reenter. BROOM supports sample collection using a ruggedized PDA equipped with a barcode scanner and laser range finder. The PDA displays building floor drawings, sampling plans, and electronic forms for data entry. Barcodes are placed on sample containers for the purpose of tracking the specimen and linking acquisition data (e.g., location, surface type, texture) to laboratory results. Sample location is determined by activating the integrated laser
Polarizable six-point water models from computational and empirical optimization.
Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul
2014-02-13
Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to
77 FR 40091 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3 AGENCY: Nuclear... statement for license renewal of nuclear plants; availability. SUMMARY: The U.S. Nuclear...
SOLCUS: Update On Point-of-Care Ultrasound In Special Operations Medicine.
Hampton, Katarzyna Kasia; Vasios, William N; Loos, Paul E
2016-01-01
Point-of-care ultrasonography has been recognized as a relevant and versatile tool in Special Operations Forces (SOF) medicine. The Special Operator Level Clinical Ultrasound (SOLCUS) program has been developed specifically for SOF Medics. A number of challenges, including skill sustainment, high-volume training, and quality assurance, have been identified. Potential solutions, including changes to content delivery methods and application of tele-ultrasound, are described in this article. Given the shift in operational context toward extended care in austere environments, a curriculum adjustment for the SOLCUS program is also proposed. PMID:27045495
Existence and data dependence of fixed points for multivalued operators on gauge spaces
NASA Astrophysics Data System (ADS)
Espínola, Rafael; Petrusel, Adrian
2005-09-01
The purpose of this note is to present some fixed point and data dependence theorems in complete gauge spaces and in hyperconvex metric spaces for the so-called Meir-Keeler multivalued operators and admissible multivalued α-contractions. Our results extend and generalize several theorems of Espínola and Kirk [R. Espínola, W.A. Kirk, Set-valued contractions and fixed points, Nonlinear Anal. 54 (2003) 485-494] and Rus, Petrusel, and Sîntamarian [I.A. Rus, A. Petrusel, A. Sîntamarian, Data dependence of the fixed point set of some multivalued weakly Picard operators, Nonlinear Anal. 52 (2003) 1947-1959].
Approximation of functions by asymmetric two-point hermite polynomials and its optimization
NASA Astrophysics Data System (ADS)
Shustov, V. V.
2015-12-01
A function is approximated by two-point Hermite interpolating polynomials with an asymmetric orders-of-derivatives distribution at the endpoints of the interval. The local error estimate is examined theoretically and numerically. As a result, the position of the maximum of the error estimate is shown to depend on the ratio of the numbers of conditions imposed on the function and its derivatives at the endpoints of the interval. The shape of a universal curve representing a reduced error estimate is found. Given the sum of the orders of derivatives at the endpoints of the interval, the orders-of-derivatives distribution is optimized so as to minimize the approximation error. A sufficient condition for the convergence of a sequence of general two-point Hermite polynomials to a given function is given.
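An asymmetric two-point Hermite polynomial can be built with Newton divided differences on repeated nodes. The sketch below (function name illustrative; distinct endpoints `a != b` assumed) prescribes, for example, three conditions at the left endpoint and one at the right, exactly the kind of asymmetric distribution the abstract studies:

```python
from math import factorial

def hermite_two_point(a, b, left, right):
    """Generalized Hermite interpolation on two distinct nodes a != b.
    left[k] prescribes the k-th derivative at a (left[0] is the value),
    and likewise right[k] at b. Uses Newton divided differences with
    repeated nodes: f[x_i..x_{i+k}] = f^(k)(x) / k! when all nodes coincide."""
    xs = [a] * len(left) + [b] * len(right)
    vals = {a: left, b: right}
    n = len(xs)
    dd = [[0.0] * n for _ in range(n)]
    for i in range(n):
        dd[i][0] = vals[xs[i]][0]
    for k in range(1, n):
        for i in range(n - k):
            if xs[i] == xs[i + k]:
                dd[i][k] = vals[xs[i]][k] / factorial(k)
            else:
                dd[i][k] = (dd[i + 1][k - 1] - dd[i][k - 1]) / (xs[i + k] - xs[i])

    def p(x):
        # Horner evaluation of the Newton form with coefficients dd[0][k]
        acc = dd[0][n - 1]
        for i in range(n - 2, -1, -1):
            acc = acc * (x - xs[i]) + dd[0][i]
        return acc

    return p
```

With f(0) = f'(0) = f''(0) = 0 at the left endpoint and f(1) = 1 at the right (four conditions, so a cubic), the construction reproduces x³ exactly.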
Optimization of a catchment-scale coupled surface-subsurface hydrological model using pilot points
NASA Astrophysics Data System (ADS)
Danapour, Mehrdis; Stisen, Simon; Lajer Højberg, Anker
2016-04-01
Transient coupled surface-subsurface models are usually complex and contain a large amount of spatio-temporal information. In the traditional calibration approach, model parameters are adjusted against only a few spatially aggregated observations of discharge or individual point observations of groundwater head. However, this approach does not enable an assessment of spatially explicit predictive model capabilities at the intermediate scale relevant for many applications. The overall objective of this project is to develop a new model calibration and evaluation framework by combining distributed model parameterization and regularization with new types of objective functions focusing on optimizing spatial patterns rather than individual points or catchment-scale features. Inclusion of detailed observed spatial patterns of hydraulic head gradients, or relevant information obtained from remote sensing data, in the calibration process could allow for a better representation of the spatial variability of hydraulic properties. Pilot points, as an alternative to classical parameterization approaches, introduce great flexibility when calibrating heterogeneous systems without neglecting expert knowledge (Doherty, 2003). A highly parameterized optimization of complex distributed hydrological models at catchment scale is challenging due to the computational burden that comes with it. In this study the physically-based coupled surface-subsurface model MIKE SHE is calibrated for the 8,500 km² area of central Jylland (Denmark), which is characterized by heterogeneous geology and considerable groundwater flow across topographical catchment boundaries. The calibration of the distributed conductivity fields is carried out with a pilot point-based approach, implemented using the PEST parameter estimation tool. To reduce the high number of calibration parameters, PEST's advanced singular value decomposition combined with regularization was utilized and a reduction of the model's complexity was
The Hubble Space Telescope fine guidance system operating in the coarse track pointing control mode
NASA Technical Reports Server (NTRS)
Whittlesey, Richard
1993-01-01
The Hubble Space Telescope (HST) Fine Guidance System has set new standards in pointing control capability for earth orbiting spacecraft. Two precision pointing control modes are implemented in the Fine Guidance System; one being a Coarse Track Mode which employs a pseudo-quadrature detector approach and the second being a Fine Mode which uses a two axis interferometer implementation. The Coarse Track Mode was designed to maintain FGS pointing error to within 20 milli-arc seconds (rms) when guiding on a 14.5 Mv star. The Fine Mode was designed to maintain FGS pointing error to less than 3 milli-arc seconds (rms). This paper addresses the HST FGS operating in the Coarse Track Mode. An overview of the implementation, the operation, and both the predicted and observed on orbit performance is presented. The discussion includes a review of the Fine Guidance System hardware which uses two beam steering Star Selector servos, four photon counting photomultiplier tube detectors, as well as a 24 bit microprocessor, which executes the control system firmware. Unanticipated spacecraft operational characteristics are discussed as they impact pointing performance. These include the influence of spherically aberrated star images as well as the mechanical shocks induced in the spacecraft during and following orbital day/night terminator crossings. Computer modeling of the Coarse Track Mode verifies the observed on orbit performance trends in the presence of these optical and mechanical disturbances. It is concluded that the coarse track pointing control function is performing as designed and is providing a robust pointing control capability for the Hubble Space Telescope.
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.; Borden, R. C.
2014-12-01
In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxygen demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, time to begin performance monitoring, groundwater and/or soil contaminant concentrations to trigger reinjection or terminate ISCO, number of monitoring wells or geoprobe locations per treatment zone, number of samples per sampling event and location, and monitoring frequency. Design variables for each treatment zone may be optimized to minimize expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. Monitoring soil concentration changes during ISCO
Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge
NASA Astrophysics Data System (ADS)
Gressin, Adrien; Mallet, Clément; Demantké, Jérôme; David, Nicolas
2013-05-01
Automatic 3D point cloud registration is a central issue in computer vision and remote sensing. One of the most commonly adopted solutions is the well-known Iterative Closest Point (ICP) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming a good a priori alignment is provided. A large body of literature has proposed many variations in order to improve each step of the process (namely selecting, matching, rejecting, weighting and minimizing). The aim of this paper is to demonstrate how knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and the accuracy of each of these steps. First we present the geometric features that form the basis of this work. These low-level attributes describe the neighborhood shape around each 3D point. They allow retrieval of the optimal neighborhood size for analysis at various scales, as well as of the privileged local dimension (linear, planar, or volumetric). Several variations of each step of the ICP process are then proposed and analyzed by introducing these features. The variants are compared with the original algorithm on real datasets in order to identify the most efficient algorithm for the whole process. The method is then successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems. Improvement was noted for two ICP steps, and we conclude that our features may not be relevant for very dissimilar object samplings.
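The standard ICP loop that the paper's variants build on can be sketched compactly. The following is a generic 2-D illustration with brute-force nearest-neighbour matching and a closed-form rigid fit; the point lists, function names, and simplifications are my own, not the authors' implementation:

```python
import math

def icp_2d(src, dst, iters=50):
    """Minimal 2-D Iterative Closest Point: rigidly aligns src onto dst."""
    pts = [list(p) for p in src]
    for _ in range(iters):
        # selecting/matching step: brute-force nearest neighbour in dst
        pairs = [(p, min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in pts]
        # minimizing step: closed-form least-squares rigid transform (Kabsch-style)
        cx = sum(p[0] for p, _ in pairs) / len(pairs)
        cy = sum(p[1] for p, _ in pairs) / len(pairs)
        dx = sum(q[0] for _, q in pairs) / len(pairs)
        dy = sum(q[1] for _, q in pairs) / len(pairs)
        num = sum((p[0]-cx)*(q[1]-dy) - (p[1]-cy)*(q[0]-dx) for p, q in pairs)
        den = sum((p[0]-cx)*(q[0]-dx) + (p[1]-cy)*(q[1]-dy) for p, q in pairs)
        th = math.atan2(num, den)
        c, s = math.cos(th), math.sin(th)
        pts = [[c*(x-cx) - s*(y-cy) + dx, s*(x-cx) + c*(y-cy) + dy]
               for x, y in pts]
    return pts
```

The neighborhood features described in the abstract would plug into the matching and weighting steps (e.g., pairing only points with compatible local dimensionality).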
Hemmateenejad, Bahram; Shamsipur, Mojtaba; Zare-Shahabadi, Vali; Akhond, Morteza
2011-10-17
Classification and regression trees (CART) possess the advantage of being able to handle large data sets and yield readily interpretable models. The conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. Ant colony system (ACS), a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for modeling the melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was performed by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of 4173 structures and their melting points was used (3000 compounds as a training set and 1173 as a validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure. PMID:21907021
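For readers unfamiliar with the baseline that the ACS variant improves on, greedy recursive partitioning on a single descriptor might look as follows. This is an illustrative sketch (one feature, SSE splits, leaf means), not the authors' CART/ACS code:

```python
def build_tree(xs, ys, min_leaf=2):
    """Greedy recursive partitioning: locally optimal splits, not a globally optimal tree."""
    if len(set(xs)) <= 1 or len(xs) <= min_leaf:
        return sum(ys) / len(ys)  # leaf: mean of the training targets
    best = None
    for t in sorted(set(xs))[1:]:  # candidate thresholds between distinct values
        L = [y for x, y in zip(xs, ys) if x < t]
        R = [y for x, y in zip(xs, ys) if x >= t]
        sse = (sum((y - sum(L)/len(L))**2 for y in L)
               + sum((y - sum(R)/len(R))**2 for y in R))
        if best is None or sse < best[0]:
            best = (sse, t)
    t = best[1]
    Lp = [(x, y) for x, y in zip(xs, ys) if x < t]
    Rp = [(x, y) for x, y in zip(xs, ys) if x >= t]
    return (t,
            build_tree([x for x, _ in Lp], [y for _, y in Lp], min_leaf),
            build_tree([x for x, _ in Rp], [y for _, y in Rp], min_leaf))

def predict(node, x):
    while isinstance(node, tuple):          # descend until a leaf (a float) is reached
        node = node[1] if x < node[0] else node[2]
    return node
```

Each split here is only locally optimal, which is exactly the shortcoming the ACS-GA search addresses.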
NASA Astrophysics Data System (ADS)
Saleh, Joseph H.; Hastings, Daniel E.; Newman, Dava J.
2004-03-01
An augmented (diachronic) perspective on system architecture is proposed that complements the traditional (synchronic) views. This paper proposes viewing in a system architecture the flow of service (or utility) that the system will provide over its design lifetime. It suggests that the design lifetime is a fundamental component of system architecture even though one cannot see it or touch it. Consequently, cost, utility, and value per unit time metrics are introduced. A framework is then developed that identifies optimal design lifetimes for complex systems in general, and space systems in particular, based on this augmented perspective of system architecture and on these metrics. It is found that an optimal design lifetime for a satellite exists, even in the case of constant expected revenues per day over the system's lifetime, and that it changes substantially with the expected time to obsolescence of the system and, in the case of a commercial venture, with the volatility of the market the system is serving. The analysis thus shows that it is essential for a system architect to match the design lifetime to the dynamical characteristics of the environment the system is or will be operating in. It is also shown that as the uncertainty in those dynamical characteristics increases, the value of having the option to upgrade, modify, or extend the lifetime of the system at a later point in time increases, depending on how events unfold.
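The value-per-unit-time argument can be made concrete with a toy model. All numbers and the functional forms below (exponentially decaying revenue against an expected obsolescence time, cost growing linearly with design lifetime) are invented for illustration; the paper's actual framework is richer:

```python
import math

def value_per_day(T, rev=3.0, t_obs=500.0, fixed=100.0, per_day=0.5):
    """Net value per day of a system designed to last T days (toy model)."""
    # cumulative revenue, discounted by an expected obsolescence time t_obs
    revenue = rev * t_obs * (1.0 - math.exp(-T / t_obs))
    # designing for a longer lifetime costs more up front
    cost = fixed + per_day * T
    return (revenue - cost) / T

# an interior optimum exists: too short amortizes the fixed cost badly,
# too long pays for lifetime the market no longer rewards
best_T = max(range(1, 2001), key=value_per_day)
```

The interior maximum is the paper's qualitative point: the optimal design lifetime shifts with the obsolescence dynamics of the environment.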
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. Results: When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (including T2: the tongue-body front; and T3: the tongue-body front). Conclusion: We identified a 4-sensor set—that is, T1, T4, UL, LL—that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements. PMID:26564030
Selective internal operations in the recognition of locally and globally point-inverted patterns.
Bischof, W F; Foster, D H; Kahn, J I
1985-01-01
Performance in discriminating rotated 'same' patterns from 'different' patterns may decrease with rotation angle up to about 90 degrees and then increase with angle up to 180 degrees. This anomalously improved performance under 180-degree pattern rotation or point-inversion can be explained by assuming that patterns are internally represented in terms of local features and their spatial-order relations ('left of', 'above', etc.), and that, in pattern comparison, an efficient internal sense-reversal operation occurs (transforming 'left of' to 'right of', etc.). Previous experiments suggested that local features and spatial relations could not be efficiently separated in some pattern-comparison tasks. This hypothesis was tested by measuring 'same-different' discrimination performance under four transformations: point-inversion I of the whole pattern, point-inversion IF of local features alone, point-inversion IP of local-feature positions alone, and the identity transformation Id. The results suggested that internal sense-reversal operations could be applied selectively and efficiently, provided that local features were well separated. Under this condition performances for IF and I were about the same, whereas performance for IP was significantly worse, the latter performance resulting possibly from an attempt to apply internal global and local sense-reversal operations serially. PMID:3940058
NASA Technical Reports Server (NTRS)
Rowland, John R.; Goldhirsh, Julius; Vogel, Wolfhard J.; Torrence, Geoffrey W.
1991-01-01
An overview and a status description of the planned LMSS mobile K band experiment with ACTS is presented. As a precursor to the ACTS mobile measurements at 20.185 GHz, measurements at 19.77 GHz employing the Olympus satellite were originally planned. However, because of the demise of Olympus in June of 1991, the efforts described here are focused towards the ACTS measurements. In particular, we describe the design and testing results of a gyro-controlled mobile-antenna pointing system. Preliminary pointing measurements during mobile operations indicate that the present system is suitable for measurements employing a 15 cm aperture (beamwidth of approximately 7 deg) receiving antenna operating with ACTS in the high gain transponder mode. This should enable measurements with pattern losses smaller than plus or minus 1 dB over more than 95 percent of the driving distance. Measurements with the present mount system employing a 60 cm aperture (beamwidth of approximately 1.7 deg) result in pattern losses smaller than plus or minus 3 dB for 70 percent of the driving distance. Acceptable propagation measurements may still be made with this system by employing developed software to flag out bad data points due to extreme pointing errors. The receiver system including associated computer control software has been designed and assembled. Plans are underway to integrate the antenna mount with the receiver on the University of Texas mobile receiving van and repeat the pointing tests on highways employing a recently designed radome system.
Target point correction optimized based on the dose distribution of each fraction in daily IGRT
NASA Astrophysics Data System (ADS)
Stoll, Markus; Giske, Kristina; Stoiber, Eva M.; Schwarz, Michael; Bendl, Rolf
2014-03-01
Purpose: To use daily re-calculated dose distributions for optimization of target point corrections (TPCs) in image guided radiation therapy (IGRT). This aims to adapt fractionated intensity modulated radiation therapy (IMRT) to changes in the dose distribution induced by anatomical changes. Methods: Daily control images from an in-room on-rail spiral CT scanner of three head-and-neck cancer patients were analyzed. The dose distribution was re-calculated on each control CT after an initial TPC, found by a rigid image registration method. The clinical target volumes (CTVs) were transformed from the planning CT to the rigidly aligned control CTs using a deformable image registration method. If at least 95% of each transformed CTV was covered by the initially planned D95 value, the TPC was considered acceptable. Otherwise the TPC was iteratively altered to maximize the dose coverage of the CTVs. Results: In 14 (out of 59) fractions the criterion was already fulfilled after the initial TPC. In 10 fractions the TPC could be optimized to fulfill the coverage criterion. In 31 fractions the coverage could be increased, but the criterion was not fulfilled. In the remaining 4 fractions the coverage could not be increased by TPC optimization. Conclusions: The dose coverage criterion allows selection of patients who would benefit from replanning. Using the criterion to include daily re-calculated dose distributions in the TPC reduces the replanning rate in the three analysed patients from 76% to 59% compared with the rigid-image-registration TPC.
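The coverage criterion and the iterative TPC search can be sketched on a grid. The 2-D dose array, voxel-index CTV mask, and integer couch-shift scan below are crude stand-ins (the study shifts the patient in 3-D against a full dose recalculation):

```python
def coverage(dose, ctv, d95):
    """Fraction of CTV voxels receiving at least the planned D95 dose."""
    vox = [dose[i][j] for i, j in ctv]
    return sum(d >= d95 for d in vox) / len(vox)

def optimize_tpc(dose, ctv, d95, max_shift=2):
    """Scan integer shifts of the CTV mask (a stand-in for shifting the
    patient) and return the shift maximizing CTV coverage."""
    best = (-1.0, (0, 0))
    for di in range(-max_shift, max_shift + 1):
        for dj in range(-max_shift, max_shift + 1):
            shifted = [(i + di, j + dj) for i, j in ctv]
            if all(0 <= i < len(dose) and 0 <= j < len(dose[0])
                   for i, j in shifted):
                cov = coverage(dose, shifted, d95)
                if cov > best[0]:
                    best = (cov, (di, dj))
    return best
```

A replanning decision would then compare the best achievable coverage against the 95% threshold.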
Optimization with Telios of the Polar-Drive Point Design for the National Ignition Facility
NASA Astrophysics Data System (ADS)
Collins, T. J. B.; Marozas, J. A.; McKenty, P. W.
2012-10-01
Polar drive [S. Skupsky et al., Phys. Plasmas 11, 2763 (2004)] (PD) will make it possible to conduct direct-drive ignition experiments at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Opt. Eng. 43, 2841 (2004)] while the facility is configured for x-ray drive. A PD-ignition design has been developed [T. J. B. Collins et al., Phys. Plasmas 19, 056308 (2012)] that achieves high gain in simulations including single- and multiple-beam nonuniformities as well as ice and outer-surface roughness. This design has been further optimized to reduce the in-flight aspect ratio and implosion speed, increasing target stability while maintaining moderately high thermonuclear gain. The dependence of target properties on implosion speed has been examined using the optimization shell Telios. Telios can drive complex radiation-hydrodynamic simulations and optimize results over an arbitrarily large parameter space, including ring pointing angles, spot-shape parameters, target dimensions, pulse timing, and relative pulse energies. Telios is also capable of extracting output from a variety of sources and combining it to form arbitrarily complex, user-specified metrics. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.
Sturm, C.; Soni, A.; Aoki, Y.; Christ, N. H.; Izubuchi, T.; Sachrajda, C. T. C.
2009-07-01
We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS scheme and can be used to convert results obtained in lattice calculations into the MS scheme. Such a symmetric subtraction point involves nonexceptional momenta implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operator, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.
Performance of FORTRAN floating-point operations on the Flex/32 multicomputer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1987-01-01
A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
NASA Astrophysics Data System (ADS)
Xia, Shu; Ge, Xiaolin
2016-04-01
In this study, according to various grid-connection demands, optimization scheduling models for combined heat and power (CHP) units are established with three scheduling modes: tracking the total generation schedule, tracking steady output, and tracking the peaking curve. To reduce the solution difficulty, linearization techniques based on mixed-integer modeling principles are developed to handle the complex nonlinear constraints arising from variable operating conditions, and the optimized operation problem of the CHP units is converted into a mixed-integer linear programming problem. Finally, with specific examples, 96-point day-ahead heat and power supply plans for the systems are optimized. The results show that the proposed models and methods can develop appropriate coordinated heat and power optimization programs according to different grid-connection controls.
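The linearization step described here can be illustrated on a single cost curve: a convex quadratic fuel cost is replaced by a breakpoint piecewise-linear surrogate that a MILP solver can handle. The cost functions, segment count, and two-unit dispatch below are invented for illustration; the paper's actual constraint set is far richer:

```python
def linearize(f, lo, hi, k):
    """Replace a nonlinear cost f with k linear segments (breakpoint formulation)."""
    xs = [lo + (hi - lo) * i / k for i in range(k + 1)]
    return list(zip(xs, [f(x) for x in xs]))

def pw_cost(segs, p):
    """Evaluate the piecewise-linear surrogate at output level p."""
    for (x0, y0), (x1, y1) in zip(segs, segs[1:]):
        if x0 <= p <= x1:
            return y0 + (y1 - y0) * (p - x0) / (x1 - x0)
    raise ValueError("p outside linearized range")

# two hypothetical units with quadratic fuel costs, linearized in 20 segments
segs1 = linearize(lambda p: 0.1 * p * p, 0.0, 100.0, 20)
segs2 = linearize(lambda p: 0.2 * p * p, 0.0, 100.0, 20)

# brute-force dispatch of a 100 MW demand over the linearized costs
best_p1 = min(range(101), key=lambda p: pw_cost(segs1, p) + pw_cost(segs2, 100 - p))
```

In a real MILP each segment choice becomes a continuous variable tied to binary selection/commitment variables; the brute-force search here just shows that the surrogate is accurate enough to recover a sensible dispatch.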
Optimization of the Operation of Green Buildings applying the Facility Management
NASA Astrophysics Data System (ADS)
Somorová, Viera
2014-06-01
Nowadays, the field of civil engineering exhibits an upward trend towards environmental sustainability. This relates mainly to the achievement of energy efficiency and to emission reduction throughout the whole life cycle of a building, i.e., in the course of its construction, use and disposal. These requirements are fulfilled, to a large extent, by green buildings. The characteristic feature of green buildings is primarily the highly sophisticated technical and technological equipment installed in them. Such sophisticated technological systems also require sophisticated management, and from this point of view facility management has all the prerequisites to meet the requirement. The paper aims to define facility management as an effective method that enables the optimization of the management of supporting activities by creating conditions for the optimum operation of green buildings from the aspect of environmental conditions.
Design optimization of composite structures operating in acoustic environments
NASA Astrophysics Data System (ADS)
Chronopoulos, D.
2015-10-01
The optimal mechanical and geometric characteristics for layered composite structures subject to vibroacoustic excitations are derived. A Finite Element description coupled to Periodic Structure Theory is employed for the considered layered panel. Structures of arbitrary anisotropy as well as geometric complexity can thus be modelled by the presented approach. Damping can also be incorporated in the calculations. Initially, a numerical continuum-discrete approach for computing the sensitivity of the acoustic wave characteristics propagating within the modelled periodic composite structure is exhibited. The first- and second-order sensitivities of the acoustic transmission coefficient expressed within a Statistical Energy Analysis context are subsequently derived as a function of the computed acoustic wave characteristics. Having formulated the gradient vector as well as the Hessian matrix, the optimal mechanical and geometric characteristics satisfying the considered mass, stiffness and vibroacoustic performance criteria are sought by employing Newton's optimization method.
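Once the gradient vector and Hessian matrix described above are available, the search itself is a standard Newton iteration. On a toy two-parameter quadratic objective (my own stand-in for the mass/stiffness/vibroacoustic metric) the step looks like:

```python
def newton_2d(grad, hess, x0, iters=20):
    """Newton's method on a two-parameter objective: step = H^{-1} g."""
    x, y = x0
    for _ in range(iters):
        gx, gy = grad(x, y)
        (a, b), (c, d) = hess(x, y)
        det = a * d - b * c          # invert the 2x2 Hessian in closed form
        dx = ( d * gx - b * gy) / det
        dy = (-c * gx + a * gy) / det
        x, y = x - dx, y - dy
    return x, y
```

For a quadratic objective a single Newton step lands on the stationary point; for the paper's non-quadratic vibroacoustic metric the same update is simply iterated.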
Design, Performance and Optimization for Multimodal Radar Operation
Bhat, Surendra S.; Narayanan, Ram M.; Rangaswamy, Muralidhar
2012-01-01
This paper describes the underlying methodology behind an adaptive multimodal radar sensor that is capable of progressively optimizing its range resolution depending upon the target scattering features. It consists of a test-bed that enables the generation of linear frequency modulated waveforms of various bandwidths. This paper discusses a theoretical approach to optimizing the bandwidth used by the multimodal radar. It also discusses the various experimental results obtained from measurement. The resolution predicted from theory agrees quite well with that obtained from experiments for different target arrangements.
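The bandwidth-resolution tradeoff being optimized follows the familiar relation for linear-FM waveforms, delta_R = c / (2B); a one-liner makes the scaling the sensor adapts explicit (the function name is mine):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Nominal LFM range resolution: delta_R = c / (2B)."""
    return C / (2.0 * bandwidth_hz)
```

Doubling the transmitted bandwidth halves the resolution cell, which is why the sensor widens bandwidth only when target scattering features demand it.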
NASA Astrophysics Data System (ADS)
Afghan-Toloee, A.; Heidari, A. A.; Joibari, Y.
2013-09-01
The problem of specifying the minimum number of sensors to deploy in a given area to cover multiple targets has been widely studied in the literature. In this paper, we address the multi-sensor deployment problem (MDP), which can be stated as minimizing the cost required to cover all target points in the area. We propose a more feasible method for sensor placement that retains the high coverage of grid-based placements while approaching the low cost of perimeter placement techniques. The NICA algorithm, an improved imperialist competitive algorithm (ICA), is used to reduce the time needed to find an adequate solution compared with other meta-heuristic schemes such as GA, PSO, and the original ICA. A three-dimensional area is used to represent the targets and candidate placement points, with x, y, and z coordinates entering the observation algorithm. A model for the multi-sensor placement problem is proposed: the problem is formulated as an optimization problem whose objective is to minimize cost while covering all target points within a given probability-of-observation tolerance.
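A common baseline for this placement problem is greedy set cover over candidate sites, which the meta-heuristics aim to beat; the 3-D coordinates, radius model, and site list below are illustrative (this is not the NICA algorithm itself):

```python
def greedy_placement(sites, targets, r):
    """Greedy set cover: repeatedly pick the site covering the most uncovered targets."""
    def covers(s, t):
        return sum((a - b) ** 2 for a, b in zip(s, t)) <= r * r
    uncovered, chosen = set(range(len(targets))), []
    while uncovered:
        best = max(sites, key=lambda s: sum(covers(s, targets[i]) for i in uncovered))
        hit = {i for i in uncovered if covers(best, targets[i])}
        if not hit:
            raise ValueError("some targets cannot be covered by any site")
        chosen.append(best)
        uncovered -= hit
    return chosen
```

Greedy gives a logarithmic approximation guarantee but can miss cheaper covers, which motivates population-based searches like NICA.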
NASA Astrophysics Data System (ADS)
Branz, H. M.
1982-09-01
A new computer simulation of the annual operation of degraded flat-plate photovoltaic (PV) arrays is used to evaluate the need for maximum-power-point tracking in real PV systems. The simulations are based on single-glitch I-V curve shapes rather than particular array degradations, making the data reported applicable to any system whose likely failure modes are predictable and result in single-glitch I-V curves. The simulations show that with a reasonable array wiring strategy, effective maintenance, periodic I-V curve tracing, and avoidance of frequent and serious array shadowing, there is no reason that considerations of degradation should force the adoption of maximum-power-point-tracking power conditioning on a PV system that would otherwise operate economically at fixed voltage.
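The economics hinge on how much power a fixed-voltage system forfeits relative to the maximum power point as the I-V curve changes; a toy curve model makes the comparison (the exponent-m curve shape and all numbers are invented for illustration, not the study's single-glitch curves):

```python
def iv_current(v, isc, voc, m=8.0):
    """Toy I-V curve shape for a PV array (illustrative, not a physical model)."""
    return isc * (1.0 - (v / voc) ** m) if 0.0 <= v <= voc else 0.0

def max_power_point(isc, voc, steps=1000):
    """Scan the curve for the (power, voltage) pair at maximum power."""
    vs = [voc * k / steps for k in range(steps + 1)]
    return max((v * iv_current(v, isc, voc), v) for v in vs)
```

Comparing the power at a sensibly chosen fixed voltage against the scanned maximum shows the kind of small margin the simulations found: with a reasonable operating voltage, the fixed-voltage penalty stays modest, supporting the paper's conclusion that degradation alone need not force MPP tracking.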
An optimal operational advisory system for a brewery's energy supply plant
Ito, K.; Shiba, T.; Yokoyama, R. (Dept. of Energy Systems Engineering); Sakashita, S. (Mayekawa Energy Management Research Center)
1994-03-01
An optimal operational advisory system is proposed to operate a brewery's energy supply plant rationally from an economic viewpoint. A mixed-integer linear programming problem is formulated so as to minimize the daily operational cost subject to constraints such as equipment performance characteristics, energy supply-demand relations, and some practical operational restrictions. Because this problem includes many unknown variables, a hierarchical approach is adopted to derive numerical solutions. The optimal solution obtained by this method is presented to the plant operators to support their decision making. Through a numerical study of a real brewery plant, the possibility of saving operational cost is ascertained.
Code of Federal Regulations, 2013 CFR
2013-10-01
...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....
Code of Federal Regulations, 2012 CFR
2012-10-01
...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....
Code of Federal Regulations, 2014 CFR
2014-10-01
...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....
Code of Federal Regulations, 2011 CFR
2011-10-01
...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....
Code of Federal Regulations, 2010 CFR
2010-10-01
...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....
TH-C-19A-11: Toward An Optimized Multi-Point Scintillation Detector
Duguay-Drouin, P; Delage, ME; Therriault-Proulx, F; Beddar, S; Beaulieu, L
2014-06-15
Purpose: The purpose of this work is to characterize the optical chain of 2-point multi-point scintillation detectors (mPSDs) using spectral analysis, to help select the optimal components for the detector. Methods: Twenty different 2-point mPSD combinations were built using 4 plastic scintillators (BCF10, BCF12, BCF60, BC430; St-Gobain) and quantum dots (QDs). The scintillator is said to be proximal when near the photodetector, and distal otherwise. A 15 m optical fiber (ESKA GH-4001) was coupled to the scintillating component and connected to a spectrometer (Shamrock, Andor, and QEPro, OceanOptics). These scintillation components were irradiated at 125 kVp; a spectrum for each scintillator was obtained by irradiating the individual scintillator while shielding the second component, thus taking into account light propagation in all components and interfaces. The combined total spectrum was also acquired, involving simultaneous irradiation of the two scintillators for each possible combination. The shape and intensity were characterized. Results: QDs in the proximal position absorb almost all the light signal from distal plastic scintillators and emit at their own emission wavelength, with 100% of the signal in the QD range (625-700 nm) for the combination BCF12/QD. However, discrimination is possible when the QD is in the distal position in combination with blue scintillators, the total signal being 73% in the blue range (400-550 nm) and 27% in the QD range. Similar results are obtained with the orange scintillator (BC430). For optimal signal intensity, BCF12 should always be in the proximal position, e.g., having 50% more intensity when coupled with BCF60 in the distal position (BCF12/BCF60) compared with the BCF60/BCF12 combination. Conclusion: Different combinations of plastic scintillators and QDs were built and their emission spectra were studied. We established a preferential order for the scintillating components in the context of an optimized 2-point mPSD. In short, the components with higher wavelength emission spectrum
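The spectral discrimination described above amounts to unmixing a measured total spectrum into known per-scintillator emission shapes. A least-squares sketch on synthetic 5-bin spectra (invented shapes, not the measured BCF/QD data) shows the 2x2 normal-equation solve:

```python
def unmix_two(s1, s2, m):
    """Least-squares contributions a, b such that m ~= a*s1 + b*s2
    (normal equations for a two-column design matrix)."""
    s11 = sum(u * u for u in s1)
    s22 = sum(v * v for v in s2)
    s12 = sum(u * v for u, v in zip(s1, s2))
    m1 = sum(u * w for u, w in zip(s1, m))
    m2 = sum(v * w for v, w in zip(s2, m))
    det = s11 * s22 - s12 * s12
    return (s22 * m1 - s12 * m2) / det, (s11 * m2 - s12 * m1) / det
```

Discrimination degrades exactly when the two emission shapes overlap heavily (det approaches zero), which is why component ordering in the optical chain matters.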
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
NASA Astrophysics Data System (ADS)
Zain, N. N. M.; Abu Bakar, N. K.; Mohamad, S.; Saleh, N. Md.
2014-01-01
A greener method based on cloud point extraction was developed for removing phenol species including 2,4-dichlorophenol (2,4-DCP), 2,4,6-trichlorophenol (2,4,6-TCP) and 4-nitrophenol (4-NP) in water samples by using the UV-Vis spectrophotometric method. The non-ionic surfactant DC193C was chosen as an extraction solvent due to its low water content in a surfactant rich phase and it is well-known as an environmentally-friendly solvent. The parameters affecting the extraction efficiency such as pH, temperature and incubation time, concentration of surfactant and salt, amount of surfactant and water content were evaluated and optimized. The proposed method was successfully applied for removing phenol species in real water samples.
Optimizing the rotating point spread function by SLM aided spiral phase modulation
NASA Astrophysics Data System (ADS)
Baránek, M.; Bouchal, Z.
2014-12-01
We demonstrate the vortex point spread function (PSF) whose shape and the rotation sensitivity to defocusing can be controlled by a phase-only modulation implemented in the spatial or frequency domains. Rotational effects are studied in detail as a result of the spiral modulation carried out in discrete radial and azimuthal sections with different topological charges. As the main result, a direct connection between properties of the PSF and the parameters of the spiral mask is found and subsequently used for an optimal shaping of the PSF and control of its defocusing rotation rate. Experiments on the PSF rotation verify a good agreement with theoretical predictions and demonstrate potential of the method for applications in microscopy, tracking of particles and 3D imaging.
Melting point prediction employing k-nearest neighbor algorithms and genetic parameter optimization.
Nigsch, Florian; Bender, Andreas; van Buuren, Bernd; Tissen, Jos; Nigsch, Eduard; Mitchell, John B O
2006-01-01
We have applied the k-nearest neighbor (kNN) modeling technique to the prediction of melting points. A data set of 4119 diverse organic molecules (data set 1) and an additional set of 277 drugs (data set 2) were used to compare performance in different regions of chemical space, and we investigated the influence of the number of nearest neighbors using different types of molecular descriptors. To compute the prediction on the basis of the melting temperatures of the nearest neighbors, we used four different methods (arithmetic and geometric average, inverse distance weighting, and exponential weighting), of which the exponential weighting scheme yielded the best results. We assessed our model via a 25-fold Monte Carlo cross-validation (with approximately 30% of the total data as a test set) and optimized it using a genetic algorithm. Predictions for drugs based on drugs (separate training and test sets each taken from data set 2) were found to be considerably better [root-mean-squared error (RMSE)=46.3 degrees C, r2=0.30] than those based on nondrugs (prediction of data set 2 based on the training set from data set 1, RMSE=50.3 degrees C, r2=0.20). The optimized model yields an average RMSE as low as 46.2 degrees C (r2=0.49) for data set 1, and an average RMSE of 42.2 degrees C (r2=0.42) for data set 2. It is shown that the kNN method inherently introduces a systematic error in melting point prediction. Much of the remaining error can be attributed to the lack of information about interactions in the liquid state, which are not well-captured by molecular descriptors. PMID:17125183
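The weighting schemes compared in the study can be sketched for a one-dimensional descriptor (real models use many molecular descriptors; the data and names here are illustrative):

```python
import math

def knn_predict(train, x, k=4, scheme="exp"):
    """k-NN regression with the averaging schemes compared in the study."""
    nbrs = sorted(train, key=lambda p: abs(p[0] - x))[:k]  # 1-D descriptor distance
    ds = [abs(d - x) for d, _ in nbrs]
    ys = [y for _, y in nbrs]
    if scheme == "mean":
        w = [1.0] * k                       # arithmetic average
    elif scheme == "inv":
        w = [1.0 / (d + 1e-9) for d in ds]  # inverse distance weighting
    else:
        w = [math.exp(-d) for d in ds]      # exponential weighting (best in the study)
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

Exponential weighting down-weights distant neighbors smoothly, while inverse-distance weighting collapses to the nearest neighbor when a near-duplicate exists.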
Design and development of a point focus concentrated PV module operating above 100 suns
NASA Astrophysics Data System (ADS)
Olah, S.; Ho, F.; Khemthong, S.
The objective was to design, develop, fabricate, and performance-test a highly efficient and cost-effective concentrated photovoltaic module that can operate above 100 suns concentration, can be mass-produced, and is reliable with minimal maintenance. A point-focus module design was chosen, operating at 120 suns using a molded acrylic Fresnel lens and passive cooling. Four modules were built and tested, and a manufacturing cost analysis was made. The module and components were designed with future high-volume production on automated equipment in mind. The module consisted of a lightweight body fabricated from aluminum sheet stock, a lens parquet assembly, and an assembly of 15 high-efficiency solar cells with heat sinks, connected in series to produce 55 W under normal operating conditions.
Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Paiva, Kleber; Mantelli, Marcia
2011-01-01
Loop heat pipes (LHPs) have been used for thermal control of several NASA and commercial orbiting spacecraft. The LHP operating temperature is governed by the saturation temperature of its compensation chamber (CC). Most LHPs use the CC temperature for feedback control of the operating temperature. However, a thermal resistance exists between the heat source to be cooled by the LHP and the LHP's CC, so even if the CC set point temperature is controlled precisely, the heat source temperature will still vary with its heat output. For most applications, controlling the heat source temperature is of greatest interest. A logical question to ask is: "Can the heat source temperature be used for feedback control of the LHP operation?" A test program has been implemented to answer this question; its objective is to compare the LHP performance when the CC temperature and when the heat source temperature are used for feedback control.
Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration
NASA Technical Reports Server (NTRS)
Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.
1999-01-01
The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FL067 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free-stream Mach number, M∞, of 2.55 as well as the design Mach number, M∞ = 2.4. Data from a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Transonic and low supersonic Mach numbers, M∞ = 0.6 to 1.2, were covered at the NASA Langley 16-ft Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.
Science Operations for the 2008 NASA Lunar Analog Field Test at Black Point Lava Flow, Arizona
NASA Technical Reports Server (NTRS)
Garry W. D.; Horz, F.; Lofgren, G. E.; Kring, D. A.; Chapman, M. G.; Eppler, D. B.; Rice, J. W., Jr.; Nelson, J.; Gernhardt, M. L.; Walheim, R. J.
2009-01-01
Surface science operations on the Moon will require merging lessons from Apollo with new operation concepts that exploit the Constellation Lunar Architecture. Prototypes of lunar vehicles and robots are already under development and will change the way we conduct science operations compared to Apollo. To prepare for future surface operations on the Moon, NASA, along with several supporting agencies and institutions, conducted a high-fidelity lunar mission simulation with prototypes of the small pressurized rover (SPR) and unpressurized rover (UPR) (Fig. 1) at Black Point lava flow (Fig. 2), 40 km north of Flagstaff, Arizona, from Oct. 19-31, 2008. This field test was primarily intended to evaluate and compare the surface mobility afforded by unpressurized and pressurized rovers, the latter critically depending on the innovative suit-port concept for efficient egress and ingress. The UPR vehicle transports two astronauts who remain in their EVA suits at all times, whereas the SPR concept enables astronauts to remain in a pressurized shirt-sleeve environment during long translations and while making contextual observations, and enables rapid (less than or equal to 10 minutes) transfer to and from the surface via suit-ports. A team of field geologists provided realistic science scenarios for the simulations and served as crew members, field observers, and operators of a science backroom. Here, we present a description of the science team's operations and lessons learned.
Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Paiva, Kleber; Mantelli, Marcia
2011-01-01
The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly accomplished by cold biasing the reservoir and using electrical heaters to provide the required control power. Using this method, the loop operating temperature can be controlled within +/- 0.5 K. However, because of the thermal resistance that exists between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if the LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of greatest interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP has been carried out to investigate the effects on the LHP operation when the control temperature sensor is placed on the heat source instead of the reservoir. In these tests, the LHP reservoir is cold-biased and is heated by a control heater. Test results show that it is feasible to use the heat source temperature for feedback control of the LHP operation. Using this method, the heat source temperature can be maintained within a tight range for moderate and high powers. At low powers, however, temperature oscillations may occur due to interactions among the reservoir control heater power, the heat source mass, and the heat output from the heat source. In addition, the heat source temperature could temporarily deviate from its set point during fast thermal transients. The implication is that more sophisticated feedback control algorithms need to be implemented for LHP transient operation when the heat source temperature is used for feedback control.
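The core of the question above, why a fixed reservoir set point lets the source temperature drift with power, and how feedback on the source temperature corrects it, can be illustrated with a minimal lumped-parameter sketch. The thermal resistance, target temperature, and controller gain are all assumed illustrative values, not data from the test program.

```python
# Minimal steady-state sketch: a thermal resistance R separates the heat
# source from the loop, so the source sits Q*R above the loop temperature.
R = 0.5          # K/W, source-to-loop thermal resistance (assumed)
T_target = 40.0  # deg C, desired heat source temperature (assumed)

def source_temp(T_loop, Q):
    # steady state: source temperature = loop temperature + Q * R
    return T_loop + Q * R

# (a) Fixed reservoir set point: source temperature varies with power.
T_set = 35.0
drift = [source_temp(T_set, Q) for Q in (10.0, 50.0)]

# (b) Integral feedback on the SOURCE temperature adjusts the set point
#     until the source reaches its target, regardless of power level.
T_set = 35.0
for Q in (10.0, 50.0):
    for _ in range(200):                 # let the controller settle
        err = T_target - source_temp(T_set, Q)
        T_set += 0.1 * err               # integral action on the set point
controlled = source_temp(T_set, Q)       # settles near T_target
```

The sketch also hints at the oscillation risk the abstract reports at low power: with a large controller gain or a small heat source mass, the same integral action would overshoot rather than settle.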
Optimization of the Nano-Dust Analyzer (NDA) for operation under solar UV illumination
NASA Astrophysics Data System (ADS)
O`Brien, L.; Grün, E.; Sternovsky, Z.
2015-12-01
The performance of the Nano-Dust Analyzer (NDA) instrument is analyzed for close pointing to the Sun, finding the optimal field-of-view (FOV), arrangement of internal baffles, and measurement requirements. The laboratory version of the NDA instrument was recently developed (O'Brien et al., 2014) for the detection and elemental composition analysis of nano-dust particles. These particles are generated near the Sun by the collisional breakup of interplanetary dust particles (IDPs) and delivered to Earth's orbit through interaction with the magnetic field of the expanding solar wind plasma. NDA operates on the basis of impact ionization of the particle, collecting the generated ions in a time-of-flight fashion. The challenge in the measurement is that nano-dust particles arrive from a direction close to that of the Sun, so the instrument is exposed to intense ultraviolet (UV) radiation. The optical ray-tracing analysis performed shows that it is possible to suppress the number of UV photons scattering into NDA's ion detector to levels that allow both high signal-to-noise measurements and long-term instrument operation. Analysis results show that by avoiding direct illumination of the target, the photon flux reaching the detector is reduced by a factor of about 10³. Furthermore, by avoiding the target and also implementing a low-reflectivity coating, as well as an optimized instrument geometry consisting of an internal baffle system and a conical detector housing, the photon flux can be reduced by a factor of 10⁶, bringing it well below the operation requirement. The instrument's FOV is optimized for the detection of nano-dust particles while excluding the Sun. With the Sun in the FOV, the instrument can operate with reduced sensitivity and for a limited duration. The NDA instrument is suitable for future space missions to provide the unambiguous detection of nano-dust particles, to understand the conditions in the inner heliosphere and its temporal
NASA Astrophysics Data System (ADS)
Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément
2015-07-01
3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
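One widely used family of geometric features in such frameworks is derived from the eigenvalues of the local 3D structure tensor, describing how linear, planar, or scattered a point's neighborhood is. The sketch below shows this standard construction; the specific feature definitions are a common convention, not necessarily the exact set of 21 features evaluated in the paper.

```python
import numpy as np

def eigen_features(neighbors):
    """Covariance-eigenvalue features for one 3D point's neighborhood
    (an (n, 3) array of neighbor coordinates), as commonly used in
    point cloud classification frameworks."""
    C = np.cov(neighbors.T)                      # 3x3 structure tensor
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]   # lam1 >= lam2 >= lam3
    l1, l2, l3 = lam
    return {
        "linearity":  (l1 - l2) / l1,   # ~1 for line-like neighborhoods
        "planarity":  (l2 - l3) / l1,   # ~1 for plane-like neighborhoods
        "scattering": l3 / l1,          # ~1 for volumetric clutter
    }

# A roughly planar patch (tiny extent along z) should score high planarity
rng = np.random.default_rng(0)
patch = rng.normal(size=(200, 3)) * np.array([1.0, 1.0, 0.01])
f = eigen_features(patch)
```

The paper's point that neighborhood selection matters follows directly: these eigenvalues, and hence all three features, change with the set of neighbors fed into the covariance.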
Optimization of the thermogauge furnace for realizing high temperature fixed points
Wang, T.; Dong, W.; Liu, F.
2013-09-11
The thermogauge furnace is commonly used in many NMIs as a blackbody source for calibration of radiation thermometers. It can also be used for realizing high-temperature fixed points (HTFPs). In our experience, when realizing an HTFP the furnace must provide relatively good temperature uniformity to avoid possible damage to the HTFP. To improve temperature uniformity in the furnace, the furnace tube was machined near the tube ends with the help of a simulation analysis in ANSYS Workbench. Temperature distributions before and after optimization were measured and compared at 1300 °C, 1700 °C, and 2500 °C, which roughly correspond to Co-C (1324 °C), Pt-C (1738 °C), and Re-C (2474 °C), respectively. The results clearly indicate that machining the tube remarkably improves the temperature uniformity of the thermogauge furnace. A Pt-C high-temperature fixed point was subsequently realized in the modified thermogauge furnace; the plateaus were compared with those obtained using the old heater, and the results are presented in this paper.
Monitoring fleets of electric vehicles: optimizing operational use and maintenance
NASA Astrophysics Data System (ADS)
Lenain, P.; Kechmire, M.; Smaha, J. P.
Electric vehicles can make a substantial contribution to an improved urban environment. Reduced atmospheric pollution and noise emissions make the increased use of electric vehicles highly desirable and their suitability for dedicated fleets of vehicles is well recognized. As a result, a suitable system of supervision and management is necessary for fleet operators, to allow them to see the key parameters for the optimum use of the electric vehicle at all times. A computer-based data acquisition and analysis system will allow access to critical control parameters and display the operation of chargers and batteries in real time. Battery condition and charging can be followed. Information is stored in a database and can be readily analyzed and retrieved to manage extensive charging installations. In this paper, the operation of a battery/charger management system is described. The effective use of the system in electric utility vans is demonstrated.
Optimization or Simulation? Comparison of approaches to reservoir operation on the Senegal River
NASA Astrophysics Data System (ADS)
Raso, Luciano; Bader, Jean-Claude; Pouget, Jean-Christophe; Malaterre, Pierre-Olivier
2015-04-01
Design of reservoir operation rules traditionally follows two approaches: optimization and simulation. In simulation, the analyst hypothesizes operation rules and selects among them by what-if analysis, based on the effects of model simulations on different objective indicators. In optimization, the analyst selects operational objective indicators and obtains operation rules as an output. Optimization guarantees optimality of the rules, but it often requires further model simplification, and the resulting rules can be hard to communicate. Selecting the most appropriate approach depends on the system under analysis and on the analyst's expertise and objectives. We present the advantages and disadvantages of both approaches and test them on the design of operation rules for the Manantali reservoir on the Senegal River, West Africa, comparing their performance in attaining the system objectives. Objective indicators are defined a priori in order to quantify the system performance. Results from this application are not universally generalizable to the entire class of reservoir systems, but they allow us to draw conclusions on this system and to give further information on the application of the two approaches.
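The contrast between the two approaches can be made concrete on a toy single-reservoir problem. Here the operating rule is reduced to one parameter, a release fraction, and the objective to a squared deficit against demand; all numbers are illustrative and have nothing to do with the Manantali system.

```python
# Toy single-reservoir illustration: rule = release fraction r per step.
inflows = [3.0, 1.0, 4.0, 1.0, 5.0, 2.0]   # illustrative inflow series
demand = 2.0

def simulate(r):
    """What-if simulation: total squared supply deficit under rule r."""
    storage, deficit = 10.0, 0.0
    for q in inflows:
        release = r * storage
        deficit += (demand - release) ** 2
        storage = max(storage + q - release, 0.0)
    return deficit

# Simulation approach: the analyst hypothesizes a few rules and compares
# their simulated performance by what-if analysis.
candidates = {r: simulate(r) for r in (0.1, 0.2, 0.3)}

# Optimization approach: the analyst states the objective and searches the
# rule space, obtaining the rule as an output.
best_r = min((r / 100 for r in range(1, 100)), key=simulate)
```

The optimized rule can never do worse than the hand-picked candidates on the stated objective; what the simulation approach buys instead is rules the analyst chose and can explain, which is exactly the trade-off the abstract describes.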
Optimal operating frequency in wireless power transmission for implantable devices.
Poon, Ada S Y; O'Driscoll, Stephen; Meng, Teresa H
2007-01-01
This paper examines short-range wireless powering for implantable devices and shows that existing analysis techniques are not adequate to conclude the characteristics of power transfer efficiency over a wide frequency range. It shows, theoretically and experimentally, that the optimal frequency for power transmission in biological media can be in the GHz range while existing solutions exclusively focus on the MHz range. This implies that the size of the receive coil can be reduced by 10⁴ times, which enables the realization of fully integrated implantable devices. PMID:18003300
Optimizing operational flexibility and enforcement liability in Title V permits
McCann, G.T.
1997-12-31
Now that most states have interim or full approval of the portions of their state implementation plans (SIPs) implementing Title V (40 CFR Part 70) of the Clean Air Act Amendments (CAAA), most sources that require a Title V permit have submitted or are well on the way to submitting a Title V operating permit application. Numerous hours have been spent preparing applications to ensure the administrative completeness of the application and operational flexibility for the facility. Although much time and effort has been spent on Title V permit applications, the operating permit itself is the final goal. This paper outlines the major Federal requirements for Title V permits as given in the CAAA at 40 CFR 70.6, Permit Content. These Federal requirements and how they will affect final Title V permits and facilities are discussed. This paper provides information concerning the Federal requirements for Title V permits and suggestions on how to negotiate a Title V permit to maximize operational flexibility and minimize enforcement liability.
Street curb recognition in 3d point cloud data using morphological operations
NASA Astrophysics Data System (ADS)
Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino
2015-04-01
Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud on the XY plane was carried out, moving from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to the independence of its scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a
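The rasterize-then-morphology step can be sketched with a morphological gradient: on an elevation raster, dilation minus erosion highlights exactly the cells where the height jumps at a curb. This numpy-only sketch uses a toy raster and an assumed curb-height threshold; it is not the paper's full pipeline.

```python
import numpy as np

def morph_gradient(raster):
    """Morphological gradient (3x3 dilation minus 3x3 erosion) of a 2D
    elevation raster, implemented with numpy shifts only."""
    p = np.pad(raster, 1, mode="edge")
    # stack the 3x3 neighborhood of every cell
    stack = np.stack([p[i:i + raster.shape[0], j:j + raster.shape[1]]
                      for i in range(3) for j in range(3)])
    return stack.max(axis=0) - stack.min(axis=0)   # dilation - erosion

# Toy raster: flat road at 0 m stepping up 0.15 m onto a sidewalk
raster = np.zeros((5, 10))
raster[:, 6:] = 0.15
grad = morph_gradient(raster)
curb_mask = grad > 0.10      # assumed threshold for a curb-like jump
```

On this raster the gradient is nonzero only along the two columns adjacent to the step, which is how thresholding it isolates curb candidates before the classification step the abstract describes.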
Cost optimization for series-parallel execution of a collection of intersecting operation sets
NASA Astrophysics Data System (ADS)
Dolgui, Alexandre; Levin, Genrikh; Rozin, Boris; Kasabutski, Igor
2016-05-01
A collection of intersecting sets of operations is considered. These sets of operations are performed successively, and the operations within each set are activated simultaneously. Operation durations can be modified: the cost of each operation decreases as its duration increases, whereas the additional expenses for each set of operations are proportional to its duration. The problem of selecting the durations of all operations that minimize the total cost, under a constraint on the completion time of the whole collection of operation sets, is studied. The mathematical model and a method to solve this problem are presented. The proposed method is based on a combination of Lagrangian relaxation and dynamic programming. The results of numerical experiments that illustrate the performance of the proposed method are presented. This approach was used for optimizing multi-spindle machines and machining lines, but the problem is common in engineering optimization, and the techniques developed could be useful for other applications.
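The dynamic-programming half of the method can be illustrated on a deliberately simplified case: sequential (non-intersecting) sets, each with a few discrete duration options whose operation cost falls with duration while per-set overhead grows with it, under a total completion-time cap. All numbers are illustrative, and the real paper additionally handles intersecting sets via Lagrangian relaxation, which this sketch omits.

```python
from functools import lru_cache
import math

# per-set options: {duration: operation cost}; overhead is proportional
# to the time a set runs (illustrative values)
SETS = [
    {1: 10.0, 2: 6.0, 3: 4.0},
    {1: 12.0, 2: 7.0, 3: 5.0},
]
OVERHEAD = 2.0   # extra cost per time unit a set is active
T_MAX = 4        # completion-time constraint for the whole collection

@lru_cache(maxsize=None)
def min_cost(i, t_left):
    """Cheapest way to run sets i.. within t_left time units (DP over
    the remaining time budget)."""
    if i == len(SETS):
        return 0.0
    best = math.inf
    for dur, cost in SETS[i].items():
        if dur <= t_left:
            best = min(best,
                       cost + OVERHEAD * dur + min_cost(i + 1, t_left - dur))
    return best

total = min_cost(0, T_MAX)   # optimal total cost under the deadline
```

The DP makes the trade-off explicit: stretching a set's duration cuts operation cost but burns both overhead and the shared time budget, so the optimum here runs both sets at the middle duration.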
Utilities optimize operations by cycling base-load fossil units
Not Available
1986-05-01
In the summer of 1985, an East Coast utility "gave away" approximately 200 MW of electricity. The utility found itself having to operate, at full capability, a 400-MW, 20-yr-old fossil station when its power pool had requested only half that load. The power went into the network and was sold, but another member of the pool got the credit. This situation developed because the utility had two stations it had to operate in the base-load mode: one was brand new, the other could operate economically only at full capacity. This predicament is becoming commonplace for many utilities with one or more base-load units that have recently come on-line. Utilities are using their older fossil units to satisfy generating capacity during peak-demand periods by introducing them to cyclic operation. For example, in 1987, when Duke Power Co's Catawba 2 nuclear station is scheduled for commercial operation, approximately 50% of the utility's system will be base-load nuclear generation. During periods of low system demand, Duke's larger fossil units will be required either to attain sufficiently low loads or to cycle on and off daily to meet system dispatch requirements. A figure shows how Duke's fossil units will have to meet daily demand projected for the summer of 1988. Of course, cycling a fossil plant does not involve simply turning the boiler off at 5 p.m. and switching it on again at 9 a.m. This action creates stress on equipment that can lead to severe availability problems. Utilities that opt to cycle all or some of their units do so only after careful analysis. This article describes the more serious problems associated with cycling.
Field-scale operation of methane biofiltration systems to mitigate point source methane emissions.
Hettiarachchi, Vijayamala C; Hettiaratchi, Patrick J; Mehrotra, Anil K; Kumar, Sunil
2011-06-01
Methane biofiltration (MBF) is a novel low-cost technique for reducing low-volume point source emissions of methane (CH₄). MBF uses a granular medium, such as soil or compost, to support the growth of methanotrophic bacteria responsible for converting CH₄ to carbon dioxide (CO₂) and water (H₂O). A field research program was undertaken to evaluate the potential to treat low-volume point source engineered CH₄ emissions using an MBF at a natural gas monitoring station. A new comprehensive three-dimensional numerical model was developed incorporating advective-diffusive gas flow, biological reactions, and heat and moisture flow. The one-dimensional version of this model was used as a guiding tool for designing and operating the MBF. The long-term monitoring results of the field MBF are also presented. The field MBF, operated with no control of precipitation, evaporation, or temperature, provided more than 80% CH₄ oxidation throughout the spring, summer, and fall seasons. The numerical model was able to predict the CH₄ oxidation behavior of the field MBF with high accuracy. Numerical model simulations are presented for estimating CH₄ oxidation efficiencies under various operating conditions, including different filter bed depths and CH₄ flux rates. The field observations as well as the numerical model simulations indicated that the long-term performance of MBFs is strongly dependent on environmental factors, such as ambient temperature and precipitation. PMID:21414700
Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS
Carlos A. Fernandez
2014-09-15
EGS field projects have not sustained production at rates greater than half of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project, to study and optimize the physicochemical properties of these fracturing fluids over a range of reservoir conditions. Barrier G is the primary GTO barrier addressed by this project, which also supports addressing barriers D, E, and I.
Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS
Fernandez, Carlos A.
2013-09-25
EGS field projects have not sustained production at rates greater than half of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project, to study and optimize the physicochemical properties of these fracturing fluids over a range of reservoir conditions. Barrier G is the primary GTO barrier addressed by this project, which also supports addressing barriers D, E, and I.
Optimizing wartime en route nursing care in Operation Iraqi Freedom.
Nagra, Michael
2011-01-01
Throughout combat operations in Iraq and Afghanistan, Army nurses have served in a new role--providing en route care in military helicopters for patients being transported to a higher level of care. From aid stations on the battlefield where forward surgical teams save lives, limbs, and eyesight, to the next higher level of care at combat support hospitals, these missions require specialized nursing skills to safely care for the high acuity patients. Little information exists about patient outcomes associated with the nursing assessment and care provided during helicopter medical evacuation (MEDEVAC) of such unstable patients and the consequent impact on the patient's condition after transport. In addition, there are no valid and reliable tools to capture care delivery, patient outcomes, and associated nursing workload and staffing requirements. During Operation Iraqi Freedom, a new process was implemented over a 2-year period to measure nursing related patient outcomes during MEDEVAC, and to capture the nursing workload. The use of standard metrics to establish patient priorities and improve nursing care during MEDEVAC allowed the level II forward surgical teams or their equivalents and level III combat support hospitals to make structural, process, and outcome improvements in the en route care programs throughout the Iraq theater of operations. Implications of this program were broad, including establishment of a process to support decision making based on data driven metrics, improvement of quality of nursing care, and defining nurse staffing requirements. PMID:22124873
Critical Point Facility (CPF) Group in the Spacelab Payload Operations Control Center (SL POCC)
NASA Technical Reports Server (NTRS)
1992-01-01
The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and on the effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPF) group in the SL POCC during the STS-42 IML-1 mission.
Critical Point Facility (CPF) Team in the Spacelab Payload Operations Control Center (SL POCC)
NASA Technical Reports Server (NTRS)
1992-01-01
The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and on the effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPF) team in the SL POCC during the IML-1 mission.
Liang, Feng; Guo, Yuanyuan; Fung, Richard Y K
2015-11-01
The operation theatre is one of the most significant assets in a hospital, being both the greatest source of revenue and the largest cost unit. This paper focuses on surgery scheduling optimization, one of the most crucial tasks in operation theatre management. A combined scheduling policy composed of three simple scheduling rules is proposed to optimize the performance of operation theatre scheduling. Based on real-life scenarios, a simulation model of the surgery scheduling system is built. With two optimization objectives, the response surface method is adopted to search for the optimal weights of the simple rules in the combined scheduling policy. Moreover, the weight configuration can be revised to cope with dispatching dynamics according to real-time changes at the operation theatre. Finally, a performance comparison between the proposed combined scheduling policy and a tabu search algorithm indicates that the combined scheduling policy is capable of sequencing surgery appointments more efficiently. PMID:26385551
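The idea of a combined scheduling policy, several simple dispatch rules blended by tunable weights, can be sketched as follows. The three rules, the surgery attributes, and the weight vectors are illustrative assumptions, not the rules or data from the paper.

```python
# Sketch of a combined scheduling policy: each waiting surgery is scored
# by a weighted sum of three simple dispatch rules, and the next case is
# the highest scorer. Weights are what the response-surface search would
# tune; the values here are illustrative.
surgeries = [
    # (id, expected duration in minutes, waiting days, urgency 0-1)
    ("A", 120, 10, 0.2),
    ("B", 45, 3, 0.9),
    ("C", 90, 20, 0.5),
]

def combined_score(s, w):
    """Weighted blend of three simple rules (higher = schedule sooner):
    shortest processing time, longest waiting time, highest urgency."""
    _, dur, wait, urg = s
    return w[0] * (1.0 / dur) + w[1] * wait + w[2] * urg

def next_case(queue, w):
    return max(queue, key=lambda s: combined_score(s, w))[0]

# Different weight configurations change the dispatch decision, which is
# why the weights themselves become the optimization variables.
spt_heavy = next_case(surgeries, (100.0, 0.0, 0.0))   # favors short cases
wait_heavy = next_case(surgeries, (0.0, 1.0, 0.0))    # favors long waits
```

Revising the weight vector at run time, as the abstract suggests for coping with dispatching dynamics, amounts to calling `next_case` with a different `w`.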