Engineering to Control Noise, Loading, and Optimal Operating Points
Mitchell R. Swartz
2000-11-12
Successful engineering of low-energy nuclear systems requires control of noise, loading, and optimum operating point (OOP) manifolds. The latter result from the biphasic system response of low-energy nuclear reaction (LENR)/cold fusion systems, and their ash production rate, to input electrical power. Knowledge of the optimal operating point manifold can improve the reproducibility and efficacy of these systems in several ways. Improved control of noise, loading, and peak production rates is available through the study, and use, of OOP manifolds. Engineering of systems toward the OOP-manifold drive-point peak may, with inclusion of geometric factors, permit more accurate uniform determinations of the calibrated activity of these materials/systems.
Nonlinear Burn Control and Operating Point Optimization in ITER
NASA Astrophysics Data System (ADS)
Boyer, Mark; Schuster, Eugenio
2013-10-01
Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).
Optimal choice of cupola furnace nominal operating point
Abdelrahman, M.A.; Moore, K.L.
1998-08-01
One of the main goals in the operation of a cupola furnace is to keep the molten iron properties within prescribed bounds while maintaining the most economical operation for the cupola. In this paper the authors present a procedure to obtain the nominal values for the manipulated process variables. The nominal values are calculated by solving a constrained nonlinear programming optimization problem. Two different optimization problems are discussed and examples for using the procedure are presented.
Prediction of optimal operation point existence and parameters in lossy compression of noisy images
NASA Astrophysics Data System (ADS)
Zemliachenko, Alexander N.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2014-10-01
This paper deals with lossy compression of images corrupted by additive white Gaussian noise. For such images, compression can be characterized by the existence of an optimal operation point (OOP). At the OOP, the MSE or another metric computed between the compressed and noise-free image may reach an optimum, i.e., the maximal noise-removal effect takes place. If an OOP exists, it is reasonable to compress an image in its neighbourhood; if not, more "careful" compression is reasonable. In this paper, we demonstrate that the existence of an OOP can be predicted based on a very simple and fast analysis of discrete cosine transform (DCT) statistics in 8x8 blocks. Moreover, the OOP can be predicted not only for conventional metrics such as MSE or PSNR but also for visual quality metrics. Such prediction can be useful in automatic compression of multi- and hyperspectral remote sensing images.
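The block-DCT analysis described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' actual predictor: it applies an orthonormal 8x8 DCT to each block and reports the fraction of small-magnitude AC coefficients, one plausible statistic of the kind such a prediction could build on (the threshold and the synthetic noise parameters are illustrative assumptions).

```python
import numpy as np

def dct2_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def block_dct_stat(image, threshold):
    # Fraction of AC coefficients with magnitude below `threshold`,
    # computed over all 8x8 blocks -- an illustrative proxy statistic.
    c = dct2_matrix(8)
    h, w = image.shape
    small, total = 0, 0
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            coeffs = c @ image[i:i + 8, j:j + 8] @ c.T
            ac = np.abs(coeffs).ravel()[1:]   # drop the DC term
            small += int(np.sum(ac < threshold))
            total += ac.size
    return small / total

# Synthetic noise-dominated image: for pure N(0, sigma) noise the AC
# coefficients are also N(0, sigma), so the statistic is predictable.
rng = np.random.default_rng(0)
noisy = rng.normal(128.0, 10.0, size=(64, 64))
p = block_dct_stat(noisy, threshold=10.0)
```

For noise with standard deviation 10 and a threshold of 10, roughly 68 percent of AC coefficients fall below the threshold, so a statistic of this kind separates noise-dominated blocks from texture-dominated ones.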
Patnaik, Lalit; Umanand, Loganathan
2015-12-01
The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity. PMID:26502096
LST data management and mission operations concept. [pointing control optimization for maximum data
NASA Technical Reports Server (NTRS)
Walker, R.; Hudson, F.; Murphy, L.
1977-01-01
A candidate design concept for an LST ground facility is described. The design objectives were to use NASA institutional hardware, software and facilities wherever practical, and to maximize efficiency of telescope use. The pointing control performance requirements of LST are summarized, and the major data interfaces of the candidate ground system are diagrammed.
ATLAS solar pointing operations
NASA Technical Reports Server (NTRS)
Tyler, C. A.; Zimmerman, C. J.
1994-01-01
The ATLAS series of Spacelab missions comprises a diverse group of scientific instruments, including instruments for studying the sun and how the sun's energy changes across an eleven-year solar cycle. The ATLAS solar instruments are located on one or more pallets in the Orbiter payload bay and use the Orbiter as a pointing platform for their examinations of the sun. One of the ATLAS instruments contained a sun sensor which allowed scientists and engineers on the ground to see the pointing error of the sun with respect to the instrument and correct for it. This paper reviews information from the ATLAS 1 and ATLAS 2 missions, with particular attention given to identifying the sources of pointing discrepancies of the solar instruments and to describing the crew and ground controller procedures that were developed to correct for these discrepancies. The Orbiter pointing behavior from the ATLAS 1 and ATLAS 2 flights presented in this paper can be applied to future flights which use the Orbiter as a pointing platform.
Optimizing Operating Room Scheduling.
Levine, Wilton C; Dunn, Peter F
2015-12-01
This article reviews the management of an operating room (OR) schedule and use of the schedule to add value to an organization. We review the methodology of an OR block schedule, daily OR schedule management, and post anesthesia care unit patient flow. We discuss the importance of a well-managed OR schedule to ensure smooth patient care, not only in the OR, but throughout the entire hospital. PMID:26610624
Characterizations of fixed points of quantum operations
Li Yuan
2011-05-15
Let Φ_A be a general quantum operation. An operator B is said to be a fixed point of Φ_A if Φ_A(B) = B. In this note, we show conditions under which B being a fixed point of Φ_A implies that B is compatible with the operation element of Φ_A. In particular, we offer an extension of the generalized Lüders theorem.
Optimal rate filters for biomedical point processes.
McNames, James
2005-01-01
Rate filters are used to estimate the mean event rate of many biomedical signals that can be modeled as point processes. Historically these filters have been designed using principles from two distinct fields. Signal processing principles are used to optimize the filter's frequency response. Kernel estimation principles are typically used to optimize the asymptotic statistical properties. This paper describes a design methodology that combines these principles from both fields to optimize the frequency response subject to constraints on the filter's order, symmetry, time-domain ripple, DC gain, and minimum impulse response. Initial results suggest that time-domain ripple and a negative impulse response are necessary to design a filter with a reasonable frequency response. This suggests that some of the common assumptions about the properties of rate filters should be reconsidered. PMID:17282132
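As a toy illustration of the kernel-estimation view mentioned in this abstract (not the constrained filter design of the paper), the mean event rate of a simulated point process can be estimated by convolving the binned event train with a unit-area smoothing window; the rate, kernel width, and bin size below are arbitrary assumptions.

```python
import numpy as np

# Simulate a point process: Poisson-like events at `true_rate` events/s.
rng = np.random.default_rng(1)
dt, duration, true_rate = 0.001, 60.0, 10.0   # 1 ms bins, 60 s record
n = int(duration / dt)
events = (rng.random(n) < true_rate * dt).astype(float)

# Rate filter as kernel smoothing: a Gaussian window normalized to
# unit area, so the convolution yields a rate estimate in events/s.
sigma = 0.100                                  # 100 ms kernel width
half = int(4 * sigma / dt)
t = np.arange(-half, half + 1) * dt
kernel = np.exp(-0.5 * (t / sigma) ** 2)
kernel /= kernel.sum() * dt                    # unit area
rate = np.convolve(events, kernel, mode="same")

# Away from the record edges the estimate hovers around true_rate.
mean_rate = rate[half:-half].mean()
```

A strictly positive kernel like this one has no time-domain ripple and a non-negative impulse response, which is exactly the kind of constraint the paper argues trades off against the frequency response.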
Automated design of image operators that detect interest points.
Trujillo, Leonardo; Olague, Gustavo
2008-01-01
This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research. PMID:19053496
Linearization: Students Forget the Operating Point
ERIC Educational Resources Information Center
Roubal, J.; Husek, P.; Stecha, J.
2010-01-01
Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…
Multi-Point Combinatorial Optimization Method with Distance Based Interaction
NASA Astrophysics Data System (ADS)
Yasuda, Keiichiro; Jinnai, Hiroyuki; Ishigame, Atsushi
This paper proposes a multi-point combinatorial optimization method based on the Proximate Optimality Principle (POP), which has several advantages for solving large-scale combinatorial optimization problems. The proposed algorithm uses not only the distance between search points but also the interaction among search points in order to exploit POP in several types of combinatorial optimization problems. The proposed algorithm is applied to several typical combinatorial optimization problems (a knapsack problem, a traveling salesman problem, and a flow-shop scheduling problem) in order to verify its performance. The simulation results indicate that the proposed method achieves higher optimality than conventional combinatorial optimization methods.
Sensors operating at exceptional points: General theory
NASA Astrophysics Data System (ADS)
Wiersig, Jan
2016-03-01
A general theory of sensors based on the detection of splittings of resonant frequencies or energy levels operating at so-called exceptional points is presented. Exploiting the complex-square-root topology near such non-Hermitian degeneracies has great potential for enhanced sensitivity. Passive and active systems are discussed. The theory is specified for whispering-gallery microcavity sensors for particle detection. As an example, a microdisk with two holes is studied numerically. The theory and numerical simulations demonstrate a sevenfold enhancement of the sensitivity.
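The square-root topology mentioned in this abstract can be demonstrated with a minimal two-level toy model (an illustrative Jordan-block matrix, not the microdisk system of the paper): at an exceptional point the matrix is defective, and a perturbation of strength epsilon splits the eigenvalues in proportion to sqrt(epsilon) rather than epsilon.

```python
import numpy as np

def splitting(eps):
    # 2x2 matrix with an exceptional point at eps = 0: the unperturbed
    # block [[0, 1], [0, 0]] is defective (a doubly degenerate
    # eigenvalue with only one eigenvector).  A perturbation of size
    # eps in the lower-left corner splits the eigenvalues by 2*sqrt(eps).
    h = np.array([[0.0, 1.0], [eps, 0.0]])
    ev = np.linalg.eigvals(h)
    return abs(ev[0] - ev[1])

# Square-root scaling: shrinking the perturbation by 100x shrinks the
# splitting by only 10x -- the basis of exceptional-point sensing.
ratio = splitting(1e-4) / splitting(1e-6)
```

A Hermitian (diabolic) degeneracy would give a splitting linear in the perturbation, so the ratio would be 100 instead of 10; the square-root response is what enhances sensitivity to small perturbations.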
47 CFR 22.591 - Channels for point-to-point operation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Channels for point-to-point operation. 22.591... PUBLIC MOBILE SERVICES Paging and Radiotelephone Service Point-To-Point Operation § 22.591 Channels for point-to-point operation. The following channels are allocated for assignment to fixed transmitters...
OPTIMIZATION OF TREATMENT PLANT OPERATION
A review of the literature on upgrading the operation of wastewater treatment plants covers 61 citations concerning management, operation, maintenance, and training; process control and modelling; instrumentation and automation; and energy savings.
Universally optimal distribution of points on spheres
NASA Astrophysics Data System (ADS)
Cohn, Henry; Kumar, Abhinav
2007-01-01
We study configurations of points on the unit sphere that minimize potential energy for a broad class of potential functions (viewed as functions of the squared Euclidean distance between points). Call a configuration sharp if there are m distances between distinct points in it and it is a spherical (2m-1) -design. We prove that every sharp configuration minimizes potential energy for all completely monotonic potential functions. Examples include the minimal vectors of the E_8 and Leech lattices. We also prove the same result for the vertices of the 600 -cell, which do not form a sharp configuration. For most known cases, we prove that they are the unique global minima for energy, as long as the potential function is strictly completely monotonic. For certain potential functions, some of these configurations were previously analyzed by Yudin, Kolushov, and Andreev; we build on their techniques. We also generalize our results to other compact two-point homogeneous spaces, and we conclude with an extension to Euclidean space.
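A quick numerical illustration of the theorem in this abstract, with an assumed completely monotonic potential f(r) = 1/r in the squared distance r: the octahedron is a sharp configuration (two distances between distinct points, and a spherical 3-design), and its pair energy is lower than that of an arbitrary 6-point configuration on the sphere.

```python
import numpy as np
from itertools import combinations

def energy(points, f):
    # Total pair energy; the potential f is a function of the
    # squared Euclidean distance between points.
    return sum(f(np.sum((p - q) ** 2)) for p, q in combinations(points, 2))

f = lambda r: 1.0 / r   # completely monotonic in r = |x - y|^2

# Octahedron: a sharp configuration on the unit sphere.
# 12 pairs at squared distance 2, 3 antipodal pairs at 4 -> E = 6.75.
octahedron = np.vstack([np.eye(3), -np.eye(3)])
e_oct = energy(octahedron, f)

# A random 6-point configuration on the sphere has higher energy.
rng = np.random.default_rng(2)
x = rng.normal(size=(6, 3))
random_pts = x / np.linalg.norm(x, axis=1, keepdims=True)
e_rand = energy(random_pts, f)
```

The octahedron's energy of 6.75 is the global minimum for this potential, so any perturbed or random 6-point configuration scores strictly higher.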
Operation Fair Share Points the Way.
ERIC Educational Resources Information Center
Rodgers, Curtis E.
1982-01-01
Through "Operation Fair Share," the NAACP aims at (1) expanded Black access to entry level corporate jobs; (2) establishment of minority vendor procurement programs; (3) appointment of Blacks to the boards of directors of corporations; (4) more Black senior level corporate managers; and (5) legislation permitting contracts to be set aside for…
On the operating point of cortical computation
NASA Astrophysics Data System (ADS)
Martin, Robert; Stimberg, Marcel; Wimmer, Klaus; Obermayer, Klaus
2010-06-01
In this paper, we consider a class of network models of Hodgkin-Huxley-type neurons arranged according to a biologically plausible two-dimensional topographic orientation-preference map, as found in primary visual cortex (V1). We systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input in order to characterize different operating regimes of the network. We then compare the map-location dependence of the tuning in networks with different parametrizations with the neuronal tuning measured in cat V1 in vivo. By considering the tuning of neuronal dynamic and state variables, conductances and membrane potential respectively, our quantitative analysis is able to constrain the operating regime of V1: the data provide strong evidence that V1 operates in vivo as a network in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition. Interestingly, this recurrent regime is close to a regime of "instability" characterized by strong, self-sustained activity. The firing rate of neurons in the best-fitting model network is therefore particularly sensitive to small modulations of model parameters, possibly one of the functional benefits of this particular operating regime.
A Study on Optimal Operation of Power Generation by Waste
NASA Astrophysics Data System (ADS)
Sugahara, Hideo; Aoyagi, Yoshihiro; Kato, Masakazu
This paper proposes the optimal operation of power generation by waste. Refuse is taken as a new energy resource of biomass. Although some fossil-fuel-origin refuse, such as plastic, may be mixed in, under the Kyoto Protocol CO2 emissions are counted only for that fossil-fuel-origin fraction. Incineration is indispensable for refuse disposal, and power generation by waste is both environment-friendly and power-system-friendly, since it uses synchronous generators. Optimal planning is the key to making the most of this merit. The optimal plan includes a refuse-incinerator operation plan coordinated with refuse collection, and maintenance scheduling for the refuse-incinerator plant. Numerical simulations show that the former plan increases generated energy. For the latter, a method to determine the maintenance schedule using a genetic algorithm has been established. In addition, taking the environmental load of CO2 emissions into account, larger merits are expected from the environmental and energy-resource points of view.
Evaluation of stochastic reservoir operation optimization models
NASA Astrophysics Data System (ADS)
Celeste, Alcigeimes B.; Billib, Max
2009-09-01
This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.
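The standard operating policy (SOP) used as a baseline in this abstract admits a very compact statement. A minimal single-reservoir sketch follows (simple mass balance with hypothetical units, not the models of the paper):

```python
def sop_release(storage, inflow, demand, capacity):
    # Standard operating policy: release what is needed to meet demand
    # if possible, otherwise release all available water.
    available = storage + inflow
    release = min(demand, available)
    # Water beyond reservoir capacity spills.
    new_storage = min(available - release, capacity)
    spill = available - release - new_storage
    return release, new_storage, spill

normal = sop_release(50, 30, 40, 100)   # -> (40, 40, 0): demand met
wet = sop_release(90, 30, 10, 100)      # -> (10, 100, 10): full, spills
dry = sop_release(5, 10, 40, 100)       # -> (15, 0, 0): shortfall of 25
```

Hedging rules, by contrast, deliberately release less than demand when storage is low in order to spread a shortage over several periods; the two-dimensional hedging rule in the paper generalizes this simple policy.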
Optimal PGU operation strategy in CHP systems
NASA Astrophysics Data System (ADS)
Yun, Kyungtae
Traditional power plants utilize only about 30 percent of the primary energy that they consume; the rest is wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups as a way to improve the efficiency and reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects of designing a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; and (d) an easy-to-implement, effective, and reliable hourly building-load prediction algorithm.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Indian Point 3, LLC.; Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3; Exemption 1.0 Background Entergy Nuclear Operations, Inc. (Entergy or the licensee) is the holder of Facility Operating License No....
Optimization of the bank's operating portfolio
NASA Astrophysics Data System (ADS)
Borodachev, S. M.; Medvedev, M. A.
2016-06-01
The theory of efficient portfolios developed by Markowitz is used to optimize the structure of the types of financial operations of a bank (bank portfolio) in order to increase the profit and reduce the risk. The focus of this paper is to check the stability of the model to errors in the original data.
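The Markowitz step underlying the bank-portfolio model in this abstract can be sketched in closed form for the fully invested case (budget constraint only; the returns, covariances, and risk-aversion value below are hypothetical, and a real bank portfolio would add further constraints):

```python
import numpy as np

def markowitz_weights(mu, sigma, gamma):
    # Maximize mu'w - (gamma/2) w'Sigma w  subject to  sum(w) = 1.
    # Stationarity of the Lagrangian gives w = Sigma^{-1}(mu - nu*1)/gamma,
    # with the multiplier nu fixed by the budget constraint.
    ones = np.ones(len(mu))
    inv_mu = np.linalg.solve(sigma, mu)
    inv_one = np.linalg.solve(sigma, ones)
    nu = (ones @ inv_mu - gamma) / (ones @ inv_one)
    return (inv_mu - nu * inv_one) / gamma

# Hypothetical expected returns and covariances for three types of
# bank operations; gamma is the risk-aversion coefficient.
mu = np.array([0.08, 0.05, 0.03])
sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.010, 0.001],
                  [0.002, 0.001, 0.003]])
w = markowitz_weights(mu, sigma, gamma=5.0)
```

Raising gamma shifts the solution toward the minimum-variance portfolio, tracing out the efficient frontier, which is the trade-off the paper's stability check exercises against errors in mu and sigma.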
Optimization of ejector design and operation
NASA Astrophysics Data System (ADS)
Kuzmenko, Konstantin; Yurchenko, Nina; Vynogradskyy, Pavlo; Paramonov, Yuriy
2016-03-01
The investigation aims at optimization of gas ejector operation. The goal is to improve the inflator design so as to enable inflation of 50 liters of gas within ~30 milliseconds. To this end, an experimental facility was developed and fabricated, together with a measurement system, to study pressure patterns in the inflator path.
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-27
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Indian Point 2, LLC; Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit No. 2, Request for Action AGENCY: Nuclear Regulatory Commission. ACTION: Request for...
Optimizing robot placement for visit-point tasks
Hwang, Y.K.; Watterberg, P.A.
1996-06-01
We present a manipulator placement algorithm for minimizing the length of the manipulator motion performing a visit-point task such as spot welding. Given a set of points for the tool of a manipulator to visit, our algorithm finds the shortest robot motion required to visit the points from each possible base configuration. The base configuration resulting in the shortest motion is selected as the optimal robot placement. The shortest robot motion required for visiting multiple points from a given base configuration is computed using a variant of the traveling salesman algorithm in the robot joint space and a point-to-point path planner that plans collision-free robot paths between two configurations. Our robot placement algorithm is expected to reduce the robot cycle time during visit-point tasks, as well as to speed up the robot set-up process when building a manufacturing line.
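The outer loop of the approach in this abstract can be sketched as follows. This is only an illustration of the structure: it uses a greedy nearest-neighbour tour in the plane with Euclidean distances, standing in for the paper's joint-space traveling-salesman variant and collision-free path planner.

```python
import numpy as np

def tour_length(start, points):
    # Greedy nearest-neighbour tour from `start` through all points --
    # a cheap stand-in for a traveling-salesman solution.
    remaining = list(range(len(points)))
    pos, total = start, 0.0
    while remaining:
        dists = [np.linalg.norm(points[i] - pos) for i in remaining]
        k = int(np.argmin(dists))
        total += dists[k]
        pos = points[remaining.pop(k)]
    return total

def best_placement(bases, points):
    # Evaluate every candidate base and pick the shortest visit tour.
    lengths = [tour_length(b, points) for b in bases]
    return int(np.argmin(lengths))

# Hypothetical weld points and candidate base positions in the plane.
weld_points = np.array([[1.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
bases = np.array([[0.0, 0.0], [1.3, 0.5], [5.0, 5.0]])
best = best_placement(bases, weld_points)   # base nearest the cluster
```

In the actual algorithm the distance metric is motion time in joint space and each leg comes from a point-to-point planner, but the placement search has this same evaluate-and-select shape.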
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior-point algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
Radar antenna pointing for optimized signal to noise ratio.
Doerry, Armin Walter; Marquette, Brandeis
2013-01-01
The Signal-to-Noise Ratio (SNR) of a radar echo signal will vary across a range swath, due to spherical wavefront spreading, atmospheric attenuation, and antenna beam illumination. The antenna beam illumination will depend on antenna pointing. Calculations of geometry are complicated by the curved earth, and atmospheric refraction. This report investigates optimizing antenna pointing to maximize the minimum SNR across the range swath.
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
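The metric in this abstract can be sketched directly: fit ordinary least squares on the first subset, then take the larger of the PRESS-residual standard deviation (fit points) and the response-residual standard deviation (confirmation points). The synthetic data and the plain OLS fit below are illustrative assumptions, not the balance-calibration setup of the paper.

```python
import numpy as np

def search_metric(x_fit, y_fit, x_conf, y_conf):
    # Coefficients come from the fit subset only.
    beta, *_ = np.linalg.lstsq(x_fit, y_fit, rcond=None)
    # PRESS residuals of the fit points: e_i / (1 - h_ii), where h is
    # the hat matrix of the regression.
    h = x_fit @ np.linalg.inv(x_fit.T @ x_fit) @ x_fit.T
    press = (y_fit - x_fit @ beta) / (1.0 - np.diag(h))
    sd_press = press.std(ddof=1)
    # Response residuals of the confirmation points.
    sd_conf = (y_conf - x_conf @ beta).std(ddof=1)
    # The greater of the two standard deviations is the metric.
    return max(sd_press, sd_conf), sd_press, sd_conf

# Synthetic example: 30 fit points, 10 confirmation points.
rng = np.random.default_rng(3)
x = np.column_stack([np.ones(40), rng.uniform(-1, 1, 40)])
y = 2.0 + 3.0 * x[:, 1] + rng.normal(0, 0.1, 40)
metric, sd_press, sd_conf = search_metric(x[:30], y[:30], x[30:], y[30:])
```

Because the metric is the maximum of the two standard deviations, a model that overfits the fit subset is penalized through the confirmation residuals, and one that underfits is penalized through the PRESS residuals.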
Planning time-optimal robotic manipulator motions and work places for point-to-point tasks
NASA Technical Reports Server (NTRS)
Dubowsky, S.; Blubaugh, T. D.
1989-01-01
A method is presented which combines simple time-optimal motions in an optimal manner to yield the minimum-time motions for an important class of complex manipulator tasks composed of point-to-point moves such as assembly, electronic component insertion, and spot welding. This method can also be used to design manipulator actions and work places so that tasks can be completed in minimum time. The method has been implemented in a computer-aided design software system. Several examples are presented. Experimental results show the method's validity and utility.
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
An Optimization Study of Hot Stamping Operation
NASA Astrophysics Data System (ADS)
Ghoo, Bonyoung; Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu; Averill, Ron
2010-06-01
In the present study, 3-dimensional finite element analyses of hot-stamping processes for an Audi B-pillar product are conducted using JSTAMP/NV and HEEDS. Special attention is paid to optimization of the simulation technology coupled with thermal-mechanical formulations. Numerical simulation based on FEM technology and design optimization using the hybrid adaptive SHERPA algorithm are applied to the hot stamping operation to improve productivity. The robustness of the SHERPA algorithm is demonstrated by the results of the benchmark example. The SHERPA algorithm is found to be far more efficient than the genetic algorithm (GA), with a calculation time about 7 times shorter, and it performs well even on a large-scale problem with a complicated design space and a long calculation time.
Optimal entanglement generation from quantum operations
Leifer, M.S.; Henderson, L.; Linden, N.
2003-01-01
We consider how much entanglement can be produced by a nonlocal two-qubit unitary operation U_AB--the entangling capacity of U_AB. For a single application of U_AB, with no ancillas, we find the entangling capacity and show that it generally helps to act with U_AB on an entangled state. Allowing ancillas, we present numerical results from which we can conclude, quite generally, that allowing initial entanglement typically increases the optimal capacity in this case as well. Next, we show that allowing collective processing does not increase the entangling capacity if initial entanglement is allowed.
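As a concrete, small-scale illustration of the quantity being optimized (not the paper's general analysis), the sketch below applies a CNOT, a standard entangling two-qubit unitary, to a product input state and measures the entanglement generated via the von Neumann entropy of one qubit's reduced density matrix:

```python
import numpy as np

def entanglement_entropy(psi):
    """Von Neumann entropy (in ebits) of qubit A for a two-qubit pure state."""
    m = psi.reshape(2, 2)                       # m[a, b] = amplitude of |a>_A |b>_B
    rho_a = m @ m.conj().T                      # reduced density matrix of qubit A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)
product = np.kron(plus, zero)                   # unentangled input |+>|0>

out = CNOT @ product                            # Bell state (|00> + |11>)/sqrt(2)
print(entanglement_entropy(product))            # 0.0
print(entanglement_entropy(out))                # 1.0 ebit generated
```

Maximizing this output entropy over input states (and, with ancillas, over larger input spaces) is what defines the entangling capacity studied in the abstract.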
On Motivating Operations at the Point of Online Purchase Setting
ERIC Educational Resources Information Center
Fagerstrom, Asle; Arntzen, Erik
2013-01-01
Consumer behavior analysis can be applied over a wide range of economic topics in which the main focus is the contingencies that influence the behavior of the economic agent. This paper provides an overview on the work that has been done on the impact from motivating operations at the point of online purchase situation. Motivating operations, a…
Optimal Hedging Rule for Reservoir Refill Operation
NASA Astrophysics Data System (ADS)
Wan, W.; Zhao, J.; Lund, J. R.; Zhao, T.; Lei, X.; Wang, H.
2015-12-01
This paper develops an optimal reservoir Refill Hedging Rule (RHR) for combined water supply and flood operation using mathematical analysis. A two-stage model is developed to formulate the trade-off between operations for conservation benefit and flood damage in the reservoir refill season. Based on the probability distribution of the maximum refill water availability at the end of the second stage, three zones are characterized according to the relationship among storage capacity, expected storage buffer (ESB), and maximum safety excess discharge (MSED). The Karush-Kuhn-Tucker conditions of the model show that the optimality of the refill operation involves making the expected marginal loss of conservation benefit from unfilling (i.e., ending storage of the refill period less than storage capacity) as nearly equal to the expected marginal flood damage from levee overtopping downstream as possible while maintaining all constraints. This principle follows and combines the hedging rules for water supply and flood management. A RHR curve is drawn analogously to water supply hedging and flood hedging rules, showing the trade-off between the two objectives. The release decision has a linear relationship with the current water availability, implying the linearity of the RHR for a wide range of water conservation functions (linear, concave, or convex). A demonstration case shows the impacts of the key factors. Larger downstream flood conveyance capacity and more empty reservoir capacity allow a smaller current release, so more water can be conserved. The economic indicators of conservation benefit and flood damage compete with each other through the release: the greater the economic importance of flood damage, the more water should be released in the current stage, and vice versa. Below a critical value, improving forecasts yields less water release, but an opposing effect occurs beyond this critical value. Finally, the Danjiangkou Reservoir case study shows that the RHR together with a rolling
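The optimality principle above (equating the expected marginal loss of conservation benefit with the expected marginal flood damage) can be illustrated with a stylized two-stage sketch. The benefit and damage functions, reservoir numbers, and inflow distribution below are invented for illustration and are not taken from the paper:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
K, C, s0 = 100.0, 20.0, 60.0        # capacity, safe conveyance, current storage (invented)
w = rng.gamma(4.0, 10.0, 50_000)    # sampled refill-season inflow scenarios

def marginal_gap(r):
    """E[marginal conservation loss] - E[marginal flood damage] at release r.

    Toy forms: benefit b(s) = 10*sqrt(s), so b'(s) = 5/sqrt(s); damage
    d(q) = (q - C)^2 / 2 for forced spill q above conveyance C, so
    d'(q) = max(q - C, 0).
    """
    s_end = np.minimum(s0 - r + w, K)          # ending storage in each scenario
    forced = np.maximum(s0 - r + w - K, 0.0)   # forced spill once the reservoir is full
    loss = np.where(s_end < K, 5.0 / np.sqrt(np.maximum(s_end, 1e-6)), 0.0)
    damage = np.maximum(forced - C, 0.0)
    return loss.mean() - damage.mean()

# The optimal hedged release equates the two expected marginal terms.
r_star = brentq(marginal_gap, 0.0, 50.0)
print(f"hedged refill release: {r_star:.1f}")
```

Releasing more now lowers later spill (and flood damage) at the price of a lower refill; the root of `marginal_gap` is where those two expected marginal effects balance, which is the KKT condition the abstract describes.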
Multiple tipping points and optimal repairing in interacting networks.
Majdandzic, Antonio; Braunstein, Lidia A; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Stanley, H Eugene; Havlin, Shlomo
2016-01-01
Systems composed of many interacting dynamical networks-such as the human body with its biological networks or the global economic network consisting of regional clusters-often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two 'forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803
Multiple tipping points and optimal repairing in interacting networks
NASA Astrophysics Data System (ADS)
Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo
2016-03-01
Systems composed of many interacting dynamical networks--such as the human body with its biological networks or the global economic network consisting of regional clusters--often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two 'forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model.
CMB Polarization Detector Operating Parameter Optimization
NASA Astrophysics Data System (ADS)
Randle, Kirsten; Chuss, David; Rostem, Karwan; Wollack, Ed
2015-04-01
Examining the polarization of the Cosmic Microwave Background (CMB) provides the only known way to probe the physics of inflation in the early universe. Gravitational waves produced during inflation are posited to produce a telltale pattern of polarization on the CMB and if measured would provide both tangible evidence for inflation along with a measurement of inflation's energy scale. Leading the effort to detect and measure this phenomenon, Goddard Space Flight Center has been developing high-efficiency detectors. In order to optimize signal-to-noise ratios, sources like the atmosphere and the instrumentation must be considered. In this work we examine operating parameters of these detectors such as optical power loading and photon noise. SPS Summer Internship at NASA Goddard Spaceflight Center.
Detector characterization, optimization, and operation for ACTPol
NASA Astrophysics Data System (ADS)
Grace, Emily Ann
2016-01-01
Measurements of the temperature anisotropies of the Cosmic Microwave Background (CMB) have provided the foundation for much of our current knowledge of cosmology. Observations of the polarization of the CMB have already begun to build on this foundation and promise to illuminate open cosmological questions regarding the first moments of the universe and the properties of dark energy. The primary CMB polarization signal contains the signature of early universe physics including the possible imprint of inflationary gravitational waves, while a secondary signal arises due to late-time interactions of CMB photons which encode information about the formation and evolution of structure in the universe. The Atacama Cosmology Telescope Polarimeter (ACTPol), located at an elevation of 5200 meters in Chile and currently in its third season of observing, is designed to probe these signals with measurements of the CMB in both temperature and polarization from arcminute to degree scales. To measure the faint CMB polarization signal, ACTPol employs large, kilo-pixel detector arrays of transition edge sensor (TES) bolometers, which are cooled to a 100 mK operating temperature with a dilution refrigerator. Three such arrays are currently deployed, two with sensitivity to 150 GHz radiation and one dichroic array with 90 GHz and 150 GHz sensitivity. The operation of these large, monolithic detector arrays presents a number of challenges for both assembly and characterization. This thesis describes the design and assembly of the ACTPol polarimeter arrays and outlines techniques for their rapid characterization. These methods are employed to optimize the design and operating conditions of the detectors, select wafers for deployment, and evaluate the baseline array performance. The results of the application of these techniques to wafers from all three ACTPol arrays are described, including discussion of the measured thermal properties and time constants. Finally, aspects of the
Automatic parameter optimizer (APO) for multiple-point statistics
NASA Astrophysics Data System (ADS)
Bani Najar, Ehsanollah; Sharghi, Yousef; Mariethoz, Gregoire
2016-04-01
Multiple Point statistics (MPS) have gained popularity in recent years for generating stochastic realizations of complex natural processes. The main principle is that a training image (TI) is used to represent the spatial patterns to be modeled. One important feature of MPS is that the spatial model of the fields generated is made of 1) the chosen TI and 2) a set of algorithmic parameters that are specific to each MPS algorithm. While the choice of a training image can be guided by expert knowledge (e.g. for geological modeling) or by data acquisition methods (e.g. remote sensing) determining the algorithmic parameters can be more challenging. To date, only specific guidelines have been proposed for some simulation methods, and a general parameters inference methodology is still lacking, in particular for complex modeling settings such as when using multivariate training images. The common practice consists in carrying out an extensive parameters sensitivity analysis which can be cumbersome. An additional complexity is that the algorithmic parameters do influence CPU cost, and therefore finding optimal parameters is not only a modeling question, but also a computational challenge. To overcome these issues, we propose the automatic parameter optimizer (MPS-APO), a generic method based on stochastic optimization to rapidly determine acceptable parameters, in different settings and for any MPS method. The MPS automatic parameter optimizer proceeds in a 2-step approach. In the first step, it considers the set of input parameters of a given MPS algorithm and formulates an objective function that quantifies the reproduction of spatial patterns. The Simultaneous Perturbation Stochastic Approximation (SPSA) optimization method is used to minimize the objective function. SPSA is chosen because it is able to deal with the stochastic nature of the objective function and for its computational efficiency. At each iteration, small gaps are randomly placed in the input image
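A minimal SPSA implementation conveys why the method suits the setting described above: each iteration estimates the full gradient from only two objective evaluations, regardless of how many algorithmic parameters are being tuned. The objective below is a noisy toy quadratic standing in for an MPS pattern-reproduction score; all constants are illustrative defaults, not values from the paper:

```python
import numpy as np

def spsa_minimize(f, theta0, n_iter=500, a=0.2, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimal SPSA: two evaluations of a (possibly noisy) objective per
    iteration yield a simultaneous estimate of every gradient component."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha                                  # decaying step size
        ck = c / k**gamma                                  # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.size)   # Rademacher directions
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2.0 * ck) / delta
        theta -= ak * g_hat
    return theta

# Noisy toy objective standing in for the pattern-reproduction score of one run.
obj_rng = np.random.default_rng(1)
target = np.array([2.0, -1.0, 0.5])
noisy = lambda p: float(np.sum((p - target) ** 2) + 0.01 * obj_rng.normal())

p_opt = spsa_minimize(noisy, np.zeros(3))
print(p_opt)   # approaches [2.0, -1.0, 0.5]
```

The two-evaluation property is what makes SPSA attractive when every objective evaluation is itself an expensive stochastic simulation, as in the MPS setting.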
Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm
Terzic, Balsa; Kramer, Matthew; Jarvis, Colin
2011-03-01
The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.
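A toy version of such an evolutionary working-point search might look like the following. The resonance-avoidance fitness is a crude stand-in for the real tracking-based luminosity and stability figures of merit, and every number is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(nu):
    """Toy stability score for fractional tunes (nu_x, nu_y): reward distance
    from low-order resonance lines m*nu_x + n*nu_y = integer, weighting low
    orders more heavily. Not a real accelerator figure of merit."""
    nux, nuy = nu
    score = 0.0
    for m in range(-4, 5):
        for n in range(-4, 5):
            if m == 0 and n == 0:
                continue
            x = m * nux + n * nuy
            score += abs(x - np.rint(x)) / (abs(m) + abs(n))
    return score

def evolve(pop_size=30, gens=60):
    """(mu + lambda) evolution: keep the best half, mutate it, repeat."""
    pop = rng.uniform(0.05, 0.45, size=(pop_size, 2))   # fractional tune pairs
    for _ in range(gens):
        fit = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]            # elitist selection
        children = parents + rng.normal(0.0, 0.01, parents.shape)  # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), 0.01, 0.49)
    fit = np.array([fitness(p) for p in pop])
    return pop[np.argmax(fit)]

best = evolve()
print(f"working point candidate: nu_x = {best[0]:.3f}, nu_y = {best[1]:.3f}")
```

In the real search each fitness evaluation would be a full beam-dynamics simulation, which is exactly why a derivative-free population method is attractive.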
Optimal periodic controller for formation flying on libration point orbits
NASA Astrophysics Data System (ADS)
Peng, Haijun; Zhao, Jun; Wu, Zhigang; Zhong, Wanxie
2011-09-01
An optimal periodic controller based on continuous low-thrust is proposed for the stabilization missions of spacecraft station-keeping and formation-keeping along periodic Libration point orbits of the Sun-Earth system. Additionally, a new numerical algorithm is proposed for solving the periodic Riccati differential equation in the design of the optimal periodic controller. Practical missions show that the optimal periodic controller (which is designed with the linear periodic time-varying equation of the relative dynamical model) overcomes the problems and limitations of the time-variant LQR controller. Furthermore, nonlinear numerical simulations are presented for the missions of a leader spacecraft station-keeping and three follower spacecraft formation-keeping. Numerical simulations show that the velocity increments for spacecraft control and relative position errors vary little with changes in the altitude of periodic orbits. In addition, the actual trajectories of the leader and the follower spacecraft track the periodic reference orbit with high accuracy under the perturbation of the eccentric nature of the Earth's orbit and the initial injection errors. In particular, the relative position errors obtained by the optimal periodic controller for spacecraft formation-keeping are all in the range of millimeters.
Optimal periodic control for spacecraft pointing and attitude determination
NASA Technical Reports Server (NTRS)
Pittelkau, Mark E.
1993-01-01
A new approach to autonomous magnetic roll/yaw control of polar-orbiting, nadir-pointing momentum bias spacecraft is considered as the baseline attitude control system for the next Tiros series. It is shown that the roll/yaw dynamics with magnetic control are periodically time varying. An optimal periodic control law is then developed. The control design features a state estimator that estimates attitude, attitude rate, and environmental torque disturbances from Earth sensor and sun sensor measurements; no gyros are needed. The state estimator doubles as a dynamic attitude determination and prediction function. In addition to improved performance, the optimal controller allows a much smaller momentum bias than would otherwise be necessary. Simulation results are given.
Improving Small Signal Stability through Operating Point Adjustment
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Mittelstadt, William; Hauer, John F.; Dagle, Jeffery E.
2010-09-30
ModeMeter techniques for real-time small signal stability monitoring continue to mature, and more and more phasor measurements are available in power systems. The technology has reached the stage where modal information can be brought into real-time power system operation. This paper proposes to establish a procedure for Modal Analysis for Grid Operations (MANGO). Complementary to PSSs and other traditional modulation-based controls, MANGO aims to provide suggestions, such as increasing generation or decreasing load, for operators to mitigate low-frequency oscillations. Different from modulation-based control, the MANGO procedure proactively maintains adequate damping at all times, instead of reacting to disturbances when they occur. The effect of operating points on small signal stability is presented in this paper. Implementation alongside existing operating procedures is discussed. Several approaches for modal sensitivity estimation are investigated to associate modal damping with operating parameters. The effectiveness of the MANGO procedure is confirmed through simulation studies of several test systems.
Attitude Control Optimization for ROCSAT-2 Operation
NASA Astrophysics Data System (ADS)
Chern, Jeng-Shing; Wu, A.-M.
The second satellite of the Republic of China is named ROCSAT-2. It is a small satellite with a total mass of 750 kg for remote sensing and scientific purposes. The Remote Sensing Instrument (RSI) has resolutions of 2 m for panchromatic and 8 m for multi-spectral bands, respectively. It is mainly designed for disaster monitoring and rescue, environment and pollution monitoring, forest and agriculture planning, city and country planning, etc. for Taiwan and its surrounding islands and oceans. In order to monitor the Taiwan area constantly for a long time, the orbit is designed to be sun-synchronous with 14 revolutions per day. As to the scientific payload, it is an Imager of Sprites and Upper Atmospheric Lightning (ISUAL). Since it is a small satellite, the RSI, ISUAL, and solar panel are all body-fixed. Consequently, the satellite has to maneuver as a whole body so that the RSI, the ISUAL, or the solar panel can be pointed in the desired direction. When ROCSAT-2 rises from the horizon and catches the sunlight, it has to maneuver to face the sun for the battery to be charged. As soon as it flies over the Taiwan area, several maneuvers must be made to cover the whole area for the remote sensing mission. Since the swath of ROCSAT-2 is 24 km, it needs four stripes to form the mosaic of the Taiwan area. Usually, four maneuvers are required to fulfill the mission in one flight path. The sequence is very important from the point of view of saving energy. However, in some cases, we may need to sacrifice energy in order to obtain good remote sensing data at a particularly specified ground region. After that mission, its solar panel has to face the sun again. Then when ROCSAT-2 sets below the horizon, it has to maneuver to point the ISUAL in the specified direction for the sprite imaging mission. It is the direction where scientists predict sprites are most probable to exist. Further maneuvers may be required for the downloading of onboard data. When ROCSAT-2 rises from the horizon again, it completes
Stress-Based Crossover Operator for Structural Topology Optimization
NASA Astrophysics Data System (ADS)
Li, Cuimin; Hiroyasu, Tomoyuki; Miki, Mitsunori
In this paper, we propose a stress-based crossover (SX) operator to address the checkerboard-like material distribution and disconnected topologies that are common when a simple genetic algorithm (SGA) is applied to structural topology optimization problems (STOPs). A penalty function is defined to evaluate the fitness of each individual. A number of constrained problems are adopted to test the effectiveness of SX for STOPs. Comparison of 2-point crossover (2X) with SX indicates that SX can markedly suppress the checkerboard-like material distribution phenomenon. Comparison of evolutionary structural optimization (ESO) and SX demonstrates the global search ability and flexibility of SX. Experiments on a Michell-type problem verify the effectiveness of SX for STOPs. For a multi-loaded problem, SX searches out alternative solutions under the same parameters, which shows the global search ability of GA.
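For reference, the baseline 2-point crossover (2X) that SX is compared against can be written in a few lines on binary material-distribution strings. The stress weighting that distinguishes SX is not reproduced here; this shows only the standard operator:

```python
import random

def two_point_crossover(parent_a, parent_b, rng=random):
    """Classic 2-point crossover (2X): swap the segment between two random
    cut points. The total amount of material across the pair of children is
    conserved, since segments are exchanged, not created."""
    n = len(parent_a)
    i, j = sorted(rng.sample(range(1, n), 2))   # two distinct interior cut points
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

solid = [1] * 10   # fully "solid" material string
void = [0] * 10    # fully "void" material string
c1, c2 = two_point_crossover(solid, void)
print(c1, c2)
```

SX replaces the uniformly random choice of cut points with a stress-informed choice, which is what lets it steer offspring away from checkerboard patterns.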
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-01
We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis. PMID:24921717
Silver, Gary L
2009-01-01
Equations for interpolating five data points in a rectangular array are seldom encountered in textbooks. This paper describes a new method that renders polynomial and exponential equations for the design. Operational center point estimators are often more resistant to the effects of an outlying datum than the mean.
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy three different objectives. The complex nonlinear optimization problem was simulated using the Nelder-Mead Simplex optimization algorithm. A sensitivity analysis was performed to identify influential operational parameters on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. PMID:26292772
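A sketch of the optimization setup above, assuming invented normalization weights and response surfaces rather than the ASMN_G model, shows how a Nelder-Mead simplex handles a scalarized three-objective index:

```python
import numpy as np
from scipy.optimize import minimize

def performance_index(x):
    """Invented integrated index: weighted sum of greenhouse-gas, cost, and
    effluent-quality terms over two operational parameters (say, an aeration
    intensity and a recycle ratio). Not the ASMN_G model."""
    aer, rec = x
    if aer <= 0.0 or rec <= 0.0:
        return 1e6                              # operational parameters stay positive
    ghg = 0.5 * aer**2 + 0.2 / (0.1 + rec)      # more aeration -> more energy/N2O
    cost = 0.8 * aer + 0.3 * rec
    effluent = 1.0 / (0.2 + aer) + 0.1 * rec    # too little aeration -> poor effluent
    return 0.4 * ghg + 0.3 * cost + 0.3 * effluent

# Derivative-free simplex search, suitable when the objective comes from a
# plant simulation rather than an analytic formula.
res = minimize(performance_index, x0=[1.0, 1.0], method='Nelder-Mead')
print(res.x)   # trade-off operating point balancing the three objectives
```

Nelder-Mead is a natural fit here because each index evaluation would, in the real study, be a full plant simulation with no available gradients; the weights encode exactly the kind of trade-off among conflicting objectives the abstract reports.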
Fixed-Point Optimization of Atoms and Density in DFT.
Marks, L D
2013-06-11
I describe an algorithm for simultaneous fixed-point optimization (mixing) of the density and atomic positions in Density Functional Theory calculations which is approximately twice as fast as conventional methods, is robust, and requires minimal to no user intervention or input. The underlying numerical algorithm differs from ones previously proposed in a number of aspects and is an autoadaptive hybrid of standard Broyden methods. To understand how the algorithm works in terms of the underlying quantum mechanics, the concept of algorithmic greed for different Broyden methods is introduced, leading to the conclusion that the first Broyden method is optimal if a linear model holds, and the second if a linear model is a poor approximation. How this relates to the algorithm is discussed in terms of electronic phase transitions during a self-consistent run which result in discontinuous changes in the Jacobian. This leads to the need for a nongreedy algorithm when the charge density crosses phase boundaries, as well as a greedy algorithm within a given phase. An ansatz for selecting the algorithm structure is introduced based upon requiring the extrapolated component of the curvature condition to have projected positive eigenvalues. The general convergence of the fixed-point methods is briefly discussed in terms of the dielectric response and elastic waves using known results for quasi-Newton methods. The analysis indicates that both should show sublinear dependence on system size, depending more upon the number of different chemical environments than upon the number of atoms, consistent with the performance of the algorithm and prior literature. This is followed by details of the algorithm ranging from preconditioning to trust region control. A number of results are shown, finishing up with a discussion of some of the many open questions. PMID:26583869
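The advantage of quasi-Newton (Broyden) mixing over plain linear mixing can be seen on a toy linear self-consistency map. This illustrates the general fixed-point idea only, not the paper's autoadaptive hybrid algorithm; the map and mixing parameter are invented:

```python
import numpy as np
from scipy.optimize import broyden2

# Toy linear self-consistency map standing in for the density update
# rho_out = g(rho_in); its unique fixed point is [17.5, 12.5].
A = np.array([[0.8, 0.2], [0.1, 0.7]])
b = np.array([1.0, 2.0])
g = lambda x: A @ x + b

# Plain linear mixing x <- 0.7*x + 0.3*g(x): the slowest mode contracts at
# only 0.97 per sweep, so 50 sweeps still leave a visible error.
x_mix = np.zeros(2)
for _ in range(50):
    x_mix = 0.7 * x_mix + 0.3 * g(x_mix)

# Broyden's second method on the residual F(x) = g(x) - x reaches the fixed
# point in a handful of residual evaluations.
x_fp = broyden2(lambda y: g(y) - y, np.zeros(2), f_tol=1e-10)
print(x_fp, np.linalg.norm(x_mix - x_fp))
```

On a genuinely linear residual the Broyden secant updates effectively learn the Jacobian, which is why the abstract's distinction between the first and second Broyden methods hinges on how well a linear model holds.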
Process Parameters Optimization in Single Point Incremental Forming
NASA Astrophysics Data System (ADS)
Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh
2016-04-01
This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of degrees of freedom. The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels of sheet thickness, step size, tool rotational speed, feed rate, and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The influential process parameters for formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot, and ANOVA). The parameter with the greatest influence on both formability and surface roughness is lubrication. For formability, lubrication followed by tool rotational speed, feed rate, sheet thickness, step size, and tool radius have influence in descending order, whereas for surface roughness, lubrication followed by feed rate, step size, tool radius, sheet thickness, and tool rotational speed have influence in descending order. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm, respectively.
24 CFR 902.47 - Management operations portion of total PHAS points.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Operations § 902.47 Management operations portion of total PHAS points. Of the total 100 points available for a PHAS score, a PHA may receive up to 30 points based on the Management Operations Indicator....
Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors
NASA Astrophysics Data System (ADS)
Tun, Min Thaw; Sakaguchi, Daisaku
2016-06-01
A high pressure ratio and a wide operating range are required of a turbocharger in diesel engines. A recirculation flow type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by means of an annular passage, and a stable recirculation flow forms at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design that improves adiabatic efficiency over a wide range of operating flow rates. A sensitivity analysis of the design parameters with respect to efficiency has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, for which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.
Hill, R.C.
1998-07-01
Precise orientation control of the International Space Station (ISS) Electrical Power System (EPS) photovoltaic (PV) solar arrays is required for a number of reasons, including the optimization of power delivery to ISS system loads and payloads. To maximize power generation and delivery in general, the PV arrays are pointed directly at the sun with some allowance for inaccuracies in determination of where to point and in the actuation of pointing the PV arrays. Control of PV array orientation in this sun pointing mode is performed automatically by onboard hardware and software. During certain conditions, maximum power cannot be generated in automatic sun tracking mode due to shadowing of the PV arrays cast by other ISS structures, primarily adjacent PV arrays. In order to maximize the power generated, the PV arrays must be pointed away from the ideal sun pointing targets to reduce the amount of shadowing. The amount of off-pointing to maximize power is a function of many parameters such as the physical configuration of the ISS structures during the assembly timeframe, the solar beta angle and vehicle attitude. Thus the off-pointing cannot be controlled automatically and must be determined by ground operators. This paper presents an overview of ISS PV array orientation control, PV array power performance under shadowed and off-pointing conditions, and a methodology to maximize power under those same conditions.
Point-of-care testing in the cardiovascular operating theatre.
Nydegger, Urs E; Gygax, Erich; Carrel, Thierry
2006-01-01
Point-of-care testing (POCT) remains under scrutiny by healthcare professionals because of its young and relatively untried history. POCT methods are being developed by a few major equipment companies based on rapid progress in informatics and nanotechnology. Issues such as POCT quality control, comparability with standard laboratory procedures, standardisation, traceability and round robin testing are being left to hospitals. As a result, the clinical and operational benefits of POCT were first evident for patients on the operating table. For the management of cardiovascular surgery patients, POCT technology is an indispensable aid. Improvement of the technology has meant that clinical laboratory pathologists now recognise the need for POCT beyond their high-throughput areas. PMID:16958595
Charcoal bed operation for optimal organic carbon removal
Merritt, C.M.; Scala, F.R.
1995-05-01
Historically, evaporation, reverse osmosis, or charcoal-demineralizer systems have been used to remove impurities in liquid radwaste processing systems. At Nine Mile Point, we recently replaced our evaporators with charcoal-demineralizer systems to purify floor drain water. A comparison of the evaporator with the charcoal-demineralizer system has shown that the charcoal-demineralizer system is more effective at organic carbon removal. We also present performance data for the Granulated Activated Charcoal (GAC) vessel as a mechanical filter. Actual data show that frequent backflushing and controlled flow rates through the GAC vessel dramatically increase Total Organic Carbon (TOC) removal efficiency. Recommendations are provided for operating the GAC vessel to ensure optimal performance.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-25
..., between Beesleys Point and Somers Point, NJ, in the Federal Register (74 FR 30031). We received two... SECURITY Coast Guard 33 CFR Part 117 RIN 1625-AA09 Drawbridge Operation Regulations; Great Egg Harbor Bay... Bridge over Great Egg Harbor Bay, at mile 3.5, between Beesleys Point and Somers Point, NJ. This...
NASA Technical Reports Server (NTRS)
Brown, Jonathan M.; Petersen, Jeremy D.
2014-01-01
NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.
Optimizing Synchronization Operations for Remote Memory Communication Systems
Buntinas, Darius; Saify, Amina; Panda, Dhabaleswar K.; Nieplocha, Jarek; Bob Werner
2003-04-22
Synchronization operations, such as fence and locking, are used in many parallel operations accessing shared memory. However, a process which is blocked waiting for a fence operation to complete, or for a lock to be acquired, cannot perform useful computation. It is therefore critical that these operations be implemented as efficiently as possible to reduce the time a process waits idle. These operations also impact the scalability of the overall system. As system sizes get larger, the number of processes potentially requesting a lock increases. In this paper we describe the design and implementation of an optimized operation which combines a global fence operation and a barrier synchronization operation. We also describe our implementation of an optimized lock algorithm. The optimizations have been incorporated into the ARMCI communication library. The global fence and barrier operation gives a factor of improvement of up to 9 over the current implementation in a 16 node system, while the optimized lock implementation gives up to a 1.25 factor of improvement. These optimizations allow for more efficient and scalable applications.
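The combined fence-plus-barrier idea can be sketched outside ARMCI. Below is a minimal shared-memory analogue in Python threading (the class name `FenceBarrier` and the toy workload are invented for illustration, not the ARMCI API): each worker first flushes its own pending updates, then waits until every worker has done so, so one synchronization round replaces a separate global fence followed by a barrier.

```python
import threading

class FenceBarrier:
    """Combine a fence step with a barrier: each worker completes
    (flushes) its pending operations, then waits until all workers
    have done so. One synchronization round instead of two."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.generation = 0
        self.cond = threading.Condition()

    def fence_and_wait(self, flush):
        flush()                          # complete this worker's pending ops
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.n:     # last arrival releases everyone
                self.count = 0
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:
                    self.cond.wait()

# toy use: each worker publishes pending updates to a shared list, then syncs
pending = {0: [0, 0], 1: [1, 10], 2: [2, 20]}
shared = []
lock = threading.Lock()
fb = FenceBarrier(3)

def worker(i):
    def flush():
        with lock:
            shared.extend(pending[i])
    fb.fence_and_wait(flush)
    # after the call returns, all workers' updates are globally visible

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # → [0, 0, 1, 2, 10, 20]
```

Because every flush happens before any worker passes the barrier, a worker never observes a partially synchronized state after `fence_and_wait` returns.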
FCCU operating changes optimize octane catalyst use
Desai, P.H.
1986-09-01
The use of octane-enhancing catalysts in a fluid catalytic cracking unit (FCCU) requires changes in the operation of the unit to derive maximum benefits from the octane catalyst. In addition to the impressive octane gain achieved by the octane catalyst, the catalyst also affects the yield structure, the unit heat balance, and the product slate by reducing hydrogen transfer reactions. Catalyst manufacturers have introduced new product lines based upon ultrastable Y type (USY) zeolites which can result in 2 to 3 research octane number (RON) gains over the more traditional rare earth exchanged Y type (REY) zeolites. Here are some operating techniques for the FCCU and associated processes that will allow maximum benefits from octane catalyst use.
Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations
NASA Technical Reports Server (NTRS)
Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.
2013-01-01
Collinear Earth-Moon libration points have emerged as locations with immediate applications. These libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to the relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the Acceleration Reconnection and Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission orbit as a platform, the fundamental behavior of the trajectories is explored using Poincare maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.
Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry
NASA Astrophysics Data System (ADS)
Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun
2015-10-01
Beam pointing angle (BPA) is one of the key parameters affecting the operational performance of a laser Doppler velocimetry (LDV) system. By considering velocity sensitivity and echo power, for the first time, the optimized BPA of a vehicle LDV is analyzed. Assuming the mounting error is within ±1.0 deg and that reflectivity and roughness vary across scenarios, the optimized BPA is obtained in the range of 29 to 43 deg. Accordingly, velocity sensitivity is in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA is greater than 53.49% of that at 0 deg. Laboratory experiments with a rotating table are performed at BPAs of 10, 35, and 66 deg, and the results coincide with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg is conducted against a microwave radar reference (accuracy of ±0.5% full-scale output). The root-mean-square errors are 0.0202 m/s for the LDV and 0.1495 m/s for the Microstar II, and the mean velocity discrepancy is 0.032 m/s. It is thus proven that the optimized BPA guarantees both high velocity sensitivity and acceptable echo power simultaneously.
How beam driven operations optimize ALICE efficiency and safety
NASA Astrophysics Data System (ADS)
Pinazza, Ombretta; Augustinus, André; Bond, Peter M.; Chochula, Peter C.; Kurepin, Alexander N.; Lechman, Mateusz; Rosinsky, Peter
2012-12-01
ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE DCS is responsible for the coordination and monitoring of the various detectors and of central systems, for collecting and managing alarms, data and commands. Furthermore, it is the central tool to monitor and verify the beam status, with special emphasis on safety. In particular, it is important to ensure that the experiment's detectors are brought to and stay in a safe state, e.g. reduced voltages during the injection, acceleration, and adjusting phases of the LHC beams. Thanks to its central role, it is the appropriate system to implement automatic actions that were previously left to the initiative of the shift leader, where decisions come from the knowledge of the detectors' statuses and of the beam, combined to fulfil the scientific requirements while keeping safety as a priority in all cases. This paper shows how the central DCS is interpreting the daily operations from a beam driven point of view. A tool is being implemented in which automatic actions can be set and monitored through expert panels, with a custom level of automation. Some routine operations are already automated when a particular beam mode that can represent a safety concern is declared by the LHC. This beam driven approach is proving to be a tool for the shift crew to optimize the efficiency of data taking while improving the safety of the experiment.
Constrained genetic algorithms for optimizing multi-use reservoir operation
NASA Astrophysics Data System (ADS)
Chang, Li-Chiu; Chang, Fi-John; Wang, Kuo-Wei; Dai, Shin-Yi
2010-08-01
To derive an optimal strategy for reservoir operations to assist the decision-making process, we propose a methodology that incorporates the constrained genetic algorithm (CGA) where the ecological base flow requirements are considered as constraints to water release of reservoir operation when optimizing the 10-day reservoir storage. Furthermore, a number of penalty functions designed for different types of constraints are integrated into reservoir operational objectives to form the fitness function. To validate the applicability of this proposed methodology for reservoir operations, the Shih-Men Reservoir and its downstream water demands are used as a case study. By implementing the proposed CGA in optimizing the operational performance of the Shih-Men Reservoir for the last 20 years, we find this method provides much better performance in terms of a small generalized shortage index (GSI) for human water demands and greater ecological base flows for most of the years than historical operations do. We demonstrate the CGA approach can significantly improve the efficiency and effectiveness of water supply capability to both human and ecological base flow requirements and thus optimize reservoir operations for multiple water users. The CGA can be a powerful tool in searching for the optimal strategy for multi-use reservoir operations in water resources management.
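The penalty-function construction described above can be sketched generically. The following toy GA (all numbers and the inflow/demand series are invented, and the operators are far simpler than the paper's CGA) folds the ecological base-flow requirement and the storage bounds into the fitness function as penalty terms:

```python
import random
random.seed(0)

BASE_FLOW = 2.0            # ecological minimum release per period (constraint)
DEMAND = [5, 6, 4, 7, 5]   # human water demand per period (invented)
INFLOW = [6, 5, 5, 6, 4]   # inflow per period (invented)
CAP, S0 = 20.0, 10.0       # reservoir capacity and initial storage

def fitness(releases):
    """Objective: track demand; constraints enter as penalty terms."""
    s, score = S0, 0.0
    for r, d, q in zip(releases, DEMAND, INFLOW):
        score -= abs(r - d)                     # shortage/surplus cost
        if r < BASE_FLOW:                       # ecological-flow penalty
            score -= 100 * (BASE_FLOW - r)
        s = s + q - r
        if s < 0 or s > CAP:                    # storage-bound penalty
            score -= 100 * (max(-s, 0) + max(s - CAP, 0))
            s = min(max(s, 0.0), CAP)
    return score

def evolve(pop_size=40, gens=80):
    pop = [[random.uniform(0, 10) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            child[random.randrange(len(child))] += random.gauss(0, 0.5)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print([round(r, 1) for r in best])  # releases near demand, all above BASE_FLOW
```

With the large penalty weight, any schedule violating the base flow is dominated by feasible ones, so the surviving population respects the constraint without an explicit repair step.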
Synergy optimization and operation management on syndicate complementary knowledge cooperation
NASA Astrophysics Data System (ADS)
Tu, Kai-Jan
2014-10-01
The number of multi-enterprise knowledge cooperations has grown steadily as a result of global innovation competition. I have conducted research based on optimization and operation studies for this article and conclude that synergy management is an effective means to break through various management barriers and to resolve the chaotic dynamics of cooperation. Enterprises must communicate a shared system vision and access complementary knowledge. These are crucial considerations for enterprises seeking to exert the synergy of optimized knowledge cooperation to meet global marketing challenges.
Implementation of an Interior-Point Algorithm for Real-Time Convex Optimization
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Motaghedi, Shui; Carson, John
2007-01-01
The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.
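Interior-point methods of this family are easiest to see in one dimension. The sketch below is a generic logarithmic-barrier method, not the G-OPT primal-dual algorithm itself; it minimizes a convex objective under one inequality constraint by damped Newton steps on the barrier-augmented function while the barrier weight is tightened:

```python
def barrier_minimize(f, fp, fpp, g, gp, gpp, x0, t0=1.0, mu=10.0):
    """Toy logarithmic-barrier (interior-point) method in one variable:
    minimize f(x) subject to g(x) < 0, via Newton steps on
    phi(x) = t*f(x) - log(-g(x)) for an increasing schedule of t."""
    x, t = x0, t0
    for _ in range(6):                 # outer loop: tighten the barrier
        for _ in range(60):            # inner loop: damped Newton on phi
            d1 = t * fp(x) - gp(x) / g(x)                    # phi'(x)
            d2 = t * fpp(x) + (gp(x) / g(x))**2 - gpp(x) / g(x)  # phi''(x)
            step = d1 / d2
            while g(x - step) >= 0:    # backtrack to stay strictly feasible
                step *= 0.5
            x -= step
        t *= mu
    return x

# minimize (x - 3)^2 subject to x - 1 < 0; the constrained optimum is x = 1
x_opt = barrier_minimize(
    f=lambda x: (x - 3)**2, fp=lambda x: 2 * (x - 3), fpp=lambda x: 2.0,
    g=lambda x: x - 1, gp=lambda x: 1.0, gpp=lambda x: 0.0, x0=0.0)
print(round(x_opt, 3))  # → 1.0
```

As in the abstract's claim about guaranteed convergence, the barrier method's duality gap shrinks by a fixed factor per outer iteration, so the iteration count for a prescribed accuracy is known in advance.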
NASA Astrophysics Data System (ADS)
Bhole, Gaurav; Anjusha, V. S.; Mahesh, T. S.
2016-04-01
A robust control over quantum dynamics is of paramount importance for quantum technologies. Many of the existing control techniques are based on smooth Hamiltonian modulations involving repeated calculations of basic unitaries resulting in time complexities scaling rapidly with the length of the control sequence. Here we show that bang-bang controls need one-time calculation of basic unitaries and hence scale much more efficiently. By employing a global optimization routine such as the genetic algorithm, it is possible to synthesize not only highly intricate unitaries, but also certain nonunitary operations. We demonstrate the unitary control through the implementation of the optimal fixed-point quantum search algorithm in a three-qubit nuclear magnetic resonance (NMR) system. Moreover, by combining the bang-bang pulses with the crusher gradients, we also demonstrate nonunitary transformations of thermal equilibrium states into effective pure states in three- as well as five-qubit NMR systems.
Wang, Jian; Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping
2014-01-01
A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find out a scheduling of operation to minimize the total running cost when the change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180
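The population step of a differential evolution solver can be sketched compactly. This is classic DE/rand/1/bin on an invented two-variable surrogate cost; the paper's two-stage scheme and its integer scheduling variables are omitted:

```python
import random
random.seed(1)

def differential_evolution(obj, bounds, np_=20, F=0.6, CR=0.9, gens=120):
    """DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomially cross with the parent, keep the better of the two."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jrand = random.randrange(dim)       # force one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            f = obj(trial)
            if f <= cost[i]:                    # greedy one-to-one selection
                pop[i], cost[i] = trial, f
    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]

# invented surrogate for a running-cost model: minimum at pressure 3, recovery 0.5
obj = lambda x: (x[0] - 3.0)**2 + 10 * (x[1] - 0.5)**2
x, fx = differential_evolution(obj, [(0, 10), (0, 1)])
print([round(v, 2) for v in x])  # close to [3.0, 0.5]
```

The greedy one-to-one replacement is what makes DE robust on the nonconvex, mixed cost surfaces that arise when RO operating conditions drift from the design point.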
NASA Astrophysics Data System (ADS)
Sato, Yuki; Izui, Kazuhiro; Yamada, Takayuki; Nishiwaki, Shinji
2016-07-01
This paper proposes techniques to improve the diversity of the searching points during the optimization process in an Aggregative Gradient-based Multiobjective Optimization (AGMO) method, so that well-distributed Pareto solutions are obtained. First to be discussed is a distance constraint technique, applied among searching points in the objective space when updating design variables, that maintains a minimum distance between the points. Next, a scheme is introduced that deals with updated points that violate the distance constraint, by deleting the offending points and introducing new points in areas of the objective space where searching points are sparsely distributed. Finally, the proposed method is applied to example problems to illustrate its effectiveness.
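The two mechanisms described, deleting points that violate a minimum distance and reseeding sparsely populated regions, can be sketched as follows (the greedy subset selection and the candidate-sampling heuristic are simplifications chosen for illustration, not the AGMO update rule):

```python
import random
random.seed(2)

def dist(p, q):
    return ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5

def enforce_min_distance(points, dmin):
    """Keep a subset of points pairwise at least dmin apart (greedy),
    then replace each deleted point with a new sample placed far from
    the kept points, i.e. in a sparsely populated region."""
    kept = []
    for p in points:
        if all(dist(p, q) >= dmin for q in kept):
            kept.append(p)
    n_deleted = len(points) - len(kept)
    for _ in range(n_deleted):
        # draw candidates, keep the one maximizing distance to existing points
        cands = [(random.random(), random.random()) for _ in range(50)]
        best = max(cands, key=lambda c: min(dist(c, q) for q in kept))
        kept.append(best)
    return kept

pts = [(0.1, 0.1), (0.11, 0.1), (0.9, 0.9), (0.5, 0.5)]
out = enforce_min_distance(pts, dmin=0.05)
print(len(out))  # → 4: one clustered point was deleted and reseeded elsewhere
```

The population size is preserved while clustering in the objective space is broken up, which is the diversity property the paper's distance constraint targets.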
Improvements in floating point addition/subtraction operations
Farmwald, P.M.
1984-02-24
Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
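The pre-normalization (exponent alignment) and post-normalization (renormalizing the sum) steps that the patent targets can be illustrated with a toy software model. Real hardware, as in the bifurcated approach above, evaluates the right-shift (far) and left-shift (cancellation) cases in parallel rather than sequentially as done here:

```python
def fp_add(m1, e1, m2, e2, precision=8):
    """Toy binary floating-point addition on (mantissa, exponent) pairs,
    where the value is m * 2**e and a normalized mantissa is an integer
    in [2**(p-1), 2**p). Align exponents, add, then renormalize."""
    if e1 < e2:                            # ensure operand 1 has the larger exponent
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 >>= (e1 - e2)                       # pre-normalization: align exponents
    m, e = m1 + m2, e1
    while m >= (1 << precision):           # post-normalization: shift right on carry
        m >>= 1
        e += 1
    while 0 < m < (1 << (precision - 1)):  # ...or shift left after cancellation
        m <<= 1
        e -= 1
    return m, e

# 1.5 * 2**0 + 1.0 * 2**-1 = 2.0; with 8-bit mantissas:
# 1.5 -> (192, -7) and 0.5 -> (128, -8), so the result is (128, -6) = 2.0
print(fp_add(192, -7, 128, -8))  # → (128, -6)
```

The latency win in the patent comes from noticing that the carry case needs at most a one-bit right shift while the cancellation case needs a possibly long left shift, so the two paths can be computed concurrently and selected at the end.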
Nickel-Cadmium Battery Operation Management Optimization Using Robust Design
NASA Technical Reports Server (NTRS)
Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador
1996-01-01
In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries was very important in order to maintain nominal space flight battery performance. The optimization of existing flight battery operational performance was viewed as a new application of Taguchi Methods.
Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Chen, Robert T. N.
1996-01-01
This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.
Decomposition and coordination of large-scale operations optimization
NASA Astrophysics Data System (ADS)
Cheng, Ruoyu
Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to take on special structure. Some decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of structural complexity analysis for investigating scaling behavior and computational efficiency of decomposition strategies, novel coordination techniques and algorithms to improve the convergence behavior of decomposition and coordination methods, as well as the development of a decentralized optimization framework which embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines to practical applications of the decomposition and coordination methods. In this thesis, several case studies imply the viability of the proposed decentralized optimization techniques for real industrial applications. A pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.
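One of the simplest decomposition-and-coordination patterns is dual (price-based) decomposition, sketched here on an invented two-unit quadratic problem with a single coupling constraint. Each subproblem is solved independently for a given multiplier, and a master loop updates the multiplier from the constraint residual:

```python
def dual_decomposition(c=10.0, steps=200, alpha=0.2):
    """Minimize (x-1)^2 + 2*(y-2)^2 subject to the coupling constraint
    x + y = c. For a fixed price lam, each unit solves its own problem
    in closed form; the coordinator raises or lowers lam by subgradient
    ascent on the constraint residual until the units agree."""
    lam = 0.0
    for _ in range(steps):
        x = 1 - lam / 2              # argmin_x (x-1)^2 + lam*x
        y = 2 - lam / 4              # argmin_y 2*(y-2)^2 + lam*y
        lam += alpha * (x + y - c)   # master update on the residual
    return x, y, lam

x, y, lam = dual_decomposition()
print(round(x, 3), round(y, 3), round(x + y, 3))  # → 5.667 4.333 10.0
```

The subproblems never see each other, only the price `lam`, which is exactly the structure that lets the thesis distribute them across a computing environment; the coordinator's update is the cheap, centralized part.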
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed-integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit-commitment formulation along with water-quality constraints used to evaluate reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and the combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with the local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
Point-process principal components analysis via geometric optimization.
Solo, Victor; Pasha, Syed Ahmed
2013-01-01
There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach. PMID:23020106
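The manifold-ascent ingredient can be illustrated apart from the point-process likelihood. The sketch below performs steepest ascent of tr(XᵀAX) with a Gram-Schmidt (QR-type) retraction back onto the Stiefel manifold, here for a single column, which recovers the leading eigenvector of a small symmetric matrix; the matrix and step size are invented:

```python
import random
random.seed(3)

def gram_schmidt(cols):
    """Orthonormalize columns: the retraction back onto the Stiefel manifold."""
    out = []
    for v in cols:
        for u in out:
            d = sum(a * b for a, b in zip(u, v))
            v = [a - d * b for a, b in zip(v, u)]
        n = sum(a * a for a in v) ** 0.5
        out.append([a / n for a in v])
    return out

def stiefel_ascent(A, p=1, lr=0.1, iters=500):
    """Maximize tr(X^T A X) over orthonormal n-by-p X by plain gradient
    ascent followed by re-orthonormalization, the simplest form of
    steepest ascent on a Stiefel manifold (X stored as p columns)."""
    n = len(A)
    X = gram_schmidt([[random.gauss(0, 1) for _ in range(n)] for _ in range(p)])
    for _ in range(iters):
        G = [[2 * sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
             for x in X]                                   # gradient 2*A*x per column
        X = [[xi + lr * gi for xi, gi in zip(x, g)] for x, g in zip(X, G)]
        X = gram_schmidt(X)                                # retraction
    return X

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 1.0]]
x = stiefel_ascent(A)[0]
rayleigh = sum(x[i] * sum(A[i][j] * x[j] for j in range(3)) for i in range(3))
print(round(rayleigh, 3))  # → 4.618, the largest eigenvalue of A
```

In the paper the ascent direction comes from the point-process log-likelihood rather than a fixed quadratic form, but the project-ascend-retract loop is the same.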
Mode-tracking based stationary-point optimization.
Bergeler, Maike; Herrmann, Carmen; Reiher, Markus
2015-07-15
In this work, we present a transition-state optimization protocol based on the Mode-Tracking algorithm [Reiher and Neugebauer, J. Chem. Phys., 2003, 118, 1634]. By calculating only the eigenvector of interest instead of diagonalizing the full Hessian matrix and performing an eigenvector following search based on the selectively calculated vector, we can efficiently optimize transition-state structures. The initial guess structures and eigenvectors are either chosen from a linear interpolation between the reactant and product structures, from a nudged-elastic band search, from a constrained-optimization scan, or from the minimum-energy structures. Alternatively, initial guess vectors based on chemical intuition may be defined. We then iteratively refine the selected vectors by the Davidson subspace iteration technique. This procedure accelerates finding transition states for large molecules of a few hundred atoms. It is also beneficial in cases where the starting structure is very different from the transition-state structure or where the desired vector to follow is not the one with lowest eigenvalue. Explorative studies of reaction pathways are feasible by following manually constructed molecular distortions. PMID:26073318
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
47 CFR 22.621 - Channels for point-to-multipoint operation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Channels for point-to-multipoint operation. 22... SERVICES PUBLIC MOBILE SERVICES Paging and Radiotelephone Service Point-To-Multipoint Operation § 22.621 Channels for point-to-multipoint operation. The following channels are allocated for assignment...
Application of trajectory optimization principles to minimize aircraft operating costs
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Morello, S. A.; Erzberger, H.
1979-01-01
This paper summarizes various applications of trajectory optimization principles that have been or are being devised by both government and industrial researchers to minimize aircraft direct operating costs (DOC). These costs (time and fuel) are computed for aircraft constrained to fly over a fixed range. Optimization theory is briefly outlined, and specific algorithms which have resulted from application of this theory are described. Typical results which demonstrate use of these algorithms and the potential savings which they can produce are given. Finally, need for further trajectory optimization research is presented.
Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization
NASA Astrophysics Data System (ADS)
Civit Sabate, Carles
In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The facility on which the optimization is based is the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks provide the ability to decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help to approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs and provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied using simulation with historical data.
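The economics of the tank can be sketched with a greedy price-arbitrage toy, not the thesis' linear program: production is shifted from expensive hours to cheap ones, with tank losses and the hour-by-hour storage profile ignored and the tank capacity treated as a cap on total shifted energy. All prices and loads below are invented:

```python
def tes_cost(prices, demand, chiller_cap, tank_cap):
    """Greedy demand-shifting: produce a unit of cooling early at a low
    price, store it, and displace a unit of later high-price production.
    Moves are applied in order of decreasing price gain."""
    T = len(prices)
    serve = demand[:]                         # chiller output per hour
    moves = sorted(((prices[j] - prices[i], i, j)
                    for i in range(T) for j in range(T) if i < j),
                   reverse=True)
    stored = 0.0
    for gain, i, j in moves:
        if gain <= 0:
            break
        room = min(chiller_cap - serve[i],    # spare capacity in the cheap hour
                   serve[j],                  # displaceable load in the dear hour
                   tank_cap - stored)         # remaining tank volume
        if room > 0:
            serve[i] += room
            serve[j] -= room
            stored += room
    return sum(p * s for p, s in zip(prices, serve))

prices = [1, 1, 1, 5, 5, 5]   # $/unit: cheap night hours, expensive afternoon
demand = [2, 2, 2, 2, 2, 2]
cost = tes_cost(prices, demand, chiller_cap=4, tank_cap=5)
print(cost)  # → 16.0, versus 36 with no tank
```

An LP formulation, as in the thesis, additionally tracks the storage level over time and can impose charge/discharge rate limits, which this greedy sketch deliberately omits.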
Optimizing Reservoir Operation to Adapt to the Climate Change
NASA Astrophysics Data System (ADS)
Madadgar, S.; Jung, I.; Moradkhani, H.
2010-12-01
Climate change and upcoming variation in flood timing necessitate the adaptation of the current rule curves developed for the operation of water reservoirs so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing some possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model, including the standard operation policy and a global optimization method, tunes the current rule curve upon 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project the streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and the expected value of the ensemble, is conducted, and performance assessment is made using statistical indices including reliability, resilience, vulnerability and sustainability.
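The four indices named at the end are commonly computed from a supply/demand series as follows. The composite sustainability index is taken here as the geometric mean of the three components, one common convention; the series itself is invented:

```python
def performance_indices(supplied, demanded):
    """Reservoir performance indices: reliability (fraction of periods
    with no shortage), resilience (probability a failure is followed by
    a non-failure), vulnerability (mean relative shortage during
    failures), and a composite sustainability index."""
    fail = [s < d for s, d in zip(supplied, demanded)]
    n = len(fail)
    reliability = 1 - sum(fail) / n
    recoveries = sum(1 for t in range(n - 1) if fail[t] and not fail[t + 1])
    resilience = recoveries / sum(fail) if any(fail) else 1.0
    shortages = [(d - s) / d
                 for s, d, f in zip(supplied, demanded, fail) if f]
    vulnerability = sum(shortages) / len(shortages) if shortages else 0.0
    sustainability = (reliability * resilience * (1 - vulnerability)) ** (1 / 3)
    return reliability, resilience, vulnerability, sustainability

rel, res, vul, sus = performance_indices(
    supplied=[10, 8, 10, 10, 6, 10, 10, 10, 9, 10],
    demanded=[10] * 10)
print(round(rel, 2), round(res, 2), round(vul, 2), round(sus, 2))
# → 0.7 1.0 0.23 0.81
```

Scoring candidate rule curves with a single composite index like this is what allows a global optimizer to rank them across the GCM/emission-scenario ensemble.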
Bilinear quark operator renormalization at generalized symmetric point
NASA Astrophysics Data System (ADS)
Bell, J. M.; Gracey, J. A.
2016-03-01
We compute Green's functions with a bilinear quark operator inserted at nonzero momentum for a generalized momentum configuration to two loops. These are required to assist lattice gauge theory measurements of the same quantity in matching to the high energy behavior. The flavor nonsinglet operators considered are the scalar, vector and tensor currents as well as the second moment of the twist-2 Wilson operator used in deep inelastic scattering for the measurement of nucleon structure functions.
76 FR 60733 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-30
... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY AGENCY... the Smith Point Bridge, 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. The.... SUPPLEMENTARY INFORMATION: The Smith Point Bridge, across Narrow Bay, mile 6.1, between Smith Point and...
SVOM pointing strategy: how to optimize the redshift measurements?
Cordier, B.; Schanne, S.
2008-05-22
The Sino-French SVOM mission (Space-based multi-band astronomical Variable Objects Monitor) has been designed to detect all known types of gamma-ray bursts (GRBs) and to provide fast and reliable GRB positions. In this study we present the SVOM pointing strategy, which should ensure the largest number of localized bursts allowing a redshift measurement. The redshift measurement can only be performed by large telescopes located on Earth. The best scientific return will be achieved if we are able to combine constraints from both the space segment (platform and payload) and the ground telescopes (visibility).
Optimizing and controlling the operation of heat-exchanger networks
Aguilera, N.; Marchetti, J.L.
1998-05-01
A procedure was developed for on-line optimization and control systems of heat-exchanger networks, which features a two-level control structure, one for a constant configuration control system and the other for a supervisor on-line optimizer. The coordination between levels is achieved by adjusting the formulation of the optimization problem to meet requirements of the adopted control system. The general goal is always to work without losing stream temperature targets while keeping the highest energy integration. The operation constraints used for heat-exchanger and utility units emphasize the computation of heat-exchanger duties rather than intermediate stream temperatures. This simplifies the modeling task and provides clear links with the limits of the manipulated variables. The optimal condition is determined using LP or NLP, depending on the final problem formulation. Degrees of freedom for optimization and equation constraints for considering simple and multiple bypasses are rigorously discussed. An example used shows how the optimization problem can be adjusted to a specific network design, its expected operating space, and the control configuration. Dynamic simulations also show benefits and limitations of this procedure.
A Transmittance-optimized, Point-focus Fresnel Lens Solar Concentrator
NASA Technical Reports Server (NTRS)
Oneill, M. J.
1984-01-01
The development of a point-focus Fresnel lens solar concentrator for high-temperature solar thermal energy system applications is discussed. The concentrator utilizes a transmittance-optimized, short-focal-length, dome-shaped refractive Fresnel lens as the optical element. This concentrator combines both good optical performance and a large tolerance for manufacturing, deflection, and tracking errors. The conceptual design of an 11-meter diameter concentrator which should provide an overall collector efficiency of about 70% at an 815 C (1500 F) receiver operating temperature and a 1500X geometric concentration ratio (lens aperture area/receiver aperture area) was completed. Results of optical and thermal analyses of the collector, a discussion of manufacturing methods for making the large lens, and an update on the current status and future plans of the development program are included.
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Tumer, Irem
2005-01-01
In this paper, we present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of reusable launch vehicles (RLVs) that has been partially developed at the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be posed as a constrained optimization problem in which the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can easily be extended to other domains of systems health monitoring.
Driving external chemistry optimization via operations management principles.
Bi, F Christopher; Frost, Heather N; Ling, Xiaolan; Perry, David A; Sakata, Sylvie K; Bailey, Simon; Fobian, Yvette M; Sloan, Leslie; Wood, Anthony
2014-03-01
Confronted with the need to significantly raise the productivity of remotely located chemistry CROs, Pfizer embraced a commitment to continuous improvement, leveraging tools from both Lean Six Sigma and queue management theory to deliver positive, measurable outcomes. During 2012, cycle times were reduced by 48% by optimizing the work in progress and conducting a detailed workflow analysis to identify and address pinch points. Compound flow was increased by 29% by optimizing the request process and de-risking the chemistry. Underpinning both achievements was the development of close working relationships and productive communication between Pfizer and CRO chemists. PMID:23973340
Trajectory optimization for intra-operative nuclear tomographic imaging.
Vogel, Jakob; Lasser, Tobias; Gardiazabal, José; Navab, Nassir
2013-10-01
Diagnostic nuclear imaging modalities like SPECT typically employ gantries to ensure a densely sampled geometry of detectors in order to keep the inverse problem of tomographic reconstruction as well-posed as possible. In an intra-operative setting with mobile freehand detectors the situation changes significantly, and having an optimal detector trajectory during acquisition becomes critical. In this paper we propose an incremental optimization method based on the numerical condition of the system matrix of the underlying iterative reconstruction method to calculate optimal detector positions during acquisition in real-time. The performance of this approach is evaluated using simulations. A first experiment on a phantom using a robot-controlled intra-operative SPECT-like setup demonstrates the feasibility of the approach. PMID:23706624
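The incremental criterion described above can be sketched numerically: given a pool of candidate detector poses, each contributing one row to the system matrix, greedily acquire the pose that minimizes the condition number of the stacked matrix. The random response model and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_unknowns = 6                            # e.g. voxel activities to reconstruct
candidates = rng.random((40, n_unknowns)) # hypothetical detector-response rows

def next_best(A, pool):
    """Index in `pool` whose row minimizes cond([A; row])."""
    conds = [np.linalg.cond(np.vstack([A, row])) for row in pool]
    return int(np.argmin(conds)), min(conds)

A = candidates[:2].copy()            # measurements already acquired
remaining = list(range(2, len(candidates)))
history = []
for _ in range(8):                   # acquire 8 more poses greedily
    i, c = next_best(A, candidates[remaining])
    A = np.vstack([A, candidates[remaining.pop(i)]])
    history.append(c)
```

In a real-time freehand setting the pool would be the poses reachable from the detector's current position, re-evaluated after every acquisition.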
Na-Faraday rotation filtering: The optimal point
Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja
2014-01-01
Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These, so-called, Faraday anomalous dispersion optical filters (FADOFs) can be by far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim to find the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251
Near-Optimal Operation of Dual-Fuel Launch Vehicles
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Chou, H. C.; Bowles, J. V.
1996-01-01
A near-optimal guidance law for the ascent trajectory from the earth's surface to earth orbit of a fully reusable single-stage-to-orbit pure-rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function, derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.
A fixed point theorem for certain operator valued maps
NASA Technical Reports Server (NTRS)
Brown, D. R.; Omalley, M. J.
1978-01-01
In this paper, we develop a family of Neuberger-like results to find points z ∈ H satisfying L(z)z = z and P(z) = z. This family includes Neuberger's theorem and has the additional property that most of the sequences {q_n} converge to idempotent elements of B_1(H).
On point spread function modelling: towards optimal interpolation
NASA Astrophysics Data System (ADS)
Bergé, Joel; Price, Sedona; Amara, Adam; Rhodes, Jason
2012-01-01
Point spread function (PSF) modelling is a central part of any astronomy data analysis relying on measuring the shapes of objects. It is especially crucial for weak gravitational lensing, in order to beat down systematics and allow one to reach the full potential of weak lensing in measuring dark energy. A PSF modelling pipeline is made of two main steps: the first one is to assess its shape on stars, and the second is to interpolate it at any desired position (usually galaxies). We focus on the second part, and compare different interpolation schemes, including polynomial interpolation, radial basis functions, Delaunay triangulation and Kriging. For that purpose, we develop simulations of PSF fields, in which stars are built from a set of basis functions defined from a principal components analysis of a real ground-based image. We find that Kriging gives the most reliable interpolation, significantly better than the traditionally used polynomial interpolation. We also note that although a Kriging interpolation on individual images is enough to control systematics at the level necessary for current weak lensing surveys, more elaborate techniques will have to be developed to reach future ambitious surveys' requirements.
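As an illustration of the interpolation step, the sketch below predicts a smooth scalar PSF feature (e.g. one ellipticity component) at galaxy positions from its values at star positions, using simple zero-mean kriging with a Gaussian covariance. The field, length scale, and nugget are assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
stars = rng.uniform(0.0, 1.0, size=(50, 2))   # star positions on the image

def truth(p):                                 # assumed smooth "PSF feature"
    return np.sin(3.0 * p[:, 0]) * np.cos(2.0 * p[:, 1])

values = truth(stars)

def krige(train_x, train_y, query_x, length=0.3, nugget=1e-6):
    """Simple (zero-mean) kriging with a Gaussian covariance."""
    def cov(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = cov(train_x, train_x) + nugget * np.eye(len(train_x))
    return cov(query_x, train_x) @ np.linalg.solve(K, train_y)

galaxies = rng.uniform(0.1, 0.9, size=(200, 2))  # positions to interpolate at
pred = krige(stars, values, galaxies)
rmse = float(np.sqrt(np.mean((pred - truth(galaxies)) ** 2)))
```

Unlike a global polynomial fit, the kriging weights adapt to the spatial covariance of the field, which is what gives it the edge reported in the abstract.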
AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM
NASA Technical Reports Server (NTRS)
Miko, J.
1994-01-01
Scientists at Goddard have developed an efficient and powerful program -- An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform -- which combines real- and complex-valued one-dimensional Fast Fourier Transforms (FFTs) to execute a two-dimensional FFT and compute its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real-valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex-valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex-valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed in an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 ms. Documentation is included.
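The row-column decomposition and power-spectrum step can be reproduced in a few lines of NumPy. The original runs in fixed point on a DSP board; this is only a floating-point sketch of the same arithmetic:

```python
import numpy as np

rng = np.random.default_rng(2)
frame = rng.random((64, 64))          # 64x64 real-valued input frame

# Row-column decomposition of the 2-D FFT:
rows = np.fft.fft(frame, axis=1)      # 1-D FFT of each row (complex results)
full = np.fft.fft(rows, axis=0)       # 1-D FFT of each column of those results

# Power spectrum = sum of squares of the real and imaginary parts
power = full.real**2 + full.imag**2
```

The two passes of one-dimensional FFTs are mathematically identical to a direct two-dimensional FFT, which is what makes the decomposition attractive on hardware with a fast 1-D kernel.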
Optimal Operation of Energy Storage in Power Transmission and Distribution
NASA Astrophysics Data System (ADS)
Akhavan Hejazi, Seyed Hossein
In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider
NASA Astrophysics Data System (ADS)
Yang, Dong; Ren, Wei-Xin; Hu, Yi-Ding; Li, Dan
2015-08-01
Structural health monitoring (SHM) involves sampling operational vibration measurements over time so that structural features can be extracted. The recurrence plot (RP) and the corresponding recurrence quantification analysis (RQA) have become useful tools in various fields due to their efficiency. Threshold selection is one of the key issues in ensuring that the constructed recurrence plot contains enough recurrence points, and different signals naturally require different threshold values. This paper presents an approach to determine the optimal threshold for operational vibration measurements of civil engineering structures. The surrogate technique and the Taguchi loss function are proposed to generate reliable data and to locate the point of optimal discrimination power, at which the threshold is optimum. The impact of the choice of recurrence threshold on different signals is discussed. It is demonstrated that the proposed method for identifying the optimal threshold is applicable to operational vibration measurements and provides a way to find the optimal threshold for the best RP construction of structural vibration measurements under operational conditions.
Optimality conditions for a two-stage reservoir operation problem
NASA Astrophysics Data System (ADS)
Zhao, Jianshi; Cai, Ximing; Wang, Zhongjing
2011-08-01
This paper discusses the optimality conditions for the standard operation policy (SOP) and hedging rule (HR) for a two-stage reservoir operation problem using a consistent theoretical framework. The effects of three typical constraints, i.e., mass balance, nonnegative release, and storage constraints, under both certain and uncertain conditions are analyzed. When all nonnegative-release and storage constraints are unbinding, HR results in optimal reservoir operation following the marginal benefit (MB) principle (the MB is equal over the current and future stages). However, if any of those constraints is binding, SOP results in the optimal solution, except in some special cases that require carrying water over from the current stage to the future stage, when extreme drought is certain and a higher marginal utility exists for the second stage. Furthermore, uncertainty complicates the effects of the various constraints. A higher uncertainty level in the future makes HR more favorable, as water needs to be reserved to defend against the risk caused by uncertainty. Using the derived optimality conditions, an algorithm for solving a numerical model is developed and tested with the Miyun Reservoir in China.
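The interior-optimum condition, equal marginal benefits across the two stages, can be checked on a toy two-stage allocation. The square-root benefit functions and all figures below are assumptions for illustration, not the paper's model:

```python
import numpy as np

W = 10.0   # total water available across the two stages (assumed)
a = 1.5    # relative marginal utility of stage-2 water (assumed)

def benefit(r):
    """Concave total benefit: release r now, carry over W - r."""
    return np.sqrt(r) + a * np.sqrt(W - r)

# grid search for the hedging release in stage 1
r = np.linspace(1e-6, W - 1e-6, 100001)
r_opt = r[np.argmax(benefit(r))]

# marginal benefits at the optimum (interior, no constraint binding)
mb1 = 1.0 / (2.0 * np.sqrt(r_opt))       # stage 1
mb2 = a / (2.0 * np.sqrt(W - r_opt))     # stage 2
```

At an interior optimum the two marginal benefits coincide (here r* = W/(1 + a²)); once a nonnegativity or storage constraint binds, the equality breaks down and SOP becomes optimal, as the abstract states.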
Optimality Conditions for A Two-Stage Reservoir Operation Problem
NASA Astrophysics Data System (ADS)
Zhao, J.; Cai, X.; Wang, Z.
2010-12-01
This paper discusses the optimality conditions for the standard operation policy (SOP) and hedging rule (HR) for a two-stage reservoir operation problem within a consistent theoretical framework. The effects of three typical constraints, namely mass balance, non-negative release, and storage constraints, under both certain and uncertain conditions, have been analyzed. When all non-negative-release and storage constraints are non-binding, HR results in optimal reservoir operation following the marginal benefit (MB) principle (the MB is equal over the two stages); if any of the non-negative-release or storage constraints is binding, SOP in general results in the optimal solution, except in two special cases. One of them is a complement of the traditional SOP/HR curve, which occurs when the capacity constraint is binding; the other is a special hedging rule under which all water in the current stage should be carried over to the future, when extreme drought is certain and a higher marginal utility exists for the second stage. Furthermore, uncertainty complicates the effects of the various constraints, but in general a higher uncertainty level in the future makes HR more favorable, since water needs to be reserved to defend against the risk caused by the uncertainty. Using the derived optimality conditions, an algorithm for solving the model numerically has been developed and tested with hypothetical examples.
Break-Even Point for a Proof Slip Operation
ERIC Educational Resources Information Center
Anderson, James F.
1972-01-01
Break-even analysis is applied to determine the number of titles added per year that is sufficient to make economical use of Library of Congress proof slips and a Xerox 914 copying machine in a library's cataloging operation. A formula is derived, and an example of its use is given. (1 reference) (Author/SJ)
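A generic break-even formula of the kind the article derives divides the fixed annual cost of the copying equipment by the per-title saving over the alternative method. The figures below are hypothetical, not those of the article:

```python
def break_even_titles(fixed_cost, alt_cost_per_title, slip_cost_per_title):
    """Titles per year at which the proof-slip operation pays for itself.
    All arguments are hypothetical illustration values."""
    return fixed_cost / (alt_cost_per_title - slip_cost_per_title)

# e.g. $1200/yr machine cost; $0.90/title manual vs $0.30/title with slips
n_star = break_even_titles(1200.0, 0.90, 0.30)   # about 2000 titles/year
```

Below that volume the fixed cost dominates and the conventional method is cheaper; above it, the proof-slip operation wins.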
Robust optimal sun-pointing control of a large solar power satellite
NASA Astrophysics Data System (ADS)
Wu, Shunan; Zhang, Kaiming; Peng, Haijun; Wu, Zhigang; Radice, Gianmarco
2016-10-01
The robust optimal sun-pointing control strategy for a large geostationary solar power satellite (SPS) is addressed in this paper. The SPS is considered as a huge rigid body, and the sun-pointing dynamics are first formulated in state-space representation. The perturbation effects caused by gravity gradient, solar radiation pressure, and microwave reaction are investigated. To perform sun-pointing maneuvers, a periodically time-varying robust optimal LQR controller is designed to assess the pointing accuracy and the control inputs. To reduce the pointing errors, a disturbance rejection technique is incorporated into the proposed LQR controller. A recursive algorithm is then proposed to solve for the optimal LQR control gain. Simulation results are finally provided to illustrate the performance of the proposed closed-loop system.
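A minimal sketch of the LQR design step, on an assumed single-axis double-integrator pointing model rather than the paper's full time-varying SPS dynamics; the discrete Riccati equation is solved by plain fixed-point iteration:

```python
import numpy as np

dt = 0.1                                 # assumed single-axis pointing model:
A = np.array([[1.0, dt], [0.0, 1.0]])    # state = [pointing error, rate]
B = np.array([[0.5 * dt**2], [dt]])      # discretized double integrator
Q = np.diag([10.0, 1.0])                 # penalize pointing error over rate
R = np.array([[1.0]])                    # control-effort penalty

# Solve the discrete algebraic Riccati equation by value iteration
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain u = -K x
    P = Q + A.T @ P @ (A - B @ K)

rho = max(abs(np.linalg.eigvals(A - B @ K)))  # closed-loop spectral radius
```

A spectral radius below one confirms the regulator stabilizes the pointing error; the paper's periodically time-varying gain would replace this constant K.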
78 FR 52987 - Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-27
...The U.S. Nuclear Regulatory Commission (NRC) has concluded that existing exemptions from its regulations, ``Fire Protection Program for Nuclear Power Facilities Operating Prior to January 1, 1979,'' for Fire Areas ETN-4 and PAB-2, issued to Entergy Nuclear Operations, Inc. (the licensee), for operation of Indian Point Nuclear Generating Unit 3 (Indian Point 3), located in Westchester County,......
78 FR 23845 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-23
... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY AGENCY... the Smith Point Bridge, mile 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. The deviation is necessary to facilitate the Smith Point Triathlon. This deviation allows the...
CFD-based simulation of operational point influences on product changing processes
NASA Astrophysics Data System (ADS)
Szöke, L.; Wortberg, J.
2014-05-01
Saving resources has become an increasing priority in optimizing plastics extrusion processes, and the analysis of color and material changes has attracted interest as a way to prevent unnecessary material loss. This interest is justified especially by the growing number of changeover processes, a reaction to more individualized product specifications and thus decreasing lot sizes over recent decades. It can be shown that commercial numerical tools are capable of plausible calculations of changeover processes, enabling process observation and giving the possibility of predicting different influences. Due to the highly dynamic character of the flow behavior in the control volume, a transient approach is necessary to show the effects of different operational points on product changes. To determine the progress of the product change, the volume of fluid (VOF) multiphase model is used. The influence of operational points on the changeover process is analyzed through separate observations of two important input parameters: on one hand the effects of varied mass flow rates as inlet boundary conditions, and on the other hand different mass temperatures. To check the plausibility of the calculation method, the results are discussed with reference to exemplary experimental data in a qualitative comparison. The experimental data are obtained using special laboratory equipment that neglects influences from the extruder and takes only the die as the control volume.
ERIC Educational Resources Information Center
Ben-Yashar, Ruth; Nitzan, Shmuel; Vos, Hans J.
This paper compares the determination of optimal cutoff points for single and multiple tests in the field of personnel selection. Decisional skills of predictor tests composing the multiple test are assumed to be endogenous variables that depend on the cutting points to be set. The main result specifies the condition that determines the…
Optimization of operating conditions in tunnel drying of food
Dong Sun Lee . Dept. of Food Engineering); Yu Ryang Pyun . Dept. of Food Engineering)
1993-01-01
A food drying process in a tunnel dryer was modeled from Keey's drying model and an experimental drying curve, and the operating conditions, consisting of inlet air temperature, air recycle ratio, and air flow rate, were optimized. Radish was chosen as a typical food material to be dried because it shows the typical drying characteristics of foods, with quality indexes of ascorbic acid destruction and browning during drying. Optimization of cocurrent and countercurrent tunnel drying resulted in higher inlet air temperature, lower recycle ratio, and higher air flow rate with shorter total drying time. Compared with cocurrent operation, countercurrent drying used lower air temperature, lower recycle ratio, and lower air flow rate, and appeared to be more efficient in energy usage. Most of the consumed energy was shown to be used for air heating and then escaped from the dryer in the form of exhaust air.
Physics-Based Prognostics for Optimizing Plant Operation
Leonard J. Bond; Don B. Jarrell
2005-03-01
Scientists at the Pacific Northwest National Laboratory (PNNL) have examined the necessity of optimizing energy plant operation using DSOM® (Decision Support Operation and Maintenance), which has been deployed at several sites. This approach has been expanded to include a prognostics component and tested on a pilot-scale service water system modeled on the design employed in a nuclear power plant. A key element in plant optimization is understanding and controlling the aging process of safety-specific nuclear plant components. This paper reports the development and demonstration of a physics-based approach to prognostic analysis that combines distributed computing, RF data links, and the measurement of aging-precursor metrics and their correlation with degradation rate and projected machine failure.
The optimization of operating parameters on microalgae upscaling process planning.
Ma, Yu-An; Huang, Hsin-Fu; Yu, Chung-Chyi
2016-03-01
The upscaling process planning developed in this study primarily involved optimizing operating parameters, i.e., dilution ratios, during process designs. Minimal variable cost was used as an indicator for selecting the optimal combination of dilution ratios. The upper and lower mean confidence intervals obtained from the actual cultured cell density data were used as the final cell density stability indicator after the operating parameters or dilution ratios were selected. The process planning method and results were demonstrated through three case studies of batch culture simulation. They are (1) final objective cell densities were adjusted, (2) high and low light intensities were used for intermediate-scale cultures, and (3) the number of culture days was expressed as integers for the intermediate-scale culture. PMID:26739144
NASA Astrophysics Data System (ADS)
Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu
2016-02-01
We describe a technique for stabilizing the operation point of interferometric sensors under quadrature demodulation, based on a tunable distributed feedback (DFB) laser. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into the interferometric system, the operation point remains stabilized even when the system suffers various environmental perturbations. To demonstrate the feasibility of this technique, experiments were performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that the Q-point was effectively tracked.
Optimal recovery of linear operators in non-Euclidean metrics
Osipenko, K Yu
2014-10-31
The paper looks at problems concerning the recovery of operators from noisy information in non-Euclidean metrics. A number of general theorems are proved and applied to recovery problems for functions and their derivatives from the noisy Fourier transform. In some cases, a family of optimal methods is found, from which the methods requiring the least amount of original information are singled out. Bibliography: 25 titles.
Optimizing integrated airport surface and terminal airspace operations under uncertainty
NASA Astrophysics Data System (ADS)
Bosson, Christabelle S.
In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing have the potential to improve the operations efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data driven analysis is
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-10
... implement emergency operating procedure (EOP) 2-FR-H.1, ``Response To Loss Of Secondary Heat Sink.'' The NRC does not consider implementing 2-FR-H.1 an OMA, as actions to establish reactor coolant system... OMA origin Area name actions 1 C Auxiliary Boiler Implement EOP FR- Feed Pump Room, H.1 as...
Multi-objective nested algorithms for optimal reservoir operation
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Solomatine, Dimitri
2016-04-01
Optimal reservoir operation is in general a multi-objective problem, meaning that multiple objectives must be considered at the same time. A large number of optimization algorithms exist for solving multi-objective optimization problems; these generate a Pareto set of optimal solutions (typically containing a large number of them) or, more precisely, its approximation. At the same time, due to the complexity and computational cost of solving full-fledged multi-objective optimization problems, some authors use a simplified approach generically called "scalarization". Scalarization transforms the multi-objective optimization problem into a single-objective optimization problem (or several of them), for example by (a) single-objective aggregated weighted functions, or (b) formulating some objectives as constraints. We use approach (a). A user can decide how many single-objective search solutions to generate, depending on the practical problem at hand, by choosing a particular number of weight vectors with which to weigh the objectives. It is not guaranteed that these solutions are Pareto optimal, but they can be treated as a reasonably good and practically useful approximation of a Pareto set, albeit a small one. It has to be mentioned that the weighted-sum approach has known shortcomings: linear scalar weights fail to find Pareto-optimal policies that lie in the concave region of the Pareto front. In this context the considered approach is implemented as follows: there are m sets of weights {w_1^i, ..., w_n^i} (i = 1, ..., m), and n objectives applied to single-objective aggregated weighted-sum functions of nested dynamic programming (nDP), nested stochastic dynamic programming (nSDP), and nested reinforcement learning (nRL). By employing the approach of multi-objective optimization via a sequence of single-objective optimization searches, these algorithms acquire the multi-objective properties
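The weighted-sum scalarization can be illustrated on a toy biobjective problem: sweeping the weight vector traces an approximation of the Pareto front, one single-objective search per weight. The two objectives below are assumed purely for illustration:

```python
import numpy as np

# Two assumed competing objectives to minimize (illustration only)
f1 = lambda x: x**2              # e.g. water-supply deficit
f2 = lambda x: (x - 2.0)**2      # e.g. hydropower shortfall

xs = np.linspace(-1.0, 3.0, 4001)
front = []
for w in np.linspace(0.05, 0.95, 10):            # m = 10 weight vectors {w, 1-w}
    aggregate = w * f1(xs) + (1.0 - w) * f2(xs)  # scalarized single objective
    x_star = xs[np.argmin(aggregate)]            # one single-objective search
    front.append((f1(x_star), f2(x_star)))
front = np.array(front)                          # Pareto-set approximation
```

Each weight vector yields one trade-off point; because both objectives here are convex, the weighted sum recovers the whole front, whereas points in a concave region of the front would be missed, as noted above.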
47 CFR 90.471 - Points of operation in internal transmitter control systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Transmitter Control Internal Transmitter Control Systems § 90.471 Points of operation in internal transmitter control systems. The... licensee for internal communications and transmitter control purposes. Operating positions in...
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
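Determining the MPP, the point on the limit-state surface closest to the origin in standard normal space, is itself a small optimization. The sketch below uses the classical HL-RF iteration on an assumed linear limit state, not the paper's RIA/PMA formulations:

```python
import numpy as np

# Assumed linear limit state in standard normal space: g(u) = b - a @ u
a = np.array([2.0, 1.0])
b = 3.0
g      = lambda u: b - a @ u
grad_g = lambda u: -a              # constant gradient for a linear g

# HL-RF iteration: project toward the limit-state surface g(u) = 0
u = np.zeros(2)
for _ in range(50):
    grad = grad_g(u)
    u = (grad @ u - g(u)) * grad / (grad @ grad)

beta = np.linalg.norm(u)           # reliability index = distance to the MPP
```

For this linear case the iteration converges in one step to u* = b·a/‖a‖² with β = b/‖a‖; nonlinear limit states simply re-linearize at each iterate.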
Optimal reservoir operation policies using novel nested algorithms
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri
2015-04-01
Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse" which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the required releases discretization for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) a quadratic knapsack method for nonlinear problems. The novel idea is to include the nested
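The nesting idea can be caricatured in a few lines: an outer backward DP over discretized storage chooses a total release at each stage, while an inner (nested) routine allocates that release among downstream users. The greedy allocator below is a crude stand-in for the Simplex/quadratic-knapsack solvers of the paper, and every number (storages, inflow, demands) is invented.

```python
STORAGES = range(11)          # discretized storage volumes, 0..10 units (assumed)
INFLOW = 3                    # constant inflow per stage (assumed)
DEMANDS = (2, 3)              # two competing downstream users (assumed)
STAGES = 4

def nested_allocate(release):
    """Inner step: split a release between users, minimizing squared deficits.
    A tiny greedy stand-in for the nested Simplex/quadratic knapsack solvers."""
    alloc = [0, 0]
    for _ in range(release):
        deficits = [DEMANDS[i] - alloc[i] for i in (0, 1)]
        alloc[deficits.index(max(deficits))] += 1  # serve the neediest user
    return sum((DEMANDS[i] - alloc[i]) ** 2 for i in (0, 1))

def backward_dp():
    """Outer step: classic backward DP over discretized storage states."""
    value = {s: 0.0 for s in STORAGES}            # terminal cost-to-go
    policy = {}
    for stage in reversed(range(STAGES)):
        new_value = {}
        for s in STORAGES:
            best_cost, best_release = None, None
            for release in range(s + INFLOW + 1):
                s_next = min(s + INFLOW - release, max(STORAGES))
                cost = nested_allocate(release) + value[s_next]
                if best_cost is None or cost < best_cost:
                    best_cost, best_release = cost, release
            new_value[s] = best_cost
            policy[(stage, s)] = best_release
        value = new_value
    return value, policy

value, policy = backward_dp()
print(policy[(0, 5)], value[5])  # first-stage release from storage 5, and its cost
```

The point of the construction is that the outer DP state space stays one-dimensional (storage only); the allocation among users is resolved inside each transition rather than being discretized into the DP state.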
2016-01-01
Several published studies have reported the need to change the cutoff points of anthropometric indices for obesity. We therefore conducted a cross-sectional study to estimate anthropometric cutoff points predicting high coronary heart disease (CHD) risk in Korean adults. We analyzed the Korean National Health and Nutrition Examination Survey data from 2007 to 2010. A total of 21,399 subjects aged 20 to 79 yr were included in this study (9,204 men and 12,195 women). We calculated the 10-yr Framingham coronary heart disease risk score for all individuals. We then estimated receiver-operating characteristic (ROC) curves for body mass index (BMI), waist circumference, and waist-to-height ratio to predict a 10-yr CHD risk of 20% or more. For sensitivity analysis, we conducted the same analysis for a 10-yr CHD risk of 10% or more. For a CHD risk of 20% or more, the area under the curve of waist-to-height ratio was the highest, followed by waist circumference and BMI. The optimal cutoff points in men and women were 22.7 kg/m2 and 23.3 kg/m2 for BMI, 83.2 cm and 79.7 cm for waist circumference, and 0.50 and 0.52 for waist-to-height ratio, respectively. In sensitivity analysis, the results were the same as those reported above except for BMI in women. Our results support the re-classification of anthropometric indices and suggest the clinical use of waist-to-height ratio as a marker for obesity in Korean adults. PMID:26770039
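The cutoff-selection step can be illustrated with Youden's J statistic (sensitivity + specificity - 1), one common way to pick an "optimal" point on a ROC curve. The ten-point marker dataset below is synthetic, standing in for the waist-to-height ratios and Framingham-risk labels of the survey.

```python
# (marker value, high-CHD-risk label); a synthetic stand-in dataset
data = [
    (0.42, 0), (0.44, 0), (0.46, 0), (0.48, 0), (0.49, 1),
    (0.50, 0), (0.51, 1), (0.53, 1), (0.55, 1), (0.58, 1),
]

def sens_spec(cutoff):
    """Sensitivity and specificity when 'marker >= cutoff' predicts high risk."""
    tp = sum(1 for x, y in data if x >= cutoff and y == 1)
    fn = sum(1 for x, y in data if x < cutoff and y == 1)
    tn = sum(1 for x, y in data if x < cutoff and y == 0)
    fp = sum(1 for x, y in data if x >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def youden_j(cutoff):
    sens, spec = sens_spec(cutoff)
    return sens + spec - 1

cutoffs = sorted({x for x, _ in data})   # every observed value is a candidate
best = max(cutoffs, key=youden_j)
print(best, sens_spec(best))
```

Sweeping the cutoff over all observed marker values traces out the ROC curve; area-under-curve comparisons like those in the abstract then rank competing markers (BMI, waist circumference, waist-to-height ratio) before a cutoff is fixed.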
Optimization of Maneuver Execution for Landsat-7 Routine Operations
NASA Technical Reports Server (NTRS)
Cox, E. Lucien, Jr.; Bauer, Frank H. (Technical Monitor)
2000-01-01
Multiple mission constraints were satisfied during a lengthy, strategic ascent phase. Once routine operations begin, the ongoing concern of maintaining mission requirements becomes an immediate priority. The Landsat-7 mission has a tight longitude control box and Earth imaging that requires sub-satellite descending nodal equator crossing times to occur in a narrow 30-minute range fifteen (15) times daily. Operationally, spacecraft maneuvers must be executed properly to maintain mission requirements. The paper will discuss the importance of optimizing the altitude raising and plane change maneuvers, amidst known constraints, to satisfy requirements throughout the mission lifetime. Emphasis will be placed not only on maneuver size and frequency but also on changes in orbital elements that impact maneuver execution decisions. Any associated trade-offs arising from operations contingencies will be discussed as well. Results of actual altitude and plane change maneuvers are presented to clarify actions taken.
Joe D. Wilson, Jr.
2003-04-01
The technology of Jefferson Laboratory's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) and Free Electron Laser (FEL) requires cooling from one of the world's largest 2K helium refrigerators known as the Central Helium Liquefier (CHL). The key characteristic of CHL is the ability to maintain a constant low vapor pressure over the large liquid helium inventory using a series of five cold compressors. The cold compressor system operates with a constrained discharge pressure over a range of suction pressures and mass flows to meet the operational requirements of CEBAF and FEL. The research topic is the prediction of the most thermodynamically efficient conditions for the system over its operating range of mass flows and vapor pressures with minimum disruption to JLab operations. The research goal is to find the operating points for each cold compressor for optimizing the overall system at any given flow and vapor pressure.
76 FR 79066 - Drawbridge Operation Regulation; Escatawpa River, Moss Point, MS
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulation; Escatawpa River, Moss Point, MS... of the Mississippi Export Railroad Company swing bridge across the Escatawpa River, mile 3.0, at Moss... operating schedule for the swing span bridge across Escatawpa River, mile 3.0, at Moss Point, Jackson...
78 FR 58570 - Environmental Assessment; Entergy Nuclear Operations, Inc., Big Rock Point
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-24
...) requirements in Sec. Sec. 50.47 and 50.54, and Aapendix E of 10 CFR part 50 (76 FR 72560; November 23, 2011... COMMISSION Environmental Assessment; Entergy Nuclear Operations, Inc., Big Rock Point AGENCY: Nuclear... Nuclear Operations, Inc. (ENO) (the applicant or the licensee), for the Big Rock Point (BRP)...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-28
...The Commander, Fifth Coast Guard District, has issued a temporary deviation from the regulations governing the operation of the Route 88/Veterans Memorial Bridge across Point Pleasant Canal, at NJICW mile 3.0, in Point Pleasant, NJ. This closure is necessary to facilitate extensive mechanical rehabilitation and to maintain the bridge's operational...
ERIC Educational Resources Information Center
Sobh, Tarek M.; Tibrewal, Abhilasha
2006-01-01
Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…
NASA Technical Reports Server (NTRS)
Williams, Daniel M.
2006-01-01
Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four phase building-block validation and verification process included multiple elements ranging from formal analysis of HVO procedures to flight test, to full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).
Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.
2006-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.
Johnson, Gary E.; Khan, Fenton; Ploskey, Gene R.; Hughes, James S.; Fischer, Eric S.
2010-08-18
The goal of the study was to optimize performance of the fixed-location hydroacoustic systems at Lookout Point Dam (LOP) and the acoustic imaging system at Cougar Dam (CGR) by determining deployment and data acquisition methods that minimized structural, electrical, and acoustic interference. The general approach was a multi-step process from mount design to final system configuration. The optimization effort resulted in successful deployments of hydroacoustic equipment at LOP and CGR.
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10. PMID:25826934
Optimizing Watershed Management by Coordinated Operation of Storing Facilities
NASA Astrophysics Data System (ADS)
Anghileri, Daniela; Castelletti, Andrea; Pianosi, Francesca; Soncini-Sessa, Rodolfo; Weber, Enrico
2013-04-01
Water storing facilities in a watershed are very often operated independently of one another to meet specific operating objectives, with no information sharing among the operators. This uncoordinated approach might result in upstream-downstream disputes and conflicts among different water users, or in inefficiencies in the watershed management, when looked at from the viewpoint of an ideal central decision-maker. In this study, we propose a two-step approach to design coordination mechanisms at the watershed scale, with the ultimate goal of enlarging the space for negotiated agreements between competing uses and improving the overall system efficiency. First, we compute the multi-objective centralized solution to assess the maximum potential benefits of a shift from a sector-by-sector to an ideal fully coordinated perspective. Then, we analyze the Pareto-optimal operating policies to gain insight into suitable strategies to foster cooperation or impose coordination among the involved agents. The approach is demonstrated on an Alpine watershed in Italy where a long-lasting conflict exists between upstream hydropower production and downstream irrigation water users. Results show that a coordination mechanism can be designed that drives the current uncoordinated structure towards the performance of the ideal centralized operation.
Optimal operation of a potable water distribution network.
Biscos, C; Mulholland, M; Le Lann, M V; Brouckaert, C J; Bailey, R; Roustan, M
2002-01-01
This paper presents an approach to an optimal operation of a potable water distribution network. The main control objective defined during the preliminary steps was to maximise the use of low-cost power, maintaining at the same time minimum emergency levels in all reservoirs. The combination of dynamic elements (e.g. reservoirs) and discrete elements (pumps, valves, routing) makes this a challenging predictive control and constrained optimisation problem, which is being solved by MINLP (Mixed Integer Non-linear Programming). Initial experimental results show the performance of this algorithm and its ability to control the water distribution process. PMID:12448464
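A drastically simplified version of the control problem still shows the MINLP flavor: binary pump decisions (the integer part) interact with reservoir-level constraints, and low-tariff periods should absorb the pumping. Brute-force enumeration stands in for a real MINLP solver here; tariffs, flows, and levels are all invented.

```python
from itertools import product

TARIFF = [1.0, 1.0, 0.2, 0.2, 1.0, 1.0]   # energy price per pumped unit, 6 periods
DEMAND = [2, 2, 2, 2, 2, 2]               # consumer outflow per period
PUMP_RATE = 4                              # units pumped per period when on
LEVEL0, MIN_LEVEL, MAX_LEVEL = 6, 3, 12    # initial / emergency / maximum levels

def feasible_cost(schedule):
    """Pumping cost of an on/off schedule, or None if a level limit is violated."""
    level, cost = LEVEL0, 0.0
    for on, tariff, demand in zip(schedule, TARIFF, DEMAND):
        level += (PUMP_RATE if on else 0) - demand
        if not MIN_LEVEL <= level <= MAX_LEVEL:
            return None
        cost += tariff * PUMP_RATE * on
    return cost

best = min(
    (s for s in product((0, 1), repeat=len(TARIFF)) if feasible_cost(s) is not None),
    key=feasible_cost,
)
print(best, feasible_cost(best))  # pumping concentrates in the cheap periods
```

Even in this toy, the emergency-level constraint forces one pumping period at the expensive tariff: the two cheap periods alone cannot keep the reservoir above its minimum, which is exactly the kind of coupling that makes the full problem a constrained MINLP rather than a simple tariff sort.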
Optimization of shared autonomy vehicle control architectures for swarm operations.
Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R
2010-08-01
The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations. PMID:19963700
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
Excited meson radiative transitions from lattice QCD using variationally optimized operators
Shultz, Christian J.; Dudek, Jozef J.; Edwards, Robert G.
2015-06-02
We explore the use of 'optimized' operators, designed to interpolate only a single meson eigenstate, in three-point correlation functions with a vector-current insertion. These operators are constructed as linear combinations in a large basis of meson interpolating fields using a variational analysis of matrices of two-point correlation functions. After performing such a determination at both zero and non-zero momentum, we compute three-point functions and are able to study radiative transition matrix elements featuring excited state mesons. The required two- and three-point correlation functions are efficiently computed using the distillation framework in which there is a factorization between quark propagation and operator construction, allowing for a large number of meson operators of definite momentum to be considered. We illustrate the method with a calculation using anisotropic lattices having three flavors of dynamical quark all tuned to the physical strange quark mass, considering form-factors and transitions of pseudoscalar and vector meson excitations. In conclusion, the dependence on photon virtuality for a number of form-factors and transitions is extracted and some discussion of excited-state phenomenology is presented.
NASA Astrophysics Data System (ADS)
Balian, S. J.; Liu, Ren-Bao; Monteiro, T. S.
2015-06-01
There are two distinct techniques of proven effectiveness for extending the coherence lifetime of spin qubits in environments of other spins. One is dynamical decoupling, whereby the qubit is subjected to a carefully timed sequence of control pulses; the other is tuning the qubit towards "optimal working points" (OWPs), which are sweet spots for reduced decoherence in magnetic fields. By means of quantum many-body calculations, we investigate the effects of dynamical decoupling pulse sequences far from and near OWPs for a central donor qubit subject to decoherence from a nuclear spin bath. Key to understanding the behavior is to analyze the degree of suppression of the usually dominant contribution from independent pairs of flip-flopping spins within the many-body quantum bath. We find that to simulate recently measured Hahn echo decays at OWPs (lowest-order dynamical decoupling), one must consider clusters of three interacting spins since independent pairs do not even give finite-T2 decay times. We show that while operating near OWPs, dynamical decoupling sequences require hundreds of pulses for a single order of magnitude enhancement of T2, in contrast to regimes far from OWPs, where only about 10 pulses are required.
Optimal Spectral Regions For Laser Excited Fluorescence Diagnostics For Point Of Care Application
NASA Astrophysics Data System (ADS)
Vaitkuviene, A.; Gėgžna, V.; Varanius, D.; Vaitkus, J.
2011-09-01
The tissue fluorescence gives the response of the light-emitting molecule signature and characterizes the cell composition and peculiarities of metabolism. Both are useful for biomedical diagnostics, as reported in our own and others' previous works. The present work demonstrates the results of applying laser-excited autofluorescence to diagnostics of pathology in genital tissues, and its feasibility for bedside "point of care, off lab" application. A portable device using a USB spectrophotometer, a micro laser (355 nm Nd:YAG, 0.5 ns pulse, repetition rate 10 kHz, output power 15 mW), a three-channel optical fiber and a computer with a diagnostic program was designed and is ready for clinical trial, to be used for on-site diagnostics of cytology and biopsy specimens and for endoscopy/puncture procedures. The biopsy and cytology samples, as well as intervertebral disc specimens, were evaluated by pathology experts, and the fluorescence spectra were investigated in fresh and preserved specimens. The spectra were recorded in the spectral range 350-900 nm. At the initial stage the Gaussian components of the spectra were found, the Mann-Whitney test was used for group differentiation, and the spectral regions optimal for diagnostic purposes were identified. Then a formal division of the spectra into components, or into bands of definite width where the main differences between the group spectra were observed, was used to compare these groups. ROC-analysis-based diagnostic algorithms were created for medical prognosis. Positive and negative predictive values were determined for cervical liquid PAP smear supernatant sediment diagnosis of Cervicitis and Norma versus CIN2+. In the case of the intervertebral disc, the analysis allows additional information to be obtained about the disc degeneration status. All these results demonstrated the efficiency of the proposed procedure, and the designed device could be tested at the point-of-care site or for
Applications of Optimal Building Energy System Selection and Operation
Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy
2011-04-01
Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.
Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin
2016-01-01
Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, due to its high energy content, environmentally friendly technology and zero carbon emissions. In particular, conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production by an anaerobic mixed culture using different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H(2)/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H(2)/mol glucose) was obtained at an initial pH value of 6.4, T=37 °C and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield at all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H(2) production) to lactic acid production (where simultaneous H(2) production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates. PMID:26970800
Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas
2010-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
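A minimal sketch of the HMM scoring step described above: the forward algorithm computes the likelihood of an observed motion fragment under each per-task model, and the task whose model scores highest is the prediction. The two-state models, transition/emission probabilities, and discrete observations are all invented for illustration.

```python
def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | model) for a small discrete-output HMM."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
            for j in range(len(alpha))
        ]
    return sum(alpha)

# Two hypothetical two-state task models: "reach" tends to emit observation 0,
# "grasp" tends to emit observation 1; transition structure is shared.
TRANS = [[0.7, 0.3], [0.3, 0.7]]
REACH = dict(start=[0.5, 0.5], trans=TRANS, emit=[[0.9, 0.1], [0.6, 0.4]])
GRASP = dict(start=[0.5, 0.5], trans=TRANS, emit=[[0.1, 0.9], [0.4, 0.6]])

observed = [0, 0, 1, 0]  # a fresh motion fragment to classify
scores = {name: forward_likelihood(observed, **model)
          for name, model in (("reach", REACH), ("grasp", GRASP))}
predicted = max(scores, key=scores.get)
print(predicted, scores)  # mostly-0 observations favor the "reach" model
```

In a real tele-operation interface the observations would be continuous hand/arm trajectories (with Gaussian emissions) and the likelihoods would be computed in log space with scaling, but the model-comparison logic is the same.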
Optimizing and controlling earthmoving operations using spatial technologies
NASA Astrophysics Data System (ADS)
Alshibani, Adel
This thesis presents a model designed for optimizing, tracking, and controlling earthmoving operations. The proposed model utilizes, Genetic Algorithm (GA), Linear Programming (LP), and spatial technologies including Global Positioning Systems (GPS) and Geographic Information Systems (GIS) to support the management functions of the developed model. The model assists engineers and contractors in selecting near optimum crew formations in planning phase and during construction, using GA and LP supported by the Pathfinder Algorithm developed in a GIS environment. GA is used in conjunction with a set of rules developed to accelerate the optimization process and to avoid generating and evaluating hypothetical and unrealistic crew formations. LP is used to determine quantities of earth to be moved from different borrow pits and to be placed at different landfill sites to meet project constraints and to minimize the cost of these earthmoving operations. On the one hand, GPS is used for onsite data collection and for tracking construction equipment in near real-time. On the other hand, GIS is employed to automate data acquisition and to analyze the collected spatial data. The model is also capable of reconfiguring crew formations dynamically during the construction phase while site operations are in progress. The optimization of the crew formation considers: (1) construction time, (2) construction direct cost, or (3) construction total cost. The model is also capable of generating crew formations to meet, as close as possible, specified time and/or cost constraints. In addition, the model supports tracking and reporting of project progress utilizing the earned-value concept and the project ratio method with modifications that allow for more accurate forecasting of project time and cost at set future dates and at completion. The model is capable of generating graphical and tabular reports. The developed model has been implemented in prototype software, using Object
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
ERIC Educational Resources Information Center
Wang, Jun; Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of…
NASA Astrophysics Data System (ADS)
Ghorbani, Mehrdad; Assadian, Nima
2013-12-01
In this study the gravitational perturbations of the Sun and other planets are modeled on the dynamics near the Earth-Moon Lagrange points, and optimal continuous and discrete station-keeping maneuvers are found to maintain spacecraft about these points. The most critical perturbation effects near the L1 and L2 Lagrange points of the Earth-Moon system are the ellipticity of the Moon's orbit and the Sun's gravity, respectively. These perturbations deviate the spacecraft from its nominal orbit and have been modeled through a restricted five-body problem (R5BP) formulation compatible with the circular restricted three-body problem (CR3BP). Continuous control or impulsive maneuvers can compensate for the deviation and keep the spacecraft on the closed orbit about the Lagrange point. The continuous control has been computed using a linear quadratic regulator (LQR) and is compared with nonlinear programming (NP). Multiple shooting (MS) has been used for the computation of impulsive maneuvers to keep the trajectory closed, and subsequently an optimized MS (OMS) method and a multiple impulses optimization (MIO) method have been introduced, which minimize the summation of multiple impulses. In these two methods the spacecraft is allowed to deviate from the nominal orbit; however, the spacecraft trajectory should close itself. In this manner, some closed or nearly closed trajectories around the Earth-Moon Lagrange points are found that need almost zero station-keeping maneuvering.
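The LQR piece of the approach can be sketched in isolation: compute a steady-state feedback gain by value-iterating the discrete Riccati equation, then feed back the deviation from the nominal orbit. The toy plant below is a discretized double integrator (position/velocity deviation), a stand-in for the linearized CR3BP dynamics; matrices, weights, and time step are all assumptions, and the 2x2 arithmetic is written out to stay dependency-free.

```python
DT = 0.1
A = [[1.0, DT],
     [0.0, 1.0]]               # deviation dynamics: position, velocity (assumed)
B = [0.5 * DT * DT, DT]        # thrust-acceleration input column
Q = [[1.0, 0.0],
     [0.0, 1.0]]               # state-deviation weight (assumed)
R = 1.0                        # control-effort weight (assumed)

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lqr_gain(iters=1000):
    """Value-iterate the discrete algebraic Riccati equation, then extract K."""
    P = [row[:] for row in Q]
    for _ in range(iters):
        PA = mat_mul(P, A)
        PB = [sum(P[i][k] * B[k] for k in range(2)) for i in range(2)]
        BtPB = sum(B[i] * PB[i] for i in range(2))
        BtPA = [sum(B[i] * PA[i][j] for i in range(2)) for j in range(2)]
        AtPB = [sum(A[i][j] * PB[i] for i in range(2)) for j in range(2)]
        AtPA = mat_mul([[A[j][i] for j in range(2)] for i in range(2)], PA)
        P = [[Q[i][j] + AtPA[i][j] - AtPB[i] * BtPA[j] / (R + BtPB)
              for j in range(2)] for i in range(2)]
    return [BtPA[j] / (R + BtPB) for j in range(2)]   # feedback law u = -K x

K = lqr_gain()
x = [1.0, 0.0]                  # start one unit off the nominal orbit
for _ in range(500):
    u = -(K[0] * x[0] + K[1] * x[1])
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
         A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
print(K, x)  # the feedback drives the deviation toward zero
```

The discrete impulsive maneuvers of the MS/OMS/MIO methods would replace the continuous feedback `u` with sparse corrections found by the shooting optimization.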
Optimization of a point-focusing, distributed receiver solar thermal electric system
NASA Technical Reports Server (NTRS)
Pons, R. L.
1979-01-01
This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.
Mechanical optimization of superconducting cavities in continuous wave operation
NASA Astrophysics Data System (ADS)
Posen, Sam; Liepe, Matthias
2012-02-01
Several planned accelerator facilities call for hundreds of elliptical cavities operating cw with low effective beam loading, and therefore require cavities that have been mechanically optimized to operate at high QL by minimizing df/dp, the sensitivity to microphonics detuning from fluctuations in helium pressure. Without such an optimization, the facilities would suffer either power costs driven up by millions of dollars or an extremely high per-cavity trip rate. ANSYS simulations used to predict df/dp are presented as well as a model that illustrates factors that contribute to this parameter in elliptical cavities. For the Cornell Energy Recovery Linac (ERL) main linac cavity, df/dp is found to range from 2.5 to 17.4 Hz/mbar, depending on the radius of the stiffening rings, with minimal df/dp for very small or very large radii. For the Cornell ERL injector cavity, simulations predict a df/dp of 124 Hz/mbar, which fits well within the range of measurements performed with the injector cryomodule. Several methods for reducing df/dp are proposed, including decreasing the diameter of the tuner bellows and increasing the stiffness of the end dishes and the tuner. Using measurements from a Tesla Test Facility cavity as the baseline, if both of these measures were implemented and the stiffening rings were optimized, simulations indicate that df/dp would be reduced from ~30 Hz/mbar to just 2.9 Hz/mbar, and the power required to maintain the accelerating field would be reduced by an order of magnitude. Finally, other consequences of optimizing the stiffening ring radius are investigated. It is found that stiffening rings larger than 70% of the iris-equator distance make the cavity impossible to tune. Small rings, on the other hand, leave the cavity susceptible to plastic deformation during handling and have lower-frequency mechanical resonances, which is undesirable for active compensation of microphonics. Additional simulations of Lorentz force detuning are discussed, and
Optimization of Insertion Cost for Transfer Trajectories to Libration Point Orbits
NASA Technical Reports Server (NTRS)
Howell, K. C.; Wilson, R. S.; Lo, M. W.
1999-01-01
The objective of this work is the development of efficient techniques to optimize the cost associated with transfer trajectories to libration point orbits in the Sun-Earth-Moon four-body problem, which may include lunar gravity assists. Initially, dynamical systems theory is used to determine invariant manifolds associated with the desired libration point orbit. These manifolds are employed to produce an initial approximation to the transfer trajectory. Specific trajectory requirements, such as transfer injection constraints, inclusion of phasing loops, and targeting of a specified state on the manifold, are then incorporated into the design of the transfer trajectory. A two-level differential corrections process is used to produce a fully continuous trajectory that satisfies the design constraints and includes appropriate lunar and solar gravitational models. Based on this methodology, and using the manifold structure from dynamical systems theory, a technique is presented to optimize the cost associated with insertion onto a specified libration point orbit.
Strategies for optimal operation of the tellurium electrowinning process
Broderick, G.; Handle, B.; Paschen, P.
1999-02-01
Empirical models predicting the purity of electrowon tellurium have been developed using data from 36 pilot-plant trials. Based on these models, a numerical optimization of the process was performed to identify conditions which minimize the total contamination in Pb and Se while reducing electrical consumption per kilogram of electrowon tellurium. Results indicate that product quality can be maintained and even improved while operating at the much higher electroplating production rates obtained at high current densities. Using these same process settings, the electrical consumption of the process can be reduced by up to 10% by operating at midrange temperatures of close to 50 °C. This is particularly attractive when waste heat is available at the plant to help preheat the electrolyte feed. When both Pb and Se are present as contaminants, the most energy-efficient strategy involves the use of a high current density, at a moderate temperature with high flow, for low concentrations of TeO2. If Pb is removed prior to the electrowinning process, the use of a low current density and low electrolyte feed concentration, while operating at a low temperature and moderate flow rates, provides the most significant reduction in Se codeposition.
An Efficient Operator for the Change Point Estimation in Partial Spline Model
Han, Sung Won; Zhong, Hua; Putt, Mary
2015-01-01
In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, the minimum operator in terms of a smoothing parameter has been widely used, but we showed that the minimum operator causes a large MSE of the change point estimates. In this paper, we proposed the summation operator in terms of a smoothing parameter, and our simulation study showed that the summation operator gives a smaller MSE for estimated change points than the minimum one. We also applied the proposed approach to experimental data on blood flow during photodynamic cancer therapy. PMID:25705072
78 FR 39018 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-28
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3 AGENCY: Nuclear Regulatory Commission. ACTION: Supplement to Final Supplement 38 to the Generic...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-03
...The U.S. Nuclear Regulatory Commission (NRC) is issuing an exemption in response to a request submitted by Entergy Nuclear Operations, Inc. (ENO) on June 20, 2012, for the Big Rock Point (BRP) Independent Spent Fuel Storage Installation...
Optimal design of river nutrient monitoring points based on an export coefficient model
NASA Astrophysics Data System (ADS)
Do, Huu Tuan; Lo, Shang-Lien; Chiueh, Pei-Te; Thi, Lan Anh Phan; Shang, Wei-Ting
2011-08-01
Nutrient concentration is an important factor in identifying the quality of water sources and the likelihood of eutrophication. A nutrient monitoring network is an important information source that provides data on the nutrient pollution status of rivers. Export coefficient models have been widely used to study non-point source pollution. However, there has been little discussion about applying non-point source pollution and export coefficient modeling to design sampling points for monitoring. In this study, a new procedure providing a comprehensive solution was proposed to design nutrient monitoring points, from identifying pollution sources to designing sampling points and frequencies. Application of this procedure to design nutrient monitoring points upstream from the Feitsui reservoirs, Taipei, Taiwan, indicated that agriculture occupied only 7.24% of the area, but it released 45,795 kg/yr, or 41%, of the total nutrient load from non-point sources. Additionally, the optimization conditions defined four sampling points as well as the frequency of sampling at those points in the study area.
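The export-coefficient approach summarized above estimates each source's annual nutrient load as its area multiplied by a per-area export coefficient. A minimal sketch of that calculation follows; all areas and coefficients below are invented for illustration, not values from the study.

```python
# Toy export-coefficient model: the annual nutrient load of a catchment
# is the sum over land uses of (area x export coefficient).
# All numbers here are hypothetical placeholders.

def total_load(areas_ha, coeffs_kg_per_ha_yr):
    """L = sum_i E_i * A_i, in kg/yr."""
    return sum(areas_ha[k] * coeffs_kg_per_ha_yr[k] for k in areas_ha)

areas = {"agriculture": 500.0, "forest": 6400.0, "urban": 100.0}   # ha
coeffs = {"agriculture": 9.0, "forest": 1.2, "urban": 5.0}         # kg/ha/yr

load = total_load(areas, coeffs)
share = areas["agriculture"] * coeffs["agriculture"] / load
print(f"total load {load:.0f} kg/yr, agriculture share {share:.0%}")
```

With these made-up numbers, agriculture contributes a disproportionate share of the load relative to its area, mirroring the kind of result the study reports for the Feitsui catchment.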
Listak, J.M.; Goodman, G.V.R.; Jankowski, R.A.
1999-07-01
Respirable dust studies were conducted at several underground coal mining operations to evaluate and compare the dust measurements of fixed-point machine-mounted samples on a continuous miner and personal samples of the remote miner operator. Fixed-point sampling was conducted at the right rear corner of the continuous miner which corresponded to the traditional location of the operator's cab. Although it has been documented that higher concentrations of dust are present at the machine-mounted position, this work sought to determine whether a relationship exists between the concentrations at the fixed-point position and the dust levels experienced at the remote operator position and whether this relationship could be applied on an industry-wide basis. To achieve this objective, gravimetric samplers were used to collect respirable dust data on continuous miner sections. These samplers were placed at a fixed position at the cab location of the continuous mining machine and on or near the remote miner operator during the 1 shift/day sampling periods. Dust sampling took place at mines with a variety of geographic locations and in-mine conditions. The dust concentration data collected at each site and for each sampling period were reduced to ratios of fixed-point to operator concentration. The ratios were calculated to determine similarities, differences, and/or variability at the two positions. The data show that dust concentrations at the remote operator position were always lower than dust concentrations measured at the fixed-point continuous miner location. However, the ratios of fixed-point to remote operator dust levels showed little consistency from shift to shift or from operation to operation. The fact that these ratios are so variable may introduce some uncertainty into attempting to correlate dust exposures of the remote operator to dust levels measured on the continuous mining machine.
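The study's central quantity is the ratio of fixed-point (machine-mounted) to remote-operator dust concentration, and its shift-to-shift variability. A small sketch of that reduction, using hypothetical concentrations rather than the study's data:

```python
# Sketch: ratios of fixed-point (machine-mounted) to remote-operator dust
# concentrations, and their shift-to-shift variability. Sample values are
# hypothetical, not measurements from the study.
from statistics import mean, stdev

fixed_point = [2.1, 3.4, 1.8, 2.9, 4.0]   # mg/m^3, machine-mounted
operator    = [0.7, 0.9, 0.8, 0.6, 1.1]   # mg/m^3, remote operator

ratios = [f / o for f, o in zip(fixed_point, operator)]
cv = stdev(ratios) / mean(ratios)          # coefficient of variation

# Fixed-point levels exceed operator levels on every shift...
assert all(r > 1.0 for r in ratios)
# ...but a large CV means the ratio is too inconsistent to serve as a
# general conversion factor between the two sampling positions.
print(f"mean ratio {mean(ratios):.2f}, CV {cv:.0%}")
```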
Applications of operational calculus: trigonometric interpolating equation for the eight-point cube
Silver, Gary L
2009-01-01
A general method for obtaining a trigonometric-type interpolating equation for the eight-point cubical array is illustrated. It can often be used to reproduce a ninth datum at an arbitrary point near the center of the array by adjusting a variable exponent. The new method complements operational polynomial and exponential methods for the same design.
The influence of transducer operating point on distortion generation in the cochlea
NASA Astrophysics Data System (ADS)
Sirjani, Davud B.; Salt, Alec N.; Gill, Ruth M.; Hale, Shane A.
2004-03-01
Distortion generated by the cochlea can provide a valuable indicator of its functional state. In the present study, the dependence of distortion on the operating point of the cochlear transducer and its relevance to endolymph volume disturbances has been investigated. Calculations have suggested that as the operating point moves away from zero, second harmonic distortion would increase. Cochlear microphonic waveforms were analyzed to derive the cochlear transducer operating point and to quantify harmonic distortions. Changes in operating point and distortion were measured during endolymph manipulations that included 200-Hz tone exposures at 115-dB SPL, injections of artificial endolymph into scala media at 80, 200, or 400 nl/min, and treatment with furosemide given intravenously or locally into the cochlea. Results were compared with other functional changes that included action potential thresholds at 2.8 or 8 kHz, summating potential, endocochlear potential, and the 2 f1-f2 and f2-f1 acoustic emissions. The results demonstrated that volume disturbances caused changes in the operating point that resulted in predictable changes in distortion. Understanding the factors influencing operating point is important in the interpretation of distortion measurements and may lead to tests that can detect abnormal endolymph volume states.
NASA Astrophysics Data System (ADS)
Parkinson, S.; Morehead, M. D.; Conner, J. T.; Frye, C.
2012-12-01
Increasing demand for water and electricity, increasing variability in weather and climate, and stricter requirements for riverine ecosystem health have put ever more stringent demands on hydropower operations. Dam operators are being impacted by these constraints and are looking for methods to meet these requirements while retaining the benefits hydropower offers. Idaho Power owns and operates 17 hydroelectric plants in Idaho and Oregon which have both Federal and State compliance requirements. Idaho Power has started building Decision Support Systems (DSS) to aid the hydroelectric plant operators in maximizing hydropower operational efficiency, while meeting regulatory compliance constraints. Regulatory constraints on dam operations include: minimum in-stream flows, maximum ramp rate of river stage, reservoir volumes, and reservoir ramp rate for draft and fill. From the hydroelectric standpoint, the desire is to vary the plant discharge (ramping) such that generation matches electricity demand (load-following), but ramping is limited by the regulatory requirements. Idaho Power desires DSS that integrate real-time and historic data, simulate the river's behavior from the hydroelectric plants downstream to the compliance measurement point, and present the information in an easily understandable display that allows the operators to make informed decisions. Creating DSS like these has a number of scientific and technical challenges. Real-time data are inherently noisy, and automated data cleaning routines are required to filter the data. The DSS must inform the operators when incoming data are outside of predefined bounds. Complex river morphologies can make the timing and shape of a discharge change traveling downstream from a power plant nearly impossible to represent with a predefined lookup table. These complexities require very fast hydrodynamic models of the river system that simulate river characteristics (e.g., stage, discharge) at the downstream compliance point.
Implementation of a near-optimal global set point control method in a DDC controller
Cascia, M.A.
2000-07-01
A near-optimal global set point control method that can be implemented in an energy management system's (EMS) DDC controller is described in this paper. Mathematical models are presented for the power consumption of electric chillers, hot water boilers, chilled and hot water pumps, and air handler fans, which allow the calculation of near-optimal chilled water, hot water, and coil discharge air set points to minimize power consumption, based on data collected by the EMS. Also optimized are the differential and static pressure set points for the variable speed pumps and fans. A pilot test of this control methodology was implemented for a cooling plant at a pharmaceutical manufacturing facility near Dallas, Texas. Data collected at this site showed good agreement between the actual power consumed by the chillers, chilled water pumps, and air handlers and that predicted by the models. An approximate model was developed to calculate real-time power savings in the DDC controller. A third-party energy accounting program was used to track savings due to the near-optimal control, and results show a monthly kWh reduction ranging from 3% to 14%.
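The core trade-off behind such set point optimization is that raising the chilled-water set point cuts chiller power but raises pump/fan power, so total power has an interior minimum. A toy sketch of finding that minimum by grid search; the two quadratic/linear component models are invented placeholders, not the models from the paper.

```python
# Sketch: pick a chilled-water set point minimizing modeled plant power.
# The component models below (chiller power falls as the set point rises;
# pump/fan power rises with it) are invented placeholders.

def chiller_kw(t_chws):      # higher set point -> less lift -> less power
    return 400.0 - 12.0 * (t_chws - 5.0)

def pump_fan_kw(t_chws):     # higher set point -> more flow -> more power
    return 80.0 + 2.0 * (t_chws - 5.0) ** 2

def total_kw(t_chws):
    return chiller_kw(t_chws) + pump_fan_kw(t_chws)

candidates = [5.0 + 0.1 * i for i in range(51)]   # 5.0 ... 10.0 deg C
best = min(candidates, key=total_kw)
print(f"near-optimal set point {best:.1f} C, {total_kw(best):.0f} kW")
```

For these placeholder models the analytic optimum is at 8.0 °C (where the marginal chiller saving equals the marginal pump/fan cost), which the grid search recovers.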
Operational optimization and real-time control of fuel-cell systems
NASA Astrophysics Data System (ADS)
Hasikos, J.; Sarimveis, H.; Zervas, P. L.; Markatos, N. C.
Fuel cells are a rapidly evolving technology with applications in many industries, including transportation and both portable and stationary power generation. The viability, efficiency and robustness of fuel-cell systems depend strongly on optimization and control of their operation. This paper presents the development of an integrated optimization and control tool for Proton Exchange Membrane Fuel-Cell (PEMFC) systems. Using a detailed simulation model, a database is generated first, which contains steady-state values of the manipulated and controlled variables over the full operational range of the fuel-cell system. In a second step, the database is utilized for producing Radial Basis Function (RBF) neural network "meta-models". In the third step, a Non-Linear Programming Problem (NLP) is formulated, that takes into account the constraints and limitations of the system and minimizes the consumption of hydrogen, for a given value of power demand. Based on the formulation and solution of the NLP problem, a look-up table is developed, containing the optimal values of the system variables for any possible value of power demand. In the last step, a Model Predictive Control (MPC) methodology is designed, for the optimal control of the system response to successive set-point changes of power demand. The efficiency of the produced MPC system is illustrated through a number of simulations, which show that a successful dynamic closed-loop behaviour can be achieved, while at the same time the consumption of hydrogen is minimized.
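The look-up-table step can be illustrated in miniature: for each power demand, search the operating variables for the feasible point with the lowest hydrogen consumption. The stack and consumption models below are invented toy functions, not the PEMFC meta-models from the paper, and the grid search stands in for the NLP solver.

```python
# Sketch of the lookup-table idea: for each power demand, grid-search the
# operating variables for the point that meets demand with the least
# hydrogen consumption. Both models below are invented placeholders.

def power_w(current_a, stoich):
    # toy model: power rises with current, droops at high current
    return 0.8 * current_a - 0.004 * current_a ** 2 + 5.0 * (stoich - 1.0)

def h2_rate(current_a, stoich):
    # toy model: hydrogen use grows with current and excess stoichiometry
    return 0.01 * current_a * stoich

def build_lookup(demands_w, tol=1.0):
    table = {}
    for p in demands_w:
        feasible = [
            (h2_rate(i, s), i, s)
            for i in range(1, 101)            # current grid, A
            for s in (1.2, 1.5, 2.0)          # stoichiometry grid
            if abs(power_w(i, s) - p) <= tol  # meets demand within tol
        ]
        if feasible:
            table[p] = min(feasible)[1:]      # (current, stoich) of min-H2
    return table

table = build_lookup([20.0, 30.0])
```

At run time, an MPC layer would interpolate in such a table to obtain set points for each new power demand, as the paper describes.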
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.
2010-10-18
Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
Friedrich, Tobias; Neumann, Frank; Thyssen, Christian
2015-01-01
Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing [Formula: see text] points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations. PMID:24654679
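For bi-objective problems the hypervolume indicator has a simple closed form: it is the area dominated by the front and bounded by the reference point. A small sketch for a minimization front (this is the standard sweep computation, not code from the paper):

```python
# Sketch: hypervolume of a bi-objective (minimization) Pareto front with
# respect to a reference point that every front point dominates.

def hypervolume_2d(front, ref):
    """Area dominated by `front` and bounded by `ref` (minimization).

    Assumes `front` is mutually non-dominated, so sorting by ascending
    f1 yields strictly descending f2.
    """
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):
        hv += (ref[0] - x) * (prev_y - y)   # add the new horizontal slab
        prev_y = y
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # 6.0
```

Moving the reference point changes which front distributions maximize this area, which is exactly the sensitivity the paper investigates.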
NASA Astrophysics Data System (ADS)
Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean
2016-05-01
We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
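Christianson's reverse accumulation can be shown in a scalar toy problem: once the forward iteration has converged to x* = f(x*, p), the sensitivity dx*/dp is itself the limit of a second, linear fixed-point iteration, so the forward iterates need not be taped. The contraction f below is an invented example, not the ice-sheet stress balance.

```python
# Scalar sketch of Christianson (1994) reverse accumulation: for a fixed
# point x* = f(x*, p), dx*/dp solves a second (linear) fixed-point
# iteration, avoiding storage of every forward iterate.
# The contraction f is a toy example, not the land ice model.
import math

def f(x, p):      return math.cos(x) + p
def df_dx(x):     return -math.sin(x)
def df_dp():      return 1.0

p = 0.2
x = 0.0
for _ in range(200):            # forward fixed-point iteration
    x = f(x, p)

w = 0.0
for _ in range(200):            # adjoint fixed-point iteration
    w = df_dx(x) * w + 1.0      # w -> 1 / (1 - df/dx), a geometric sum
dxdp = df_dp() * w              # matches the implicit-function theorem

# sanity check against a finite difference
eps, x2 = 1e-6, 0.0
for _ in range(200):
    x2 = f(x2, p + eps)
assert abs(dxdp - (x2 - x) / eps) < 1e-4
```

The adjoint loop only needs the converged state x, which is the source of the memory savings the paper reports for the full model.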
NASA Astrophysics Data System (ADS)
Hui, Zhenyang; Hu, Youjian; Jin, Shuanggen; Yevenyo, Yao Ziggah
2016-08-01
Road information acquisition is an important part of city informatization construction. Airborne LiDAR provides a new means of acquiring road information. However, the existing road extraction methods using LiDAR point clouds always decide the road intensity threshold based on experience, which cannot obtain the optimal threshold to extract a road point cloud. Moreover, these existing methods are deficient in removing the interference of narrow roads and several attached areas (e.g., parking lot and bare ground) to main roads extraction, thereby imparting low completeness and correctness to the city road network extraction result. Aiming at resolving the key technical issues of road extraction from airborne LiDAR point clouds, this paper proposes a novel method to extract road centerlines from airborne LiDAR point clouds. The proposed approach is mainly composed of three key algorithms, namely, Skewness balancing, Rotating neighborhood, and Hierarchical fusion and optimization (SRH). The skewness balancing algorithm used for the filtering was adopted as a new method for obtaining an optimal intensity threshold such that the "pure" road point cloud can be obtained. The rotating neighborhood algorithm on the other hand was developed to remove narrow roads (corridors leading to parking lots or sidewalks), which are not the main roads to be extracted. The proposed hierarchical fusion and optimization algorithm caused the road centerlines to be unaffected by certain attached areas and ensured the road integrity as much as possible. The proposed method was tested using the Vaihingen dataset. The results demonstrated that the proposed method can effectively extract road centerlines in a complex urban environment with 91.4% correctness and 80.4% completeness.
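The skewness-balancing step can be pictured as peeling off the highest intensities until the distribution of the remainder is no longer right-skewed; the largest surviving value then serves as the intensity threshold. The sketch below is a simplified reading of that idea on toy data, not the paper's implementation.

```python
# Sketch of the skewness-balancing idea for picking an intensity
# threshold: remove the highest intensities until the sample skewness of
# the remainder is <= 0; the largest remaining value is the threshold.
# Simplified reading of the algorithm, on toy data.

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def skewness_balance_threshold(intensities):
    xs = sorted(intensities)
    while len(xs) > 3 and skewness(xs) > 0.0:
        xs.pop()                    # drop the current maximum
    return xs[-1]

data = [10, 11, 12, 12, 13, 13, 14, 15, 60, 75, 90]  # long right tail
t = skewness_balance_threshold(data)
print(f"intensity threshold: {t}")
```

On this toy sample the long right tail (60, 75, 90) is stripped off and the threshold settles at 15, separating the symmetric bulk from the tail.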
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and each buyer. All of these parameters depend on an understanding of the revenue sharing between the vendor and the buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; Luskin, Mitchell
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Operation of internal transmitter control... Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter control systems through licensed fixed control points. An internal transmitter control system may be...
Data Mining Method for Battery Operation Optimization in Photovoltaics
NASA Astrophysics Data System (ADS)
Sato, Katsunori; Wakao, Shinji
Recently, a photovoltaic (PV) system has attracted attention because of serious environmental and energy problems. In near future, PV systems intensively connected to the grid will bring about the difficulties in the power system operation. As a countermeasure, this paper deals with the introduction of storage battery for making the unstable PV power controllable. In this regard, when we introduce a storage battery into a PV system, we have to consider the advantages and disadvantages. In order to evaluate the system from various perspectives, we have carried out multi-objective optimization of battery operation in PV system design. However, as the number of objective functions increases, it becomes difficult to appropriately interpret the correlation among objective functions and design variables. With this background, in this paper, a novel computational method is proposed for data mining of PV system design, in which we make an attempt to effectively extract the design information of the battery system with the use of Self-Organizing Map (SOM).
Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy
NASA Astrophysics Data System (ADS)
Magee, T. M.; Clement, M. A.; Zagona, E. A.
2012-12-01
Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example target environmental flow levels may not be satisfied if it would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before optimizing power generation. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, it also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent with increased wind penetration levels requires an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location. Wind factors include wind penetration level
An invariance principle for maintaining the operating point of a neuron.
Elliott, Terry; Kuang, Xutao; Shadbolt, Nigel R; Zauner, Klaus-Peter
2008-01-01
Sensory neurons adapt to changes in the natural statistics of their environments through processes such as gain control and firing threshold adjustment. It has been argued that neurons early in sensory pathways adapt according to information-theoretic criteria, perhaps maximising their coding efficiency or information rate. Here, we draw a distinction between how a neuron's preferred operating point is determined and how its preferred operating point is maintained through adaptation. We propose that a neuron's preferred operating point can be characterised by the probability density function (PDF) of its output spike rate, and that adaptation maintains an invariant output PDF, regardless of how this output PDF is initially set. Considering a sigmoidal transfer function for simplicity, we derive simple adaptation rules for a neuron with one sensory input that permit adaptation to the lower-order statistics of the input, independent of how the preferred operating point of the neuron is set. Thus, if the preferred operating point is, in fact, set according to information-theoretic criteria, then these rules nonetheless maintain a neuron at that point. Our approach generalises from the unimodal case to the multimodal case, for a neuron with inputs from distinct sensory channels, and we briefly consider this case too. PMID:18946837
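The paper's distinction between setting and maintaining an operating point can be illustrated with a deliberately simple rule: a sigmoidal neuron adjusts its firing threshold so that its mean output returns to a preset target, whatever process chose that target. This proportional rule is a toy stand-in, not the adaptation rule derived in the paper.

```python
# Toy version of maintaining an operating point: a sigmoidal neuron
# adapts its firing threshold so that its mean output returns to a
# preset target, regardless of how that target was chosen.
# A simplified proportional rule, not the paper's derivation.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

inputs = [0.0, 1.0, 2.0, 3.0, 4.0]    # fixed sensory input samples
gain, theta = 1.0, 0.0
target = 0.3                          # the preferred operating point

for _ in range(2000):
    mean_out = sum(sigmoid(gain * (x - theta)) for x in inputs) / len(inputs)
    theta += 0.5 * (mean_out - target)   # too active -> raise threshold

assert abs(mean_out - target) < 1e-3
```

Because mean output decreases monotonically in the threshold, this feedback converges to whatever target is set, echoing the paper's point that maintenance is independent of how the operating point was originally chosen.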
Optimal feature point selection and automatic initialization in active shape model search.
Lekadir, Karim; Yang, Guang-Zhong
2008-01-01
This paper presents a novel approach for robust and fully automatic segmentation with active shape model search. The proposed method incorporates global geometric constraints during feature point search by using interlandmark conditional probabilities. The A* graph search algorithm is adapted to identify in the image the optimal set of valid feature points. The technique is extended to enable reliable and fast automatic initialization of the ASM search. Validation with 2-D and 3-D MR segmentation of the left ventricular epicardial border demonstrates significant improvement in robustness and overall accuracy, while eliminating the need for manual initialization. PMID:18979776
Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.
López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio
2016-01-01
Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550
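The symmetry point is the cutpoint at which sensitivity equals specificity. On empirical samples it can be located by a simple grid search; the marker values below are toy data, and the convention that diseased subjects have the higher values is an assumption of this sketch.

```python
# Sketch: the symmetry point is the cutpoint c at which sensitivity
# equals specificity, maximizing both correct-classification rates
# simultaneously. Grid search on toy samples, assuming diseased
# subjects have the higher marker values.

def sens_spec(c, healthy, diseased):
    sens = sum(x > c for x in diseased) / len(diseased)
    spec = sum(x <= c for x in healthy) / len(healthy)
    return sens, spec

def symmetry_point(healthy, diseased, grid):
    def gap(c):
        se, sp = sens_spec(c, healthy, diseased)
        return abs(se - sp)
    return min(grid, key=gap)

healthy  = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]     # toy marker values
diseased = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5]
grid = [i * 0.1 for i in range(70)]
c = symmetry_point(healthy, diseased, grid)
se, sp = sens_spec(c, healthy, diseased)
print(f"cutpoint {c:.1f}: Se = Sp = {se:.2f}")
```

The paper's contribution is the confidence intervals around this point and its Se/Sp value; the sketch only shows the point estimate itself.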
Sensitivity analysis and optimization of nodal point placement for vibration reduction
NASA Technical Reports Server (NTRS)
Pritchard, J. I.; Adelman, H. M.; Haftka, R. T.
1987-01-01
A method is developed for sensitivity analysis and optimization of nodal point locations in connection with vibration reduction. A straightforward derivation of the expression for the derivative of nodal locations is given, and the role of the derivative in assessing design trends is demonstrated. An optimization process is developed which uses added lumped masses on the structure as design variables to move the node to a preselected location - for example, where low response amplitude is required or to a point which makes the mode shape nearly orthogonal to the force distribution, thereby minimizing the generalized force. The optimization formulation leads to values for added masses that adjust a nodal location while minimizing the total amount of added mass required to do so. As an example, the node of the second mode of a cantilever box beam is relocated to coincide with the centroid of a prescribed force distribution, thereby reducing the generalized force substantially without adding excessive mass. A comparison with an optimization formulation that directly minimizes the generalized force indicates that nodal placement gives essentially a minimum generalized force when the node is appropriately placed.
Sensitivity derivatives and optimization of nodal point locations for vibration reduction
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Haftka, Raphael T.
1987-01-01
A method is developed for sensitivity analysis and optimization of nodal point locations in connection with vibration reduction. A straightforward derivation of the expression for the derivative of nodal locations is given, and the role of the derivative in assessing design trends is demonstrated. An optimization process is developed which uses added lumped masses on the structure as design variables to move the node to a preselected location; for example, where low response amplitude is required or to a point which makes the mode shape nearly orthogonal to the force distribution, thereby minimizing the generalized force. The optimization formulation leads to values for added masses that adjust a nodal location while minimizing the total amount of added mass required to do so. As an example, the node of the second mode of a cantilever box beam is relocated to coincide with the centroid of a prescribed force distribution, thereby reducing the generalized force substantially without adding excessive mass. A comparison with an optimization formulation that directly minimizes the generalized force indicates that nodal placement gives essentially a minimum generalized force when the node is appropriately placed.
Sensitivity analysis and optimization of nodal point placement for vibration reduction
NASA Technical Reports Server (NTRS)
Pritchard, J. I.; Adelman, H. M.; Haftka, R. T.
1986-01-01
A method is developed for sensitivity analysis and optimization of nodal point locations in connection with vibration reduction. A straightforward derivation of the expression for the derivative of nodal locations is given, and the role of the derivative in assessing design trends is demonstrated. An optimization process is developed which uses added lumped masses on the structure as design variables to move the node to a preselected location - for example, where low response amplitude is required or to a point which makes the mode shape nearly orthogonal to the force distribution, thereby minimizing the generalized force. The optimization formulation leads to values for added masses that adjust a nodal location while minimizing the total amount of added mass required to do so. As an example, the node of the second mode of a cantilever box beam is relocated to coincide with the centroid of a prescribed force distribution, thereby reducing the generalized force substantially without adding excessive mass. A comparison with an optimization formulation that directly minimizes the generalized force indicates that nodal placement gives essentially a minimum generalized force when the node is appropriately placed.
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Rainer; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
NASA Astrophysics Data System (ADS)
Guo, Jie; Zhu, Dalin; Tang, Shengjing
2012-11-01
The initial trajectory design of a missile is an important part of the overall design, but it is often a tedious calculation and analysis process because of the high-dimensional nonlinear differential equations and the traditional statistical analysis methods involved. To improve on the traditional design methods, a robust optimization concept and method are introduced in this paper to deal with the determination of the initial control point. First, a Gaussian radial basis network is adopted to establish an approximate model of the missile's disturbance motion, based on an analysis of the disturbance motion and disturbance factors. Then, a direct analytical relationship between the disturbance input and the statistical results is deduced on the basis of the Gaussian radial basis network model. Subsequently, a robust optimization model is established for the initial control point design problem, and the niche Pareto genetic algorithm for multi-objective optimization is adopted to solve this optimization model. A design example is given at the end, and the simulation results verify the validity of this method.
Performing a scatterv operation on a hierarchical tree network optimized for collective operations
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2013-10-22
Performing a scatterv operation on a hierarchical tree network optimized for collective operations including receiving, by the scatterv module installed on the node, from a nearest neighbor parent above the node a chunk of data having at least a portion of data for the node; maintaining, by the scatterv module installed on the node, the portion of the data for the node; determining, by the scatterv module installed on the node, whether any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child; and sending, by the scatterv module installed on the node, those portions of data to the nearest neighbor child if any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child.
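The routing logic described in the claim can be simulated in a few lines. The sketch below is a plain-Python stand-in, not the patented implementation or any MPI API: each node of a tree receives one chunk holding the portions for its whole subtree, keeps its own portion, and forwards the rest to the child whose subtree owns it.

```python
# Toy simulation of a scatterv over a hierarchical tree network.
tree = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}     # parent -> children
portions = {0: "a", 1: "bb", 2: "c", 3: "dd", 4: "e"}  # data destined per node

def subtree(node):
    # All nodes reachable below (and including) this node.
    nodes = [node]
    for child in tree[node]:
        nodes.extend(subtree(child))
    return nodes

received = {}

def scatterv(node, chunk):
    # Keep this node's portion, then route the remaining portions to the
    # nearest-neighbor child whose subtree contains their destination.
    received[node] = chunk[node]
    for child in tree[node]:
        scatterv(child, {n: chunk[n] for n in subtree(child)})

scatterv(0, portions)
print(received == portions)  # True: every node got exactly its portion
```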
Optimizing Wellfield Operation in a Variable Power Price Regime.
Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus
2016-01-01
Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets, and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated the EFP of pumped water as a function of wellfield pumping rate (the EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a stochastic dynamic programming (SDP) framework to minimize the total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period, based on a time series of observed prices on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between each scenario and a constant-rate pumping benchmark. Minor savings up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings up to 40%. Key factors determining the potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate, and the capacity of available storage facilities. PMID:25964991
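The SDP idea scales down to a small illustration. The sketch below uses made-up hourly prices and demands (not the Danish market data) and a backward dynamic program over a discretized storage state to find the least-cost pumping schedule under capacity and demand-fulfilment constraints:

```python
# Minimal backward dynamic program for price-driven pumping: choose hourly
# pumping to meet demand from storage at least cost. Illustrative numbers only.
prices = [10, 50, 10, 60]      # power price per unit pumped, each hour
demand = [1, 1, 1, 1]          # water drawn from storage each hour
cap, pmax = 3, 2               # storage capacity, max hourly pumping rate

INF = float("inf")
# cost[s] = minimal future cost with s units in storage at the current hour
cost = [0.0] * (cap + 1)
for t in reversed(range(len(prices))):
    new = [INF] * (cap + 1)
    for s in range(cap + 1):
        for p in range(pmax + 1):          # pumping decision this hour
            s2 = s + p - demand[t]         # storage after pumping and demand
            if 0 <= s2 <= cap and cost[s2] < INF:
                new[s] = min(new[s], p * prices[t] + cost[s2])
    cost = new
print(cost[1])  # least total cost starting with 1 unit stored: 30.0
```

The optimal schedule concentrates pumping in the two cheap hours, which is exactly the behavior the paper exploits under variable power prices.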
Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology
NASA Astrophysics Data System (ADS)
Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senjyu, Tomonobu
2013-08-01
From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are attracting attention in distribution systems. Additionally, all-electric apartment houses and residences such as the DC smart house have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing of the power system becomes problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for optimal operation of a smart grid to minimize interconnection-point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as a battery and a heat pump. By minimizing the interconnection-point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, heat pump, battery, solar collector, and load. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.
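A minimal sketch of the fluctuation-minimizing idea, assuming a single battery as the only controllable load and made-up numbers (the paper's MATLAB model also includes a heat pump and solar collector): the battery charges or discharges toward the mean net load, subject to its power rating and stored-energy limits.

```python
# Toy battery schedule that flattens the interconnection-point power flow.
net_load = [2.0, 5.0, 1.0, 4.0]          # load minus PV each hour (kW)
target = sum(net_load) / len(net_load)   # flat flow we aim for
p_max, e_max, soc = 2.0, 4.0, 2.0        # power limit, capacity, stored energy

flow = []
for p in net_load:
    b = p - target                       # desired discharge (+) or charge (-)
    b = max(-p_max, min(p_max, b))       # respect the power rating
    b = max(soc - e_max, min(soc, b))    # respect stored-energy limits
    soc -= b                             # discharge lowers the state of charge
    flow.append(p - b)                   # power seen at the interconnection
print(flow)  # [3.0, 3.0, 3.0, 3.0] -- fluctuations fully removed here
```

With these numbers the battery is large enough to remove the fluctuation entirely; with tighter limits the flow would only be partially smoothed.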
A perfect match condition for point-set matching problems using the optimal mass transport approach
Chen, Pengwen; Lin, Ching-Long; Chern, I-Liang
2013-01-01
We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
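For tiny point sets, the L2 optimal matching can be found by brute force over permutations, which makes the perfect-match behavior easy to see. The sketch below is an illustrative analogue with made-up coordinates, not the paper's mass-transport solver (which scales far beyond brute force):

```python
import itertools

# Brute-force L2 matching of two small point sets: pick the permutation
# minimizing total squared displacement. Feasible only for tiny n.
A = [(0, 0), (1, 0), (0, 1), (1, 1)]
B = [(1.1, 1.0), (0.1, 0.0), (1.0, 0.1), (0.0, 1.1)]  # A, shuffled + perturbed

def sq(a, b):
    # Squared Euclidean distance (the L2 transport cost).
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

best = min(itertools.permutations(range(len(B))),
           key=lambda perm: sum(sq(A[i], B[perm[i]]) for i in range(len(A))))
print(best)  # (1, 2, 3, 0): each corner matched to its perturbed copy
```

The small perturbation (a weakly curled deformation field) leaves the match perfect, consistent with the condition the paper proves.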
Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.
2012-10-23
Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
Phase-operation for conduction electron by atomic-scale scattering via single point-defect
Nagaoka, Katsumi; Yaginuma, Shin; Nakayama, Tomonobu
2014-03-17
In order to propose a phase-operation technique for conduction electrons in solids, we have investigated, using scanning tunneling microscopy, an atomic-scale electron-scattering phenomenon on a 2D subband state formed in Si. In particular, we have examined a single surface point defect around which a standing-wave pattern is created, and the dispersion of the scattering phase shift induced by the defect potential has been measured as a function of electron energy. The behavior is well explained with appropriate scattering parameters: the potential height and radius. This result experimentally proves that atomic-scale potential scattering via the point defect enables phase operation of conduction electrons.
NASA Astrophysics Data System (ADS)
Kiran, B. S.; Singh, Satyendra; Negi, Kuldeep
The GSAT-12 spacecraft is providing communication services from the INSAT/GSAT system in the Indian region. The spacecraft carries 12 extended C-band transponders. GSAT-12 was launched by ISRO's PSLV from Sriharikota into a sub-geosynchronous transfer orbit (sub-GTO) of 284 x 21000 km with an inclination of 18 deg. This mission successfully accomplished combined optimization of launch vehicle and satellite capabilities to maximize the operational life of the spacecraft. This paper describes the mission analysis carried out for GSAT-12, comprising the launch window, an orbital events study, and orbit-raising maneuver strategies considering various mission operational constraints. GSAT-12 is equipped with two earth sensors (ES), three gyroscopes, and a digital sun sensor. The launch window was generated considering the mission requirement of a minimum of 45 minutes of ES data for calibration of the gyros with roll-sun-pointing orientation in the transfer orbit. Since the transfer-orbit period was a rather short 6.1 hr, the required pitch biases were worked out to meet the gyro-calibration requirement. A 440 N liquid apogee motor (LAM) is used for orbit raising. The objective of the maneuver strategy is to achieve the desired drift orbit while satisfying mission constraints and minimizing propellant expenditure. In the case of a sub-GTO, the optimal strategy is to first perform an in-plane maneuver at perigee to raise the apogee to synchronous level and then perform combined maneuvers at the synchronous apogee to achieve the desired drift orbit. The perigee burn opportunities were examined considering the ground station visibility requirement for monitoring the burn. Two maneuver strategies were proposed: an optimal five-burn strategy with two perigee burns centered around perigee #5 and perigee #8 with partial ground station visibility and three apogee burns with dual-station visibility; and a near-optimal five-burn strategy with two off-perigee burns at perigee #5 and perigee #8 with single ground station visibility and three apogee burns with dual-station visibility.
Li/CFx Cells Optimized for Low-Temperature Operation
NASA Technical Reports Server (NTRS)
Smart, Marshall C.; Whitacre, Jay F.; Bugga, Ratnakumar V.; Prakash, G. K. Surya; Bhalla, Pooja; Smith, Kiah
2009-01-01
Some developments reported in prior NASA Tech Briefs articles on primary electrochemical power cells containing lithium anodes and fluorinated carbonaceous (CFx) cathodes have been combined to yield a product line of cells optimized for relatively high-current operation at low temperatures at which commercial lithium-based cells become useless. These developments have involved modifications of the chemistry of commercial Li/CFx cells and batteries, which are not suitable for high-current and low-temperature applications because they are current-limited and their maximum discharge rates decrease with decreasing temperature. One of two developments that constitute the present combination is, itself, a combination of developments: (1) the use of sub-fluorinated carbonaceous (CFx wherein x<1) cathode material, (2) making the cathodes thinner than in most commercial units, and (3) using non-aqueous electrolytes formulated especially to enhance low-temperature performance. This combination of developments was described in more detail in "High-Energy-Density, Low-Temperature Li/CFx Primary Cells" (NPO-43219), NASA Tech Briefs, Vol. 31, No. 7 (July 2007), page 43. The other development included in the present combination is the use of an anion receptor as an electrolyte additive, as described in the immediately preceding article, "Additive for Low-Temperature Operation of Li-(CF)n Cells" (NPO-43579). A typical cell according to the present combination of developments contains an anion-receptor additive solvated in an electrolyte that comprises LiBF4 dissolved at a concentration of 0.5 M in a mixture of four volume parts of 1,2 dimethoxyethane with one volume part of propylene carbonate. The proportion, x, of fluorine in the cathode in such a cell lies between 0.5 and 0.9. The best of such cells fabricated to date have exhibited discharge capacities as large as 0.6 A h per gram at a temperature of -50 C when discharged at a rate of C/5 (where C is the magnitude of the
NASA Astrophysics Data System (ADS)
Stuparu, A.; Susan-Resiga, R.; Anton, L. E.; Muntean, S.
2010-08-01
The paper presents a new method for the analysis of the cavitational behaviour of hydraulic turbomachines. This new method allows determining the coefficient of cavitation inception and the cavitation sensitivity of the turbomachine. We apply this method to study the cavitational behaviour of a large storage pump. By plotting the vapour volume versus the cavitation coefficient in semi-logarithmic coordinates, we show that all numerical data collapse in an exponential manner. This storage pump is located in a power plant, and operating without developed cavitation is vital. We investigate the behaviour of the pump from the cavitational point of view while the pump is operating at variable discharge. The distribution of vapour volume on the impeller blade is presented for all four operating points, showing how the vapour volume evolves from one operating point to another. In order to study the influence of the cavitation phenomenon on the pump, the evolution of the pumping head against the cavitation coefficient is presented, showing how the pumping head drops as the cavitation coefficient decreases. Analysis of the numerical simulation data shows that the cavitation phenomenon is present at all the investigated operating points. By analyzing the slope of the curve describing the evolution of the vapour volume against the cavitation coefficient, we determine the cavitation sensitivity of the pump for each operating point. It is shown that the cavitation sensitivity of the investigated storage pump increases as the flow rate decreases.
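The semi-logarithmic collapse described above amounts to fitting log V against the cavitation coefficient sigma; the slope magnitude then serves as the sensitivity measure. A sketch with synthetic data generated from an assumed exponential law (not the paper's CFD results):

```python
import math

# Least-squares fit of log(vapour volume) vs cavitation coefficient sigma.
# Synthetic data from an assumed law V = exp(a - b*sigma) with b = 12.
sigma = [0.30, 0.25, 0.20, 0.15]
vol = [math.exp(2.0 - 12.0 * s) for s in sigma]

n = len(sigma)
y = [math.log(v) for v in vol]          # semi-log coordinates
sx, sy = sum(sigma), sum(y)
sxx = sum(s * s for s in sigma)
sxy = sum(s * yi for s, yi in zip(sigma, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(round(-slope, 6))  # 12.0 -- recovered sensitivity b
```

On real simulation data the fitted slope would differ between operating points, which is exactly how the paper ranks their cavitation sensitivity.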
Johnson, David K; Lewis, Matthew J; Pavlich, Jane C; Wright, Alan D; Johnson, Kathryn E; Pace, Andrew M
2013-02-01
The goal of this Department of Energy (DOE) project is to increase wind turbine efficiency and reliability with the use of a Light Detection and Ranging (LIDAR) system. The LIDAR provides wind speed and direction data that can be used to help mitigate the fatigue stress on the turbine blades and internal components caused by wind gusts, sub-optimal pointing and reactionary speed or RPM changes. This effort will have a significant impact on the operation and maintenance costs of turbines across the industry. During the course of the project, Michigan Aerospace Corporation (MAC) modified and tested a prototype direct detection wind LIDAR instrument; the resulting LIDAR design considered all aspects of wind turbine LIDAR operation from mounting, assembly, and environmental operating conditions to laser safety. Additionally, in co-operation with our partners, the National Renewable Energy Lab and the Colorado School of Mines, progress was made in LIDAR performance modeling as well as LIDAR feed forward control system modeling and simulation. The results of this investigation showed that using LIDAR measurements to change between baseline and extreme event controllers in a switching architecture can reduce damage equivalent loads on blades and tower, and produce higher mean power output due to fewer overspeed events. This DOE project has led to continued venture capital investment and engagement with leading turbine OEMs, wind farm developers, and wind farm owner/operators.
Polarizable six-point water models from computational and empirical optimization.
Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul
2014-02-13
Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to
77 FR 40091 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3 AGENCY: Nuclear... statement for license renewal of nuclear plants; availability. SUMMARY: The U.S. Nuclear...
SOLCUS: Update On Point-of-Care Ultrasound In Special Operations Medicine.
Hampton, Katarzyna Kasia; Vasios, William N; Loos, Paul E
2016-01-01
Point-of-care ultrasonography has been recognized as a relevant and versatile tool in Special Operations Forces (SOF) medicine. The Special Operator Level Clinical Ultrasound (SOLCUS) program has been developed specifically for SOF Medics. A number of challenges, including skill sustainment, high-volume training, and quality assurance, have been identified. Potential solutions, including changes to content delivery methods and application of tele-ultrasound, are described in this article. Given the shift in operational context toward extended care in austere environments, a curriculum adjustment for the SOLCUS program is also proposed. PMID:27045495
Existence and data dependence of fixed points for multivalued operators on gauge spaces
NASA Astrophysics Data System (ADS)
Espínola, Rafael; Petrusel, Adrian
2005-09-01
The purpose of this note is to present some fixed point and data dependence theorems in complete gauge spaces and in hyperconvex metric spaces for the so-called Meir-Keeler multivalued operators and admissible multivalued α-contractions. Our results extend and generalize several theorems of Espínola and Kirk [R. Espínola, W.A. Kirk, Set-valued contractions and fixed points, Nonlinear Anal. 54 (2003) 485-494] and Rus, Petrusel, and Sîntamarian [I.A. Rus, A. Petrusel, A. Sîntamarian, Data dependence of the fixed point set of some multivalued weakly Picard operators, Nonlinear Anal. 52 (2003) 1947-1959].
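As concrete background, the single-valued special case that Meir-Keeler-type theorems generalize is the classical Picard iteration: repeatedly applying a contraction converges to its unique fixed point. A minimal numerical illustration (not the multivalued, gauge-space setting of the paper):

```python
import math

# Picard iteration for f(x) = cos(x), a contraction near its fixed point:
# the iterates converge to the unique solution of cos(x) = x.
x = 0.0
for _ in range(100):
    x = math.cos(x)
print(round(x, 6))  # 0.739085, the unique fixed point (the Dottie number)
```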
Building Restoration Operations Optimization Model Beta Version 1.0
2007-05-31
The Building Restoration Operations Optimization Model (BROOM), developed by Sandia National Laboratories, is a software product designed to aid in the restoration of large facilities contaminated by a biological material. BROOM's integrated data collection, data management, and visualization software improves the efficiency of cleanup operations, minimizes facility downtime, and provides a transparent basis for reopening the facility. Secure remote access to building floor plans: Floor plan drawings and knowledge of the HVAC system are critical to the design and implementation of effective sampling plans. In large facilities, access to these data may be complicated by the sheer abundance and disorganized state they are often stored in. BROOM avoids potentially costly delays by providing a means of organizing and storing mechanical and floor plan drawings in a secure remote database that is easily accessed. Sampling design tools: BROOM provides an array of tools to answer the question of where to sample and how many samples to take. In addition to simple judgmental and random sampling plans, the software includes two sophisticated methods of adaptively developing a sampling strategy. Both tools strive to choose sampling locations that best satisfy a specified objective (e.g., minimizing kriging variance) but use numerically different strategies to do so. Surface samples are collected early in the restoration process to characterize the extent of contamination and then again later to verify that the facility is safe to reenter. BROOM supports sample collection using a ruggedized PDA equipped with a barcode scanner and laser range finder. The PDA displays building floor drawings, sampling plans, and electronic forms for data entry. Barcodes are placed on sample containers for the purpose of tracking the specimen and linking acquisition data (e.g., location, surface type, texture) to laboratory results. Sample location is determined by activating the integrated laser
Approximation of functions by asymmetric two-point hermite polynomials and its optimization
NASA Astrophysics Data System (ADS)
Shustov, V. V.
2015-12-01
A function is approximated by two-point Hermite interpolating polynomials with an asymmetric orders-of-derivatives distribution at the endpoints of the interval. The local error estimate is examined theoretically and numerically. As a result, the position of the maximum of the error estimate is shown to depend on the ratio of the numbers of conditions imposed on the function and its derivatives at the endpoints of the interval. The shape of a universal curve representing a reduced error estimate is found. Given the sum of the orders of derivatives at the endpoints of the interval, the orders-of-derivatives distribution is optimized so as to minimize the approximation error. A sufficient condition for the convergence of a sequence of general two-point Hermite polynomials to a given function is given.
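The simplest two-point Hermite interpolant, with a symmetric 2+2 split of the orders of derivatives, is the cubic matching f and f' at both endpoints. A sketch (the paper's interest is in asymmetric splits, which shift where the error estimate peaks):

```python
import math

def hermite_cubic(f, df, a, b, x):
    # Standard cubic Hermite basis on [a, b]: matches f and f' at a and b.
    t = (x - a) / (b - a)
    h = b - a
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f(a) + h10 * h * df(a) + h01 * f(b) + h11 * h * df(b)

# Maximum error when interpolating sin on [0, 1].
err = max(abs(hermite_cubic(math.sin, math.cos, 0.0, 1.0, k / 100)
              - math.sin(k / 100)) for k in range(101))
print(err < 1e-2)  # True: the error is bounded by max|f''''| (b-a)^4 / 384
```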
The Hubble Space Telescope fine guidance system operating in the coarse track pointing control mode
NASA Technical Reports Server (NTRS)
Whittlesey, Richard
1993-01-01
The Hubble Space Telescope (HST) Fine Guidance System has set new standards in pointing control capability for earth orbiting spacecraft. Two precision pointing control modes are implemented in the Fine Guidance System; one being a Coarse Track Mode which employs a pseudo-quadrature detector approach and the second being a Fine Mode which uses a two axis interferometer implementation. The Coarse Track Mode was designed to maintain FGS pointing error to within 20 milli-arc seconds (rms) when guiding on a 14.5 Mv star. The Fine Mode was designed to maintain FGS pointing error to less than 3 milli-arc seconds (rms). This paper addresses the HST FGS operating in the Coarse Track Mode. An overview of the implementation, the operation, and both the predicted and observed on orbit performance is presented. The discussion includes a review of the Fine Guidance System hardware which uses two beam steering Star Selector servos, four photon counting photomultiplier tube detectors, as well as a 24 bit microprocessor, which executes the control system firmware. Unanticipated spacecraft operational characteristics are discussed as they impact pointing performance. These include the influence of spherically aberrated star images as well as the mechanical shocks induced in the spacecraft during and following orbital day/night terminator crossings. Computer modeling of the Coarse Track Mode verifies the observed on orbit performance trends in the presence of these optical and mechanical disturbances. It is concluded that the coarse track pointing control function is performing as designed and is providing a robust pointing control capability for the Hubble Space Telescope.
Optimization of a catchment-scale coupled surface-subsurface hydrological model using pilot points
NASA Astrophysics Data System (ADS)
Danapour, Mehrdis; Stisen, Simon; Lajer Højberg, Anker
2016-04-01
Transient coupled surface-subsurface models are usually complex and contain a large amount of spatio-temporal information. In the traditional calibration approach, model parameters are adjusted against only a few spatially aggregated observations of discharge or individual point observations of groundwater head. However, this approach does not enable an assessment of spatially explicit predictive model capabilities at the intermediate scale relevant for many applications. The overall objective of this project is to develop a new model calibration and evaluation framework by combining distributed model parameterization and regularization with new types of objective functions focusing on optimizing spatial patterns rather than individual points or catchment-scale features. Inclusion of detailed observed spatial patterns of hydraulic head gradients or relevant information obtained from remote sensing data in the calibration process could allow for a better representation of the spatial variability of hydraulic properties. Pilot points, as an alternative to classical parameterization approaches, introduce great flexibility when calibrating heterogeneous systems without neglecting expert knowledge (Doherty, 2003). A highly parameterized optimization of complex distributed hydrological models at catchment scale is challenging due to the computational burden that comes with it. In this study the physically based coupled surface-subsurface model MIKE SHE is calibrated for the 8,500 km2 area of central Jylland (Denmark), which is characterized by heterogeneous geology and considerable groundwater flow across topographical catchment boundaries. The calibration of the distributed conductivity fields is carried out with a pilot point-based approach, implemented using the PEST parameter estimation tool. To reduce the high number of calibration parameters, PEST's advanced singular value decomposition combined with regularization was utilized, and a reduction of the model's complexity was
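Pilot-point parameterization can be illustrated in miniature: the conductivity field is defined only at a few calibratable points and interpolated onto the model grid. The sketch below uses inverse-distance weighting for brevity, whereas PEST workflows typically use kriging; the coordinates and log-conductivity values are made up.

```python
# Miniature pilot-point field: three calibratable points carrying log10(K),
# interpolated anywhere in the domain by inverse distance weighting.
pilots = [((2.0, 2.0), -4.0),
          ((8.0, 3.0), -2.0),
          ((5.0, 8.0), -3.0)]   # ((x, y), log10 conductivity)

def log_k(x, y, power=2.0):
    num = den = 0.0
    for (px, py), val in pilots:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return val                      # exactly at a pilot point
        w = d2 ** (-power / 2.0)            # inverse-distance weight
        num += w * val
        den += w
    return num / den

print(log_k(2.0, 2.0))             # -4.0: the field honors the pilot value
print(round(log_k(5.0, 4.0), 3))   # a smooth value between the pilots
```

In a calibration run, an optimizer such as PEST would adjust the three pilot values until the interpolated field reproduces the observations, with regularization pulling the values toward prior expert knowledge.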
Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge
NASA Astrophysics Data System (ADS)
Gressin, Adrien; Mallet, Clément; Demantké, Jérôme; David, Nicolas
2013-05-01
Automatic 3D point cloud registration is a main issue in computer vision and remote sensing. One of the most commonly adopted solutions is the well-known Iterative Closest Point (ICP) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming a good a priori alignment is provided. A large body of literature has proposed many variations in order to improve each step of the process (namely selecting, matching, rejecting, weighting and minimizing). The aim of this paper is to demonstrate how knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and the accuracy of each of these steps. First we present the geometrical features that form the basis of this work. These low-level attributes describe the neighborhood shape around each 3D point. They make it possible to retrieve the optimal size for analyzing the neighborhoods at various scales, as well as the privileged local dimension (linear, planar, or volumetric). Several variations of each step of the ICP process are then proposed and analyzed by introducing these features. These variants are compared on real datasets with the original algorithm in order to retrieve the most efficient algorithm for the whole process. The method is then successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems. Improvement for two ICP steps has been noted, and we conclude that our features may not be relevant for very dissimilar object samplings.
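For readers unfamiliar with the baseline, a minimal 2D point-to-point ICP (nearest-neighbour matching plus a closed-form rigid update) looks like the sketch below; the paper's variants modify exactly these matching and minimizing steps. This is an illustrative toy, not the authors' implementation, and it assumes the clouds are already roughly aligned.

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form least-squares rotation + translation mapping paired
    2D points src -> dst (the 'minimizing' step of ICP)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    dot = cross = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy
        dot += ax * bx + ay * by       # sum of dot products
        cross += ax * by - ay * bx     # sum of 2D cross products
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its
    nearest destination point, then apply the rigid update."""
    pts = list(src)
    for _ in range(iters):
        matched = [min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                   for p in pts]
        th, tx, ty = best_rigid_2d(pts, matched)
        c, s = math.cos(th), math.sin(th)
        pts = [(c*x - s*y + tx, s*x + c*y + ty) for (x, y) in pts]
    return pts
```

With a small initial misalignment the nearest-neighbour correspondences are already correct, so the closed-form update recovers the transform in one iteration; the geometric features discussed in the paper aim to make these matches reliable in harder cases.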
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.; Borden, R. C.
2014-12-01
In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxygen demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, time to begin performance monitoring, groundwater and/or soil contaminant concentrations to trigger reinjection or terminate ISCO, number of monitoring wells or geoprobe locations per treatment zone, number of samples per sampling event and location, and monitoring frequency. Design variables for each treatment zone may be optimized to minimize expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. Monitoring soil concentration changes during ISCO
Hemmateenejad, Bahram; Shamsipur, Mojtaba; Zare-Shahabadi, Vali; Akhond, Morteza
2011-10-17
Classification and regression trees (CART) possess the advantage of being able to handle large data sets and yield readily interpretable models. The conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. The ant colony system (ACS), a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for modeling the melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation operators) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was done by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of 4173 structures and their melting points were used (3000 compounds as a training set and 1173 as a validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure. PMID:21907021
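The recursive-partitioning baseline that ACS is meant to improve upon is built from repeated greedy splits like the one-variable sketch below (pure illustration, not the authors' implementation): each node picks the threshold minimizing the summed squared error of the two leaf means, which is locally optimal but can miss the globally best tree.

```python
def best_split(xs, ys):
    """Exhaustive search for the single split threshold minimizing the
    summed squared error of the two resulting leaf means -- the greedy
    step that recursive partitioning repeats to grow a regression tree."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    best_sse, best_thr = float("inf"), None
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if sse < best_sse:
            best_sse, best_thr = sse, 0.5 * (xs[k - 1] + xs[k])
    return best_thr

# Toy data: a descriptor that cleanly separates two melting-point groups.
thr = best_split([1, 2, 3, 10, 11, 12], [1, 1, 1, 5, 5, 5])
```

A metaheuristic such as ACS instead searches over whole trees, so it can accept a locally worse split that leads to a better tree overall.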
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. Results When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (including T2: the tongue-body front; and T3: the tongue-body front). Conclusion We identified a 4-sensor set—that is, T1, T4, UL, LL—that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements. PMID:26564030
Selective internal operations in the recognition of locally and globally point-inverted patterns.
Bischof, W F; Foster, D H; Kahn, J I
1985-01-01
Performance in discriminating rotated 'same' patterns from 'different' patterns may decrease with rotation angle up to about 90 degrees and then increase with angle up to 180 degrees. This anomalously improved performance under 180-degree pattern rotation or point-inversion can be explained by assuming that patterns are internally represented in terms of local features and their spatial-order relations ('left of', 'above', etc.), and that, in pattern comparison, an efficient internal sense-reversal operation occurs (transforming 'left of' to 'right of', etc.). Previous experiments suggested that local features and spatial relations could not be efficiently separated in some pattern-comparison tasks. This hypothesis was tested by measuring 'same-different' discrimination performance under four transformations: point-inversion I of the whole pattern, point-inversion IF of local features alone, point-inversion IP of local-feature positions alone, and the identity transformation Id. The results suggested that internal sense-reversal operations could be applied selectively and efficiently, provided that local features were well separated. Under this condition, performances for IF and I were about the same, whereas performance for IP was significantly worse, the latter performance resulting possibly from an attempt to apply internal global and local sense-reversal operations serially. PMID:3940058
NASA Technical Reports Server (NTRS)
Rowland, John R.; Goldhirsh, Julius; Vogel, Wolfhard J.; Torrence, Geoffrey W.
1991-01-01
An overview and a status description of the planned LMSS mobile K band experiment with ACTS is presented. As a precursor to the ACTS mobile measurements at 20.185 GHz, measurements at 19.77 GHz employing the Olympus satellite were originally planned. However, because of the demise of Olympus in June of 1991, the efforts described here are focused on the ACTS measurements. In particular, we describe the design and testing results of a gyro-controlled mobile-antenna pointing system. Preliminary pointing measurements during mobile operations indicate that the present system is suitable for measurements employing a 15 cm aperture (beamwidth of approximately 7 deg) receiving antenna operating with ACTS in the high gain transponder mode. This should enable measurements with pattern losses smaller than plus or minus 1 dB over more than 95 percent of the driving distance. Measurements with the present mount system employing a 60 cm aperture (beamwidth of approximately 1.7 deg) result in pattern losses smaller than plus or minus 3 dB for 70 percent of the driving distance. Acceptable propagation measurements may still be made with this system by employing developed software to flag out bad data points due to extreme pointing errors. The receiver system, including associated computer control software, has been designed and assembled. Plans are underway to integrate the antenna mount with the receiver on the University of Texas mobile receiving van and repeat the pointing tests on highways employing a recently designed radome system.
Optimization with Telios of the Polar-Drive Point Design for the National Ignition Facility
NASA Astrophysics Data System (ADS)
Collins, T. J. B.; Marozas, J. A.; McKenty, P. W.
2012-10-01
Polar drive [S. Skupsky et al., Phys. Plasmas 11, 2763 (2004)] (PD) will make it possible to conduct direct-drive-ignition experiments at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Opt. Eng. 43, 2841 (2004)] while the facility is configured for x-ray drive. A PD-ignition design has been developed [T. J. B. Collins et al., Phys. Plasmas 19, 056308 (2012)] that achieves high gain in simulations including single- and multiple-beam nonuniformities and ice and outer-surface roughness. This design has been further optimized to reduce the in-flight aspect ratio and implosion speed, increasing target stability while maintaining moderately high thermonuclear gains. The dependence of target properties on implosion speed has been examined using the optimization shell Telios. Telios has the capability to drive complex radiation-hydrodynamic simulations and optimize results over an arbitrarily large parameter space, including ring pointing angles, spot-shape parameters, target dimensions, pulse timing, and relative pulse energies. Telios is capable of extracting output from a variety of sources and combining them to form arbitrarily complex, user-specified metrics. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.
Target point correction optimized based on the dose distribution of each fraction in daily IGRT
NASA Astrophysics Data System (ADS)
Stoll, Markus; Giske, Kristina; Stoiber, Eva M.; Schwarz, Michael; Bendl, Rolf
2014-03-01
Purpose: To use daily re-calculated dose distributions for optimization of target point corrections (TPCs) in image-guided radiation therapy (IGRT). This aims to adapt fractionated intensity-modulated radiation therapy (IMRT) to changes in the dose distribution induced by anatomical changes. Methods: Daily control images from an in-room on-rail spiral CT scanner of three head-and-neck cancer patients were analyzed. The dose distribution was re-calculated on each control CT after an initial TPC, found by a rigid image registration method. The clinical target volumes (CTVs) were transformed from the planning CT to the rigidly aligned control CTs using a deformable image registration method. If at least 95% of each transformed CTV was covered by the initially planned D95 value, the TPC was considered acceptable. Otherwise the TPC was iteratively altered to maximize the dose coverage of the CTVs. Results: In 14 (out of 59) fractions the criterion was already fulfilled after the initial TPC. In 10 fractions the TPC could be optimized to fulfill the coverage criterion. In 31 fractions the coverage could be increased but the criterion was not fulfilled. In the remaining 4 fractions the coverage could not be increased by the TPC optimization. Conclusions: The dose coverage criterion allows selection of patients who would benefit from replanning. Using the criterion to include daily re-calculated dose distributions in the TPC reduces the replanning rate in the analysed three patients from 76% to 59% compared to the rigid image registration TPC.
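The acceptance criterion used for each fraction (at least 95% of every CTV at or above the planned D95 dose) is simple to state in code. The voxel doses below are illustrative scalars, not clinical data:

```python
def coverage_fraction(voxel_doses, threshold):
    """Fraction of CTV voxels receiving at least `threshold` Gy
    (e.g. the initially planned D95 value)."""
    return sum(d >= threshold for d in voxel_doses) / len(voxel_doses)

def tpc_acceptable(ctvs, d95):
    """A target point correction is acceptable if every CTV has at
    least 95% of its voxels at or above the planned D95 dose."""
    return all(coverage_fraction(doses, d95) >= 0.95 for doses in ctvs)
```

When this check fails, the paper's method iterates the TPC to maximize the worst CTV's coverage before resorting to replanning.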
Sturm, C.; Soni, A.; Aoki, Y.; Christ, N. H.; Izubuchi, T.; Sachrajda, C. T. C.
2009-07-01
We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS scheme and can be used to convert results obtained in lattice calculations into the MS scheme. Such a symmetric subtraction point involves nonexceptional momenta implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operator, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.
NASA Astrophysics Data System (ADS)
Saleh, Joseph H.; Hastings, Daniel E.; Newman, Dava J.
2004-03-01
An augmented perspective on system architecture is proposed (diachronic) that complements the traditional views on system architecture (synchronic). This paper proposes to view in a system architecture the flow of service (or utility) that the system will provide over its design lifetime. It suggests that the design lifetime is a fundamental component of system architecture although one cannot see it or touch it. Consequently, cost, utility, and value per unit time metrics are introduced. A framework is then developed that identifies optimal design lifetimes for complex systems in general, and space systems in particular, based on this augmented perspective of system architecture and on these metrics. It is found that an optimal design lifetime for a satellite exists, even in the case of constant expected revenues per day over the system's lifetime, and that it changes substantially with the expected Time to Obsolescence of the system and the volatility of the market the system is serving in the case of a commercial venture. The analysis thus proves that it is essential for a system architect to match the design lifetime with the dynamical characteristics of the environment the system is/will be operating in. It is also shown that as the uncertainty in the dynamical characteristics of the environment the system is operating in increases, the value of having the option to upgrade, modify, or extend the lifetime of a system at a later point in time increases depending on how events unfold.
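The trade-off described above (a longer design lifetime costs more up front, while obsolescence caps the revenue stream) can be illustrated with a toy value-per-unit-time calculation. The cost and revenue figures below are invented for illustration, not taken from the paper:

```python
def cost_to_design(T, fixed=100.0, slope=20.0):
    """Hypothetical cost model: designing for a longer lifetime T
    (years) costs more."""
    return fixed + slope * T

def value_per_day(T, revenue_per_day=60.0, t_obsolete=8.0):
    """Net value per unit time of design lifetime T: revenues stop
    when the system becomes obsolete, but the full design cost is
    always paid."""
    earning_days = min(T, t_obsolete)
    return (revenue_per_day * earning_days - cost_to_design(T)) / T

best_T = max(range(1, 21), key=value_per_day)
```

On these numbers the optimum lands exactly at the obsolescence horizon, echoing the paper's point that the design lifetime should be matched to the dynamical characteristics of the environment the system serves.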
Performance of FORTRAN floating-point operations on the Flex/32 multicomputer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1987-01-01
A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
NASA Astrophysics Data System (ADS)
Xia, Shu; Ge, Xiaolin
2016-04-01
In this study, optimization scheduling models for Combined Heat and Power (CHP) units are established for three scheduling modes reflecting different grid-connection demands: tracking the total generation schedule, tracking a steady output schedule, and tracking a peaking curve. To reduce the solution difficulty, linearizing techniques based on integer algebra are developed to handle the complex nonlinear constraints of the variable operating conditions, and the optimal operation problem of the CHP units is converted into a mixed-integer linear programming problem. Finally, with specific examples, the 96-point day-ahead heat and power supply plans of the systems are optimized. The results show that the proposed models and methods can develop appropriate coordinated heat and power optimization programs according to different grid-connection controls.
NASA Astrophysics Data System (ADS)
Afghan-Toloee, A.; Heidari, A. A.; Joibari, Y.
2013-09-01
The problem of specifying the minimum number of sensors to deploy in a certain area to face multiple targets has been widely studied in the literature. In this paper, we address the multi-sensor deployment problem (MDP). The multi-sensor placement problem can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem. Our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to decrease the time needed to find a sufficiently good solution compared to other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to represent the multiple target and placement points, providing for x, y, and z computations in the observation algorithm. A model structure for the multi-sensor placement problem is proposed: the problem is constructed as an optimization problem with the objective of minimizing the cost while covering all multiple target points subject to a given probability of observation tolerance.
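As a point of reference for the metaheuristics compared in the paper, the coverage side of the placement problem can be approximated by a greedy set-cover heuristic over 3D points. This is a baseline sketch, not the NICA algorithm itself, and the radius-based coverage test is an assumed sensing model:

```python
def greedy_placement(candidates, targets, radius):
    """Greedy set-cover heuristic: repeatedly place the candidate
    sensor that covers the most still-uncovered 3D target points.
    Returns chosen candidate indices and any uncoverable targets."""
    def covers(c, t):
        return sum((a - b) ** 2 for a, b in zip(c, t)) <= radius ** 2

    uncovered = set(range(len(targets)))
    chosen = []
    while uncovered:
        best = max(range(len(candidates)),
                   key=lambda i: sum(covers(candidates[i], targets[j])
                                     for j in uncovered))
        newly = {j for j in uncovered if covers(candidates[best], targets[j])}
        if not newly:
            break  # remaining targets unreachable at this radius
        chosen.append(best)
        uncovered -= newly
    return chosen, uncovered
```

Greedy set cover carries a known logarithmic approximation guarantee; metaheuristics like NICA trade that guarantee for better solutions in practice on large, probabilistic coverage models.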
Optimization of the Operation of Green Buildings applying the Facility Management
NASA Astrophysics Data System (ADS)
Somorová, Viera
2014-06-01
Nowadays, in the field of civil engineering there exists an upward trend towards environmental sustainability. It relates mainly to the achievement of energy efficiency and to emission reduction throughout the whole life cycle of the building, i.e. in the course of its implementation, use and liquidation. These requirements are fulfilled, to a large extent, by green buildings. The characteristic features of green buildings are primarily the highly sophisticated technical and technological equipment installed therein. Sophisticated systems of technological equipment in turn need sophisticated management. From this point of view, facility management has all the prerequisites to meet this requirement. The paper aims to define facility management as an effective method that enables the optimization of the management of supporting activities by creating conditions for the optimum operation of green buildings viewed from the aspect of environmental conditions.
Design optimization of composite structures operating in acoustic environments
NASA Astrophysics Data System (ADS)
Chronopoulos, D.
2015-10-01
The optimal mechanical and geometric characteristics for layered composite structures subject to vibroacoustic excitations are derived. A Finite Element description coupled to Periodic Structure Theory is employed for the considered layered panel. Structures of arbitrary anisotropy as well as geometric complexity can thus be modelled by the presented approach. Damping can also be incorporated in the calculations. Initially, a numerical continuum-discrete approach for computing the sensitivity of the acoustic wave characteristics propagating within the modelled periodic composite structure is exhibited. The first- and second-order sensitivities of the acoustic transmission coefficient expressed within a Statistical Energy Analysis context are subsequently derived as a function of the computed acoustic wave characteristics. Having formulated the gradient vector as well as the Hessian matrix, the optimal mechanical and geometric characteristics satisfying the considered mass, stiffness and vibroacoustic performance criteria are sought by employing Newton's optimization method.
Design, Performance and Optimization for Multimodal Radar Operation
Bhat, Surendra S.; Narayanan, Ram M.; Rangaswamy, Muralidhar
2012-01-01
This paper describes the underlying methodology behind an adaptive multimodal radar sensor that is capable of progressively optimizing its range resolution depending upon the target scattering features. It consists of a test-bed that enables the generation of linear frequency modulated waveforms of various bandwidths. This paper discusses a theoretical approach to optimizing the bandwidth used by the multimodal radar. It also discusses the various experimental results obtained from measurement. The resolution predicted from theory agrees quite well with that obtained from experiments for different target arrangements.
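The bandwidth-resolution trade that the multimodal sensor adapts follows from the standard pulse-compression relation for linear FM waveforms, delta-R = c / (2B): doubling the transmitted bandwidth halves the range resolution cell. A one-liner makes this concrete (the bandwidth values are arbitrary examples, not the test-bed's actual settings):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Range resolution of a linear-FM (chirp) waveform after pulse
    compression: delta-R = c / (2 * B). Wider bandwidth gives a finer
    resolution cell, which is what the multimodal radar adjusts."""
    return C / (2.0 * bandwidth_hz)
```

So a sensor that progressively widens its chirp bandwidth, as described above, progressively sharpens the range profile of the target it is examining.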
NASA Astrophysics Data System (ADS)
Branz, H. M.
1982-09-01
A new computer simulation of the annual operation of degraded flat-plate photovoltaic (PV) arrays is used to evaluate the need for maximum-power-point tracking in real PV systems. The simulations are based on single-glitch I-V curve shapes rather than particular array degradations, making the data reported applicable to any system whose likely failure modes are predictable and result in single-glitch I-V curves. The simulations show that with a reasonable array wiring strategy, effective maintenance, periodic I-V curve tracing, and avoidance of frequent and serious array shadowing, there is no reason that considerations of degradation should force the adoption of maximum-power-point-tracking power conditioning on a PV system that would otherwise operate economically at fixed voltage.
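The comparison underlying the study, fixed-voltage operation versus true maximum-power-point operation on a given I-V curve, can be sketched numerically. The I-V curve below is a made-up illustrative shape, not a model of any actual degraded array:

```python
def pv_current(v, isc=5.0, voc=20.0):
    """Toy PV I-V curve (illustrative only): current stays near the
    short-circuit value and collapses near the open-circuit voltage."""
    if v >= voc:
        return 0.0
    return isc * (1.0 - (v / voc) ** 8)

def max_power_point(vstep=0.01, voc=20.0):
    """Scan the curve for the voltage maximizing P = V * I(V)."""
    best_v, best_p = 0.0, 0.0
    v = 0.0
    while v < voc:
        p = v * pv_current(v)
        if p > best_p:
            best_v, best_p = v, p
        v += vstep
    return best_v, best_p
```

On this toy curve, holding the array at a fixed 16 V recovers more than 97% of the true maximum power, the kind of margin that can make maximum-power-point-tracking hardware uneconomical, which is the conclusion the simulations above reach for well-maintained arrays.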
Code of Federal Regulations, 2013 CFR
2013-10-01
...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....
TH-C-19A-11: Toward An Optimized Multi-Point Scintillation Detector
Duguay-Drouin, P; Delage, ME; Therriault-Proulx, F; Beddar, S; Beaulieu, L
2014-06-15
Purpose: The purpose of this work is to characterize a two-point mPSD's optical chain using spectral analysis to help select the optimal components for the detector. Methods: Twenty different two-point mPSD combinations were built using 4 plastic scintillators (BCF10, BCF12, BCF60, BC430; St-Gobain) and quantum dots (QDs). The scintillator is said to be proximal when near the photodetector, and distal otherwise. A 15 m optical fiber (ESKA GH-4001) was coupled to the scintillating component and connected to a spectrometer (Shamrock, Andor and QEPro, OceanOptics). These scintillation components were irradiated at 125 kVp; a spectrum for each scintillator was obtained by irradiating each scintillator individually while shielding the other component, thus taking into account light propagation in all components and interfaces. The combined total spectrum was also acquired and involved simultaneous irradiation of the two scintillators for each possible combination. The shape and intensity were characterized. Results: QDs in the proximal position absorb almost all the light signal from distal plastic scintillators and emit at their own emission wavelength, with 100% of the signal in the QD range (625-700 nm) for the combination BCF12/QD. However, discrimination is possible when the QD is in the distal position in combination with blue scintillators, the total signal being 73% in the blue range (400-550 nm) and 27% in the QD range. Similar results are obtained with the orange scintillator (BC430). For optimal signal intensity, BCF12 should always be in the proximal position, e.g. having 50% more intensity when coupled with BCF60 in the distal position (BCF12/BCF60) compared to the BCF60/BCF12 combination. Conclusion: Different combinations of plastic scintillators and QDs were built and their emission spectra were studied. We established a preferential order for the scintillating components in the context of an optimized two-point mPSD. In short, the components with higher wavelength emission spectrum
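The discrimination criterion used above, what fraction of the total measured light falls in each component's wavelength band (e.g. 400-550 nm for the blue scintillators versus 625-700 nm for the QDs), is a simple integral over the acquired spectrum. The sketch below uses an invented three-sample spectrum, not measured data:

```python
def band_fractions(wavelengths_nm, intensities, bands):
    """Fraction of total emitted intensity falling in each named
    wavelength band -- the quantity used to judge whether two stacked
    scintillators in an mPSD can still be discriminated."""
    total = sum(intensities)
    out = {}
    for name, (lo, hi) in bands.items():
        out[name] = sum(i for w, i in zip(wavelengths_nm, intensities)
                        if lo <= w <= hi) / total
    return out

# Hypothetical mini-spectrum: two blue samples, one QD-range sample.
fracs = band_fractions([450.0, 500.0, 650.0], [3.0, 4.0, 3.0],
                       {"blue": (400.0, 550.0), "qd": (625.0, 700.0)})
```

A combination where one band's fraction approaches 100% (as with a proximal QD) leaves no spectral handle to separate the two dose signals.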
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
An optimal operational advisory system for a brewery's energy supply plant
Ito, K.; Shiba, T.; Yokoyama, R. . Dept. of Energy Systems Engineering); Sakashita, S. . Mayekawa Energy Management Research Center)
1994-03-01
An optimal operational advisory system is proposed to operate a brewery's energy supply plant rationally from an economic viewpoint. A mixed-integer linear programming problem is formulated so as to minimize the daily operational cost subject to constraints such as equipment performance characteristics, energy supply-demand relations, and some practical operational restrictions. This problem includes many unknown variables, and a hierarchical approach is adopted to derive numerical solutions. The optimal solution obtained by this method is presented to the plant operators to support their decision making. Through a numerical study for a real brewery plant, the possibility of saving operational cost is ascertained.
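At toy scale, the mixed-integer structure of the problem (binary on/off decisions per unit per period, plus continuous dispatch) can be shown by brute-force enumeration; the unit data below are invented, and a real plant of this kind needs a MILP solver plus the hierarchical decomposition the authors describe:

```python
from itertools import product

def daily_cost(schedule, demands, units):
    """Cost of one day's schedule: each period, the units switched on
    must cover demand; each running unit incurs a fixed cost plus a
    per-unit-output cost (cheapest-first dispatch). None if infeasible."""
    total = 0.0
    for hour, demand in enumerate(demands):
        on = [u for u, flag in zip(units, schedule[hour]) if flag]
        if sum(u["cap"] for u in on) < demand:
            return None
        total += sum(u["fixed"] for u in on)
        left = demand
        for u in sorted(on, key=lambda u: u["var"]):
            q = min(left, u["cap"])
            total += q * u["var"]
            left -= q
    return total

def optimize(demands, units):
    """Exhaustive search over on/off combinations per period -- the
    tiny-scale analogue of the paper's mixed-integer programme."""
    best = None
    per_hour = list(product([0, 1], repeat=len(units)))
    for sched in product(per_hour, repeat=len(demands)):
        c = daily_cost(sched, demands, units)
        if c is not None and (best is None or c < best[0]):
            best = (c, sched)
    return best
```

Even two units over two periods already produce 16 candidate schedules; the exponential growth with unit count and 24-hour horizons is exactly why the paper resorts to a hierarchical solution approach.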
NASA Astrophysics Data System (ADS)
Zain, N. N. M.; Abu Bakar, N. K.; Mohamad, S.; Saleh, N. Md.
2014-01-01
A greener method based on cloud point extraction was developed for removing phenol species including 2,4-dichlorophenol (2,4-DCP), 2,4,6-trichlorophenol (2,4,6-TCP) and 4-nitrophenol (4-NP) in water samples by using the UV-Vis spectrophotometric method. The non-ionic surfactant DC193C was chosen as an extraction solvent due to its low water content in a surfactant rich phase and it is well-known as an environmentally-friendly solvent. The parameters affecting the extraction efficiency such as pH, temperature and incubation time, concentration of surfactant and salt, amount of surfactant and water content were evaluated and optimized. The proposed method was successfully applied for removing phenol species in real water samples.
Optimizing the rotating point spread function by SLM aided spiral phase modulation
NASA Astrophysics Data System (ADS)
Baránek, M.; Bouchal, Z.
2014-12-01
We demonstrate the vortex point spread function (PSF) whose shape and the rotation sensitivity to defocusing can be controlled by a phase-only modulation implemented in the spatial or frequency domains. Rotational effects are studied in detail as a result of the spiral modulation carried out in discrete radial and azimuthal sections with different topological charges. As the main result, a direct connection between properties of the PSF and the parameters of the spiral mask is found and subsequently used for an optimal shaping of the PSF and control of its defocusing rotation rate. Experiments on the PSF rotation verify a good agreement with theoretical predictions and demonstrate potential of the method for applications in microscopy, tracking of particles and 3D imaging.
Melting point prediction employing k-nearest neighbor algorithms and genetic parameter optimization.
Nigsch, Florian; Bender, Andreas; van Buuren, Bernd; Tissen, Jos; Nigsch, Eduard; Mitchell, John B O
2006-01-01
We have applied the k-nearest neighbor (kNN) modeling technique to the prediction of melting points. A data set of 4119 diverse organic molecules (data set 1) and an additional set of 277 drugs (data set 2) were used to compare performance in different regions of chemical space, and we investigated the influence of the number of nearest neighbors using different types of molecular descriptors. To compute the prediction on the basis of the melting temperatures of the nearest neighbors, we used four different methods (arithmetic and geometric average, inverse distance weighting, and exponential weighting), of which the exponential weighting scheme yielded the best results. We assessed our model via a 25-fold Monte Carlo cross-validation (with approximately 30% of the total data as a test set) and optimized it using a genetic algorithm. Predictions for drugs based on drugs (separate training and test sets each taken from data set 2) were found to be considerably better [root-mean-squared error (RMSE)=46.3 degrees C, r2=0.30] than those based on nondrugs (prediction of data set 2 based on the training set from data set 1, RMSE=50.3 degrees C, r2=0.20). The optimized model yields an average RMSE as low as 46.2 degrees C (r2=0.49) for data set 1, and an average RMSE of 42.2 degrees C (r2=0.42) for data set 2. It is shown that the kNN method inherently introduces a systematic error in melting point prediction. Much of the remaining error can be attributed to the lack of information about interactions in the liquid state, which are not well-captured by molecular descriptors. PMID:17125183
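The exponential weighting scheme that performed best above can be sketched as follows. The descriptor vectors and Euclidean distance are simplified placeholders for the molecular descriptors and metric actually used; the `alpha` decay parameter is an assumption:

```python
import math

def knn_predict(train, query_desc, k=3, alpha=1.0):
    """k-nearest-neighbour melting-point prediction with exponential
    weighting: a neighbour at distance d contributes weight
    exp(-alpha * d). `train` is a list of (descriptor_vector,
    melting_point) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(train, key=lambda t: dist(t[0], query_desc))[:k]
    weights = [math.exp(-alpha * dist(d, query_desc)) for d, _ in nearest]
    return (sum(w * mp for w, (_, mp) in zip(weights, nearest))
            / sum(weights))
```

Because predictions are weighted averages of training melting points, the method can never extrapolate beyond the range of its neighbours, which is one source of the systematic error the authors note.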
Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Paiva, Kleber; Mantelli, Marcia
2011-01-01
Loop heat pipes (LHPs) have been used for thermal control of several NASA and commercial orbiting spacecraft. The LHP operating temperature is governed by the saturation temperature of its compensation chamber (CC), and most LHPs use the CC temperature for feedback control of the operating temperature. However, a thermal resistance exists between the heat source to be cooled and the LHP's CC, so even if the CC set point temperature is controlled precisely, the heat source temperature will still vary with its heat output. For most applications, controlling the heat source temperature is of most interest. A logical question to ask is: "Can the heat source temperature be used for feedback control of the LHP operation?" A test program has been implemented to answer this question. The objective is to investigate the LHP performance using the CC temperature and the heat source temperature for feedback control.
Design and development of a point focus concentrated PV module operating above 100 suns
NASA Astrophysics Data System (ADS)
Olah, S.; Ho, F.; Khemthong, S.
The objective was to design, develop, fabricate and performance-test a highly efficient, cost-effective concentrated photovoltaic module that can operate above 100-suns concentration, can be mass produced, and is reliable with minimal maintenance. A point-focus module design was chosen, operating at 120 suns using a molded acrylic Fresnel lens and passive cooling. Four modules were built and tested, and a manufacturing cost analysis was made. The module and components were designed with future high-volume production on automated equipment in mind. The module consisted of a lightweight module body fabricated from aluminum sheet stock, a lens parquet assembly, and 15 high-efficiency solar cell/heat sink assemblies connected in series to produce 55 W under normal operating conditions.
Science Operations for the 2008 NASA Lunar Analog Field Test at Black Point Lava Flow, Arizona
NASA Technical Reports Server (NTRS)
Garry W. D.; Horz, F.; Lofgren, G. E.; Kring, D. A.; Chapman, M. G.; Eppler, D. B.; Rice, J. W., Jr.; Nelson, J.; Gernhardt, M. L.; Walheim, R. J.
2009-01-01
Surface science operations on the Moon will require merging lessons from Apollo with new operation concepts that exploit the Constellation Lunar Architecture. Prototypes of lunar vehicles and robots are already under development and will change the way we conduct science operations compared to Apollo. To prepare for future surface operations on the Moon, NASA, along with several supporting agencies and institutions, conducted a high-fidelity lunar mission simulation with prototypes of the small pressurized rover (SPR) and unpressurized rover (UPR) (Fig. 1) at Black Point lava flow (Fig. 2), 40 km north of Flagstaff, Arizona from Oct. 19-31, 2008. This field test was primarily intended to evaluate and compare the surface mobility afforded by unpressurized and pressurized rovers, the latter critically depending on the innovative suit-port concept for efficient egress and ingress. The UPR vehicle transports two astronauts who remain in their EVA suits at all times, whereas the SPR concept enables astronauts to remain in a pressurized shirt-sleeve environment during long translations and while making contextual observations and enables rapid (less than or equal to 10 minutes) transfer to and from the surface via suit-ports. A team of field geologists provided realistic science scenarios for the simulations and served as crew members, field observers, and operators of a science backroom. Here, we present a description of the science team's operations and lessons learned.
Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration
NASA Technical Reports Server (NTRS)
Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.
1999-01-01
The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FL067 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free-stream Mach number, M(sub infinity), of 2.55 as well as at the design Mach number, M(sub infinity)=2.4. Data over a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Data at transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were gathered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.
Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Paiva, Kleber; Mantelli, Marcia
2011-01-01
The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly accomplished by cold biasing the reservoir and using electrical heaters to provide the required control power. Using this method, the loop operating temperature can be controlled within +/- 0.5 K. However, because of the thermal resistance that exists between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if the LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of most interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP has been carried out to investigate the effects on the LHP operation when the control temperature sensor is placed on the heat source instead of the reservoir. In these tests, the LHP reservoir is cold-biased and is heated by a control heater. Test results show that it is feasible to use the heat source temperature for feedback control of the LHP operation. Using this method, the heat source temperature can be maintained within a tight range for moderate and high powers. At low powers, however, temperature oscillations may occur due to interactions among the reservoir control heater power, the heat source mass, and the heat output from the heat source. In addition, the heat source temperature could temporarily deviate from its set point during fast thermal transients. The implication is that more sophisticated feedback control algorithms need to be implemented for LHP transient operation when the heat source temperature is used for feedback control.
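The feedback scheme described above can be caricatured with a toy lumped-parameter model: a PI controller drives the reservoir control heater from the heat-source temperature error, the cold-biased reservoir follows the heater power, and the source sits above the loop by its dissipation times a conduction resistance. Every thermal constant below is invented for illustration and bears no relation to the tested hardware.

```python
def simulate_lhp(setpoint=40.0, q_source=50.0, steps=8000, dt=0.1):
    """Explicit-Euler, two-node sketch of LHP set point control with the
    sensor on the *heat source* rather than on the reservoir."""
    R_sr = 0.2                    # source-to-loop resistance, K/W (assumed)
    t_res = t_src = 20.0          # reservoir and heat-source temperatures, C
    kp, ki, integ = 2.0, 0.05, 0.0
    for _ in range(steps):
        err = setpoint - t_src                    # sensor on the heat source
        integ += err * dt
        q_heater = min(max(kp * err + ki * integ, 0.0), 100.0)  # 0-100 W heater
        # cold-biased reservoir (10 C sink) warmed by the control heater
        t_res += dt / 5.0 * ((10.0 + 0.5 * q_heater) - t_res)
        # source relaxes toward the loop temperature plus its conduction offset
        t_src += dt / 20.0 * ((t_res + q_source * R_sr) - t_src)
    return t_src, t_res
```

In this caricature the controller holds the source at the set point by letting the reservoir settle below it, which mirrors the observation that the reservoir temperature must float when the source temperature is the controlled variable.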
NASA Astrophysics Data System (ADS)
Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément
2015-07-01
3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
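For a flavor of what geometric features of a 3D point neighborhood look like, the sketch below computes a few standard eigenvalue-based shape measures (linearity, planarity, sphericity, omnivariance). It assumes, as is common in this literature, that the framework's 21 features include measures of this family; the exact feature set is not reproduced here.

```python
import numpy as np

def geometric_features(points):
    """Eigenvalue-based shape features of one local neighborhood, derived
    from the eigenvalues l1 >= l2 >= l3 of the 3D covariance matrix."""
    c = points - points.mean(axis=0)
    cov = c.T @ c / len(points)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]    # descending eigenvalues
    l1, l2, l3 = np.maximum(ev, 1e-12)             # clamp to avoid divide-by-zero
    return {
        "linearity":    (l1 - l2) / l1,            # ~1 for line-like neighborhoods
        "planarity":    (l2 - l3) / l1,            # ~1 for planar neighborhoods
        "sphericity":   l3 / l1,                   # ~1 for volumetric neighborhoods
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
    }
```

Because these measures depend strongly on the neighborhood over which the covariance is computed, selecting an optimal neighborhood per point (as the framework does) directly changes the feature values fed to the classifier.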
Optimization of the thermogauge furnace for realizing high temperature fixed points
Wang, T.; Dong, W.; Liu, F.
2013-09-11
The thermogauge furnace is commonly used in many NMIs as a blackbody source for calibration of radiation thermometers. It can also be used for realizing high-temperature fixed points (HTFPs). In our experience, when realizing an HTFP the furnace needs to provide relatively good temperature uniformity to avoid possible damage to the HTFP. To improve temperature uniformity in the furnace, the furnace tube was machined near the tube ends, with the help of a simulation analysis in ANSYS Workbench. Temperature distributions before and after optimization were measured and compared at 1300 °C, 1700 °C and 2500 °C, which roughly correspond to Co-C (1324 °C), Pt-C (1738 °C) and Re-C (2474 °C), respectively. The results clearly indicate that machining the tube remarkably improves the temperature uniformity of the thermogauge furnace. A Pt-C high-temperature fixed point was subsequently realized in the modified thermogauge furnace; the plateaus were compared with those obtained using the old heater, and the results are presented in this paper.
Optimization of the Nano-Dust Analyzer (NDA) for operation under solar UV illumination
NASA Astrophysics Data System (ADS)
O'Brien, L.; Grün, E.; Sternovsky, Z.
2015-12-01
The performance of the Nano-Dust Analyzer (NDA) instrument is analyzed for close pointing to the Sun, finding the optimal field-of-view (FOV), arrangement of internal baffles and measurement requirements. The laboratory version of the NDA instrument was recently developed (O'Brien et al., 2014) for the detection and elemental composition analysis of nano-dust particles. These particles are generated near the Sun by the collisional breakup of interplanetary dust particles (IDP), and delivered to Earth's orbit through interaction with the magnetic field of the expanding solar wind plasma. NDA operates on the basis of impact ionization of the particle, collecting the generated ions in a time-of-flight fashion. The challenge in the measurement is that nano-dust particles arrive from a direction close to that of the Sun, so the instrument is exposed to intense ultraviolet (UV) radiation. The optical ray-tracing analysis shows that it is possible to suppress the number of UV photons scattering into NDA's ion detector to levels that allow both high signal-to-noise measurements and long-term instrument operation. Analysis results show that by avoiding direct illumination of the target, the photon flux reaching the detector is reduced by a factor of about 10³. Furthermore, by avoiding the target and also implementing a low-reflective coating, as well as an optimized instrument geometry consisting of an internal baffle system and a conical detector housing, the photon flux can be reduced by a factor of 10⁶, bringing it well below the operation requirement. The instrument's FOV is optimized for the detection of nano-dust particles while excluding the Sun. With the Sun in the FOV, the instrument can operate with reduced sensitivity and for a limited duration. The NDA instrument is suitable for future space missions to provide the unambiguous detection of nano-dust particles, to understand the conditions in the inner heliosphere and its temporal
Monitoring fleets of electric vehicles: optimizing operational use and maintenance
NASA Astrophysics Data System (ADS)
Lenain, P.; Kechmire, M.; Smaha, J. P.
Electric vehicles can make a substantial contribution to an improved urban environment. Reduced atmospheric pollution and noise emissions make the increased use of electric vehicles highly desirable and their suitability for dedicated fleets of vehicles is well recognized. As a result, a suitable system of supervision and management is necessary for fleet operators, to allow them to see the key parameters for the optimum use of the electric vehicle at all times. A computer-based data acquisition and analysis system will allow access to critical control parameters and display the operation of chargers and batteries in real time. Battery condition and charging can be followed. Information is stored in a database and can be readily analyzed and retrieved to manage extensive charging installations. In this paper, the operation of a battery/charger management system is described. The effective use of the system in electric utility vans is demonstrated.
Street curb recognition in 3d point cloud data using morphological operations
NASA Astrophysics Data System (ADS)
Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino
2015-04-01
Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane was carried out, passing from the 3D original data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm applied to the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to the independence of its scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero; it comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a
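The rasterization and morphology stage can be illustrated with a bare-bones sketch: project the points onto an XY grid keeping the maximum height per cell, then flag cells whose 3x3 morphological gradient matches a curb-like height step. The 10 cm cell size, the 5-30 cm step window, and the assumption of ground near z = 0 are illustrative choices, not the paper's parameters.

```python
import numpy as np

def _filter3(grid, fn):
    """3x3 sliding-window min/max: a plain-NumPy morphological filter."""
    p = np.pad(grid, 1, mode="edge")
    win = [p[i:i + grid.shape[0], j:j + grid.shape[1]]
           for i in range(3) for j in range(3)]
    return fn(np.stack(win), axis=0)

def curb_candidate_mask(points, cell=0.1, jump=(0.05, 0.3)):
    """Rasterize an Nx3 cloud onto the XY plane (max height per cell) and
    keep cells whose local morphological gradient looks like a curb step."""
    ij = np.rint((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros((ij[:, 0].max() + 1, ij[:, 1].max() + 1))  # ground assumed at 0
    for (i, j), z in zip(ij, points[:, 2]):
        grid[i, j] = max(grid[i, j], z)
    grad = _filter3(grid, np.max) - _filter3(grid, np.min)     # morphological gradient
    return (grad >= jump[0]) & (grad <= jump[1])
```

A real pipeline would follow this mask with the curvature/roughness classification described in the abstract; the mask only narrows the search to step-like cells.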
Optimization or Simulation? Comparison of approaches to reservoir operation on the Senegal River
NASA Astrophysics Data System (ADS)
Raso, Luciano; Bader, Jean-Claude; Pouget, Jean-Christophe; Malaterre, Pierre-Olivier
2015-04-01
Design of reservoir operation rules traditionally follows two approaches: optimization and simulation. In simulation, the analyst hypothesizes operation rules and selects them by what-if analysis, based on the effects of model simulations on different objective indicators. In optimization, the analyst selects operational objective indicators and obtains operation rules as an output. Optimization rules guarantee optimality, but they often require further model simplification and can be hard to communicate. Selecting the most appropriate approach depends on the system under analysis and on the analyst's expertise and objectives. We present advantages and disadvantages of both approaches, and we test them for the design of the Manantali reservoir operation rule, on the Senegal River, West Africa. We compare their performance in attaining the system objectives. Objective indicators are defined a priori in order to quantify the system performance. Results from this application are not universally generalizable, but they allow us to draw conclusions on this system and to give further information on the application of both approaches.
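The what-if side of the comparison can be sketched as a minimal mass-balance simulation of one candidate rule, reporting deficit and spill indicators for the analyst to inspect; all numbers below are invented and unrelated to the Manantali system.

```python
def simulate_rule(inflows, release_rule, capacity=100.0, demand=8.0):
    """Run one candidate operation rule over an inflow series and report
    two objective indicators (supply deficit and spilled volume), as in a
    what-if analysis. release_rule maps current storage to a target release."""
    storage, deficit, spill = 50.0, 0.0, 0.0
    for q in inflows:
        release = min(release_rule(storage), storage + q)  # cannot release more than held
        storage += q - release
        if storage > capacity:                             # overflow spills downstream
            spill += storage - capacity
            storage = capacity
        deficit += max(0.0, demand - release)              # unmet demand this step
    return {"deficit": deficit, "spill": spill}
```

Trying several `release_rule` candidates and comparing the returned indicators is exactly the hypothesize-and-select loop the abstract attributes to the simulation approach.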
Optimal operating frequency in wireless power transmission for implantable devices.
Poon, Ada S Y; O'Driscoll, Stephen; Meng, Teresa H
2007-01-01
This paper examines short-range wireless powering for implantable devices and shows that existing analysis techniques are not adequate to conclude the characteristics of power transfer efficiency over a wide frequency range. It shows, theoretically and experimentally, that the optimal frequency for power transmission in biological media can be in the GHz-range while existing solutions exclusively focus on the MHz-range. This implies that the size of the receive coil can be reduced by 10⁴ times, which enables the realization of fully integrated implantable devices. PMID:18003300
Field-scale operation of methane biofiltration systems to mitigate point source methane emissions.
Hettiarachchi, Vijayamala C; Hettiaratchi, Patrick J; Mehrotra, Anil K; Kumar, Sunil
2011-06-01
Methane biofiltration (MBF) is a novel low-cost technique for reducing low volume point source emissions of methane (CH₄). MBF uses a granular medium, such as soil or compost, to support the growth of methanotrophic bacteria responsible for converting CH₄ to carbon dioxide (CO₂) and water (H₂O). A field research program was undertaken to evaluate the potential to treat low volume point source engineered CH₄ emissions using an MBF at a natural gas monitoring station. A new comprehensive three-dimensional numerical model was developed incorporating advection-diffusive flow of gas, biological reactions and heat and moisture flow. The one-dimensional version of this model was used as a guiding tool for designing and operating the MBF. The long-term monitoring results of the field MBF are also presented. The field MBF operated with no control of precipitation, evaporation, and temperature, provided more than 80% of CH₄ oxidation throughout spring, summer, and fall seasons. The numerical model was able to predict the CH₄ oxidation behavior of the field MBF with high accuracy. The numerical model simulations are presented for estimating CH₄ oxidation efficiencies under various operating conditions, including different filter bed depths and CH₄ flux rates. The field observations as well as numerical model simulations indicated that the long-term performance of MBFs is strongly dependent on environmental factors, such as ambient temperature and precipitation. PMID:21414700
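A drastically simplified, hypothetical version of the transport model, steady 1-D diffusion with first-order CH₄ oxidation and no advection, heat or moisture coupling, already reproduces the qualitative result that oxidation efficiency grows with bed depth. The diffusivity `D` and rate constant `k` below are assumed round numbers, not values fitted to the field MBF.

```python
import numpy as np

def ch4_oxidation_efficiency(depth=0.5, D=2e-5, k=1e-3, n=50):
    """Solve D*C'' - k*C = 0 over the bed by finite differences, with a
    normalized inlet concentration C=1 at the bottom and C=0 at the surface,
    and report the fraction of the inlet CH4 flux oxidized in the bed."""
    dz = depth / n
    A = np.zeros((n - 1, n - 1))          # tridiagonal interior system
    b = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = -2 * D / dz**2 - k
        if i > 0:
            A[i, i - 1] = D / dz**2
        if i < n - 2:
            A[i, i + 1] = D / dz**2
    b[0] = -D / dz**2 * 1.0               # bottom boundary C(0) = 1
    c = np.linalg.solve(A, b)
    flux_in = D * (1.0 - c[0]) / dz       # diffusive flux entering the bed
    flux_out = D * c[-1] / dz             # flux escaping at the surface
    return 1.0 - flux_out / flux_in
```

With these assumed constants a half-meter bed oxidizes over 80% of the incoming flux, consistent in spirit with the field observations; halving the depth visibly lowers the efficiency, which is the design sensitivity the full 3-D model is used to quantify.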
Optimizing operational flexibility and enforcement liability in Title V permits
McCann, G.T.
1997-12-31
Now that most states have interim or full approval of the portions of their state implementation plans (SIPs) implementing Title V (40 CFR Part 70) of the Clean Air Act Amendments (CAAA), most sources which require a Title V permit have submitted or are well on the way to submitting a Title V operating permit application. Numerous hours have been spent preparing applications to ensure the administrative completeness of the application and operational flexibility for the facility. Although much time and effort has been spent on Title V permit applications, the operating permit itself is the final goal. This paper outlines the major Federal requirements for Title V permits as given in the CAAA at 40 CFR 70.6, Permit Content. These Federal requirements and how they will affect final Title V permits and facilities are discussed. This paper provides information concerning the Federal requirements for Title V permits and suggestions on how to negotiate a Title V permit to maximize operational flexibility and minimize enforcement liability.
Cost optimization for series-parallel execution of a collection of intersecting operation sets
NASA Astrophysics Data System (ADS)
Dolgui, Alexandre; Levin, Genrikh; Rozin, Boris; Kasabutski, Igor
2016-05-01
A collection of intersecting sets of operations is considered. These sets of operations are performed successively. The operations of each set are activated simultaneously. Operation durations can be modified. The cost of each operation decreases with the increase in operation duration. In contrast, the additional expenses for each set of operations are proportional to its time. The problem of selecting the durations of all operations that minimize the total cost, under a constraint on the completion time of the whole collection of operation sets, is studied. The mathematical model and a method to solve this problem are presented. The proposed method is based on a combination of Lagrangian relaxation and dynamic programming. The results of numerical experiments that illustrate the performance of the proposed method are presented. This approach has been used for the optimization of multi-spindle machines and machining lines, but the problem is common in engineering optimization, and thus the techniques developed could be useful for other applications.
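For the non-intersecting special case, the dynamic-programming half of the method can be sketched as follows. The Lagrangian relaxation step and the coupling between intersecting sets are deliberately omitted, and the cost tables are illustrative: `set_costs[s][t]` is assumed to already combine the duration-decreasing operation costs and the time-proportional set expense for running set `s` for integer duration `t`.

```python
def allocate_times(set_costs, total_time):
    """Choose an integer duration for each operation set so the summed
    durations fit the deadline at minimum total cost (a textbook
    resource-allocation DP over discretized time)."""
    INF = float("inf")
    best = [0.0] + [INF] * total_time      # best[t]: min cost using exactly time t
    choice = []
    for costs in set_costs:
        new, pick = [INF] * (total_time + 1), [None] * (total_time + 1)
        for t in range(total_time + 1):
            for d in range(1, min(len(costs), t + 1)):   # duration d for this set
                cand = best[t - d] + costs[d]
                if cand < new[t]:
                    new[t], pick[t] = cand, d
        best = new
        choice.append(pick)
    t = min(range(total_time + 1), key=lambda i: best[i])  # best completion time
    total_cost, durations = best[t], []
    for pick in reversed(choice):                          # backtrack the choices
        d = pick[t]
        durations.append(d)
        t -= d
    return total_cost, durations[::-1]
```

In the paper's setting the Lagrange multiplier on the deadline lets the per-set subproblems decompose; the DP above plays the role of the inner solver.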
Utilities optimize operations by cycling base-load fossil units
Not Available
1986-05-01
In the summer of 1985, an East Coast utility ''gave away'' approximately 200 MW of electricity. The utility found itself having to operate, at full capability, a 400-MW, 20-yr-old fossil station when its power pool had requested only half that load. The power went into the network and was sold, but another member of the pool got the credit. This situation developed because the utility had two stations it had to operate in the base-load mode: One was brand new, the other could operate economically only at full capacity. This predicament is becoming commonplace for many utilities with one or more base-load units that have recently come on-line. Utilities are using their older fossil units to satisfy generating capacity during peak-demand periods by introducing them to cyclic operation. For example, in 1987, when Duke Power Co's Catawba 2 nuclear station is scheduled for commercial operation, approximately 50% of the utility's system will be base-load nuclear generation. During periods of low system demand, Duke's larger fossil units will be required either to attain sufficiently low loads or to cycle on and off daily to meet system dispatch requirements. A figure shows how Duke's fossil units will have to meet daily demand projected for the summer of 1988. Of course, cycling a fossil plant does not involve simply turning the boiler off at 5 p.m. and switching it on again at 9 a.m. This action creates stress on equipment that can lead to severe availability problems. Utilities that opt to cycle all or some of their units do so only after careful analysis. This article describes the more serious problems associated with it.
Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS
Carlos A. Fernandez
2014-09-15
EGS field projects have not sustained production at rates greater than ½ of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project, to study and optimize the physicochemical properties of these fracturing fluids over a range of reservoir conditions. Barrier G is the primary GTO barrier addressed by this project, which also supports addressing barriers D, E and I.
Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS
Fernandez, Carlos A.
2013-09-25
EGS field projects have not sustained production at rates greater than ½ of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project, to study and optimize the physicochemical properties of these fracturing fluids over a range of reservoir conditions. Barrier G is the primary GTO barrier addressed by this project, which also supports addressing barriers D, E and I.
Optimizing wartime en route nursing care in Operation Iraqi Freedom.
Nagra, Michael
2011-01-01
Throughout combat operations in Iraq and Afghanistan, Army nurses have served in a new role--providing en route care in military helicopters for patients being transported to a higher level of care. From aid stations on the battlefield where forward surgical teams save lives, limbs, and eyesight, to the next higher level of care at combat support hospitals, these missions require specialized nursing skills to safely care for the high acuity patients. Little information exists about patient outcomes associated with the nursing assessment and care provided during helicopter medical evacuation (MEDEVAC) of such unstable patients and the consequent impact on the patient's condition after transport. In addition, there are no valid and reliable tools to capture care delivery, patient outcomes, and associated nursing workload and staffing requirements. During Operation Iraqi Freedom, a new process was implemented over a 2-year period to measure nursing related patient outcomes during MEDEVAC, and to capture the nursing workload. The use of standard metrics to establish patient priorities and improve nursing care during MEDEVAC allowed the level II forward surgical teams or their equivalents and level III combat support hospitals to make structural, process, and outcome improvements in the en route care programs throughout the Iraq theater of operations. Implications of this program were broad, including establishment of a process to support decision making based on data driven metrics, improvement of quality of nursing care, and defining nurse staffing requirements. PMID:22124873
Critical Point Facility (CPF) Group in the Spacelab Payload Operations Control Center (SL POCC)
NASA Technical Reports Server (NTRS)
1992-01-01
The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPF) group in the SL POCC during the STS-42, IML-1 mission.
Critical Point Facility (CPF) Team in the Spacelab Payload Operations Control Center (SL POCC)
NASA Technical Reports Server (NTRS)
1992-01-01
The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPF) team in the SL POCC during the IML-1 mission.
Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point.
Sánchez, Joan-Pau; McInnes, Colin R
2015-01-01
Within the context of anthropogenic climate change, but also considering the Earth's natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth's radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth's climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes. PMID:26309047
Determining the optimal operator allocation using a three-phase methodology
NASA Astrophysics Data System (ADS)
Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Ab Rahman, Mohd Nizam
2014-09-01
This paper presents an operator allocation decision method for labor-intensive manufacturing systems using a three-phase methodology. A two-phase methodology from the literature has been extended to three phases, in which operators' performance is evaluated before the allocation is made. The evaluation of operators' performance in Phase 1 is treated as a prerequisite of the operator allocation decision because it affects the production system's performance. In Phase 2, the use of computer simulation offers flexibility in determining the inputs and outputs of each operator allocation alternative. Finally, in Phase 3 the optimal operator allocation is selected. The combination of these three phases is essential because it includes all important factors. Hence, it can assist the management of manufacturing companies, especially SMEs, in determining an optimal operator allocation. Based on these findings, the three-phase methodology improves on the current operator allocation.
Liang, Feng; Guo, Yuanyuan; Fung, Richard Y K
2015-11-01
The operation theatre is one of the most significant assets in a hospital, being both its greatest source of revenue and its largest cost unit. This paper focuses on surgery scheduling optimization, one of the most crucial tasks in operation theatre management. A combined scheduling policy composed of three simple scheduling rules is proposed to optimize the scheduling performance of the operation theatre. A simulation model of the surgery scheduling system is built on real-life scenarios. With two optimization objectives, the response surface method is adopted to search for the optimal weights of the simple rules in the combined scheduling policy. Moreover, the weight configuration can be revised to cope with dispatching dynamics according to real-time changes at the operation theatre. Finally, a performance comparison between the proposed combined scheduling policy and a tabu search algorithm indicates that the combined scheduling policy is capable of sequencing surgery appointments more efficiently. PMID:26385551
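A combined policy of this kind can be sketched as a weighted scoring rule over the waiting cases. The three rules (shortest processing time, earliest due date, first come first served), the weights, and the surgery tuples below are illustrative stand-ins, not the paper's configuration:

```python
# Weighted combination of three simple dispatching rules; the weights are
# what a response-surface search would tune. Lower score = scheduled sooner.
def pick_next(queue, weights=(0.5, 0.3, 0.2)):
    """queue: list of (duration_min, due_day, arrival_order) tuples."""
    w_spt, w_edd, w_fcfs = weights
    def score(surgery):
        dur, due, arr = surgery
        return w_spt * dur + w_edd * due + w_fcfs * arr
    return min(queue, key=score)

# Three hypothetical waiting surgeries
nxt = pick_next([(120, 3, 1), (45, 1, 2), (90, 2, 3)])
```

Re-running the tuning loop with different objective weightings is what lets the policy adapt to real-time changes at the theatre.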
Enabling a viable technique for the optimization of LNG carrier cargo operations
NASA Astrophysics Data System (ADS)
Alaba, Onakoya Rasheed; Nwaoha, T. C.; Okwu, M. O.
2016-07-01
In this study, we optimize the loading and discharging operations of the Liquefied Natural Gas (LNG) carrier. First, we identify the required precautions for LNG carrier cargo operations. Next, we prioritize these precautions using the analytic hierarchy process (AHP) and experts' judgments, in order to optimize the operational loading and discharging exercises of the LNG carrier, prevent system failure and human error, and reduce the risk of marine accidents. Thus, the objective of our study is to increase the level of safety during cargo operations.
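The AHP prioritization step can be sketched briefly. The sketch uses the geometric-mean approximation of the AHP priority vector rather than the principal eigenvector, and the pairwise judgments and precaution names are hypothetical, not those elicited from the study's experts:

```python
import math

def ahp_priorities(pairwise):
    """Geometric-mean approximation of the AHP priority vector."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 1-9 scale judgments over three illustrative precautions:
# [mooring-line checks, cargo-tank pressure watch, ESD system test]
pairwise = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
w = ahp_priorities(pairwise)   # priority weights, summing to 1
```

In practice a consistency-ratio check on the pairwise matrix would precede any use of the weights.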
Tripod operators for efficient search of point cloud data for known surface shapes
NASA Astrophysics Data System (ADS)
Pipitone, Frank; Gilbreath, Charmaine; Bonanno, David
2012-06-01
We address the problem of searching large amounts of 3D point set data for specific objects of interest, as characterized by their surface shape. Motivating applications include the detection of ambush weapons from a convoy and the search for objects of interest on the ground from an aircraft. Such data can occur in the form of relatively unstructured point sets or range images, and can be derived from a variety of sensors. We study here the performance of Tripod Operators (TOs) on synthetic range image data containing the shape of an oil drum: a cylinder with a planar top. Tripod Operators are an efficient method of extracting coordinate-invariant shape information from surface shape representations using discrete samples extracted in a specially constrained manner. They can be used in a variety of ways as components of a system which performs detection, recognition and localization of objects based on their surface shape. We present experimental results which characterize the approximate accuracy of detection of the test shape as a function of the accuracy of the surface shape data. This is motivated by the need for an estimate of the required accuracy of 3D surveillance data to enable detection of specific shapes.
Methods and devices for optimizing the operation of a semiconductor optical modulator
Zortman, William A.
2015-07-14
A semiconductor-based optical modulator includes a control loop to control and optimize the modulator's operation for relatively high data rates (above 1 GHz) and/or relatively high voltage levels. Both the amplitude of the modulator's driving voltage and the bias of the driving voltage may be adjusted using the control loop. Such adjustments help to optimize the operation of the modulator by reducing the number of errors present in a modulated data stream.
NASA Astrophysics Data System (ADS)
Lin, Wenwen; Yu, D. Y.; Wang, S.; Zhang, Chaoyong; Zhang, Sanqiang; Tian, Huiyu; Luo, Min; Liu, Shengqiang
2015-07-01
In addition to energy consumption, the use of cutting fluids, deposition of worn tools and certain other manufacturing activities can have environmental impacts. All these activities cause carbon emission directly or indirectly; therefore, carbon emission can be used as an environmental criterion for machining systems. In this article, a direct method is proposed to quantify the carbon emissions in turning operations. To determine the coefficients in the quantitative method, real experimental data were obtained and analysed in MATLAB. Moreover, a multi-objective teaching-learning-based optimization algorithm is proposed, and two objectives to minimize carbon emissions and operation time are considered simultaneously. Cutting parameters were optimized by the proposed algorithm. Finally, the analytic hierarchy process was used to determine the optimal solution, which was found to be more environmentally friendly than the cutting parameters determined by the design of experiments method.
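The direct quantification idea amounts to an emission-factor tally per operation. The factors below are illustrative placeholders, not the coefficients fitted from the experiments:

```python
# Direct carbon tally for one turning operation: electricity, cutting
# fluid, and tool-wear shares. All emission factors are made-up
# placeholders for illustration.
EF_ELEC = 0.6      # kg CO2 per kWh of electricity (illustrative)
EF_FLUID = 2.85    # kg CO2 per liter of cutting fluid (illustrative)
EF_TOOL = 29.6     # kg CO2 per tool edge fully consumed (illustrative)

def turning_carbon(energy_kwh, fluid_l, tool_share):
    """Total kg CO2 attributed to one operation; tool_share is the
    fraction of a tool edge's life used up by the cut."""
    return EF_ELEC * energy_kwh + EF_FLUID * fluid_l + EF_TOOL * tool_share

total = turning_carbon(energy_kwh=1.2, fluid_l=0.05, tool_share=0.01)
```

An optimizer then trades a total like this against operation time when choosing cutting parameters.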
NASA Astrophysics Data System (ADS)
Bukley, Jerry
The Phillips Laboratory High Altitude Balloon Experiment (HABE) has been developed as a cost-effective means of testing satellite Acquisition, Tracking and Pointing (ATP) technologies in an environment similar to space. The experiment comprises a 115,000 cubic meter helium balloon which lifts a 2,900 kg ATP experiment package to an altitude of 26 km. A major advantage of the concept is the flexibility in placement and timing afforded a balloon over a satellite. This flexibility allows HABE to engage targets-of-opportunity launched from the domestic ranges without requiring a dedicated or closely coordinated launch time. The placement of HABE is optimized to maximize active track time. A routine was developed to raster scan the mathematical model of a flight corridor while accumulating the intervals of continuous engagement that satisfy a list of ten rules. Although successful, this method is unable to place priorities or make trades based on the relative importance of the rules. The use of fuzzy logic in the form of approximate reasoning to evaluate the rules, while also considering goals, enables key qualitative considerations to be factored into the overall evaluation. This paper describes the application of fuzzy logic to data analysis and compares the results to conventional Boolean techniques.
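The difference between Boolean and approximate-reasoning rule evaluation can be illustrated in miniature. The sigmoid memberships and thresholds below are invented, not the ten HABE rules:

```python
import math

def boolean_eval(values, thresholds):
    """Conventional Boolean AND: every rule must pass outright."""
    return all(v >= t for v, t in zip(values, thresholds))

def fuzzy_eval(values, thresholds, softness=5.0):
    """Approximate reasoning: grade each rule with a sigmoid membership
    and aggregate with min, so a near-miss degrades rather than vetoes."""
    mus = [1.0 / (1.0 + math.exp(-softness * (v - t)))
           for v, t in zip(values, thresholds)]
    return min(mus)

# One rule comfortably satisfied, one missed by a hair (toy numbers)
values, thresholds = [0.9, 0.48], [0.5, 0.5]
b = boolean_eval(values, thresholds)   # hard fail
f = fuzzy_eval(values, thresholds)     # still a moderate score
```

The graded score is what lets relative rule importance and goals enter the placement trade, which the Boolean scheme cannot express.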
NASA Astrophysics Data System (ADS)
Zamora, A.; Gutierrez, A. E.; Velasco, A. A.
2014-12-01
2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can highly impact the resulting model. Through the implementation of an interior-point method constrained optimization technique, we improve the 2-D and 3-D models of Earth structures representing known density contrasts mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by getting rid of unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible) given the reduction of the solution space.
Vogt, Mark; van Gerwen, Dennis J; van den Dobbelsteen, John J; Hagenaars, Martin
2016-01-01
Performance of neuraxial blockade using a midline approach can be technically difficult. It is therefore important to optimize factors that are under the influence of the clinician performing the procedure. One of these factors might be the chosen point of insertion of the needle. Surprisingly few data exist on where between the tips of two adjacent spinous processes the needle should be introduced. A geometrical model was adopted to gain more insight into this issue. Spinous processes were represented by parallelograms. The length, the steepness relative to the skin, and the distance between the parallelograms were varied. The influence of the chosen point of insertion of the needle on the range of angles at which the epidural and subarachnoid space could be reached was studied. The optimal point of insertion was defined as the point where this range is the widest. The geometrical model clearly demonstrated that the range of angles at which the epidural or subarachnoid space can be reached is dependent on the point of insertion between the tips of the adjacent spinous processes. The steeper the spinous processes run, the more cranial the point of insertion should be. Assuming that the model is representative for patients, the performance of neuraxial blockade using a midline approach might be improved by choosing the optimal point of insertion. PMID:27570462
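A drastically reduced 2-D stand-in for this geometrical model can convey the result: treat the interspinous space as a fixed aperture at some depth and sweep the skin insertion point for the widest fan of needle angles. All dimensions are hypothetical, and the steepness-dependent parallelogram geometry of the actual model is not reproduced:

```python
import math

def angle_window(x_insert, aperture=(2.0, 3.0), depth=4.0):
    """Width (degrees) of the fan of needle angles that passes from skin
    insertion point x_insert (cm) through an interspinous aperture lying
    between aperture[0] and aperture[1] cm at the given depth (cm)."""
    a, b = aperture
    low = math.degrees(math.atan2(a - x_insert, depth))
    high = math.degrees(math.atan2(b - x_insert, depth))
    return high - low

# Sweep candidate insertion points between the two process tips (0-5 cm)
candidates = [x * 0.1 for x in range(51)]
best_x = max(candidates, key=angle_window)
```

In this symmetric toy the optimum sits over the middle of the aperture; in the paper's model, steeper processes skew the aperture and shift the optimal insertion point cranially.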
Control and operation cost optimization of the HISS cryogenic system
Porter, J.; Bieser, F.; Anderson, D.
1983-08-01
The Heavy Ion Spectrometer System (HISS) relies upon superconducting coils of cryostable design to provide a maximum particle bending field of 3 tesla. A previous paper describes the cryogenic facility including helium refrigeration and gas management. This paper discusses a control strategy which has allowed full time unattended operation, along with significant nitrogen and power cost reductions. Reduction of liquid nitrogen consumption has been accomplished by making use of the sensible heat available in the cold exhaust gas. Measured nitrogen throughput agrees with calculations for sensible heat utilization of zero to 70%. Calculated consumption saving over this range is 40 liters per hour for conductive losses to the supports only. The measured throughput differential for the total system is higher.
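The nitrogen-side arithmetic can be sketched with textbook N2 properties; the latent heat, gas specific heat, and liquid density below are standard handbook values, and the 5 kW heat load is an arbitrary example rather than the HISS figure:

```python
# LN2 consumption vs. fraction of exhaust-gas sensible heat recovered.
LATENT = 199.0    # kJ/kg, heat of vaporization of N2 (textbook value)
CP_GAS = 1.04     # kJ/(kg K), specific heat of N2 gas (textbook value)
DT = 216.0        # K, warm-up span from 77 K to ~293 K
RHO = 0.807       # kg/L, density of liquid N2

def ln2_liters_per_hour(heat_load_w, utilization):
    """Liters/hour of LN2 needed to absorb a steady heat load when a
    fraction `utilization` (0..1) of the sensible heat is used."""
    kj_per_kg = LATENT + utilization * CP_GAS * DT
    kg_per_s = (heat_load_w / 1000.0) / kj_per_kg
    return kg_per_s * 3600.0 / RHO

# Saving from 0% -> 70% utilization at a hypothetical 5 kW load
saving = ln2_liters_per_hour(5000.0, 0.0) - ln2_liters_per_hour(5000.0, 0.7)
```

Because the sensible heat available (cp x dT) is comparable to the latent heat itself, even partial recovery cuts throughput substantially, which is the effect the paper measures.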
NASA Astrophysics Data System (ADS)
Jinxian, Qiu; Jilin, Cheng; Jinyao, Luo; Rentian, Zhang; Lihua, Zhang; Yi, Gong
2010-06-01
This paper puts forward 45 combination schemes of different pump types under different daily-average heads and operation loads at the Jiangdu Pumping Station. With the minimum electricity consumption cost selected as the objective function for each scheme, the paper gives the results of variable-speed optimal operation using dynamic programming, both with and without time-sharing electricity prices considered, together with the results of fixed-speed conventional operation under time-sharing electricity prices. According to the unit energy consumption cost, the paper then compares the effectiveness of the different pump types under variable-speed optimal operation. The conclusions can offer a decision-making basis for optimization research on pumping stations that considers time-sharing electricity prices and the tide levels of the Yangtze River, and a reference for the retrofitting and economical operation of large and medium-sized pumping stations.
Optimization of the design and operation of FAIMS analyzers.
Shvartsburg, Alexandre A; Tang, Keqi; Smith, Richard D
2005-01-01
Field asymmetric waveform ion mobility spectrometry (FAIMS) holds significant promise for post-ionization separations in conjunction with mass-spectrometric analyses. However, a limited understanding of the fundamentals of FAIMS analyzers has made their design and operation largely an empirical exercise. Recently, we developed an a priori simulation of FAIMS that accounts for both ion diffusion (including anisotropic components) and Coulomb repulsion, and validated it by extensive comparisons with FAIMS/MS data. Here it is corroborated further by FAIMS-only measurements, and applied to explore how key instrumental parameters (analytical gap width and length, waveform frequency and profile, the identity and flow speed of buffer gas) affect FAIMS response. We find that the trade-off between resolution and sensitivity can be managed by varying gap width, RF frequency, and (in certain cases) buffer gas, with equivalent outcome. In particular, the resolving power can be approximately doubled compared to "typical" conditions. Throughput may be increased by either accelerating the gas flow (preferable) or shortening the device, but below certain minimum residence times performance deteriorates. Bisinusoidal and clipped-sinusoidal waveforms have comparable merit, but switching to rectangular waveforms would improve resolution and/or sensitivity. For any waveform profile, a ratio of two between the voltages in the high and low portions of the cycle produces the best performance. PMID:15653358
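The rectangular-waveform condition can be illustrated directly: with the high and low voltages in a 2:1 ratio, the high segment must occupy one third of the period so that the field averages to zero over a cycle. The amplitude and sample count are arbitrary illustration values:

```python
# Ideal rectangular FAIMS waveform with a 2:1 high-to-low voltage ratio.
def rect_waveform(dispersion_v=1000.0, samples=300):
    """One period: +V for 1/3 of the cycle, -V/2 for the remaining 2/3,
    so the time-averaged voltage over the cycle is zero."""
    n_high = samples // 3
    return [dispersion_v] * n_high + [-dispersion_v / 2.0] * (samples - n_high)

wave = rect_waveform()
mean_v = sum(wave) / len(wave)   # zero DC component by construction
```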
Optimal edge detection using multiple operators for image understanding
NASA Astrophysics Data System (ADS)
Giannarou, Stamatia; Stathaki, Tania
2011-12-01
Extraction of features, such as edges for the understanding of aerial images, has been an important objective since the early days of remote sensing. This work aims at describing a new framework which allows for the quantitative combination of a preselected set of edge detectors based on the correspondence between their outcomes. This is inspired from the problem that despite the enormous amount of literature on edge detection techniques, there is no single technique that performs well in every possible image context. Two approaches are proposed for this purpose. The first approach is the well-known receiver operating characteristics analysis which is introduced for a sound quality evaluation of the edge maps estimated by combining different edge detectors. In the second approach, the so-called kappa statistics are employed in a novel fashion to amalgamate the above-mentioned selected edge maps to form an improved final edge image. This method is unique in the sense that the balance between the false detections (false positives and false negatives) is explicitly determined in advance and incorporated in the proposed method in a mathematical fashion. For the performance evaluation of the proposed techniques, a sample set of the RADIUS/DARPA-IU Fort Hood aerial image database with known ground truth has been used.
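As a minimal illustration of the second approach, Cohen's kappa between two binary edge maps can be computed directly; the two 8-pixel maps below are toy data, not outputs of actual detectors:

```python
def cohen_kappa(map_a, map_b):
    """Cohen's kappa between two binary edge maps (flattened 0/1 lists):
    observed agreement corrected for chance agreement."""
    n = len(map_a)
    po = sum(1 for a, b in zip(map_a, map_b) if a == b) / n
    pa1 = sum(map_a) / n                     # edge rate, detector A
    pb1 = sum(map_b) / n                     # edge rate, detector B
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)   # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

# Two toy edge maps from hypothetical detectors
k = cohen_kappa([1, 1, 0, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 1])
```

A combination scheme would then weight each detector's map by such pairwise agreement before fusing them into the final edge image.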
NASA Astrophysics Data System (ADS)
Swaidan, Waleeda; Hussin, Amran
2015-10-01
Most direct methods solve finite-time-horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. The method uses the quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted into quadratic programming constraints using the Haar wavelet collocation method. The proposed method has been applied to solve the optimal control of a multi-item inventory model. The accuracy of the states, controls and cost can be improved by increasing the Haar wavelet resolution.
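For reference, the Haar basis from which the operational matrix is built can be sketched as follows; this constructs only the m x m collocation-sampled Haar matrix, not the integration operational matrix or the resulting quadratic program:

```python
def haar_matrix(m):
    """m x m Haar basis sampled at collocation points t_j = (j + 0.5)/m.
    Row 0 is the scaling function; row i = 2^level + k is the wavelet at
    that level and shift k."""
    def h(i, t):
        if i == 0:
            return 1.0
        j = 1
        while 2 * j <= i:      # j = 2^level, largest power of two <= i
            j *= 2
        k = i - j              # shift within the level
        a, b, c = k / j, (k + 0.5) / j, (k + 1) / j
        return 1.0 if a <= t < b else (-1.0 if b <= t < c else 0.0)
    return [[h(i, (jj + 0.5) / m) for jj in range(m)] for i in range(m)]

H = haar_matrix(4)
```

Expanding states and controls in this basis is what turns the differential constraints into the algebraic ones of the quadratic program; doubling m refines the resolution, as the abstract notes.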
Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David
2016-01-01
Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.
System and method of cylinder deactivation for optimal engine torque-speed map operation
Sujan, Vivek A; Frazier, Timothy R; Follen, Kenneth; Moon, Suk-Min
2014-11-11
This disclosure provides a system and method for determining cylinder deactivation in a vehicle engine to optimize fuel consumption while providing the desired or demanded power. In one aspect, data indicative of terrain variation is utilized in determining a vehicle target operating state. An optimal active cylinder distribution and corresponding fueling is determined from a recommendation from a supervisory agent monitoring the operating state of the vehicle of a subset of the total number of cylinders, and a determination as to which number of cylinders provides the optimal fuel consumption. Once the optimal cylinder number is determined, a transmission gear shift recommendation is provided in view of the determined active cylinder distribution and target operating state.
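The cylinder-count selection can be sketched as a small discrete search. The quadratic per-cylinder fuel model and its coefficients below are invented for illustration and are not the disclosed engine maps:

```python
# Pick the active-cylinder count that meets demanded power at the lowest
# fuel rate. Per-cylinder fuel = overhead + linear + quadratic in load
# (all coefficients hypothetical).
def total_fuel(n_active, power_kw, max_per_cyl=40.0):
    """Fuel rate (arbitrary units) with n_active cylinders sharing
    power_kw; infinite if the per-cylinder load limit is exceeded."""
    load = power_kw / n_active
    if load > max_per_cyl:
        return float("inf")
    return n_active * (1.0 + 0.1 * load + 0.002 * load ** 2)

def best_cylinder_count(power_kw, n_total=6):
    return min(range(1, n_total + 1), key=lambda n: total_fuel(n, power_kw))

n_opt = best_cylinder_count(60.0)
```

The trade captured here is the same one the supervisory agent faces: fewer active cylinders cut per-cylinder overhead but push each cylinder up its quadratic loss curve, so an intermediate count wins at moderate demand.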
Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M
2016-04-01
Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter
Darocha, Tomasz; Gałązkowski, Robert; Sobczyk, Dorota; Żyła, Zbigniew; Drwiła, Rafał
2014-12-01
Point-of-care ultrasound examination has been increasingly widely used in pre-hospital care. The use of ultrasound in rescue medicine allows for a quick differential diagnosis, identification of the most important medical emergencies and immediate introduction of targeted treatment. Performing and interpreting a pre-hospital ultrasound examination can improve the accuracy of diagnosis and thus reduce mortality. The authors' own experience is presented in this paper, consisting in the use of a portable, hand-held ultrasound apparatus during rescue operations on board a Polish Medical Air Rescue helicopter. The possibility of using an ultrasound apparatus during helicopter rescue service allows for a full professional evaluation of the patient's health condition and enables the patient to be brought to a center with the most appropriate facilities for their condition. PMID:26674604
Mullis, A M
2011-06-01
We use a phase-field model for the growth of dendrites in dilute binary alloys under coupled thermosolutal control to explore the dependence of the dendrite tip velocity and radius of curvature upon undercooling, Lewis number (ratio of thermal to solutal diffusivity), alloy concentration, and equilibrium partition coefficient. Constructed in the quantitatively valid thin-interface limit, the model uses advanced numerical techniques such as mesh adaptivity, multigrid, and implicit time stepping to solve the nonisothermal alloy solidification problem for material parameters that are realistic for metals. From the velocity and curvature data we estimate the dendrite operating point parameter σ*. We find that σ* is nonconstant and, over a wide parameter space, displays first a local minimum and then a local maximum as the undercooling is increased. This is contrasted with the similar behavior that simple marginal stability models, which assume constant σ*, predict for the radius of curvature. PMID:21797374
On the fixed points of monotonic operators in the critical case
NASA Astrophysics Data System (ADS)
Engibaryan, N. B.
2006-10-01
We consider the problem of constructing positive fixed points x of monotonic operators φ acting on a cone K in a Banach space E. We assume that ‖φx‖ ≤ ‖x‖ + γ, γ > 0, for all x ∈ K. In the case when φ has a so-called non-trivial dissipation functional we construct a solution in an extension of E, which is a Banach space or a Fréchet space. We consider examples in which we prove the solubility of a conservative integral equation on the half-line with a sum-difference kernel, and of a non-linear integral equation of Urysohn type in the critical case.
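The successive-approximation idea behind such fixed-point constructions can be sketched with a discretized Urysohn-type operator on the cone of nonnegative grid functions; the kernel, grid, and free term below are illustrative, not the paper's critical-case examples:

```python
import math

N = 50
grid = [i / N for i in range(N)]

def phi(x):
    """Discretized Urysohn-type operator with a nonnegative kernel and a
    concave nonlinearity; it is monotone: a pointwise larger x gives a
    pointwise larger image."""
    return [0.1 + 0.5 / N * sum(math.exp(-abs(t - s)) * math.sqrt(xs)
                                for s, xs in zip(grid, x))
            for t in grid]

# Iterate x_{n+1} = phi(x_n) from the apex of the cone; monotonicity of
# phi makes the iterates an increasing sequence toward a fixed point.
x = [0.0] * N
for _ in range(60):
    x = phi(x)
```

For this subcritical toy kernel the iteration converges inside E itself; the paper's point is what happens in the critical case, where the solution may only exist in an extension of the space.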
GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization
NASA Astrophysics Data System (ADS)
Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie
2016-04-01
Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected. Error accumulation during simulations decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined for representing the dissimilarity between a realization and the training image (TI) in GOSIM, which is minimized by a multi-scale EM-like iterative method that contains an E-step and M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data. A modified PatchMatch algorithm is used to accelerate the search process in the E-step. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. During categorical data simulation, k-means clustering is used for transforming the obtained continuous realization into a categorical realization. The qualitative and quantitative comparison results of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has a better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts the time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain a good simulation quality. The study shows that large iteration numbers at coarser scales increase simulation quality, while small iteration numbers at finer scales significantly save simulation time.
Operational point of neural cardiovascular regulation in humans up to 6 months in space.
Verheyden, B; Liu, J; Beckers, F; Aubert, A E
2010-03-01
Entering weightlessness affects central circulation in humans by enhancing venous return and cardiac output. We tested whether the operational point of neural cardiovascular regulation in space sets accordingly to adopt a level close to that found in the ground-based horizontal position. Heart rate (HR), finger blood and brachial blood pressure (BP), and respiratory frequency were collected in 11 astronauts from nine space missions. Recordings were made in supine and standing positions at least 10 days before launch and during spaceflight (days 5-19, 45-67, 77-116, 146-180). Cross-correlation analyses of HR and systolic BP were used to measure three complementary aspects of cardiac baroreflex modulation: 1) baroreflex sensitivity, 2) number of effective baroreflex estimates, and 3) baroreflex time delay. A fixed breathing protocol was performed to measure respiratory sinus arrhythmia and low-frequency power of systolic BP variability. We found that HR and mean arterial pressure did not differ from preflight supine values for up to 6 mo in space. Respiration frequency tended to decrease during prolonged spaceflight. Concerning neural markers of cardiovascular regulation, we observed in-flight adaptations toward homeostatic conditions similar to those found in the ground-based supine position. Surprisingly, this was not the case for baroreflex time delay distribution, which had somewhat longer latencies in space. Except for this finding, our results confirm that the operational point of neural cardiovascular regulation in space sets to a level close to that of an Earth-based supine position. This adaptation level suggests that circulation is chronically relaxed for at least 6 mo in space. PMID:20075261
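The cross-correlation procedure behind the baroreflex time delay can be sketched as follows: correlate beat-to-beat systolic BP with the R-R interval at several beat delays and take the delay with the strongest positive correlation as the baroreflex latency. The short synthetic series below has a one-beat lag built in; real analyses use validated beat detection and much longer records:

```python
def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def baroreflex_delay(sbp, rri, max_delay=4):
    """Beat delay at which systolic BP best predicts the R-R interval."""
    return max(range(max_delay + 1),
               key=lambda d: corr(sbp[:len(sbp) - d], rri[d:]))

sbp = [120, 124, 118, 126, 122, 119, 125, 121, 123, 120]   # mmHg, toy
rri = [860] + [860 + 5 * (s - 120) for s in sbp[:-1]]      # ms, 1-beat lag
```

The slope of the RRI-on-SBP regression at the chosen delay would then serve as the baroreflex sensitivity estimate.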
The Automatic Formulating Method of the Optimal Operating Planning Problem for Energy Supply Systems
NASA Astrophysics Data System (ADS)
Suzuki, Naohiko; Ueda, Takaharu; Sasakawa, Koichi
The problem of optimal operating planning for an energy supply system is formulated as mixed-integer linear programming (MILP), but it is too complicated for most untrained operators to apply the method. This paper proposes an automatic evaluating method for the optimal operating planning of energy supply systems using simple data. The problem can be formulated from only the characteristics of the equipment, the tariff of input energy, and the energy demands. The connection of the equipment is defined as a matrix generated from the property data of the equipment. The constraints and objective function of the problem are generated from the relationship data in the matrix and the characteristics of the equipment. An optimization calculation for the problem is then carried out automatically. It is confirmed that any operator can evaluate many alternative configurations of energy supply systems.
Seasonal-Scale Optimization of Conventional Hydropower Operations in the Upper Colorado System
NASA Astrophysics Data System (ADS)
Bier, A.; Villa, D.; Sun, A.; Lowry, T. S.; Barco, J.
2011-12-01
Sandia National Laboratories is developing the Hydropower Seasonal Concurrent Optimization for Power and the Environment (Hydro-SCOPE) tool to examine basin-wide conventional hydropower operations at seasonal time scales. This tool is part of an integrated, multi-laboratory project designed to explore different aspects of optimizing conventional hydropower operations. The Hydro-SCOPE tool couples a one-dimensional reservoir model with a river routing model to simulate hydrology and water quality. An optimization engine wraps around this model framework to solve for long-term operational strategies that best meet the specific objectives of the hydrologic system while honoring operational and environmental constraints. The optimization routines are provided by Sandia's open source DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) software. Hydro-SCOPE allows for multi-objective optimization, which can be used to gain insight into the trade-offs that must be made between objectives. The Hydro-SCOPE tool is being applied to the Upper Colorado Basin hydrologic system. This system contains six reservoirs, each with its own set of objectives (such as maximizing revenue, optimizing environmental indicators, meeting water use needs, or other objectives) and constraints. This leads to a large optimization problem with strong connectedness between objectives. The systems-level approach used by the Hydro-SCOPE tool allows simultaneous analysis of these objectives, as well as understanding of potential trade-offs related to different objectives and operating strategies. The seasonal-scale tool will be tightly integrated with the other components of this project, which examine day-ahead and real-time planning, environmental performance, hydrologic forecasting, and plant efficiency.
Optimizing transformations of stencil operations for parallel cache-based architectures
Bassetti, F.; Davis, K.
1999-06-28
This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations. It is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, 2-D data partitioning is not discussed here, so the parallel 2-D case uses 2-D tiling with 1-D data partitioning.
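The kernel at the heart of these experiments is easy to reproduce. Below is a minimal sketch (plain Python, not the paper's transformed C code) of the 5-point Jacobi sweep for the 2-D Poisson problem with a zero source term; the cache-tiling transformation the paper studies operates on loop nests like these.

```python
# Minimal sketch of the experiments' kernel: the 5-point Jacobi sweep for the
# 2-D Poisson equation with a zero source term, so each update averages the
# four neighbours. Tiling restructures the loops; the arithmetic is unchanged.
def jacobi_sweep(u):
    """One Jacobi iteration over the interior of grid u (list of lists)."""
    n, m = len(u), len(u[0])
    v = [row[:] for row in u]            # boundary rows/columns are kept as-is
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            v[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1])
    return v

def solve(u, iters):
    for _ in range(iters):
        u = jacobi_sweep(u)
    return u
```

Because tiling only reorders these loop iterations for cache reuse without changing the arithmetic, any observed speedup is attributable to cache behaviour alone, which is what the paper's measurements confirm.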
Tethered Balloon Operations at ARM AMF3 Site at Oliktok Point, AK
NASA Astrophysics Data System (ADS)
Dexheimer, D.; Lucero, D. A.; Helsel, F.; Hardesty, J.; Ivey, M.
2015-12-01
Oliktok Point has been the home of the Atmospheric Radiation Measurement Program's (ARM) third ARM Mobile Facility, or AMF3, since October 2013. The AMF3 is operated through Sandia National Laboratories and hosts instrumentation collecting continuous measurements of clouds, aerosols, precipitation, energy, and other meteorological variables. The Arctic region is warming more quickly than any other region due to climate change, and Arctic sea ice is declining to record lows. Sparsity of atmospheric data from the Arctic leads to uncertainty in process comprehension, and atmospheric general circulation models (AGCMs) are understood to underestimate low cloud presence in the Arctic. Increased vertical resolution of meteorological properties and cloud measurements will improve process understanding and help AGCMs better characterize Arctic clouds. SNL is developing a tethered balloon system capable of regular operation at the AMF3 in order to provide atmospheric data with increased vertical resolution. The tethered balloon can be operated within clouds at altitudes up to 7,000' AGL within DOE's R-2204 restricted area. Pressure, relative humidity, temperature, wind speed, and wind direction are recorded at multiple altitudes along the tether. These data were validated against stationary met tower data in Albuquerque, NM. The altitudes of the sensors were determined by GPS, calculated using a line counter and clinometer, and the two estimates were compared. Wireless wetness sensors and supercooled liquid water content sensors have also been deployed, and their data have been compared with other sensors. This presentation will provide an overview of the balloons, sensors, and test flights flown, and will give a preliminary look at data from sensor validation campaigns and test flights.
2012-02-24
GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
NASA Astrophysics Data System (ADS)
He, Yi; Liwo, Adam; Scheraga, Harold A.
2015-12-01
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom representation of the biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2014-10-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust `operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision-engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of a global ECV (essential climate variable) climate monitoring architecture is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tools suggest and support global collaboration pathways and will hopefully elicit responses from the audience and climate-science stakeholders.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-05
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION [Docket Nos.: 50-003, 50-247, 50-286; NRC-2012-0265: License Nos.: DPR- 5, DPR-26, and DPR-64] Entergy Nuclear Operations, Inc., Entergy Nuclear Indian Point 2, LLC, and Entergy Nuclear Indian Point 3, LLC; Issuance of Director's Decision Notice...
NASA Astrophysics Data System (ADS)
Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian
2016-01-01
Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables that lead to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from the time domain to the frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called Karhunen-Loeve (KL) expansion within the routine of the Nondominated Sorting Genetic Algorithm (NSGA-II). The KL expansion is used to represent the decision variables as a series of terms that are deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated into fewer significant terms, and consequently fewer coefficients, by a predetermined number. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system in the Columbia River of the United States. Two scenarios are considered herein: the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both the conventional optimization model (i.e., NSGA-II without KL) and the SOM with different numbers of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM model is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM model is obtained with 11 KL terms.
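The core trick, evolving a handful of basis coefficients instead of the full decision trajectory, can be illustrated compactly. The sketch below uses a fixed cosine (DCT-II) basis as a stand-in for the data-derived KL modes; that substitution is an assumption made purely for illustration.

```python
import math

# Illustration only: the paper derives its orthogonal basis from a KL
# expansion of the decision variables; a fixed cosine (DCT-II) basis stands
# in here to show how truncation shrinks the search space.
def cosine_basis(n_steps, n_terms):
    """First n_terms cosine modes sampled at n_steps points (orthogonal)."""
    return [[math.cos(math.pi * k * (t + 0.5) / n_steps) for t in range(n_steps)]
            for k in range(n_terms)]

def expand(coeffs, basis):
    """Map a few coefficients to a full-length decision trajectory."""
    n_steps = len(basis[0])
    return [sum(c * b[t] for c, b in zip(coeffs, basis)) for t in range(n_steps)]
```

A GA operating on `coeffs` then evolves, say, 6 numbers in place of the 140 time-domain decisions of the paper's first scenario, with `expand` called inside each objective evaluation.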
Development and optimization of a nonlinear multiparameter model for the human operator
NASA Technical Reports Server (NTRS)
Johannsen, G.
1972-01-01
A systematic method is proposed for the development, optimization, and comparison of controller models for the human operator; it is suitable for any designed model, even multiparameter systems. A random search technique is chosen for the parameter optimization. As criteria for the quality of the model development, the criterion function - the comparison between the input and output functions of the human operator and those of the model - and the most important characteristic values and functions of statistical signal theory are used. A nonlinear multiparameter model for the human operator is designed which considers the complex input information rate per unit time in a single display. The nonlinear features of the model are effected by a modified threshold element and a decision algorithm. Different display configurations as well as various transfer functions of the controlled element are accounted for by different optimized parameter combinations.
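Random search of the kind chosen here is simple to state in code. The following is a minimal sketch, with an invented quadratic criterion standing in for the operator-vs-model error criterion of the paper.

```python
import random

# Minimal random-search optimizer in the spirit of the parameter search used
# for the operator model. The quadratic criterion below is an invented
# stand-in for the operator-vs-model error criterion.
def random_search(criterion, bounds, n_trials, seed=0):
    rng = random.Random(seed)
    best_p, best_j = None, float("inf")
    for _ in range(n_trials):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]  # candidate parameters
        j = criterion(p)
        if j < best_j:
            best_p, best_j = p, j                       # keep the best so far
    return best_p, best_j

def quadratic(p):
    return sum((x - 0.5) ** 2 for x in p)
```

Random search makes no smoothness assumptions about the criterion, which is why it suits arbitrary designed models, including nonlinear multiparameter ones with threshold elements.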
Good, Nathan M; Martinez-Gomez, N Cecilia; Beck, David A C; Lidstrom, Mary E
2015-02-15
The metabolism of one- and two-carbon compounds by the methylotrophic bacterium Methylobacterium extorquens AM1 involves high carbon flux through the ethylmalonyl coenzyme A (ethylmalonyl-CoA) pathway (EMC pathway). During growth on ethylamine, the EMC pathway operates as a linear pathway carrying the full assimilatory flux to produce glyoxylate, malate, and succinate. Assimilatory carbon enters the ethylmalonyl-CoA pathway directly as acetyl-CoA, bypassing pathways for formaldehyde oxidation/assimilation and the regulatory mechanisms controlling them, making ethylamine growth a useful condition to study the regulation of the EMC pathway. Wild-type M. extorquens cells were grown at steady state on a limiting concentration of succinate, and the growth substrate was then switched to ethylamine, a condition where the cell must make a sudden switch from utilizing the tricarboxylic acid (TCA) cycle to using the ethylmalonyl-CoA pathway for assimilation, which has been an effective strategy for identifying metabolic control points. A 9-h lag in growth was observed, during which butyryl-CoA, a degradation product of ethylmalonyl-CoA, accumulated, suggesting a metabolic imbalance. Ethylmalonyl-CoA mutase activity increased to a level sufficient for the observed growth rate at 9 h, which correlated with an upregulation of RNA transcripts for ecm and a decrease in the levels of ethylmalonyl-CoA. When the wild-type strain overexpressing ecm was tested with the same substrate switchover experiment, ethylmalonyl-CoA did not accumulate, growth resumed earlier, and, after a transient period of slow growth, the culture grew at a higher rate than that of the control. These findings demonstrate that ethylmalonyl-CoA mutase is a metabolic control point in the EMC pathway, expanding our understanding of its regulation. PMID:25448820
Optimizing long-term reservoir operation through multi-tier interactive genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, K.-W.; Chang, L.-C.; Chang, F.-J.
2012-04-01
For long-term reservoir planning and management problems, the optimal reservoir operation for each period is commonly searched year by year. The search domain for the initial reservoir storage of each year is limited to certain ranges, the over-year conditions cannot be adequately carried over time, and therefore such operation fails to integrate the conditions of all the considered years as a whole. In this study, a multi-tier interactive genetic algorithm (MIGA) was applied to search for the long-term reservoir optimal solution. MIGA can decompose a large-scale task into several small-scale sub-tasks with GAs applied to each sub-task, where the multi-tier optimal solutions mutually interact among individual sub-tasks to produce the optimal solution for the original task. In this way, the long-term reservoir operation task can be divided into several independent single-year tasks; therefore, the difficulty of the optimal search over a great number of decision variables can be dramatically reduced. The Shihmen Reservoir in northern Taiwan was used as a case study, and the long-term optimal reservoir storages (decision variables) were investigated. The objective was to best satisfy water demands in the downstream area, and a 10-day period, the traditional time frame in Chinese agricultural society, was used as a time step. According to this time scale, there were two cases with different numbers of variables: Case I, five consecutive relatively dry years (2001 to 2006) with 180 variables (i.e. 36×5=180); and Case II, twenty consecutive years (1986 to 2006) with 720 variables (i.e. 36×20=720). For the purpose of comparison, a simulation based on the reservoir operating rule curves and a sole GA search were implemented to find the solutions. In Case I, despite the 180 decision variables, the sole GA could still find the optimal solution well. In Case II (720 variables), the sole GA could not reach the optimal solution.
NASA Astrophysics Data System (ADS)
Hromadka, J.; Correia, R.; Korposh, S.
2016-05-01
A fast method for the fabrication of long-period grating (LPG) optical fibres operating at or near the phase-matching turning point (PMTP), with periods of 109.0, 109.5 and 110.0 μm, based on an amplitude-mask writing system is described. The proposed system allows the fabrication of 3 cm long LPG sensors operating at the PMTP within 20 min, approximately 8 times faster than the point-by-point approach. The reproducibility of the fabrication process was thoroughly studied. The response of the fabricated LPGs to external changes of refractive index was investigated using water and methanol.
A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation
Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin
2016-01-01
This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
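To make the flavor of such a formulation concrete, here is a deliberately tiny stand-in: instead of an MILP solver, it brute-forces on/off commitments and discretized real-power outputs for two DGs against a grid purchase price. All numbers (limits, costs, demand) are invented, and reactive power, voltage deviation and losses are omitted.

```python
import itertools

# Toy stand-in for the MILP (invented numbers; reactive power, voltage and
# losses omitted): commit two DGs on/off and pick discretized outputs so that
# demand is met at least cost against a grid purchase price.
DGS = [  # (min_kW, max_kW, fuel_cost_$per_kWh) -- hypothetical units
    (10.0, 50.0, 0.08),
    (20.0, 80.0, 0.06),
]
GRID_PRICE = 0.12  # $/kWh for any shortfall bought from the grid

def dispatch(demand, step=5.0):
    best_cost, best_plan = float("inf"), None
    for status in itertools.product([0, 1], repeat=len(DGS)):
        ranges = []
        for on, (lo, hi, _) in zip(status, DGS):
            ranges.append([lo + step * k for k in range(int((hi - lo) / step) + 1)]
                          if on else [0.0])
        for outputs in itertools.product(*ranges):
            if sum(outputs) > demand:          # no export in this toy model
                continue
            shortfall = demand - sum(outputs)
            cost = shortfall * GRID_PRICE + sum(
                p * c for p, (_, _, c) in zip(outputs, DGS))
            if cost < best_cost:
                best_cost, best_plan = cost, (status, outputs)
    return best_cost, best_plan
```

An MILP solver reaches the same kind of answer without enumeration, and scales to the binary status variables, network constraints and multiple objectives of the actual D-OPF model.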
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-23
... as described in Federal Register Notice (FRN) 76 FR 32994 (June 7, 2011). The NRC is currently... COMMISSION Nine Mile Point 3 Nuclear Project, LLC and UniStar Nuclear Operating Services, LLC Combined... Nuclear Project, LLC, and UniStar Nuclear Operating Services, LLC (UniStar), submitted a Combined...
Billings, Seth D; Boctor, Emad M; Taylor, Russell H
2015-01-01
We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
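The registration step that IMLP generalizes can be sketched in a few lines. Under isotropic noise and fixed correspondences it reduces to the classical least-squares (Kabsch/Procrustes) fit, shown below in 2-D; IMLP's contribution is replacing this with a generalized-noise (GTLS) solution plus most-likely correspondences, both omitted from this sketch.

```python
import math

# Sketch of the point-registration step alone, under isotropic noise and fixed
# correspondences: the classical least-squares (Kabsch/Procrustes) fit in 2-D.
# IMLP replaces this with a generalized (anisotropic) noise model and a
# most-likely correspondence search.
def rigid_fit_2d(src, dst):
    """Return (theta, tx, ty) minimizing sum ||R(theta)*s + t - d||^2."""
    n = float(len(src))
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx -= csx; sy -= csy                 # centre both point sets
        dx -= cdx; dy -= cdy
        num += sx * dy - sy * dx             # cross terms -> sin(theta)
        den += sx * dx + sy * dy             # dot terms  -> cos(theta)
    theta = math.atan2(num, den)
    tx = cdx - (math.cos(theta) * csx - math.sin(theta) * csy)
    ty = cdy - (math.sin(theta) * csx + math.cos(theta) * csy)
    return theta, tx, ty
```

Iterating this fit with a nearest-point correspondence step yields basic ICP; IMLP replaces both steps with their probabilistic counterparts.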
Practical operating points of multi-resolution frame compatible (MFC) stereo coding
NASA Astrophysics Data System (ADS)
Lu, Taoran; Ganapathy, Hariharan; Lakshminarayanan, Gopi; Chen, Tao; Yin, Peng; Brooks, David; Husak, Walt
2013-09-01
3D content is gaining popularity, and the production and delivery of 3D video is now an active work item among video compression experts, content providers and the CE industry. Frame compatible stereo coding was initially adopted for the first generation of 3DTV broadcasting services for its compatibility with existing 2D decoders. However, the frame compatible solution sacrifices half of the original video resolution. In 2012, the Moving Picture Experts Group (MPEG) issued a call for proposals (CfP) for solutions that improve the resolution of the frame compatible stereo 3D video signal while maintaining backward compatibility with legacy decoders. The standardization process for multi-resolution frame compatible (MFC) stereo coding was then started. This paper introduces Orthogonal Muxing Frame Compatible Full Resolution (OM-FCFR), a solution submitted in response to the CfP. In addition, this paper provides some experimental results to guide broadcasters in selecting operating points for MFC. It is observed that for typical broadcast bitrates, more than 0.5 dB PSNR improvement can be achieved by MFC over the frame compatible solution with only 15%-20% overhead.
Brown, Daniel J.; Hartsock, Jared J.; Gill, Ruth M.; Fitzgerald, Hillary E.; Salt, Alec N.
2009-01-01
Distortion products in the cochlear microphonic (CM) and in the ear canal in the form of distortion product otoacoustic emissions (DPOAEs) are generated by nonlinear transduction in the cochlea and are related to the resting position of the organ of Corti (OC). A 4.8 Hz acoustic bias tone was used to displace the OC, while the relative amplitude and phase of distortion products evoked by a single tone [most often 500 Hz, 90 dB SPL (sound pressure level)] or two simultaneously presented tones (most often 4 kHz and 4.8 kHz, 80 dB SPL) were monitored. Electrical responses recorded from the round window, scala tympani and scala media of the basal turn, and acoustic emissions in the ear canal were simultaneously measured and compared during the bias. Bias-induced changes in the distortion products were similar to those predicted from computer models of a saturating transducer with a first-order Boltzmann distribution. Our results suggest that biased DPOAEs can be used to non-invasively estimate the OC displacement, producing a measurement equivalent to the transducer operating point obtained via Boltzmann analysis of the basal turn CM. Low-frequency biased DPOAEs might provide a diagnostic tool to objectively diagnose abnormal displacements of the OC, as might occur with endolymphatic hydrops. PMID:19354389
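The saturating-transducer model referenced here is easy to reproduce numerically. The sketch below uses a logistic (first-order Boltzmann) nonlinearity with an assumed unit slope and shows that the second-harmonic distortion of a pure tone vanishes at the symmetric operating point and grows as the operating point is biased away from it, which is the effect the biased-DPOAE measurement exploits.

```python
import math

# Toy first-order Boltzmann transducer (assumed unit slope s): x is the OC
# displacement and x0 the resting/operating point that sets the asymmetry.
def boltzmann(x, x0=0.0, s=1.0):
    return 1.0 / (1.0 + math.exp(-(x - x0) / s))

def second_harmonic(x0, n=1024):
    """Magnitude of the 2f component of the response to a unit sine (DFT bin)."""
    re = im = 0.0
    for k in range(n):
        t = 2.0 * math.pi * k / n
        y = boltzmann(math.sin(t), x0)
        re += y * math.cos(2.0 * t)
        im += y * math.sin(2.0 * t)
    return math.hypot(re, im) * 2.0 / n
```

At x0 = 0 the nonlinearity is odd-symmetric about its midpoint, so even-order distortion cancels; a displacement of the operating point breaks that symmetry and the 2f product grows, which is what a slow bias tone modulates in the CM and DPOAE recordings.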
Technology Transfer Automated Retrieval System (TEKTRAN)
Agricultural non-point source pollution is a major source of water quality impairment. When considering responses to non-point source pollution, several policy options have been considered historically, including reducing inputs (e.g. fertilizers) altering technologies on the landscape (e.g. conserv...
Ruopp, Marcus D.; Perkins, Neil J.; Whitcomb, Brian W.; Schisterman, Enrique F.
2008-01-01
The receiver operating characteristic (ROC) curve is used to evaluate a biomarker's ability to classify disease status. The Youden Index (J), the maximum potential effectiveness of a biomarker, is a common summary measure of the ROC curve. In biomarker development, levels may be unquantifiable below a limit of detection (LOD) and missing from the overall dataset. Disregarding these observations may negatively bias the ROC curve and thus J. Several correction methods have been suggested for mean estimation and testing; however, little has been written about the ROC curve or its summary measures. We adapt non-parametric (empirical) and semi-parametric (ROC-GLM [generalized linear model]) methods and propose parametric methods (maximum likelihood, ML) to estimate J and the optimal cut-point (c*) for a biomarker affected by a LOD. We develop unbiased estimators of J and c* via ML for normally and gamma distributed biomarkers. Alpha-level confidence intervals are proposed using delta and bootstrap methods for the ML, semi-parametric, and non-parametric approaches, respectively. Simulation studies are conducted over a range of distributional scenarios and sample sizes, evaluating the estimators' bias, root-mean-square error, and coverage probability; the average bias was less than one percent for the ML and GLM methods across scenarios and decreases with increased sample size. An example using polychlorinated biphenyl levels to classify women with and without endometriosis illustrates the potential benefits of these methods. We address the limitations and usefulness of each method in order to give researchers guidance in constructing appropriate estimates of biomarkers' true discriminating capabilities. PMID:18435502
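For the fully observed case, the empirical (non-parametric) estimate of J and the cut-point is a one-pass scan, sketched below; the LOD corrections and ML estimators that are the paper's subject are not reproduced here.

```python
# Empirical (non-parametric) Youden index for fully observed biomarker levels:
# scan candidate cut-points, keep the one maximizing J = sens + spec - 1.
# The paper's LOD corrections and ML/ROC-GLM estimators are not reproduced.
def youden(cases, controls):
    """Return (J, c*) assuming cases tend to have higher biomarker levels."""
    best_j, best_c = -1.0, None
    for c in sorted(set(cases) | set(controls)):
        sens = sum(x > c for x in cases) / len(cases)
        spec = sum(x <= c for x in controls) / len(controls)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_j, best_c
```

When observations below the LOD are simply discarded, both `sens` and `spec` are computed on a biased sample, which is exactly the distortion of J and c* the paper's correction methods address.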
Shanechi, Maryam M; Orsborn, Amy; Moorman, Helene; Gowda, Suraj; Carmena, Jose M
2014-01-01
Brain-machine interface (BMI) performance has been improved using Kalman filters (KF) combined with closed-loop decoder adaptation (CLDA). CLDA fits the decoder parameters during closed-loop BMI operation based on the neural activity and inferred user velocity intention. These advances have resulted in the recent ReFIT-KF and SmoothBatch-KF decoders. Here we demonstrate high-performance and robust BMI control using a novel closed-loop BMI architecture termed adaptive optimal feedback-controlled (OFC) point process filter (PPF). Adaptive OFC-PPF allows subjects to issue neural commands and receive feedback with every spike event and hence at a faster rate than the KF. Moreover, it adapts the decoder parameters with every spike event in contrast to current CLDA techniques that do so on the time-scale of minutes. Finally, unlike current methods that rotate the decoded velocity vector, adaptive OFC-PPF constructs an infinite-horizon OFC model of the brain to infer velocity intention during adaptation. Preliminary data collected in a monkey suggests that adaptive OFC-PPF improves BMI control. OFC-PPF outperformed SmoothBatch-KF in a self-paced center-out movement task with 8 targets. This improvement was due to both the PPF's increased rate of control and feedback compared with the KF, and to the OFC model suggesting that the OFC better approximates the user's strategy. Also, the spike-by-spike adaptation resulted in faster performance convergence compared to current techniques. Thus adaptive OFC-PPF enabled proficient BMI control in this monkey. PMID:25571483
Tachim Medjo, Theodore
2010-08-15
We study in this article Pontryagin's maximum principle for a class of control problems associated with the primitive equations (PEs) of the ocean with a two-point boundary state constraint. These optimal control problems involve a two-point boundary state constraint similar to that considered in Wang (Nonlinear Anal. 51, 509-536, 2002) for the three-dimensional Navier-Stokes (NS) equations. The main difference between this work and Wang (2002) is that the nonlinearity in the PEs is stronger than in the three-dimensional NS system.
Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.
2009-09-01
A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.
Stage-wise optimizing operating rules for flood control in a multi-purpose reservoir
NASA Astrophysics Data System (ADS)
Chou, Frederick N.-F.; Wu, Chia-Wen
2015-02-01
This paper presents a generic framework of release rules for reservoir flood control operation during three stages. In the stage prior to flood arrival, the rules indicate the timing and release discharge of pre-releasing reservoir storage to the initial level of flood control operation. In the stage preceding the flood peak, the rules prescribe the portion of inflow to be detained to mitigate downstream flooding, without allowing the water surface level of the reservoir to exceed the acceptable safety level of surcharge. After the flood peak, the rules suggest the timing for stepwise reduction of the release flows and closing the gates of spillways and other outlets to achieve the normal level of conservation use. A simulation model is developed and linked with BOBYQA, an efficient optimization algorithm, to determine the optimal rule parameters in a stage-wise manner. The release rules of the Shihmen Reservoir of Taiwan are established using inflow records of 59 historical typhoons and the probable maximum flood. The deviations from target levels at the end of the different stages of all calibration events are minimized by the proposed method to improve the reliability of flood control operation. The optimized rules satisfy operational objectives including dam safety, flood mitigation, achieving sufficient end-of-operation storage for conservation purposes and smooth operation.
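The three-stage structure of the rules can be sketched as a simple mass-balance simulation. Every threshold, fraction and cap below is an invented placeholder; in the paper these parameters are optimized stage-wise with BOBYQA against historical typhoon events.

```python
# Schematic three-stage release rule (pre-release / peak detention / refill)
# on a single mass-balance reservoir with a unit time step. All thresholds,
# fractions and caps are invented placeholders, not the calibrated values.
def simulate(inflows, storage, q_flood=200.0, s_pre=500.0, s_target=800.0,
             detain_frac=0.4, q_cap=300.0):
    stage, releases = 1, []
    for q in inflows:
        if stage == 1 and q >= q_flood:
            stage = 2                        # flood peak has arrived
        elif stage == 2 and q < q_flood:
            stage = 3                        # peak has passed
        if stage == 1:                       # draw storage down before the flood
            r = q + max(0.0, storage - s_pre)
        elif stage == 2:                     # detain a fraction of the inflow
            r = min(q_cap, (1.0 - detain_frac) * q)
        else:                                # stepwise refill toward target level
            r = max(0.0, q - 0.5 * max(0.0, s_target - storage))
        r = min(r, storage + q)              # cannot release more than available
        storage += q - r
        releases.append(r)
    return storage, releases
```

Even this toy shows the intended behaviour: the outflow peak is lower than the inflow peak (downstream mitigation), while the end-of-operation storage climbs back toward the conservation target.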
Optimized cascade reservoir operation considering ice flood control and power generation
NASA Astrophysics Data System (ADS)
Chang, Jianxia; Meng, Xuejiao; Wang, ZongZhi; Wang, Xuebin; Huang, Qiang
2014-11-01
Ice flood control is an important objective for reservoir operation in cold regions. Maintaining the reservoir outflow in a certain range is considered an effective way to remediate ice flood damage. However, this strategy may decrease the socio-economic benefit of reservoirs, for example through reduced hydropower production. These conflicting objectives cause a dilemma for water managers when defining reservoir operation policy. This study considers seven cascade reservoirs in the upstream Yellow River, and ice flood control storage is introduced to balance hydropower generation and ice flood control. The relation between the ice flood control storage volume of the Liujiaxia reservoir and cascade power output is analyzed. An optimization model that takes ice flood control requirements into account is developed to explore the trade-offs between hydropower generation and those requirements. The optimization model is compared to a simulation model based on the reservoir operation rule curves. The results show that the optimal operation rules are far more efficient in balancing the benefits of power generation and ice flood control. The cascade reservoir operation strategies proposed in this study can be effectively and suitably applied to reservoir operation systems with similar conditions.
Short-term optimal operation of water systems using ensemble forecasts
NASA Astrophysics Data System (ADS)
Raso, L.; Schwanenberg, D.; van de Giesen, N. C.; van Overloop, P. J.
2014-09-01
Short-term water system operation can be realized using Model Predictive Control (MPC). MPC is a method for the operational management of complex dynamic systems. Applied to open water systems, MPC provides integrated, optimal, and proactive management when forecasts are available. Notwithstanding these properties, if forecast uncertainty is not properly taken into account, system performance can deteriorate critically. Ensemble forecasting is a way to represent short-term forecast uncertainty: an ensemble forecast is a set of possible future trajectories of a meteorological or hydrological system. The growing availability and accuracy of ensemble forecasts raise the question of how to use them for operational management. The theoretical innovation presented here is the use of ensemble forecasts for optimal operation. Specifically, we introduce a tree-based approach, which we call Tree-Based Model Predictive Control (TB-MPC). In TB-MPC, a tree is used to set up a multistage stochastic program, which finds a different optimal strategy for each branch and enhances adaptivity to forecast uncertainty. Adaptivity reduces the sensitivity to wrong forecasts and improves operational performance. TB-MPC is applied to the operational management of the Salto Grande reservoir, located at the border between Argentina and Uruguay, and compared to other methods.
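The tree idea can be illustrated with a two-stage toy problem: one first-stage release is shared across all forecast branches, while each branch gets its own second-stage recourse release, and the shared decision minimizes the expected cost. This is a minimal sketch of the multistage-stochastic structure, not the Salto Grande application; inflows, limits and penalty weights are invented.

```python
# Minimal sketch of the TB-MPC tree structure: a shared stage-1 release
# plus branch-specific stage-2 recourse, optimized by exhaustive search.

def branch_cost(storage, inflow, release, s_max=120.0):
    s = storage + inflow - release
    flood = max(0.0, s - s_max)             # spill above the safe level
    return flood * 10.0 + release           # penalize flooding, then releases

def tree_mpc(storage0, branches, grid):
    """branches: list of (probability, [inflow_t1, inflow_t2]) trajectories.
    The stage-1 release is common; the stage-2 release adapts per branch."""
    best = None
    for r1 in grid:                         # shared first-stage decision
        exp_cost = 0.0
        for p, (q1, q2) in branches:
            s1 = storage0 + q1 - r1
            c1 = branch_cost(storage0, q1, r1)
            # recourse: best second-stage release for this branch only
            c2 = min(branch_cost(s1, q2, r2) for r2 in grid)
            exp_cost += p * (c1 + c2)
        if best is None or exp_cost < best[1]:
            best = (r1, exp_cost)
    return best[0]

branches = [(0.5, [30.0, 10.0]), (0.5, [30.0, 60.0])]   # dry vs wet member
r1 = tree_mpc(storage0=100.0, branches=branches, grid=[0.0, 10.0, 20.0, 30.0])
print(r1)   # the shared release hedges against the wet branch
```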
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents optimal production and distribution management for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear program (MILP) whose objective is to minimize the overall cost of the integrated DHS while satisfying the operating constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency resulting from the introduction of the proposed optimal management system.
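The piecewise-linear trick mentioned above is standard in MILP formulations: a nonlinear cost curve is replaced by straight segments between breakpoints so a linear solver can handle it. The sketch below shows the interpolation step only, with a hypothetical quadratic cost and invented breakpoints (the paper's actual cost data are not reproduced here).

```python
# Piecewise-linear approximation of a nonlinear production cost curve:
# exact at the breakpoints, linear in between.

def piecewise_cost(q, breakpoints, costs):
    """Linearly interpolate the cost at heat output q between breakpoints."""
    for (q0, c0), (q1, c1) in zip(zip(breakpoints, costs),
                                  zip(breakpoints[1:], costs[1:])):
        if q0 <= q <= q1:
            return c0 + (c1 - c0) * (q - q0) / (q1 - q0)
    raise ValueError("q outside operating range")

true_cost = lambda q: 0.5 * q ** 2          # hypothetical nonlinear cost
bp = [0.0, 5.0, 10.0]                       # illustrative breakpoints
pw = [true_cost(q) for q in bp]             # exact at the breakpoints
approx = piecewise_cost(7.5, bp, pw)
print(approx)   # overestimates a convex cost between breakpoints
```

For a convex cost, the chords always lie above the curve, so the approximation errs on the safe (conservative) side.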
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
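The screening step can be illustrated with a variance-based first-order index in the spirit of Sobol's method: S_i = Var(E[Y|X_i]) / Var(Y), estimated here with a Saltelli-style sampling scheme on a toy additive model. The model, sample size and threshold are illustrative stand-ins for the reservoir simulation and its decision variables.

```python
# Hedged sketch of Sobol-style screening: estimate first-order indices
# for a toy objective and flag low-sensitivity variables for removal.

import random

def sobol_first_order(model, n_vars, n=4000, seed=0):
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(n_vars)] for _ in range(n)]
    b = [[rng.random() for _ in range(n_vars)] for _ in range(n)]
    ya = [model(x) for x in a]
    yb = [model(x) for x in b]
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    indices = []
    for i in range(n_vars):
        # AB_i: matrix A with column i replaced by column i of B (Saltelli)
        yab = [model(a[k][:i] + [b[k][i]] + a[k][i + 1:]) for k in range(n)]
        s_i = sum(yb[k] * (yab[k] - ya[k]) for k in range(n)) / n / var
        indices.append(s_i)
    return indices

model = lambda x: 4.0 * x[0] + 1.0 * x[1] + 0.1 * x[2]   # toy objective
s = sobol_first_order(model, 3)
print([round(v, 2) for v in s])   # x[0] dominates; x[2] is a screening candidate
```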
Rana, Swapan; Parashar, Preeti
2011-11-15
We show that all multipartite pure states that are stochastic local operation and classical communication (SLOCC) equivalent to the N-qubit W state can be uniquely determined (among arbitrary states) from their bipartite marginals. We also prove that only (N-1) of the bipartite marginals are sufficient and that this is also the optimal number. Thus, contrary to the Greenberger-Horne-Zeilinger (GHZ) class, W-type states preserve their reducibility under SLOCC. We also study the optimal reducibility of some larger classes of states. The generic Dicke states |GD_N^l> are shown to be optimally determined by their (l+1)-partite marginals. The class of "G" states (superpositions of W and W̄) are shown to be optimally determined by just two (N-2)-partite marginals.
A highly sensitive and simply operated protease sensor toward point-of-care testing.
Park, Seonhwa; Shin, Yu Mi; Seo, Jeongwook; Song, Ji-Joon; Yang, Haesik
2016-04-21
Protease sensors for point-of-care testing (POCT) require simple operation, a detection period of less than 20 minutes, and a detection limit of less than 1 ng mL(-1). However, it is difficult to meet these requirements with protease sensors that are based on proteolytic cleavage. This paper reports a highly reproducible protease sensor that allows the sensitive and simple electrochemical detection of the botulinum neurotoxin type E light chain (BoNT/E-LC), achieved using (i) low nonspecific adsorption, (ii) a high signal-to-background ratio, and (iii) one-step solution treatment. The BoNT/E-LC detection is based on two-step proteolytic cleavage using BoNT/E-LC (endopeptidase) and l-leucine aminopeptidase (LAP, exopeptidase). Indium-tin oxide (ITO) electrodes are modified partially with reduced graphene oxide (rGO) to increase their electrocatalytic activities. Avidin is then adsorbed on the electrodes to minimize the nonspecific adsorption of proteases. Low nonspecific adsorption allows a highly reproducible sensor response. Electrochemical-chemical (EC) redox cycling involving p-aminophenol (AP) and dithiothreitol (DTT) is performed to obtain a high signal-to-background ratio. After adding a C-terminally AP-labeled oligopeptide, DTT, and LAP simultaneously to a sample solution, no further treatment of the solution is necessary during detection. The detection limits of BoNT/E-LC in phosphate-buffered saline are 0.1 ng mL(-1) for an incubation period of 15 min and 5 fg mL(-1) for an incubation period of 4 h. The detection limit in commercial bottled water is 1 ng mL(-1) for an incubation period of 15 min. The developed sensor is selective to BoNT/E-LC among the four types of BoNTs tested. These results indicate that the protease sensor meets the requirements for POCT. PMID:26980003
Evaluating Operational Specifications of Point-of-Care Diagnostic Tests: A Standardized Scorecard
Lehe, Jonathan D.; Sitoe, Nádia E.; Tobaiwa, Ocean; Loquiha, Osvaldo; Quevedo, Jorge I.; Peter, Trevor F.; Jani, Ilesh V.
2012-01-01
The expansion of HIV antiretroviral therapy into decentralized rural settings will increasingly require simple point-of-care (POC) diagnostic tests that can be used without laboratory infrastructure and technical skills. New POC test devices are becoming available, but decisions about which technologies to deploy may be biased without systematic assessment of their suitability for decentralized healthcare settings. To address this, we developed a standardized, quantitative scorecard tool to objectively evaluate the operational characteristics of POC diagnostic devices. The tool scores devices on a scale of 1–5 across 30 weighted characteristics such as ease of use, quality control, electrical requirements, shelf life, portability, cost and service, and provides a cumulative score that ranks products against a set of ideal POC characteristics. The scorecard was tested on 19 devices for POC CD4 T-lymphocyte cell counting, clinical chemistry or hematology testing. Single and multi-parameter devices were assessed in each of the test categories. The scores across all devices ranged from 2.78 to 4.40 out of 5. The tool effectively ranked devices within each category (p<0.01) except the CD4 and multi-parameter hematology products. The tool also enabled comparison of different characteristics between products. Agreement across the four scorers for each product was high (intra-class correlation >0.80; p<0.001). Use of this tool enables the systematic evaluation of diagnostic tests to facilitate product selection and investment in appropriate technology. It is particularly relevant for countries and testing programs considering the adoption of new POC diagnostic tests. PMID:23118871
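The scorecard logic reduces to a weighted mean on the 1-5 scale followed by a ranking. The sketch below is a toy version: the characteristic names, weights and device scores are hypothetical, not the tool's actual 30-item list.

```python
# Toy weighted scorecard: score each device 1-5 on weighted
# characteristics and rank by the weighted cumulative score.

def scorecard(scores, weights):
    """Weighted mean on the 1-5 scale; weights need not sum to one."""
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total_w

weights = {"ease_of_use": 3.0, "quality_control": 2.0, "cost": 1.0}
devices = {
    "device_A": {"ease_of_use": 5, "quality_control": 3, "cost": 2},
    "device_B": {"ease_of_use": 4, "quality_control": 4, "cost": 4},
}
ranking = sorted(devices, key=lambda d: scorecard(devices[d], weights),
                 reverse=True)
print(ranking)   # the more balanced device wins under these weights
```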
Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation
NASA Astrophysics Data System (ADS)
Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio
2013-04-01
In 2012, typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving hundreds dead and causing widespread destruction. Deadly cyclone-related events occur almost every year in the region, and such extremes are expected to increase in both frequency and magnitude around Southeast Asia during the course of global climate change. Our ability to confront such hazardous events is limited by the available engineering infrastructure and the performance of weather prediction. One countermeasure strategy is early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from an impending flood. However, over-release of reservoir water adversely affects the regional economy by discarding water resources that still have value for power generation and for agricultural and industrial water use. Furthermore, accurate precipitation forecasting is itself a difficult task, because the chaotic nature of the atmosphere yields growing uncertainty in model predictions over time. Under these circumstances we present a novel approach to reconcile the conflicting objectives of preventing flood damage via a priori dam release while sustaining a sufficient water supply during predicted storm events. By evaluating the forecast performance of Meso-Scale Model Grid Point Value (GPV) products against observed rainfall, uncertainty in model prediction is probabilistically taken into account and applied to the next GPV issuance to generate ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed-hydrological model to derive the ensemble flood forecast. With dam status information also taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and
Orbit optimization of Mars orbiters for entry navigation: From an observability point of view
NASA Astrophysics Data System (ADS)
Yu, Zhengshi; Zhu, Shengying; Cui, Pingyuan
2015-06-01
In this paper, the observability of orbiter-based Mars entry navigation is investigated and its application to the orbit optimization of Mars orbiters is demonstrated. An observability analysis of Mars entry navigation, processing range measurements to multiple orbiters, is conducted based on the Fisher information matrix. The determinant of the Fisher information matrix is derived to quantify the degree of observability. An orbit optimization method based on the observability analysis is then proposed. Two navigation scenarios using three and four orbiters are considered in simulations. To verify the navigation performance advantages, the orbiter-based and ground beacon-based navigation schemes are comparatively analyzed. In the simulations, an Extended Kalman Filter is used to examine the navigation accuracy. It is concluded that the proposed orbit optimization method is able to optimize the orbits of Mars orbiters for the maximum degree of observability. For Mars entry navigation based on orbiters, a better configuration, which is a main contributor to observability, can be achieved, and the navigation performance is superior to that of ground beacon-based navigation. However, diminishing returns in navigation accuracy are obtained when solely increasing the number of orbiters.
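The observability metric can be illustrated in a simplified static setting: for range-only position estimation, the Fisher information matrix is the noise-scaled sum of outer products of the unit line-of-sight vectors, and its determinant quantifies how well the geometry constrains the state. The 2-D beacon geometry below is hypothetical; the paper works with full orbiter dynamics.

```python
# Determinant of the Fisher information matrix as an observability
# measure for 2-D range-only positioning (a static stand-in for the
# orbiter geometry analysis).

import math

def fim_determinant(target, beacons, sigma=1.0):
    fxx = fxy = fyy = 0.0
    for bx, by in beacons:
        dx, dy = target[0] - bx, target[1] - by
        r = math.hypot(dx, dy)
        ux, uy = dx / r, dy / r              # unit line-of-sight vector
        fxx += ux * ux / sigma ** 2
        fxy += ux * uy / sigma ** 2
        fyy += uy * uy / sigma ** 2
    return fxx * fyy - fxy * fxy             # det of the 2x2 FIM

good = fim_determinant((0, 0), [(10, 0), (0, 10)])   # orthogonal geometry
bad = fim_determinant((0, 0), [(10, 0), (20, 0)])    # collinear geometry
print(good, bad)   # collinear beacons leave one direction unobservable
```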
Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations
NASA Astrophysics Data System (ADS)
Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.
2011-12-01
HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of
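The routing step can be sketched with classical Muskingum coefficients, the scheme that the modified Muskingum-Cunge approach generalizes (Muskingum-Cunge additionally derives K and X from channel hydraulics and adapts the timestep). The reach parameters and inflow hydrograph below are illustrative, not HydroSCOPE data.

```python
# Classical Muskingum channel routing: O2 = C0*I2 + C1*I1 + C2*O1, with
# coefficients from the reach travel time K and weighting factor X.

def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom      # c0 + c1 + c2 == 1
    out = [inflow[0]]                        # start at steady state
    for i1, i2 in zip(inflow, inflow[1:]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
    return out

hydrograph = [10, 30, 60, 40, 20, 10]
routed = muskingum_route(hydrograph)
print([round(q, 1) for q in routed])         # attenuated, delayed peak
```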
Gschwind, Michael K.
2011-03-01
Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.
Energy and operation management of a microgrid using particle swarm optimization
NASA Astrophysics Data System (ADS)
Radosavljević, Jordan; Jevtić, Miroljub; Klimenta, Dardan
2016-05-01
This article presents an efficient algorithm based on particle swarm optimization (PSO) for energy and operation management (EOM) of a microgrid including different distributed generation units and energy storage devices. The proposed approach employs PSO to minimize the total energy and operating cost of the microgrid via optimal adjustment of the control variables of the EOM, while satisfying various operating constraints. Owing to the stochastic nature of energy produced from renewable sources, i.e. wind turbines and photovoltaic systems, as well as load uncertainties and market prices, a probabilistic approach in the EOM is introduced. The proposed method is examined and tested on a typical grid-connected microgrid including fuel cell, gas-fired microturbine, wind turbine, photovoltaic and energy storage devices. The obtained results prove the efficiency of the proposed approach to solve the EOM of the microgrids.
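A bare-bones PSO of the kind used for such energy-management problems looks as follows. The quadratic stand-in cost and all PSO constants are illustrative; the actual EOM cost includes fuel, market-price and storage terms over many control variables.

```python
# Minimal particle swarm optimization: particles track personal and
# global bests while minimizing a cost function over a bounded interval.

import random

def pso(cost, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]
    gbest = min(xs, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))     # respect bounds
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
            if cost(xs[i]) < cost(gbest):
                gbest = xs[i]
    return gbest

# hypothetical one-variable dispatch cost with its minimum at 42 kW
best = pso(lambda p: (p - 42.0) ** 2 + 5.0, bounds=(0.0, 100.0))
print(round(best, 2))
```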
Lee, Jason Jiunshiou; Ho, ChinYu; Chen, Hsin-Jen; Huang, Nicole; Yeh, Jade Chienyu; deFerranti, Sarah
2016-01-01
Adolescent obesity has increased to alarming proportions globally. However, few studies have investigated the optimal waist circumference (WC) of Asian adolescents. This study sought to establish the optimal WC cutoff points that identify a cluster of cardiovascular risk factors (CVRFs) among 15-year-old ethnically Chinese adolescents. This study was a regional population-based study on the CVRFs among adolescents who enrolled in all the senior high schools in Taipei City, Taiwan, between 2011 and 2014. Four cross-sectional health examinations of first-year senior high school (grade 10) students were conducted from September to December of each year. A total of 124,643 adolescents aged 15 (boys: 63,654; girls: 60,989) were recruited. Participants who had at least three of five CVRFs were classified as the high-risk group. We used receiver-operating characteristic curves and the area under the curve (AUC) to determine the optimal WC cutoff points and the accuracy of WC in predicting high cardiovascular risk. WC was a good predictor for high cardiovascular risk for both boys (AUC: 0.845, 95% confidence interval [CI]: 0.833-0.857) and girls (AUC: 0.763, 95% CI: 0.731-0.795). The optimal WC cutoff points were ≥78.9 cm for boys (77th percentile) and ≥70.7 cm for girls (77th percentile). Adolescents with normal weight and an abnormal WC were more likely to be in the high cardiovascular risk group (odds ratio: 3.70, 95% CI: 2.65-5.17) compared to their peers with normal weight and normal WC. The optimal WC cutoff point of 15-year-old Taiwanese adolescents for identifying CVRFs should be the 77th percentile; the 90th percentile of the WC might be inadequate. The high WC criteria can help health professionals identify a higher proportion of adolescents with cardiovascular risks and refer them for further evaluations and interventions. Adolescents' height, weight and WC should be measured as a standard practice in routine health checkups. PMID:27389572
Operational equations for the five-point rectangle, the geometric mean, and data in prismatic array
Silver, Gary L
2009-01-01
This paper describes the results of three applications of operational calculus: new representations of five data in a rectangular array, new relationships among data in a prismatic array, and the operational analog of the geometric mean.
NASA Astrophysics Data System (ADS)
Kanka, Jiri
2012-06-01
Fiber-optic long-period gratings (LPGs) operating near the dispersion turning point of the phase matching curve (PMC), referred to as turn-around-point (TAP) LPGs, are known to be extremely sensitive to external parameters. Moreover, in a TAP LPG the phase matching condition can be almost satisfied over a large spectral range, yielding broadband LPG operation. TAP LPGs have been investigated notably for use as broadband mode converters and biosensors. So far TAP LPGs have been realized in specially designed or post-processed conventional fibers, but not yet in photonic crystal fibers (PCFs), which allow a great degree of freedom in engineering the fiber's dispersion properties through control of the PCF structural parameters. We have developed a design optimization technique for TAP PCF LPGs employing the finite element method for PCF modal analysis in combination with the Nelder-Mead simplex method for minimizing an objective function based on target-specific PCF properties. Using this tool we have designed TAP PCF LPGs for specified wavelength ranges and refractive indices of the medium in the air holes. Possible TAP PCF-LPG operational regimes - dual resonance, broadband mode conversion and transmitted intensity-based operation - will be demonstrated numerically. The potential and limitations of TAP PCF-LPGs for evanescent chemical and biochemical sensing will be assessed.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
Nodal Fermi surface pocket approaching an optimal quantum critical point in YBCO
NASA Astrophysics Data System (ADS)
Sebastian, Suchitra; Tan, Beng; Lonzarich, Gilbert; Ramshaw, Brad; Harrison, Neil; Balakirev, Fedor; Mielke, Chuck; Sabok, S.; Dabrowski, B.; Liang, Ruixing; Bonn, Doug; Hardy, Walter
2014-03-01
I present new quantum oscillation measurements over the entire underdoped regime in YBa2Cu3O6+x and YBa2Cu4O8 using ultra-high magnetic fields to destroy superconductivity and access the normal ground state. A robust small nodal Fermi surface created by charge order is found to extend over the entire underdoped range, exhibiting quantum critical signatures approaching optimal doping.
Yin, Jingjing; Samawi, Hani; Linder, Daniel
2016-07-01
A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity -1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than that of simple random sampling and both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method. PMID:26756282
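The Youden criterion can be made concrete with a tiny empirical example: sweep candidate cut-offs and keep the one maximizing J = sensitivity + specificity − 1. This is a plain empirical sweep over illustrative samples, not the paper's kernel-density estimator or ranked-set sampling scheme.

```python
# Empirical Youden-index cut-off selection: classify as diseased when
# the biomarker exceeds the cut-off, and maximize sensitivity +
# specificity - 1 over the observed values.

def youden_cutoff(healthy, diseased):
    best_c, best_j = None, -1.0
    for c in sorted(set(healthy + diseased)):
        sens = sum(x > c for x in diseased) / len(diseased)
        spec = sum(x <= c for x in healthy) / len(healthy)
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

healthy = [1.0, 1.5, 2.0, 2.5, 3.0]     # illustrative biomarker values
diseased = [2.8, 3.5, 4.0, 4.5, 5.0]
c, j = youden_cutoff(healthy, diseased)
print(c, j)   # cut-off in the overlap region with the largest J
```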
Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D
2013-04-16
Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
Collaboration pathway(s) using new tools for optimizing 'operational' climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2015-09-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture serves as a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of essential climate variables (ECVs) is explored and described in detail, with dialogue on appropriate rule-based valuations. These optimization tools suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study, reflecting work from the SPIE Remote Sensing Conferences of 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, a complete reversal: should agencies consider disaggregation as the answer? We'll discuss what some academic research suggests. Second, using the GCOS requirements of earth climate observations via ECVs (essential climate variables), many collected from space-based sensors; and accepting their
OPTIMAL DESIGN AND OPERATION OF HELIUM REFRIGERATION SYSTEMS USING THE GANNI CYCLE
Venkatarao Ganni, Peter Knudsen
2010-04-01
The constant pressure ratio process, as implemented in the floating pressure - Ganni cycle, is a new variation on prior cryogenic refrigeration and liquefaction cycle designs that allows for optimal operation and design of helium refrigeration systems. This cycle is based upon the traditional equipment used for helium refrigeration system designs, i.e., constant volume displacement compression and critical flow expansion devices. It takes advantage of the fact that for a given load, the expander sets the compressor discharge pressure and the compressor sets its own suction pressure. This cycle not only provides an essentially constant system Carnot efficiency over a wide load range, but invalidates the traditional philosophy that the ('TS') design condition is the optimal operating condition for a given load using the as-built hardware. As such, the floating pressure - Ganni cycle is a solution that reduces energy consumption while increasing the reliability, flexibility and stability of these systems over a wide operating range and different operating modes, and is applicable to most existing plants. This paper explains the basic theory behind this cycle's operation and contrasts it with the traditional operational philosophies presently used.
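The Carnot benchmark against which the cycle's "essentially constant Carnot efficiency" is measured is easy to compute: the minimum input work per watt of refrigeration at T_cold rejecting heat at T_warm is (T_warm − T_cold)/T_cold. The numbers below are textbook values for 4.5 K helium service, not measurements from a Ganni-cycle plant, and the 25% Carnot fraction is a hypothetical plant figure.

```python
# Back-of-envelope Carnot benchmark for helium refrigeration.

def carnot_work_per_watt(t_cold, t_warm=300.0):
    """Minimum (Carnot) input work per watt of cooling, dimensionless."""
    return (t_warm - t_cold) / t_cold

def input_power(load_watts, t_cold, carnot_fraction):
    """Input power for a plant running at a given fraction of Carnot."""
    return load_watts * carnot_work_per_watt(t_cold) / carnot_fraction

print(round(carnot_work_per_watt(4.5), 1))   # ideal W of work per W at 4.5 K
print(round(input_power(100.0, 4.5, 0.25)))  # hypothetical plant at 25% of Carnot
```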
Optimal Reservoir Operation for Hydropower Generation using Non-linear Programming Model
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Jothiprakash, V.
2012-05-01
Hydropower generation is one of the vital components of reservoir operation, especially for a large multi-purpose reservoir. Deriving optimal operational rules for such a large multi-purpose reservoir serving various purposes like irrigation, hydropower and flood control is complex because of the large dimension of the problem, and the complexity is greater if hydropower production is not merely incidental. Thus optimizing the operations of a reservoir serving various purposes requires a systematic study. In the present study, the operations of one such large multi-purpose reservoir, the Koyna reservoir, are optimized for maximizing hydropower production subject to the condition of satisfying the irrigation demands, using a non-linear programming model. The hydropower production from the reservoir is analysed for three different dependable inflow conditions, representing wet, normal and dry years. For each dependable inflow condition, various scenarios have been analyzed based on the constraints on the releases, and the results are compared. The annual power production, combined monthly power production from all the powerhouses, end-of-month storage levels, evaporation losses and surplus are discussed. From the different scenarios, it is observed that more hydropower can be generated for the various dependable inflow conditions if the restrictions on releases are slightly relaxed. The study shows that the Koyna dam has the potential to generate more hydropower.
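The core trade-off can be sketched as a toy two-period schedule: maximize power (proportional to release times head) subject to an irrigation minimum and storage bounds. Exhaustive search stands in for the paper's non-linear programming model, and all reservoir numbers, including the linear storage-head curve, are invented for illustration.

```python
# Toy two-period release scheduling: maximize release*head subject to an
# irrigation demand floor and storage bounds, via exhaustive search.

def head(storage):
    return 0.1 * storage                     # hypothetical storage-head curve

def best_schedule(storage0, inflows, demand, s_min=50.0, s_max=200.0):
    best, best_power = None, -1.0
    grid = range(0, 101, 5)                  # candidate releases per period
    for r1 in grid:
        for r2 in grid:
            s1 = storage0 + inflows[0] - r1
            s2 = s1 + inflows[1] - r2
            feasible = (r1 >= demand and r2 >= demand
                        and s_min <= s1 <= s_max and s_min <= s2 <= s_max)
            if not feasible:
                continue
            power = r1 * head(storage0) + r2 * head(s1)
            if power > best_power:
                best, best_power = (r1, r2), power
    return best, best_power

schedule, power = best_schedule(100.0, inflows=[60.0, 40.0], demand=20.0)
print(schedule, power)   # holding water early raises the head for period 2
```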
Online Optimization Method for Operation of Generators in a Micro Grid
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi
Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and a storage battery, is expected to become one of the new operating frameworks for distributed generation. However, since precipitous load fluctuations occur in a micro grid because of its smaller capacity compared with the conventional power system, high-accuracy load forecasting and a control scheme to balance supply and demand are needed. In other words, it is necessary to improve the precision of micro grid operation by observing load fluctuations and correcting the start-stop schedule and output of generators online. It is not easy, however, to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after all unit commitment patterns of each generator satisfying the minimum up time and minimum down time constraints are picked up by the enumeration method, the optimal schedule and output of the generators are determined under the other operational constraints by PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
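The PSO stage of such a method can be sketched as follows: for one committed unit pattern, a swarm searches the continuous generator outputs that minimize quadratic fuel cost while a penalty term enforces the supply-demand balance. The cost coefficients, bounds, and penalty weight are illustrative assumptions, not the paper's micro grid model.

```python
import random

def pso_dispatch(demand, p_min, p_max, cost, n_particles=30, iters=300, seed=0):
    """Particle swarm search for generator outputs meeting `demand` at
    minimum quadratic fuel cost.  `cost` holds (a, b, c) per unit with
    fuel cost a + b*p + c*p*p; a penalty enforces the power balance."""
    rng = random.Random(seed)
    n = len(p_min)

    def fitness(p):
        fuel = sum(a + b * x + c * x * x for (a, b, c), x in zip(cost, p))
        return fuel + 1e4 * abs(sum(p) - demand)   # balance penalty

    def clip(x, lo, hi):
        return max(lo, min(hi, x))

    pos = [[rng.uniform(p_min[i], p_max[i]) for i in range(n)]
           for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)

    for _ in range(iters):
        for k in range(n_particles):
            for i in range(n):
                vel[k][i] = (0.7 * vel[k][i]
                             + 1.5 * rng.random() * (pbest[k][i] - pos[k][i])
                             + 1.5 * rng.random() * (gbest[i] - pos[k][i]))
                pos[k][i] = clip(pos[k][i] + vel[k][i], p_min[i], p_max[i])
            if fitness(pos[k]) < fitness(pbest[k]):
                pbest[k] = pos[k][:]
        gbest = min(pbest, key=fitness)
    return gbest

best = pso_dispatch(demand=300.0,
                    p_min=[50.0, 50.0, 50.0], p_max=[200.0, 200.0, 200.0],
                    cost=[(10, 2.0, 0.01), (10, 2.5, 0.01), (10, 3.0, 0.01)])
```

In the paper's scheme an outer enumeration loop would call a dispatch like this once per feasible unit commitment pattern and keep the cheapest combination.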
Optimizing Canal Structure Operation Using Meta-heuristic Algorithms in the Treasure Valley, Idaho
NASA Astrophysics Data System (ADS)
Hernandez, J.; Ha, W.; Campbell, A.
2012-12-01
A computer program previously shown to produce optimal operational solutions for open-channel irrigation conveyance and distribution networks with synthetic data was tested on real-world data. Data gathered from databases and the field by the Boise Project, Idaho, provided input to the hydraulic model for the physical characteristics of the conveyance system. We selected three reaches of the Deer Flat Low Line in the Treasure Valley for optimizing actual gate operations. The 59.1 km of canal, with a maximum capacity of 34 m3/s, irrigates mainly corn, wheat, sugar-beet and potato crops. The computer model uses an accuracy-based learning classifier system (XCS) with an embedded genetic algorithm to produce optimal rules for gate structure operation in irrigation canals. Rules are generated through the exploration and exploitation of the genetic algorithm population, with the support of RootCanal, an unsteady-state hydraulic simulation model. The objective function was set to satisfy variable demand along three reaches while minimizing water level deviations from target. All canal gate structures operate simultaneously while maintaining water depth near target values during variable-demand periods, with a hydraulically stabilized system. Notably, even this simple three-reach problem requires several thousand simulations, run over several continuous days, to find plausible solutions. The model is currently simulating the Deer Flat Low Line Canal in Caldwell, Idaho with promising results. The population evolution is measured by a fitness parameter, which shows that canal structure operations generated by the model are improving towards plausible solutions. This research is one step forward for optimizing the way we use and manage water resources. Relying on management practices of the past will no longer work in a world that is impacted by global climate variability.
An optimized structure on FPGA of key point description in SIFT algorithm
NASA Astrophysics Data System (ADS)
Xu, Chenyu; Peng, Jinlong; Zhu, En; Zou, Yuxin
2015-12-01
SIFT is one of the most significant and effective algorithms for describing image features in the field of image matching. Implementing the SIFT algorithm in a hardware environment is clearly valuable but difficult. In this paper, we mainly discuss the realization of the key point description stage of the SIFT algorithm, along with the matching process. In key point description, we propose a new method of generating histograms that avoids the rotation of adjacent regions while preserving rotational invariance. In matching, we replace the conventional Euclidean distance with the Hamming distance. The experimental results fully demonstrate that the proposed structure is real-time, accurate, and efficient. Future work is still needed to improve its performance in harsher conditions.
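The appeal of the Hamming substitution is that, for binarized descriptors, distance reduces to an XOR followed by a popcount, which maps to trivial FPGA logic. A minimal software sketch, assuming hypothetical 8-bit binarized descriptors (the paper does not specify the binarization):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit patterns stored as ints:
    XOR, then count set bits (a popcount in hardware)."""
    return bin(a ^ b).count("1")

def match(query, refs):
    """Index of the reference descriptor nearest to `query` in Hamming distance."""
    return min(range(len(refs)), key=lambda i: hamming(query, refs[i]))

# Hypothetical 8-bit binarized descriptors.
refs = [0b10110010, 0b01001101, 0b11110000]
idx = match(0b10110011, refs)   # query is one bit away from refs[0]
```

A Euclidean comparison would need per-dimension subtraction, squaring, and accumulation; the XOR/popcount form needs neither multipliers nor square roots, which is why it suits the hardware structure described above.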
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Turkey Point, Units 3 and 4; Application and Amendment to Facility Operating License Involving Proposed No Significant Hazards Consideration Determination AGENCY: Nuclear Regulatory Commission. ACTION: License amendment request; opportunity...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-21
... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc., Entergy Nuclear Indian Point Unit 2, LLC, Issuance of Director's Decision Notice is hereby given that the Deputy Director, Reactor Safety Programs, Office of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-07
... notice appearing in the Federal Register on April 3, 2013 (78 FR 20144), by extending the original public... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit No. 3 Extension of...
40 CFR Table 3 to Subpart Vvvv of... - MACT Model Point Value Formulas for Open Molding Operations 1
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true MACT Model Point Value Formulas for Open Molding Operations 1 3 Table 3 to Subpart VVVV of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES...
We anticipate that future laboratory results will verify our preliminary findings that the BSF is capable of removing approximately 99% of enteric bacteria and roughly 90% of enteric viruses as currently configured. We hope that by understanding the operating conditions and me...
NASA Technical Reports Server (NTRS)
Roberts, Craig; Case, Sara; Reagoso, John; Webster, Cassandra
2015-01-01
The Deep Space Climate Observatory mission launched on February 11, 2015, and was inserted onto a transfer trajectory toward a Lissajous orbit around the Sun-Earth L1 libration point. This paper presents an overview of the baseline transfer orbit and early mission maneuver operations leading up to the start of nominal science orbit operations. In particular, the analysis and performance of the spacecraft insertion, mid-course correction maneuvers, and the deep-space Lissajous orbit insertion maneuvers are discussed, comparing the baseline orbit with actual mission results and highlighting mission and operations constraints.
Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D
2006-09-19
Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
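The core of constructing such an optimal reference is a least-squares fit: within each subsection, find coefficients c minimizing ||science − Σ_k c_k ref_k||². A minimal pure-Python sketch, with toy flattened pixel lists standing in for image subsections (the real algorithm operates on many subsections of 2-D frames):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def optimal_reference(science, refs):
    """Coefficients c minimizing ||science - sum_k c_k refs[k]||^2
    over one image subsection, via the normal equations."""
    n = len(refs)
    A = [[sum(refs[i][p] * refs[j][p] for p in range(len(science)))
          for j in range(n)] for i in range(n)]
    b = [sum(refs[i][p] * science[p] for p in range(len(science)))
         for i in range(n)]
    return solve(A, b)

# Toy subsection: the science frame is an exact mix of two reference frames,
# so the fitted coefficients should recover the mixing weights.
r1 = [1.0, 2.0, 3.0, 4.0]
r2 = [0.0, 1.0, 0.0, 1.0]
sci = [0.5 * a + 2.0 * b for a, b in zip(r1, r2)]
coeffs = optimal_reference(sci, [r1, r2])
```

Optimizing each subsection independently, as the abstract describes, means running this fit once per subsection with its own pixel vectors.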
An intelligent factory-wide optimal operation system for continuous production process
NASA Astrophysics Data System (ADS)
Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping
2016-03-01
In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.
How does network design constrain optimal operation of intermittent water supply?
NASA Astrophysics Data System (ADS)
Lieb, Anna; Wilkening, Jon; Rycroft, Chris
2015-11-01
Urban water distribution systems do not always supply water continuously or reliably. As pipes fill and empty, pressure transients may contribute to degraded infrastructure and poor water quality. To help understand and manage this undesirable side effect of intermittent water supply--a phenomenon affecting hundreds of millions of people in cities around the world--we study the relative contributions of fixed versus dynamic properties of the network. Using a dynamical model of unsteady transition pipe flow, we study how different elements of network design, such as network geometry, pipe material, and pipe slope, contribute to undesirable pressure transients. Using an optimization framework, we then investigate to what extent network operation decisions such as supply timing and inflow rate may mitigate these effects. We characterize some aspects of network design that make them more or less amenable to operational optimization.
A study on the influence of operating circuit on the position of emission point of fluorescent lamp
NASA Astrophysics Data System (ADS)
Uetsuki, Tadao; Genba, Yuki; Kanda, Takashi
2009-10-01
High-efficiency fluorescent lamp systems driven at high frequency are very popular for general lighting. It is therefore very beneficial to be able to predict a lamp's life before it dies, because users can then buy a new lamp just before failure and need not keep stocks. In order to judge the lifetime of a lamp, it is very useful to know where the emission point is on the electrode filament. Regarding a method for locating the emission point, it has been reported that the distance from the emission point to the end of the filament can be calculated by measuring the voltage across the filament and the currents flowing into both ends of the filament. The lamp's life can be predicted by tracking the movement of the emission point with operating time. It is therefore very important to confirm whether the movement of the emission point changes when the operating circuit is changed. The authors investigated the difference in how the emission point moved for two very popular lamp systems. One system had an electronic ballast with an auxiliary power source for heating the cathode. The other system had an electronic ballast with no such source, but with a capacitor connected in parallel with the lamp. In this presentation these measurement results are reported.
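The abstract only states that the emission point can be located from the filament voltage and the two end currents; it does not give the formula. As a hedged illustration of how such a calculation could work, assume a uniformly resistive filament of total resistance r_fil in which the two lead currents i_a and i_b meet at the emission point, located at fraction x of the length from end A; the end-to-end voltage is then v_fil = i_a·x·r_fil − i_b·(1−x)·r_fil, which can be solved for x. All symbols and the circuit model here are hypothetical, not taken from the paper.

```python
def emission_point_fraction(v_fil, i_a, i_b, r_fil):
    """Fractional position of the emission point from end A, under the
    assumed model of a uniformly resistive filament whose two end
    currents meet at the emission point:
        v_fil = i_a * x * r_fil - i_b * (1 - x) * r_fil
    """
    return (v_fil + i_b * r_fil) / (r_fil * (i_a + i_b))

# Symmetry check: equal end currents and zero net filament voltage
# place the emission point at the midpoint (x = 0.5).
x = emission_point_fraction(v_fil=0.0, i_a=0.2, i_b=0.2, r_fil=10.0)
```

Tracking x over operating hours would give the emission point trajectory the authors compare between the two ballast circuits.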
Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan
2015-01-01
To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585
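The selection of a cut-off from sensitivity and specificity can be sketched with Youden's J statistic (sensitivity + specificity − 1) as a stand-in criterion; the study additionally weighed cost per case identified, which is not modeled here. The FPG values and disease labels below are synthetic illustrations, not the Harbin data.

```python
def optimal_cutoff(values, labels):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    `labels` are 1 for OGTT-confirmed cases, 0 otherwise; a value at or
    above the cut-off counts as screen-positive."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, float("-inf")
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cut)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cut)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Synthetic FPG-like values (mmol/l): cases cluster at and above 5.6.
fpg    = [4.8, 5.0, 5.2, 5.4, 5.6, 6.1, 6.5, 7.2]
status = [0,   0,   0,   0,   1,   1,   1,   1]
cut, j = optimal_cutoff(fpg, status)
```

In a two-step screening strategy, only subjects at or above `cut` would proceed to the confirmatory oral glucose tolerance test, which is what drives the cost-per-case figures quoted above.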
Better Redd than Dead: Optimizing Reservoir Operations for Wild Fish Survival During Drought
NASA Astrophysics Data System (ADS)
Adams, L. E.; Lund, J. R.; Quiñones, R.
2014-12-01
Extreme droughts are difficult to predict and may incur large economic and ecological costs. Dam operations during drought usually aim to minimize economic costs. However, dam operations also offer an opportunity to increase wild fish survival under difficult conditions. Here, we develop a probabilistic optimization approach to reservoir release scheduling that maximizes fish survival in regulated rivers. A case study applies the approach to wild Fall-run Chinook Salmon below Folsom Dam on California's American River. Our results indicate that releasing more water early in a drought will, on average, save more wild fish over the long term.
A 10 kW small solar power station and operations optimization
NASA Astrophysics Data System (ADS)
Gloel, J.
1980-12-01
Characteristics of the plant are: unpressurized hot water (95 C) for transport and storage of thermal energy; screw expansion engine operating with freon R114 for thermo-mechanical energy conversion; and an ac three phase generator interconnected with a static converter to provide line voltages and frequencies. A solar collector simulator was used in testing. Selection, assembly, and function of the system and each subsystem are detailed. Component performance results are listed. Component by component operation optimization results show the importance of correctly regulating the thermal cycle temperature as well as the freon expansion pump circuit pressure.
PLIO: a generic tool for real-time operational predictive optimal control of water networks.
Cembrano, G; Quevedo, J; Puig, V; Pérez, R; Figueras, J; Verdejo, J M; Escaler, I; Ramón, G; Barnet, G; Rodríguez, P; Casas, M
2011-01-01
This paper presents a generic tool, named PLIO, for implementing real-time operational control of water networks. Control strategies are generated using predictive optimal control techniques. The tool supports flow management in a large water supply and distribution system including reservoirs, open-flow channels for water transport, water treatment plants, pressurized water pipe networks, tanks, flow/pressure control elements and a telemetry/telecontrol system. Predictive optimal control is used to generate flow control strategies from the sources to the consumer areas to meet future demands with appropriate pressure levels, optimizing operational goals such as network safety volumes and flow control stability. PLIO allows the user to build the network model graphically and then automatically generates the model equations used by the predictive optimal controller. Additionally, PLIO can work off-line (in simulation) and on-line (in real-time mode). The case study of Santiago, Chile is presented to exemplify the control results obtained using PLIO off-line (in simulation). PMID:22097020
Optimal control of a boiling water reactor load-following operation
Lin, C.; Lin, Z.P.; Jiang, W.J. . Dept. of Nuclear Engineering)
1989-06-01
The authors describe a method based on a forward dynamic programming technique applied to load-following control of a boiling water reactor. The control strategy obtained is optimal and satisfies operating constraints. A coarse-mesh, one-dimensional model using two-group diffusion theory with Doppler, void, and xenon feedbacks is developed to reduce computer time. The control rods are assumed to be fixed during load maneuvers, and variations in core power are accomplished through core flow.
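The forward dynamic programming idea can be sketched on a toy version of the problem: discretize the core-flow control, track a load demand with a squared-error stage cost, and restrict stage-to-stage flow changes with a ramp limit. The linear flow-to-power map and all numbers are assumptions for illustration; the paper's model includes diffusion physics and feedbacks that are omitted here.

```python
def forward_dp(demand, flows, max_step, power_gain=1.0):
    """Forward dynamic programming: choose one core-flow level per stage to
    track `demand` (squared tracking error) subject to a flow ramp limit.
    Power is modeled crudely as power_gain * flow."""
    T = len(demand)
    INF = float("inf")
    cost = [[INF] * len(flows) for _ in range(T)]
    back = [[None] * len(flows) for _ in range(T)]
    for j, f in enumerate(flows):
        cost[0][j] = (power_gain * f - demand[0]) ** 2
    for t in range(1, T):
        for j, f in enumerate(flows):
            stage = (power_gain * f - demand[t]) ** 2
            for i, g in enumerate(flows):
                if abs(f - g) <= max_step and cost[t-1][i] + stage < cost[t][j]:
                    cost[t][j] = cost[t-1][i] + stage
                    back[t][j] = i
    # Recover the optimal flow sequence from the backpointers.
    j = min(range(len(flows)), key=lambda k: cost[T-1][k])
    path = [j]
    for t in range(T - 1, 0, -1):
        j = back[t][j]
        path.append(j)
    return [flows[j] for j in reversed(path)]

flows = [60.0, 70.0, 80.0, 90.0, 100.0]
plan = forward_dp(demand=[60.0, 80.0, 100.0, 100.0, 70.0],
                  flows=flows, max_step=20.0)
```

The forward sweep keeps, for every reachable flow level, the cheapest cost-to-come, so the final backtrace yields a ramp-feasible schedule with minimum total tracking error.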
Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O
2008-01-01
Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams with data-centric annotations derived from our existing data warehouse. PMID:18999220
Del Villar, Ignacio; Cruz, Jose L; Socorro, Abian B; Corres, Jesus M; Matias, Ignacio R
2016-08-01
This work presents a refractive index sensor based on a long period fiber grating (LPFG) made in a reduced-cladding fiber whose low-order cladding modes have their turning point at large wavelengths. The combination of these parameters results in an improved sensitivity of 8734 nm/refractive index unit (RIU) for the LP_{0,3} mode in the 1400-1650 nm wavelength range. This value is similar to that obtained with thin-film-coated LPFGs, which makes it possible to avoid the coating deposition step. The numerical simulations are in agreement with the experimental results. PMID:27505736
An optimal cut-off point for the calving interval may be used as an indicator of bovine abortions.
Bronner, Anne; Morignat, Eric; Gay, Emilie; Calavas, Didier
2015-10-01
The bovine abortion surveillance system in France aims to detect as early as possible any resurgence of bovine brucellosis, a disease of which the country has been declared free since 2005. It relies on the mandatory notification and testing of each aborting cow, but under-reporting is high. This research uses a new and simple approach which considers the calving interval (CI) as a "diagnostic test" to determine optimal cut-off point c and estimate diagnostic performance of the CI to identify aborting cows, and herds with multiple abortions (i.e. three or more aborting cows per calving season). The period between two artificial inseminations (AI) was considered as a "gold standard". During the 2006-2010 calving seasons, the mean optimal CI cut-off point for identifying aborting cows was 691 days for dairy cows and 703 days for beef cows. Depending on the calving season, production type and scale at which c was computed (individual or herd), the average sensitivity of the CI varied from 42.6% to 64.4%; its average specificity from 96.7% to 99.7%; its average positive predictive value from 27.6% to 65.4%; and its average negative predictive value from 98.7% to 99.8%. When applied to the French bovine population as a whole, this indicator identified 2-3% of cows suspected to have aborted, and 10-15% of herds suspected of multiple abortions. The optimal cut-off point and CI performance were consistent over calving seasons. By applying an optimal CI cut-off point to the cattle demographics database, it becomes possible to identify herds with multiple abortions, carry out retrospective investigations to find the cause of these abortions and monitor a posteriori compliance of farmers with their obligation to report abortions for brucellosis surveillance needs. Therefore, the CI could be used as an indicator of abortions to help improve the current mandatory notification surveillance system. PMID:26318526
A model for optimal operation of land-treatment sites for oily wastes.
Unlü, K; Kivanç, S
2001-06-01
Land treatment has been extensively used as a disposal technology for oily wastes. Effective management of land treatment sites requires optimal operation of the system in order to achieve the fastest and most complete degradation of petroleum hydrocarbons without contaminating the environment. This paper describes a model that can be used for optimising the operation of land treatment sites for oily wastes. The model is composed of a system simulator and an optimisation submodel. Conceptually, the system simulation submodel comprises waste mixing zone, lower treatment zone and aquifer modules. The system simulation model allows for periodic waste applications and determines the spatial and temporal variation of state variables such as phase-summed (total) and aqueous-phase contaminant concentrations and water content in the system. The optimisation submodel, coupled with the system simulator, determines the optimal values of system control variables such as waste loading rate, infiltration rate, water content, frequency of waste application and the dimensions of the land treatment site. Optimisation of these control variables is accomplished by maximising hydrocarbon mass removal from the waste mixing zone under the constraint of satisfying a prespecified water quality criterion in the aquifer. Selected model applications are presented to demonstrate the applicability and utility of the model, including determination of the optimal operating conditions for the land treatment of oily wastes under various site, soil and environmental conditions and practical waste disposal scenarios. PMID:11699857
Optimization of magnetic refrigerators by tuning the heat transfer medium and operating conditions
NASA Astrophysics Data System (ADS)
Ghahremani, Mohammadreza; Aslani, Amir; Bennett, Lawrence; Della Torre, Edward
A new reciprocating Active Magnetic Regenerator (AMR) experimental device has been designed, built and tested to evaluate the effect of the system's parameters on its performance near room temperature. Gadolinium turnings were used as the refrigerant, silicon oil as the heat transfer medium, and a magnetic field of 1.3 T was cycled. This study focuses on the single-stage AMR operating conditions that yield a higher temperature span near room temperature. The main objective is not to report the absolute maximum attainable temperature span of an AMR system, but rather to find the system's optimal operating conditions for reaching that maximum span. The results of this work show that there are an optimal operating frequency, heat transfer fluid flow rate, flow duration, and displaced volume ratio in an AMR system. It is expected that such optimization and the results provided herein will permit the future design and development of more efficient room-temperature magnetic refrigeration systems.
Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo
2011-01-01
In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077
NASA Astrophysics Data System (ADS)
Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Rahman, Asmahanim Ab
2014-09-01
In a labor-intensive manufacturing system, operator allocation is one of the most important decisions in determining the efficiency of the system. In this paper, ten operator allocation alternatives are identified using the computer simulation package ARENA. Two inputs (average wait time and average cycle time) and two outputs (average operator utilization and total packet value) are generated for each alternative. Four Data Envelopment Analysis (DEA) models (CCR, BCC, MCDEA and AHP/DEA) are used to determine the optimal operator allocation at an SME food manufacturing company in Selangor. All four DEA models showed that the optimal allocation is six operators at the peeling process, three operators at the washing and slicing process, three operators at the frying process and two operators at the packaging process.
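The CCR model scores each alternative by the best achievable ratio of weighted outputs to weighted inputs, normalized so no alternative exceeds 1; it is normally solved as a linear program. As a self-contained stand-in, the sketch below approximates that optimum by sampling weight vectors (a lower-bound approximation that becomes exact in the limit). The unit data are hypothetical, not the paper's simulation outputs.

```python
import random

def ccr_efficiency(units, target, samples=2000, seed=0):
    """Approximate the CCR efficiency of units[target].
    Each unit is (inputs, outputs).  For weights (v, u), the score of unit k
    is (u . outputs_k) / (v . inputs_k); the CCR efficiency is the target's
    best score after scaling so that no unit's score exceeds 1.  Random
    weight sampling approximates the LP solution from below."""
    rng = random.Random(seed)
    nx, ny = len(units[0][0]), len(units[0][1])
    best = 0.0
    for _ in range(samples):
        v = [rng.random() + 1e-9 for _ in range(nx)]
        u = [rng.random() + 1e-9 for _ in range(ny)]
        scores = [sum(ui * yi for ui, yi in zip(u, y)) /
                  sum(vi * xi for vi, xi in zip(v, x)) for x, y in units]
        best = max(best, scores[target] / max(scores))
    return best

# Hypothetical alternatives: (inputs = [wait, cycle], outputs = [util, packets]).
units = [([2.0, 4.0], [0.9, 120.0]),   # dominates the others
         ([3.0, 6.0], [0.6, 80.0]),    # scaled-down copy of unit 0
         ([4.0, 5.0], [0.7, 90.0])]
eff0 = ccr_efficiency(units, target=0)
eff1 = ccr_efficiency(units, target=1)
```

Unit 1 uses 1.5x the inputs of unit 0 to produce 2/3 of its outputs, so its CCR efficiency is (2/3)/(3/2) = 4/9 regardless of the weights, while the dominant unit 0 scores 1.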
NASA Astrophysics Data System (ADS)
Truong, Binh Duc; Phu Le, Cuong; Halvorsen, Einar
2015-12-01
This paper presents experiments on how to approach the physical limits on power from vibration energy harvesting under displacement-constrained operation. A MEMS electrostatic vibration energy harvester with voltage control of the system stiffness is used for this purpose. The power saturation problem, which arises when the proof-mass displacement reaches its maximum amplitude at sufficient acceleration amplitude, is shifted to higher accelerations by load optimization and a tunable electromechanical coupling k2. Measurement results show that the harvested power can be made to follow that of the optimal velocity-damped generator even over a range of accelerations at which the displacement constraint binds. Compared to the saturated power, the power increases by a factor of 1.5 with the optimal load and an electromechanical coupling k2=8.7%, and by a factor of 2.3 for a higher coupling k2=17.9%. The obtained system effectiveness exceeds 60% under this optimization. This work also provides a first demonstration of reaching optimal power in the intermediate acceleration range between the two extremes of maximum efficiency and maximum power transfer.
Results of JET operation with continuous carbon and beryllium X-point target plates
NASA Astrophysics Data System (ADS)
Lowry, C. G.; Ady, W. N.; Campbell, D. J.; Carman, P.; Clement, S.; Deksnis, E. B.; Gondhalekar, A.; Harbour, P. J.; Horton, L.; Janeschitz, G.; Lesourd, M.; Lingertat, J.; Pick, M. A.; Saibene, G.; Summers, D. D. R.; Thomas, P. R.
1992-12-01
The 1991/92 JET experimental campaign assessed the performance of three different toroidally continuous X-point target plates. The main differences were in the tile material, beryllium and carbon, and the presence of exposed edges. These three configurations have been tested up to power levels in excess of 22 MW and with gas fuelling at the X-point and in the midplane. With the beryllium a radiating divertor was achieved by puffing deuterium into the X-point region, while rapid ELMs resulted from deuterium puffing on the carbon target. The investigation into the importance of small edges, up to 1.5 mm, yielded some interesting results. Although the surface temperature rise was substantially reduced by eliminating exposed tile edges, the onset of the carbon bloom was not delayed by a similar amount. In this paper a model is presented which can explain this and other features of the bloom.
NASA Astrophysics Data System (ADS)
Braun, Robert Joseph
The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below those of near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in the process design of the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell
78 FR 44881 - Drawbridge Operation Regulation; York River, Between Yorktown and Gloucester Point, VA
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-25
... maintenance work on the moveable spans on the Coleman Memorial Bridge. This temporary deviation allows the... operating regulation set out in 33 CFR 117.1025, to facilitate maintenance of the moveable spans on...
The use of experimental design to find the operating maximum power point of PEM fuel cells
Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria; Olteanu, Valentin; Pitorac, Cristina; Drugan, Elena
2015-03-10
Proton Exchange Membrane (PEM) fuel cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a PEM fuel cell mathematical model based on the Design of Experiments methodology is described. Design of Experiments provides a very efficient way to obtain a mathematical model of the studied multivariable system with only a few experiments. The obtained results can be used for optimization and control of PEM fuel cell systems.
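The Design-of-Experiments approach described above can be illustrated with a minimal sketch: fit a low-order polynomial response surface to a small set of planned experiments. The two factors, their levels, and the power readings below are hypothetical, not data from the paper.

```python
import numpy as np

# Hypothetical 2-level factorial design plus a center point:
# factors are cell temperature (K) and air flow (slpm).
X = np.array([
    [333.0, 2.0],
    [333.0, 4.0],
    [353.0, 2.0],
    [353.0, 4.0],
    [343.0, 3.0],   # center point
])
y = np.array([41.0, 44.5, 46.0, 52.5, 47.0])   # measured stack power (W), invented

# Response-surface model: y ~ b0 + b1*T + b2*Q + b12*T*Q (main effects + interaction)
T, Q = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(T), T, Q, T * Q])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted surface can then predict an untried operating point:
pred = b @ [1.0, 348.0, 3.5, 348.0 * 3.5]
```

With only five runs, the fitted surface already supports the optimization and control uses mentioned in the abstract, at the cost of assuming a low-order model.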
Optimization principle of operating parameters of heat exchanger by using CFD simulation
NASA Astrophysics Data System (ADS)
Mičieta, Jozef; Jiří, Vondál; Jandačka, Jozef; Lenhard, Richard
2016-03-01
Design of effective heat transfer devices and minimization of their costs are important goals in industry, both for engineers and for users, because of the wide-scale use of heat exchangers. The traditional approach to design is an iterative process in which design parameters are changed gradually until a satisfactory solution is achieved. Because the design process of a heat exchanger depends heavily on the experience of the engineer, the use of computational software is a major advantage in terms of time. Determination of the operating parameters of the heat exchanger and the subsequent estimation of operating costs have a major impact on the expected profitability of the device. On the one hand there are the material and production costs, which are immediately reflected in the price of the device; on the other hand, there are somewhat hidden costs related to the economic operation of the heat exchanger. The economic balance of operation significantly affects the technical solution and accompanies the design of the heat exchanger from its inception. It is therefore important not to underestimate the choice of operating parameters. The article describes an optimization procedure for choosing cost-effective operating parameters for a simple double-pipe heat exchanger by using CFD software, and a subsequent proposal to modify its design for more economical operation.
78 FR 20144 - Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... exemption and FONSI were published in the Federal Register (FR) on the same day the exemption was issued (72 FR 55254). The exemption was then implemented at Indian Point Unit 3. A draft EA for public comment.... See 75 FR 20248 (April 19, 2010). That 2010 rulemaking expanded the scope of an existing...
78 FR 26248 - Drawbridge Operation Regulation; York River, between Yorktown and Gloucester Point, VA
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
... draw of the US 17/George P. Coleman Memorial Swing Bridge across the York River, at mile 7.0, between Gloucester Point and Yorktown, VA. The deviation is necessary to facilitate electrical work on the George P... Avenue SE., Washington, DC 20590, between 9 a.m. and 5 p.m., Monday through Friday, except...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-09
... draw of the US 17/George P. Coleman Memorial Swing Bridge across the York River, at mile 7.0, between Gloucester Point and Yorktown, VA. This deviation is necessary to facilitate maintenance on the George P... Avenue SE., Washington, DC 20590, between 9 a.m. and 5 p.m., Monday through Friday, except...
Nelson, Stacy; English, Shawn; Briggs, Timothy
2016-05-06
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.
NASA Astrophysics Data System (ADS)
Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.
2015-11-01
Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. The approach aims to better reconcile riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that could meet both human and ecosystem needs. The wide spread of Pareto-front (optimal) solutions makes this methodology attractive to water resources managers, as it allows decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.
Decision support system for optimal reservoir operation modeling within sediment deposition control.
Hadihardaja, Iwan K
2009-01-01
Suspended sediment carried by surface runoff through the watershed affects reservoir sustainability by reducing storage capacity. The purpose of this study is to introduce a reservoir operation model aimed at minimizing sediment deposition and maximizing energy production, so as to obtain an optimal decision policy for both objectives. The reservoir sediment-control operation model is formulated using Non-Linear Programming with an iterative procedure based on a multi-objective measurement in order to achieve an optimal decision policy; the model is established in association with a relationship between stream inflow and sediment rate developed using an Artificial Neural Network. A trade-off evaluation is introduced to generate a strategy for controlling sediment deposition at the same level of target ratio while producing hydroelectric energy. The case study is carried out at the Sanmenxia Reservoir in China, where redesign and reconstruction have been accomplished; however, this model deals only with the original design and focuses on a wet-year operation. This study also observes a five-year operation period to show the accumulation of sediment and its impact on reservoir storage capacity. PMID:19214002
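The trade-off evaluation between energy production and sediment deposition can be sketched with a toy weighted-objective sweep: for each weight, pick the release policy that maximizes the weighted score. The policies and numbers below are invented for illustration, not from the study.

```python
# Toy trade-off: each candidate release policy is scored as
# w*energy - (1-w)*sediment, and the best policy is tracked as w varies.
policies = {                 # name: (energy produced, sediment deposited)
    "low_release":  (60.0, 10.0),
    "mid_release":  (80.0, 18.0),
    "high_release": (95.0, 30.0),
}

def best_policy(w):
    """Policy maximizing w*energy - (1-w)*sediment for a weight w in [0, 1]."""
    return max(policies, key=lambda p: w * policies[p][0] - (1 - w) * policies[p][1])

# Sweeping the weight exposes how the preferred policy shifts as the
# operator values energy more (high w) or sediment control more (low w).
trade_off = {w: best_policy(w) for w in (0.2, 0.5, 0.8)}
```

Sweeping the weight is a stand-in for the paper's iterative multi-objective procedure; real models would evaluate each policy with the inflow-sediment ANN rather than fixed numbers.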
Boskovic, Goran; Jovicic, Nebojsa
2015-12-01
This paper concerns the development of a methodology aimed at determining the optimal number of waste bins as well as optimizing the location of collection points. The methodology was based on a geographic information system, which handled different sets of information, such as street directions, spatial location of objects and number of inhabitants, location of waste bins, and radius of their coverage. The study was conducted in a district in the central area of the city of Kragujevac. Due to a lack of information about the existing situation, all necessary data were collected by fieldwork and by using GPS equipment. The results indicated a reduction of 24% in the number of collection points and 33.5% in the number of waste bins, without reducing the quality of the provided services, leading to cost and time savings for waste collection as well as environmental benefits. All users of the services were covered within a 75-m radius, and the bins are used more efficiently. Based on the reduction in the number of waste bins, a total saving of €26,000 may be achieved. In addition, the time for waste collection was reduced, resulting in a saving of €1700 per year in fuel costs, as well as a reduction of 4.5 tons of CO2 emitted into the atmosphere. PMID:26467320
Jenny, Richard M; Jasper, Micah N; Simmons, Otto D; Shatalov, Max; Ducoste, Joel J
2015-10-15
Alternative disinfection sources such as ultraviolet light (UV) are being pursued to inactivate pathogenic microorganisms such as Cryptosporidium and Giardia, while simultaneously reducing the risk of exposure to carcinogenic disinfection by-products (DBPs) in drinking water. UV-LEDs offer a UV disinfecting source that does not contain mercury, has the potential for long lifetimes, is robust, and has a high degree of design flexibility. However, the increased flexibility in design options adds a substantial level of complexity when developing a UV-LED reactor, particularly with regard to reactor shape, size, spatial orientation of light, and germicidal emission wavelength. Anticipating that LEDs are the future of UV disinfection, new methods are needed for designing such reactors. In this research study, a new design paradigm using a point-of-use UV-LED disinfection reactor was evaluated. ModeFrontier, a numerical optimization platform, was coupled with COMSOL Multiphysics, a computational fluid dynamics (CFD) software package, to generate an optimized UV-LED continuous-flow reactor. Three optimality conditions were considered: a single-objective analysis minimizing input supply power while achieving at least 2.0 log10 inactivation of Escherichia coli ATCC 11229, and two multi-objective analyses (one of which maximized the log10 inactivation of E. coli ATCC 11229 while minimizing the supply power). All tests were completed at a flow rate of 109 mL/min and 92% UVT (measured at 254 nm). The numerical solution for the first objective was validated experimentally using biodosimetry. The optimal design predictions displayed good agreement with the experimental data and contained several non-intuitive features, particularly in the UV-LED spatial arrangement, where the lights were unevenly populated throughout the reactor. The optimal designs may not have been developed by experienced designers due to the increased degrees of
NASA Technical Reports Server (NTRS)
Fabinsky, Beth
2006-01-01
WISE, the Wide Field Infrared Survey Explorer, is scheduled for launch in June 2010. The mission operations system for WISE requires a software modeling tool to help plan, integrate and simulate all spacecraft pointing and verify that no attitude constraints are violated. In the course of developing the requirements for this tool, an investigation was conducted into the design of similar tools for other space-based telescopes. This paper summarizes the ground software and processes used to plan and validate pointing for a selection of space telescopes; with this information as background, the design for WISE is presented.
Xu, Wu; Xiao, Jie; Zhang, Jian; Wang, Deyu; Zhang, Jiguang
2009-07-07
The selection and optimization of non-aqueous electrolytes for ambient operation of lithium/air batteries has been studied. Organic solvents with low volatility and low moisture absorption are necessary to minimize changes in electrolyte composition and the reaction between the lithium anode and water during the discharge process. It is critical to use electrolytes with high polarity to reduce wetting and flooding of the carbon-based air electrode, leading to improved battery performance. For ambient operation, the viscosity, ionic conductivity, and oxygen solubility of the electrolyte are less important than the polarity of the organic solvents once the electrolyte has reasonable viscosity, conductivity, and oxygen solubility. It has been found that a PC/EC mixture is the best solvent system and LiTFSI is the most feasible salt for ambient operation of Li/air batteries. Battery performance is not very sensitive to the PC/EC ratio or salt concentration.
Long-term energy capture and the effects of optimizing wind turbine operating strategies
NASA Technical Reports Server (NTRS)
Miller, A. H.; Formica, W. J.
1982-01-01
Methods of increasing energy capture without affecting the turbine design were investigated. The emphasis was on optimizing the wind turbine operating strategy. The operating strategy embodies the startup and shutdown algorithm as well as the algorithm for determining when to yaw (rotate) the axis of the turbine more directly into the wind. Using data collected at a number of sites, the time-dependent simulation of a MOD-2 wind turbine using various, site-dependent operating strategies provided evidence that site-specific fine tuning can produce significant increases in long-term energy capture as well as reduce the number of start-stop cycles and yawing maneuvers, which may result in reduced fatigue and subsequent maintenance.
Optimizing the CEBAF Injector for Beam Operation with a Higher Voltage Electron Gun
F.E. Hannon, A.S. Hofler, R. Kazimi
2011-03-01
Recent developments in the DC gun technology used at CEBAF have allowed an increase in operational voltage from 100 kV to 130 kV. In the near future this will be extended further to 200 kV with the purchase of a new power supply. The injector components and layout at this time are designed specifically for 100 kV operation. It is anticipated that, with an increase in gun voltage and optimization of the layout and components for 200 kV operation, the electron bunch length and beam brightness can be improved. This paper explores some upgrade possibilities for a 200 kV gun CEBAF injector through beam dynamics simulations.
Partial difference operators on weighted graphs for image processing on surfaces and point clouds.
Lozes, Francois; Elmoataz, Abderrahim; Lezoray, Olivier
2014-09-01
Partial difference equations (PDEs) and variational methods for image processing on Euclidean domains are very well established because they make it possible to solve a large range of real computer vision problems. With the recent advent of many 3D sensors, there is a growing interest in transposing and solving PDEs on surfaces and point clouds. In this paper, we propose a simple method to solve such PDEs using the framework of PDEs on graphs. This approach enables us to transcribe, for surfaces and point clouds, many models and algorithms designed for image processing. To illustrate our proposal, three problems are considered: (1) p-Laplacian restoration and inpainting; (2) PDE-based mathematical morphology; and (3) active contour segmentation. PMID:25020095
NASA Astrophysics Data System (ADS)
Miller, Arthur C., Jr.; Cuttino, James F.
1997-11-01
This paper describes a new, fast tool servo system designed for fabrication of non-rotationally symmetric components using single point diamond turning machines. A prototype device, designed for flexible interfacing to typical machine tool controllers, will be described along with performance testing data of tilted flat and off-axis conic sections. Evaluation data show that servo produced surfaces have an RMS roughness less than 175 angstroms. Techniques for linearizing the hysteretic effects in the piezoelectric actuator are also discussed. The nonlinear effects due to hysteresis are reduced using a dynamic compensator module in conjunction with a linear controller. The compensator samples the hysteretic voltage/displacement relationship in real time and modifies the effective gain accordingly. Simulation results indicate that errors in the performance of the system caused by hysteresis in the system can be compensated and reduced by 90 percent. Experimental implementation results in an 80 percent reduction in the motion error caused by hysteresis, but peak-to-valley errors are limited by side effects from the compensation. The uncompensated servo system demonstrated a peak-to-valley error of less than 0.80 micrometer for an off-axis conic section turned on-axis.
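The compensator's gain-scheduling idea can be sketched as follows: estimate the actuator's instantaneous gain from the latest voltage/displacement sample and rescale the command so the linear controller sees a roughly constant plant gain. The nominal gain value and the sample numbers are assumptions for illustration, not figures from the paper.

```python
# Assumed nominal piezo gain (um of displacement per volt of drive).
NOMINAL_GAIN_UM_PER_V = 0.10

def compensated_command(target_um, last_volts, last_um):
    """Return a drive voltage for target_um, correcting for the
    hysteresis-dependent gain observed on the previous sample."""
    if last_volts:
        measured_gain = last_um / last_volts   # instantaneous gain estimate
    else:
        measured_gain = NOMINAL_GAIN_UM_PER_V  # no history yet: assume nominal
    return target_um / measured_gain

# An actuator currently delivering only 0.08 um/V gets a boosted command:
v = compensated_command(1.0, last_volts=10.0, last_um=0.8)   # 12.5 V
```

A real implementation would filter the gain estimate over several samples; a single-sample estimate like this one is noise-sensitive, which is consistent with the residual errors the paper attributes to compensation side effects.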
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
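The quantization-with-dithering step described above can be sketched in a few lines: pixels are scaled by a step size q, offset by a known uniform dither, rounded to integers, and the same dither is subtracted on restoration. The image data here is synthetic, and this is a simplified sketch of the scheme rather than the fpack implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def quantize(img, q, dither):
    # Subtractive dithering: add a known uniform offset before rounding.
    return np.round(img / q + dither - 0.5).astype(np.int64)

def dequantize(levels, q, dither):
    # Subtract the same dither to undo the offset on restoration.
    return (levels - dither + 0.5) * q

img = rng.normal(100.0, 5.0, size=(64, 64))   # synthetic CCD-like frame
q = 0.5                                       # coarser q -> higher compression
dither = rng.random(img.shape)                # reproducible from a stored seed
levels = quantize(img, q, dither)
restored = dequantize(levels, q, dither)
max_err = np.abs(restored - img).max()        # bounded by q/2
```

The per-pixel error is bounded by q/2, which is how the method trades compression ratio against photometric and astrometric precision.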
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing creates a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments of produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance. PMID:26093429
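The MVDR formulation the abstract refers to can be sketched for a focal point: w = R⁻¹a / (aᴴR⁻¹a), where a is the steering vector from each source to the focus. The geometry, frequency, free-field monopole model, and identity covariance (pure diagonal loading) below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

c, f = 343.0, 1000.0
k = 2 * np.pi * f / c                        # wavenumber at 1 kHz
xs = np.linspace(-0.5, 0.5, 8)               # 8 loudspeakers on a line (m)
focus = (0.0, 1.0)                           # focal point 1 m in front of array

r = np.hypot(xs - focus[0], focus[1])        # source-to-focus distances
a = np.exp(-1j * k * r) / r                  # free-field monopole steering vector
R = np.eye(len(xs))                          # spatial covariance (sketch only)

Rinv_a = np.linalg.solve(R, a)
w = Rinv_a / (a.conj() @ Rinv_a)             # MVDR weights
# The distortionless constraint holds at the focus: w^H a = 1,
# which is the phase-sensitive property contrasted with energy-based methods.
```

With a nontrivial covariance R (e.g. built from steering vectors to dark-zone control points), the same formula trades focal gain against energy elsewhere.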
Miller, A.C. Jr.; Cuttino, J.F.
1997-08-01
This paper describes a new, fast tool servo system for fabricating non-rotationally symmetric components using single point diamond turning machines. A prototype, designed for flexible interfacing to typical machine tool controllers, will be described along with performance testing data of tilted flat and off-axis conic sections. Evaluation data show that servo produced surfaces have an rms roughness less than 175 angstroms (2-200 {mu}m spatial filter). Techniques for linearizing the hysteretic effects in the piezoelectric actuator are also discussed. The nonlinear effects due to hysteresis are reduced using a dynamic compensator module in conjunction with a linear controller. The compensator samples the hysteretic voltage/displacement relation in real time and modifies the effective gain accordingly. Simulation results indicate that errors in the performance of the system caused by hysteresis can be compensated and reduced by 90%. Experimental implementation results in 80% reduction in motion error caused by hysteresis, but peak-to- valley errors are limited by side effects from the compensation. The uncompensated servo system demonstrated a peak-to-valley error of less than 0.80 micrometer for an off-axis conic section turned on-axis.
NASA Astrophysics Data System (ADS)
Best, R.; Biermann, W.; Reimann, R. C.
1985-01-01
The returned fifteen-ton Solar Absorption Machine (SAM) 015 chiller was given a cursory visual inspection, some obvious problems were remedied, and it was then placed on a test stand to measure its as-received ("dirty") performance. It was then given a standard acid clean, the water side of the tubes was brushed clean, and the machine was retested. The before- and after-cleaning data were compared with equivalent data taken before the machine was shipped. The second part of the work statement was to experimentally demonstrate the technical feasibility of operating the chiller at evaporator temperatures below 0°C (32°F) and to identify any operational problems.
NASA Astrophysics Data System (ADS)
Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.
2015-08-01
In this paper, a new family of shape descriptors called Free Shape Contexts (FSC) is introduced to generalize the existing 3D Shape Contexts. The FSC introduces more degrees of freedom than its predecessor by allowing the level of complexity to vary between its parts. Also, each part of the FSC has an associated activity state which controls whether the part can contribute a feature value. We describe a method of evolving the FSC parameters for the purpose of creating highly discriminative features suitable for detecting specific objects in sparse point clouds. The evolutionary process is built on a genetic algorithm (GA) which optimizes the parameters with respect to cross-validated overall classification accuracy. The GA manipulates both the structure of the FSC and the activity flags, allowing it to perform an implicit feature selection alongside the structure optimization by turning off segments which do not augment the discriminative capabilities. We apply the proposed descriptor to the problem of detecting single standing dead tree trunks from ALS point clouds. The experiment, carried out on a set of 285 objects, reveals that an FSC optimized through a GA with manually tuned recombination parameters is able to attain a classification accuracy of 84.2%, yielding an increase of 4.2 pp compared to features derived from eigenvalues of the 3D covariance matrix. Also, we address the issue of automatically tuning the GA recombination metaparameters. For this purpose, a fuzzy logic controller (FLC) which dynamically adjusts the magnitude of the recombination effects is co-evolved with the FSC parameters in a two-tier evolution scheme. We find that it is possible to obtain an FLC which retains the classification accuracy of the manually tuned variant, thereby limiting the need for guessing the appropriate meta-parameter values.
Rounds, Stewart A.
2007-01-01
Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate
Deriving multiple near-optimal solutions to deterministic reservoir operation problems
NASA Astrophysics Data System (ADS)
Liu, Pan; Cai, Ximing; Guo, Shenglian
2011-08-01
Even deterministic reservoir operation problems with a single objective function may have multiple near-optimal solutions (MNOS) whose objective values are equal or sufficiently close to the optimum. MNOS is valuable for practical reservoir operation decisions because having a set of alternatives from which to choose allows reservoir operators to explore multiple options whereas the traditional algorithm that produces a single optimum does not offer them this flexibility. This paper presents three methods: the near-shortest paths (NSP) method, the genetic algorithm (GA) method, and the Markov chain Monte Carlo (MCMC) method, to explore the MNOS. These methods, all of which require a long computation time, find MNOS using different approaches. To reduce the computation time, a narrower subspace, namely a near-optimal space (NOSP, described by the maximum and minimum bounds of MNOS) is derived. By confining the MNOS search within the NOSP, the computation time of the three methods is reduced. The proposed methods are validated with a test function before they are examined with case studies of both a single reservoir (the Three Gorges Reservoir in China) and a multireservoir system (the Qing River Cascade Reservoirs in China). It is found that MNOS exists for the deterministic reservoir operation problems. When comparing the three methods, the NSP method is unsuitable for large-scale problems but provides a benchmark to which solutions of small- and medium-scale problems can be compared. The GA method can produce some MNOS but is not very efficient in terms of the computation time. Finally, the MCMC method performs best in terms of goodness-of-fit to the benchmark and computation time, since it yields a wide variety of MNOS based on all retained intermediate results as potential MNOS. Two case studies demonstrate that the MNOS identified in this study are useful for real-world reservoir operation, such as the identification of important operation time periods and
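The MNOS concept can be shown with a toy deterministic problem: a 3-period release schedule with releases in {0, 1, 2} units under a fixed water budget and a concave per-period benefit. All numbers are invented for illustration; real problems would use the NSP, GA, or MCMC searches described above rather than full enumeration.

```python
from itertools import product

benefit = {0: 0.0, 1: 3.0, 2: 5.0}   # diminishing returns per release unit

def value(schedule):
    """Total benefit of a release schedule."""
    return sum(benefit[r] for r in schedule)

# Feasible schedules: 3 periods, releases in {0,1,2}, total water budget of 4.
feasible = [s for s in product(range(3), repeat=3) if sum(s) == 4]
optimum = max(map(value, feasible))
tolerance = 0.5
mnos = [s for s in feasible if value(s) >= optimum - tolerance]
# Several distinct schedules attain the optimum, giving operators a set of
# equally good alternatives to choose among on other grounds.
```

Even this tiny example has multiple optimal schedules (the permutations of releasing 1, 1, and 2 units), which is the flexibility the paper argues a single-optimum algorithm hides.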
Ishii, Shun'ichi; Suzuki, Shino; Norden-Krichmar, Trina M; Wu, Angela; Yamanaka, Yuko; Nealson, Kenneth H; Bretschger, Orianna
2013-12-01
Microbial fuel cells (MFCs) are devices that exploit microorganisms as "biocatalysts" to recover energy from organic matter in the form of electricity. MFCs have been explored as possible energy neutral wastewater treatment systems; however, fundamental knowledge is still required about how MFC-associated microbial communities are affected by different operational conditions and can be optimized for accelerated wastewater treatment rates. In this study, we explored how electricity-generating microbial biofilms were established at MFC anodes and responded to three different operational conditions during wastewater treatment: 1) MFC operation using a 750 Ω external resistor (0.3 mA current production); 2) set-potential (SP) operation with the anode electrode potentiostatically controlled to +100 mV vs SHE (4.0 mA current production); and 3) open circuit (OC) operation (zero current generation). For all reactors, primary clarifier effluent collected from a municipal wastewater plant was used as the sole carbon and microbial source. Batch operation demonstrated nearly complete organic matter consumption after a residence time of 8-12 days for the MFC condition, 4-6 days for the SP condition, and 15-20 days for the OC condition. These results indicate that higher current generation accelerates organic matter degradation during MFC wastewater treatment. The microbial community analysis was conducted for the three reactors using 16S rRNA gene sequencing. Although the inoculated wastewater was dominated by members of Epsilonproteobacteria, Gammaproteobacteria, and Bacteroidetes species, the electricity-generating biofilms in MFC and SP reactors were dominated by Deltaproteobacteria and Bacteroidetes. Within Deltaproteobacteria, phylotypes classified to family Desulfobulbaceae and Geobacteraceae increased significantly under the SP condition with higher current generation; however those phylotypes were not found in the OC reactor. These analyses suggest that species
NASA Astrophysics Data System (ADS)
Zhang, Dingcheng; Yu, Dejie; Zhang, Wenyi
2015-11-01
Compound faults diagnosis is a challenge for rotating machinery fault diagnosis. The vibration signals measured from gearboxes are usually complex, non-stationary, and nonlinear. When compound faults occur in a gearbox, weak fault characteristic signals are always submerged by the strong ones. Therefore, it is difficult to detect a weak fault by using the demodulating analysis of vibration signals of gearboxes directly. The key to compound faults diagnosis of gearboxes is to separate different fault characteristic signals from the collected vibration signals. Aiming at that problem, a new method for the compound faults diagnosis of gearboxes is proposed based on the energy operator demodulating of optimal resonance components. In this method, the genetic algorithm is first used to obtain the optimal decomposition parameters. Then the compound faults vibration signals of a gearbox are subject to resonance-based signal sparse decomposition (RSSD) to separate the fault characteristic signals of the gear and the bearing by using the optimal decomposition parameters. Finally, the separated fault characteristic signals are analyzed by energy operator demodulating, and each one’s instantaneous amplitude can be calculated. According to the spectra of instantaneous amplitudes of fault characteristic signals, the faults of the gear and the bearing can be diagnosed, respectively. The performance of the proposed method is validated by using the simulation data and the experiment vibration signals from a gearbox with compound faults.
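The energy-operator demodulating step named above can be sketched with the discrete Teager-Kaiser operator, psi[n] = x[n]² - x[n-1]·x[n+1], which tracks the instantaneous energy of a separated component. The signal below is a synthetic tone, not gearbox data; a real pipeline would apply this to each RSSD-separated fault component.

```python
import numpy as np

def teager(x):
    """Discrete Teager-Kaiser energy operator (valid for interior samples)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

w = 0.2 * np.pi                       # normalized tone frequency
n = np.arange(1024)
x = np.cos(w * n)                     # unit-amplitude synthetic tone
psi = teager(x)
# For A*cos(w*n), psi = A^2 * sin(w)^2 exactly, so the instantaneous
# amplitude envelope follows by dividing out the frequency factor:
amp = np.sqrt(np.abs(psi)) / np.sin(w)
```

Spectra of such amplitude envelopes are what reveal the gear and bearing fault characteristic frequencies in the paper's diagnosis step.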
A New Tool for Environmental and Economic Optimization of Hydropower Operations
NASA Astrophysics Data System (ADS)
Saha, S.; Hayse, J. W.
2012-12-01
As part of a project funded by the U.S. Department of Energy, researchers from Argonne, Oak Ridge, Pacific Northwest, and Sandia National Laboratories collaborated on the development of an integrated toolset to enhance hydropower operational decisions related to economic value and environmental performance. As part of this effort, we developed an analytical approach (Index of River Functionality, IRF) and an associated software tool to evaluate how well discharge regimes achieve ecosystem management goals for hydropower facilities. This approach defines site-specific environmental objectives using relationships between environmental metrics and hydropower-influenced flow characteristics (e.g., discharge or temperature), with consideration given to seasonal timing, duration, and return frequency requirements for the environmental objectives. The IRF approach evaluates the degree to which an operational regime meets each objective and produces a score representing how well that regime meets the overall set of defined objectives. When integrated with other components in the toolset that are used to plan hydropower operations based upon hydrologic forecasts and various constraints on operations, the IRF approach allows an optimal release pattern to be developed based upon tradeoffs between environmental performance and economic value. We tested the toolset prototype to generate a virtual planning operation for a hydropower facility located in the Upper Colorado River basin as a demonstration exercise. We conducted planning as if looking five months into the future using data for the recently concluded 2012 water year. The environmental objectives for this demonstration were related to spawning and nursery habitat for endangered fishes using metrics associated with maintenance of instream habitat and reconnection of the main channel with floodplain wetlands in a representative reach of the river. We also applied existing mandatory operational constraints for the
Particulate emissions calculations from fall tillage operations using point and remote sensors
Technology Transfer Automated Retrieval System (TEKTRAN)
Preparation of soil for agricultural crops produces aerosols that may significantly contribute to seasonal atmospheric loadings of particulate matter (PM). Efforts to reduce PM emissions from tillage operations through a variety of conservation management practices (CMP) have been made but the reduc...
Space tug point design study. Volume 2: Operations, performance and requirements
NASA Technical Reports Server (NTRS)
1973-01-01
A design study to determine the configuration and characteristics of a space tug was conducted. Among the subjects analyzed in the study are: (1) flight and ground operations, (2) vehicle flight performance and performance enhancement techniques, (3) flight requirements, (4) basic design criteria, and (5) functional and procedural interface requirements between the tug and other systems.
Optimization of PHEV Power Split Gear Ratio to Minimize Fuel Consumption and Operation Cost
NASA Astrophysics Data System (ADS)
Li, Yanhe
A Plug-in Hybrid Electric Vehicle (PHEV) is a vehicle powered by a combination of an internal combustion engine and an electric motor with a battery pack. The battery pack can be charged by plugging the vehicle to the electric grid and from using excess engine power. The research activity performed in this thesis focused on the development of an innovative optimization approach of PHEV Power Split Device (PSD) gear ratio with the aim to minimize the vehicle operation costs. Three research activity lines have been followed: • Activity 1: The PHEV control strategy optimization by using the Dynamic Programming (DP) and the development of PHEV rule-based control strategy based on the DP results. • Activity 2: The PHEV rule-based control strategy parameter optimization by using the Non-dominated Sorting Genetic Algorithm (NSGA-II). • Activity 3: The comprehensive analysis of the single mode PHEV architecture to offer the innovative approach to optimize the PHEV PSD gear ratio.
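The dynamic programming step of Activity 1 can be sketched as a backward recursion over a discretized battery state of charge (SOC). All numbers below (drive-cycle demand, battery capacity, fuel and grid energy prices) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Toy backward DP: at each time step choose engine power; the battery covers
# the remainder of the demand; minimize total energy cost over the cycle.
demand = np.array([10.0, 25.0, 15.0, 30.0, 5.0])   # kW per time step (assumed cycle)
dt = 1.0 / 60.0                                    # h per step
soc_grid = np.linspace(0.2, 0.9, 71)               # usable SOC window
capacity = 10.0                                    # kWh battery (assumed)
fuel_cost = 0.20                                   # $/kWh of engine output (assumed)
grid_cost = 0.05                                   # $/kWh of battery energy from the grid
engine_levels = np.linspace(0.0, 40.0, 41)         # candidate engine powers, kW

# value[i] = minimal cost-to-go from soc_grid[i]; free terminal condition.
value = np.zeros_like(soc_grid)
for p_dem in demand[::-1]:                         # backward in time
    new_value = np.full_like(value, np.inf)
    for i, soc in enumerate(soc_grid):
        for p_eng in engine_levels:
            p_batt = p_dem - p_eng                 # battery covers the rest
            soc_next = soc - p_batt * dt / capacity
            if soc_next < soc_grid[0] or soc_next > soc_grid[-1]:
                continue                           # SOC limits violated
            stage = fuel_cost * p_eng * dt + grid_cost * max(p_batt, 0.0) * dt
            ctg = stage + np.interp(soc_next, soc_grid, value)
            new_value[i] = min(new_value[i], ctg)
    value = new_value
```

Because grid-charged battery energy is cheaper than fuel here, the cost-to-go decreases with initial SOC; the optimal policy extracted from such a table is what a rule-based strategy then tries to imitate.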
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
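The core surrogate machinery behind COBRA-style methods can be sketched as follows: fit cubic radial basis function interpolants (with a linear polynomial tail) to the sampled objective and constraint, then pick the candidate point that the constraint surrogate predicts to be feasible and the objective surrogate predicts to be best. The test functions below are illustrative, not the MOPTA08 problem.

```python
import numpy as np

def fit_rbf(X, y):
    """Cubic RBF interpolant s(x) = sum_i w_i*||x-x_i||^3 + c0 + c.x."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    P = np.hstack([np.ones((n, 1)), X])            # linear polynomial tail
    A = np.block([[r ** 3, P], [P.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(d + 1)]))
    return coef[:n], coef[n:]

def eval_rbf(X, w, c, Xq):
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
    return r ** 3 @ w + c[0] + Xq @ c[1:]

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))               # "expensive" evaluations done so far
f = lambda x: ((x - 0.5) ** 2).sum(axis=1)         # toy black-box objective
g = lambda x: 1.0 - x.sum(axis=1)                  # toy black-box constraint, g(x) <= 0
wf, cf = fit_rbf(X, f(X))
wg, cg = fit_rbf(X, g(X))

# Propose the next expensive evaluation: best candidate the surrogates call feasible.
cand = rng.uniform(-2, 2, size=(2000, 2))
feas = eval_rbf(X, wg, cg, cand) <= 0.0
best = cand[feas][np.argmin(eval_rbf(X, wf, cf, cand[feas]))]
```

The actual algorithms add distance requirements from previous points and a two-phase feasibility search, but the fit-then-propose loop above is the basic cycle.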
Correlation of part-span damper losses through transonic rotors operating near design point
NASA Technical Reports Server (NTRS)
Roberts, W. B.
1977-01-01
The design-point losses caused by part-span dampers (PSD) were correlated for 21 transonic axial flow fan rotors that had tip speeds varying from 350 to 488 meters per second and design pressure ratios of 1.5 to 2.0. For these rotors a correlation using mean inlet Mach number at the damper location, along with relevant geometric and aerodynamic loading parameters, predicts the variation of total pressure loss coefficient in the region of the damper to a good approximation.
Optimal operational conditions for supercontinuum-based ultrahigh-resolution endoscopic OCT imaging.
Yuan, Wu; Mavadia-Shukla, Jessica; Xi, Jiefeng; Liang, Wenxuan; Yu, Xiaoyun; Yu, Shaoyong; Li, Xingde
2016-01-15
We investigated the optimal operational conditions for utilizing a broadband supercontinuum (SC) source in a portable 800 nm spectral-domain (SD) endoscopic OCT system to enable high-resolution, high-sensitivity, and high-speed imaging in vivo. An SC source with a 3-dB bandwidth of ∼246 nm was employed to obtain an axial resolution of ∼2.7 μm (in air) and an optimal detection sensitivity of ∼-107 dB with an imaging speed up to 35 frames/s (at 70 k A-scans/s). The performance of the SC-based SD-OCT endoscopy system was demonstrated by imaging guinea pig esophagus in vivo, achieving image quality comparable to that acquired with a broadband home-built Ti:sapphire laser. PMID:26766686
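For context, the textbook Gaussian-spectrum estimate of OCT axial resolution is δz = (2 ln 2/π)·λ₀²/Δλ. Taking the center wavelength as 800 nm (an assumption read off the "800 nm spectral-domain" system description) and the quoted ∼246 nm bandwidth gives roughly 1.15 μm; the measured ∼2.7 μm is larger, as expected, since a real SC spectrum and spectrometer response are far from the Gaussian ideal.

```python
import math

# Gaussian-spectrum axial resolution estimate (in air):
# delta_z = (2*ln2/pi) * lambda0^2 / delta_lambda
lambda0 = 800e-9    # assumed center wavelength, m
dlambda = 246e-9    # 3-dB bandwidth from the abstract, m
delta_z = (2 * math.log(2) / math.pi) * lambda0 ** 2 / dlambda
print(delta_z * 1e6)   # ≈ 1.15 micrometres
```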
Schulze, Anja; Römmelt, Horst; Ehrenstein, Vera; van Strien, Rob; Praml, Georg; Küchenhoff, Helmut; Nowak, Dennis; Radon, Katja
2011-01-01
Potential adverse health effects of concentrated animal feeding operations (CAFOs), also shown in the authors' Lower Saxony Lung Study, are of public concern. The authors aimed to investigate pulmonary health effects in neighboring residents, assessed using an optimized exposure estimation technique. Annual ammonia emission was measured to assess the emissions from CAFOs and from surrounding fields. The locations of sampling points were optimized using cluster analysis. Individual exposure of 457 nonfarm subjects was interpolated by a weighting method. Mean estimated annual ammonia levels varied between 16 and 24 μg/m³. More highly exposed participants were more likely to be sensitized against ubiquitous allergens than lower exposed subjects (adjusted odds ratio [OR] 4.2; 95% confidence interval [CI] 1.2-13.2). In addition, they showed a significantly lower forced expiratory volume in 1 second (FEV₁) (adjusted mean difference in % of predicted -8%; 95% CI -13% to -3%). The authors' previous finding that CAFOs may contribute to the burden of respiratory diseases was confirmed by this study. PMID:21864103
NASA Astrophysics Data System (ADS)
Wittekindt, Anna; Abel, Cornelius; Kössl, Manfred
2009-02-01
The mammalian efferent medial olivo-cochlear system is known to modulate active amplification of low-level sound in the cochlea. We investigated the effect of contralateral acoustic stimulation (CAS), known to elicit efferent activity, on distortion product otoacoustic emissions (DPOAEs) in the gerbil and, in a second approach, biased the position of the cochlear partition, and hence the operating point of the cochlear amplifier, periodically with a low-frequency tone (5 Hz). The study focused on the quadratic distortion product f2-f1, which is sensitive to changes in the operating point of the amplifier transfer function. During CAS, a significant increase of the amplitude of f2-f1 was found, while 2f1-f2 was less affected. Biasing by the low-frequency tone resulted in a phase-related amplitude modulation of f2-f1. This modulation pattern changed pronouncedly during CAS, depending on the CAS level. The current results suggest that efferent effects on DPOAEs might be produced by changes in the operating point of the cochlear amplifier and were in good agreement with a simple model based on a Boltzmann function.
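The operating-point sensitivity of the quadratic distortion product can be reproduced with a few lines: pass two tones through a first-order Boltzmann (sigmoid) transfer function and shift its operating point. At the symmetric point the even-order product f2-f1 nearly vanishes; biasing away from symmetry makes it grow, while odd-order products are much less affected. Frequencies, levels, and the slope below are illustrative, not the gerbil data.

```python
import numpy as np

fs, N = 48000, 48000                     # 1 Hz frequency bins (N samples at fs)
n = np.arange(N)
f1, f2 = 4000.0, 4800.0
x = 0.2 * (np.sin(2 * np.pi * f1 * n / fs) + np.sin(2 * np.pi * f2 * n / fs))

def dp_amplitude(bias, freq):
    """Spectral amplitude at `freq` after a Boltzmann nonlinearity with the given bias."""
    y = 1.0 / (1.0 + np.exp(-(x - bias) / 0.3))   # sigmoid transfer function
    spec = np.abs(np.fft.rfft(y)) / N
    return spec[int(round(freq))]

qdp_sym = dp_amplitude(0.0, f2 - f1)     # symmetric operating point: f2-f1 minimal
qdp_off = dp_amplitude(0.3, f2 - f1)     # biased operating point: f2-f1 grows
```

This is the mechanism the abstract's "simple model based on a Boltzmann function" exploits: even-order distortion reports on where the transfer function is being operated.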
Joe Wilson; Venkatarao Ganni; Dana Arenius; Jonathan Creel
2004-06-01
Jefferson Lab's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) and Free Electron Laser (FEL) are supported by a 2 K helium refrigerator known as the Central Helium Liquefier (CHL), which maintains a constant low vapor pressure over the accelerators' large liquid helium inventory with a five-stage centrifugal compressor train. The cold compressor train operates with constrained discharge pressure and can be varied over a range of suction pressures and mass flows to meet the operational requirements of the two accelerators. Using data from commissioning and routine operations of the cold compressor system, the presented procedure predicts an operating point for each cold compressor such that maximum efficiency is attained for the overall cold compressor system for a given combination of mass flow and vapor pressure. The procedure predicts the expected efficiency of the system and the relative compressor speeds for operating vapor pressures from 4 to 2.5 kPa (corresponding to overall pressure ratios of 29 to 56) and flow rates of 135 g/s to 250 g/s. The results of the predictions are verified by test for a few operating conditions of mass flows and vapor pressures.
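For a feel of the numbers: with five stages, a common first guess (before efficiency-based redistribution of stage ratios and speeds, which is what the paper's procedure actually optimizes) is to split the overall pressure ratio evenly. The even split is an assumption for illustration, not the paper's result.

```python
# Even split of the quoted overall pressure ratios across five stages:
# r_stage = r_overall ** (1/5)
stage_ratio = {r: r ** (1.0 / 5.0) for r in (29.0, 56.0)}
# overall 29 -> ~1.96 per stage; overall 56 -> ~2.24 per stage
```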
Real-time combined heat and power operational strategy using a hierarchical optimization algorithm
Yun, K.; Cho, H.; Luck, R.; Mago, P. J.
2011-06-01
Existing attempts to optimize the operation of combined heat and power (CHP) systems for building applications have two major limitations: the electrical and thermal loads are obtained from historical weather profiles; and the CHP system models ignore transient responses by using constant equipment efficiencies. This article considers the transient response of a building combined with a hierarchical CHP optimal control algorithm to obtain a real-time integrated system that uses the most recent weather and electric load information. This is accomplished by running concurrent simulations of two transient building models. The first transient building model uses current as well as forecast input information to obtain short-term predictions of the thermal and electric building loads. The predictions are then used by an optimization algorithm (i.e. a hierarchical controller that decides the amount of fuel and of electrical energy to be allocated at the current time step). In a simulation, the actual physical building is not available and, hence, to simulate a real-time environment, a second building model with similar but not identical input loads is used to represent the actual building. A state-variable feedback loop is completed at the beginning of each time step by copying (i.e. measuring) the state variables from the actual building and restarting the predictive model using these 'measured' values as initial conditions. The simulation environment presented in this article features non-linear effects such as the dependence of the heat exchanger effectiveness on operating conditions. Finally, the results indicate that the CHP engine operation dictated by the proposed hierarchical controller with uncertain weather conditions has the potential to yield significant savings when compared with conventional systems using current values of electricity and fuel prices.
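The two-model feedback loop described above can be sketched as a receding-horizon skeleton: a predictive model plans the dispatch, the first planned move is applied to a slightly different "actual" model, and the measured state restarts the predictor. The one-state thermal models, prices, and the plant mismatch factor are illustrative stand-ins, not the article's building models.

```python
import numpy as np

def step(T, q_chp, q_load, a):
    """Toy one-state thermal dynamics: temperature driven by heat surplus."""
    return T + 0.1 * a * (q_chp - q_load)

def plan(T0, loads, price=0.1, horizon=4):
    """Pick the (constant) heat input minimizing fuel cost + comfort penalty."""
    best, best_q = np.inf, 0.0
    for q in np.linspace(0.0, 10.0, 21):
        T, cost = T0, 0.0
        for q_load in loads[:horizon]:
            T = step(T, q, q_load, a=1.0)          # predictive model (a = 1.0)
            cost += price * q + (T - 21.0) ** 2    # fuel + tracking a 21 C setpoint
        if cost < best:
            best, best_q = cost, q
    return best_q

loads = [3.0, 4.0, 5.0, 4.0, 3.0, 2.0, 4.0, 5.0]   # short-term load predictions
T_actual = 19.0
history = []
for k in range(len(loads) - 4):
    q = plan(T_actual, loads[k:])                  # plan on the predictive model
    T_actual = step(T_actual, q, loads[k], a=1.1)  # "actual" building differs (a = 1.1)
    history.append(T_actual)                       # measured state closes the loop
```

Copying `T_actual` back into `plan` each step is the state-variable feedback the article describes; without it, the model mismatch (`a = 1.0` vs `1.1`) would accumulate.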
MagRad: A code to optimize the operation of superconducting magnets in a radiation environment
Yeaw, C.T.
1995-12-31
A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.
Optimization of operation rule curves and flushing schedule in a reservoir
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Lai, Jihn-Sung; Kao, Li-Shan
2003-06-01
Flushing sediment through a reservoir has been practiced successfully and found to be inexpensive in many cases. However, the great amount of water consumed in the flushing operation might affect the water supply. To satisfy both the water demand and the water consumed in the flushing operation, two models combining a reservoir simulation model and a sediment flushing model are established. In the reservoir simulation model, the genetic algorithm (GA) is used to optimize and determine the flushing operation rule curves. The sediment-flushing model is developed to estimate the flushed sediment volume, and the simulated results update the elevation-storage curve, which can be taken into account in the reservoir simulation model. The models are successfully applied to the Tapu reservoir, which has faced serious sedimentation problems. Based on 36 years of historical sequential data, the results show that (i) the simulated flushing operation rule curves have superior performance, in terms of a lower shortage index (SI) and higher flushing efficiency (FE), compared with the original reservoir operation; and (ii) a rational and riskless flushing schedule for the Tapu reservoir is an interval of every 2 to 4 years, in May or June.
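The shortage index used to score the rule curves is commonly defined (in the U.S. Army Corps of Engineers style; the abstract does not spell out its exact formula, so this is an assumption) as SI = (100/N)·Σ(annual shortage / annual demand)², which penalizes deficits that are concentrated in a few years:

```python
def shortage_index(shortages, demands):
    """SI = (100/N) * sum((annual shortage / annual demand)^2)."""
    assert len(shortages) == len(demands)
    n = len(shortages)
    return 100.0 / n * sum((s / d) ** 2 for s, d in zip(shortages, demands))

# Two 4-year alternatives with the same total shortage: the quadratic form
# penalizes the plan that concentrates the deficit in a single year.
even = shortage_index([10, 10, 10, 10], [100, 100, 100, 100])   # SI = 1.0
spiky = shortage_index([40, 0, 0, 0], [100, 100, 100, 100])     # SI = 4.0
```

This convexity is what lets the GA trade flushing releases against supply reliability: spreading an unavoidable deficit across years scores better than one severe shortage.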
Optimizing the efficiency and reliability of fluid system operations: An ongoing process
Casada, D.A. |
1996-05-01
At most industrial facilities, motor loads associated with pumps and fans are the dominant electric energy users. As plant loads and consequent system functions change, the optimal operating conditions for these components change. In response, modifications to system operations are often made with only one consideration in mind - keeping the system on line. At the Y-12 plant in Oak Ridge, a fluid system energy efficiency improvement methodology is being developed to facilitate the systematic review and modification of system design and operations to increase operational efficiency. Since the bulk of the changes are associated with reducing the numbers and/or loads of motor-driven pumps or fans, there are direct benefits in reduced electrical generation and consequent waste heat production and air emissions. This paper will discuss the types of inefficiencies that tend to evolve as system functional requirements change and equipment ages, describe some of the fundamental parameters that are useful in identifying these inefficiencies, provide examples of design and operating changes being made, and detail the resultant savings in energy.
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.
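A minimal finite-difference simulation of viscous Burgers' equation shows the kind of distributed-parameter plant the compensator is designed for; here a simple proportional boundary feedback stands in for the reduced-order compensator (the gain, viscosity, and discretization are illustrative assumptions, not the paper's design).

```python
import numpy as np

nx, nu = 101, 0.05                                # grid points, viscosity
dx = 1.0 / (nx - 1)
dt = 0.2 * dx * dx / nu                           # stable explicit time step
x = np.linspace(0.0, 1.0, nx)
w = np.sin(np.pi * x)                             # initial state on [0, 1]

energy0 = 0.5 * np.sum(w * w) * dx                # quadratic energy functional
for _ in range(2000):
    w[0] = -0.5 * w[1]                            # proportional boundary feedback (assumed gain)
    w[-1] = 0.0                                   # fixed right boundary
    conv = w[1:-1] * (w[2:] - w[:-2]) / (2 * dx)  # nonlinear convection w * w_x
    diff = nu * (w[2:] - 2 * w[1:-1] + w[:-2]) / (dx * dx)
    w[1:-1] += dt * (diff - conv)                 # explicit Euler update
energy = 0.5 * np.sum(w * w) * dx                 # energy after control
```

Minimizing such an energy functional along trajectories of a linearized model is what produces the fixed-order control laws that the paper then applies to the full nonlinear plant.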
Configuration Optimization of a Reflective Bistable-Twisted-Nematic Cell for High-Contrast Operation
NASA Astrophysics Data System (ADS)
Lee, Gi-Dong; Kim, Gi-Hong; Yoon, Tae-Hoon; Kim, Jae Chang
2000-05-01
In this study, the configuration of a reflective bistable-twisted-nematic (BTN) liquid-crystal cell is optimized for high-contrast and high-brightness operation. We searched for the optimum optical parameters of a reflective BTN cell by calculating its optical performance at three wavelengths: red, green, and blue. By studying the effect of each optical parameter on the optical performance, we found that the angle of the polarizer is more important than any other optical parameter in the design of a reflective BTN cell. We fabricated a reflective BTN cell with a wide-band retardation film, whose measured contrast ratio is 10.6:1.
NASA Astrophysics Data System (ADS)
Rodrigo, Deepal
2007-12-01
reinforced the selection of these algorithms. The results obtained from each of the three algorithms used in the evaluations are very comparable. Thus one could safely conclude that the results obtained are valid. Three distinct test power systems operating under different conditions were studied for evaluating the suitability of each of these algorithms. The test cases included scenarios in which the power system was unconstrained as well as constrained. Repeated simulations carried out for the same test case with varying starting points provided evidence that the algorithms and the solutions were robust. Influences of different market concentrations on the optimal economic dispatch are evidenced by the pareto-optimal-fronts obtained for each test case studied. Results obtained from a traditional linear programming (LP) based solution algorithm that is used at present by many market operators are also presented for comparison. Very high market-concentration-indices were found for each solution from the LP algorithm. This suggests the need to use a formal method for mitigating market concentration. Operating the market at industry-recommended threshold levels of market concentration for selecting an optimal operational point is presented for all test cases studied. Given that a solution-set instead of a single operating point is found from the multi-objective optimization methods, additional flexibility to select any operational point based on the preference of those operating the market clearly is an added benefit of using multi-objective optimization methods. However, in order to help the market operator, a more logical fuzzy decision criterion was tested for selecting a suitable operating point. The results show that the optimal operating point chosen using the fuzzy decision criterion provides a higher economic benefit to the market, although at a slightly increased market concentration. 
Since the main objective of this research was to simultaneously optimize the
Optimal trajectories from the Earth-Moon L1 and L3 points to deflect hazardous asteroids and comets.
Maccone, Claudio
2004-05-01
Software code named asteroff was recently created by the author to simulate the deflection of hazardous asteroids off of their collision course with the Earth. This code was both copyrighted and patented to avoid unauthorized use of ideas that could possibly be vital to construct a planetary defense system in the vicinity of the Earth. Having so said, the basic ideas and equations underlying the asteroff simulation code are openly described in this paper. A system of two space bases housing missiles is proposed to achieve the planetary defense of the Earth against dangerous asteroids and comets, collectively called impactors herein. We show that the layout of the Earth-Moon system with the five relevant Lagrangian (or libration) points in space leads naturally to only one, unmistakable location of these two space bases within the sphere of influence of the Earth. These locations are at the two Lagrangian points L(1) (between the Earth and the Moon) and L(3) (in the direction opposite to the Moon from the Earth). We show that placing missile bases at L(1) and L(3) would enable those missiles to deflect the trajectory of impactors by hitting them orthogonally to their impact trajectory toward the Earth, so as to maximize their deflection. We show that confocal conics are the best class of trajectories fulfilling this orthogonal deflection requirement. One additional remark is that the theory developed in this paper is just a beginning for a wider set of future research. In fact, we only develop the Keplerian analytical theory for the optimal planetary defense achievable from the Earth-Moon Lagrangian points L(1) and L(3). Much more sophisticated analytical refinements would be needed to: (1) take into account many perturbation forces of all kinds acting on both the impactors and missiles shot from L(1) and L(3); (2) add more (non-optimal) trajectories of missiles shot from either the Lagrangian points L(4) and L(5) of the Earth-Moon System or from the surface of the
Optimizing the Operating Temperature for an array of MOX Sensors on an Open Sampling System
NASA Astrophysics Data System (ADS)
Trincavelli, M.; Vergara, A.; Rulkov, N.; Murguia, J. S.; Lilienthal, A.; Huerta, R.
2011-09-01
Chemo-resistive transduction is essential for capturing the spatio-temporal structure of chemical compounds dispersed in different environments. Due to gas dispersion mechanisms, namely diffusion, turbulence, and advection, the sensors in an open sampling system, i.e. directly exposed to the environment to be monitored, encounter low concentrations of gases with many fluctuations, which makes the identification and monitoring of the gases even more complicated and challenging than in a controlled laboratory setting. Therefore, tuning the operating temperature becomes crucial for successfully identifying and monitoring pollutant gases, particularly in applications such as exploration of hazardous areas, air pollution monitoring, and search and rescue. In this study we demonstrate the benefit of optimizing the sensors' operating temperature when the sensors are deployed in an open sampling system, i.e. directly exposed to the environment to be monitored.
Optimization of the terrain following radar flight cues in special operations aircraft
NASA Astrophysics Data System (ADS)
Garman, Patrick J.; Trang, Jeff A.
1995-05-01
Over the past 18 months the Army has been developing a terrain following capability in its next-generation special operations aircraft (SOA), the MH-60K and the MH-47E. As two experimental test pilots assigned to the Army's Airworthiness Qualification Test Directorate of the US Army Aviation Technical Test Center, we would like to convey the role that human factors has played in the development of the MMR for terrain following operations in the SOA. In the MH-60K, the pilot remains the interface between the aircraft, via the flight controls and the processed radar data, and the flight director cues. The presentation of the processed radar data to the pilot significantly affects overall system performance and is directly driven by the way humans see, process, and react to stimuli. Our development has centered on the optimization of this man-machine interface.
Methodology for optimizing the development and operation of gas storage fields
Mercer, J.C.; Ammer, J.R.; Mroz, T.H.
1995-04-01
The Morgantown Energy Technology Center is pursuing the development of a methodology that uses geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. Several Cooperative Research and Development Agreements (CRADAs) will serve as the vehicle to implement this product. CRADAs have been signed with National Fuel Gas and Equitrans, Inc. A geologic model is currently being developed for the Equitrans CRADA. Results from the CRADA with National Fuel Gas are discussed here. The first phase of the CRADA, based on original well data, was completed last year and reported at the 1993 Natural Gas RD&D Contractors Review Meeting. Phase 2 analysis was completed based on additional core and geophysical well log data obtained during a deepening/relogging program conducted by the storage operator. Good matches of wellhead pressure, within 10 percent, were obtained using a numerical simulator to history match 2 1/2 injection/withdrawal cycles.
NASA Astrophysics Data System (ADS)
Vertogradov, G. G.; Uryadov, V. P.; Vertogradova, E. G.
2008-01-01
A hardware-software complex for real-time automatic determination of the optimal operating frequencies of a communication radio line according to oblique chirp ionosphere sounding is created. Path tests of the chirp complex on midlatitude radio lines are performed. Bit error probability and the reliability of HF communication for narrow-band and broadband communication systems are estimated from the results of oblique chirp sounding. It is shown that the quality of a communication channel greatly depends on the ratio of the regular and fluctuation components of a signal, as well as on the magnetic activity level. The created chirp complex can be used as a part of the ionospheric-wave and frequency-control service for dynamic management of the radio-line frequency resource in the interests of efficient operation of different-purpose radioelectronic systems.
Zhou, F.; Bohler, D.; Ding, Y.; Gilevich, S.; Huang, Z.; Loos, H.; Ratner, D.; Vetter, S.
2015-12-07
The photocathode RF gun is widely used for the generation of high-brightness electron beams for many different applications. We found that the drive laser distributions in such RF guns play important roles in minimizing the electron beam emittance. Characterizing the laser distributions with measurable parameters, and optimizing beam emittance versus those parameters in both the spatial and temporal directions, is highly desirable for high-brightness electron beam operation. In this paper, we report systematic measurements and simulations of the dependence of emittance on measurable parameters representing the spatial and temporal laser distributions at the photocathode RF gun systems of the Linac Coherent Light Source. The tolerable parameter ranges for photocathode drive laser distributions in both directions are presented for ultra-low emittance beam operations.
Tatematsu, Y.; Yamaguchi, Y.; Kawase, T.; Ichioka, R.; Ogawa, I.; Saito, T.; Idehara, T.
2014-08-15
The oscillation characteristics of Gyrotron FU CW GIII and its wave frequency and output power dependences on the magnetic field strength, the gun coil current, and the anode voltage were investigated experimentally. The experimental results were analyzed theoretically using a self-consistent code that included the electron properties in the cavity, corresponding to the actual operating conditions in the experiments. As a result, it was found that the variation in frequency with the magnetic field strength was related to an axial profile change in the electromagnetic wave in the cavity. In addition, the optimal condition that gives the maximum output power was found to be determined by the pitch factor rather than by the electron beam radius under the given operating conditions.
2014-01-01
Background In this study, the optimum operational conditions of the cathode compartment of a microbial fuel cell were determined using Response Surface Methodology (RSM) with a central composite design to maximize power density and COD removal. Methods The interactive effects of parameters such as pH, buffer concentration, and ionic strength on power density and COD removal were evaluated in a two-chamber batch-mode microbial fuel cell. Results Power density and COD removal under optimal conditions (pH of 6.75, buffer concentration of 0.177 M, and cathode-chamber ionic strength of 4.69 mM) improved by 17 and 5%, respectively, compared with normal conditions (pH of 7, buffer concentration of 0.1 M, and ionic strength of 2.5 mM). Conclusions The results verify that response surface methodology can successfully determine the optimum operational conditions of the cathode chamber. PMID:24423039
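The RSM step reduces to fitting a quadratic response surface to the designed runs and solving for its stationary point. The sketch below uses a face-centred central composite design in two coded factors with a synthetic "power density" response; the paper's design has three factors and real measurements.

```python
import numpy as np

def fit_quadratic(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

def stationary_point(b):
    """Solve grad = 0 for the candidate optimum of the fitted surface."""
    _, b1, b2, b11, b22, b12 = b
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])   # Hessian of the surface
    return np.linalg.solve(H, -np.array([b1, b2]))

# Face-centred central composite design in coded units, plus centre points.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0], [0, 0]], dtype=float)
true = lambda x1, x2: 50 - 3 * (x1 - 0.2) ** 2 - 2 * (x2 + 0.4) ** 2   # synthetic response
y = true(X[:, 0], X[:, 1])

b = fit_quadratic(X, y)
opt = stationary_point(b)   # recovers the optimum at (0.2, -0.4) in coded units
```

In practice one checks that the Hessian is negative definite (a true maximum) before decoding the stationary point back to physical pH, buffer, and ionic-strength values.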
Optimized autonomous operations of a 20 K space hydrogen sorption cryocooler
NASA Astrophysics Data System (ADS)
Borders, J.; Morgante, G.; Prina, M.; Pearson, D.; Bhandari, P.
2004-06-01
activation of the system, particularly useful in case of restarts after inadvertent shutdowns arising from malfunctions in the spacecraft. The capacity of the system to detect J-T plugs was increased to the point that the cooler is able to autonomously distinguish actual contaminant clogging from gas flow reductions due to off-nominal operating conditions. Once a plug is confirmed, the software autonomously energizes, and subsequently turns off, a J-T defrost heater until the clog is removed, bringing the system back to normal operating conditions. In this paper, all the cooler Operational Modes are presented, together with a description of the logic structure of the procedures and the advantages they provide for operations.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
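The hyperboloid-intersection idea behind the VFOM can be illustrated with a toy 2-D grid search. Here the paper's bounded virtual objective function is replaced by a simple squared residual over sensor pairs, and the geometry, wave speed, and source location are all invented:

```python
import itertools, math

# Toy 2-D illustration of locating a source from arrival-time differences,
# in the spirit of searching for the common intersection of the hyperboloids
# defined by sensor pairs. The squared-residual field and grid search are a
# stand-in; the VFOM's actual virtual objective function differs.

V = 3.0                                   # assumed wave speed (units/time)
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_src = (3.0, 7.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

arrivals = [dist(true_src, s) / V for s in sensors]   # synthetic picks

def pair_misfit(x, y):
    # For each sensor pair, compare modelled vs. measured range difference;
    # a point on every pair's hyperboloid drives all terms to zero.
    total = 0.0
    for i, j in itertools.combinations(range(len(sensors)), 2):
        dd_model = dist((x, y), sensors[i]) - dist((x, y), sensors[j])
        dd_meas = V * (arrivals[i] - arrivals[j])
        total += (dd_model - dd_meas) ** 2
    return total

best = min(((xi * 0.5, yi * 0.5) for xi in range(21) for yi in range(21)),
           key=lambda p: pair_misfit(*p))
print(best)   # → (3.0, 7.0)
```

Because the objective depends on pair differences rather than absolute residuals, a constant bias added to every pick leaves the minimizer unchanged, which hints at why a pair-based field can tolerate certain picking errors.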
NASA Astrophysics Data System (ADS)
Song, Lei; Santos-Sacchi, Joseph
2015-12-01
Recent identification of a calmodulin binding site within prestin's C-terminus indicates that calcium can significantly alter prestin's operating voltage range as gauged by the Boltzmann parameter Vh (Keller et al., J. Neuroscience, 2014). We reasoned that those experiments may have identified the molecular substrate for the protein's tension sensitivity. In an effort to understand how this may happen, we evaluated the effects of turgor pressure on such shifts produced by calcium. We find that the shifts are induced by calcium's ability to reduce turgor pressure during whole cell voltage clamp recording. Clamping turgor pressure to 1 kPa, the cell's normal intracellular pressure, completely counters the calcium effect. Furthermore, following unrestrained shifts, collapsing the cells abolishes induced shifts. We conclude that calcium does not work by direct action on prestin's conformational state. The possibility remains that calcium interaction with prestin alters water movements within the cell, possibly via its anion transport function.
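The Boltzmann parameter Vh used above to gauge the operating voltage range can be made concrete with a two-state Boltzmann sketch: a shift in Vh (as produced by calcium or turgor changes) moves the peak of the voltage-dependent capacitance by the same amount. Parameter values below are illustrative, not the study's fits:

```python
import math

# Sketch of the two-state Boltzmann description commonly used for prestin's
# charge movement, where Vh sets the centre of the operating voltage range.
# Vh, slope factor, and Qmax here are illustrative placeholders.

def boltzmann_q(v_mV, vh_mV=-50.0, alpha_mV=30.0, qmax=1.0):
    """Transferred charge as a function of membrane voltage."""
    return qmax / (1.0 + math.exp(-(v_mV - vh_mV) / alpha_mV))

def peak_capacitance_voltage(vh_mV):
    # Nonlinear capacitance ~ dQ/dV peaks at V = Vh; locate it numerically.
    vs = [v * 0.1 for v in range(-2000, 2001)]       # -200..200 mV grid
    dq = [(boltzmann_q(v + 0.05, vh_mV) - boltzmann_q(v - 0.05, vh_mV)) / 0.1
          for v in vs]
    return vs[dq.index(max(dq))]

# A depolarizing shift in Vh moves the capacitance peak with it.
print(peak_capacitance_voltage(-50.0), peak_capacitance_voltage(-20.0))
```

This is why reporting Vh alone suffices to describe the operating-range shifts discussed in the abstract: the whole charge-voltage curve translates rigidly along the voltage axis.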
Towards optimizing two-qubit operations in three-electron double quantum dots
NASA Astrophysics Data System (ADS)
Frees, Adam; Gamble, John King; Mehl, Sebastian; Friesen, Mark; Coppersmith, S. N.
The successful implementation of single-qubit gates in the quantum dot hybrid qubit motivates our interest in developing a high fidelity two-qubit gate protocol. Recently, extensive work has been done to characterize the theoretical limitations and advantages in performing two-qubit operations at an operation point located in the charge transition region. Additionally, there is evidence to support that single-qubit gate fidelities improve while operating in the so-called ``far-detuned'' region, away from the charge transition. Here we explore the possibility of performing two-qubit gates in this region, considering the challenges and the benefits that may present themselves while implementing such an operational paradigm. This work was supported in part by ARO (W911NF-12-0607) (W911NF-12-R-0012), NSF (PHY-1104660), ONR (N00014-15-1-0029). The authors gratefully acknowledge support from the Sandia National Laboratories Truman Fellowship Program, which is funded by the Laboratory Directed Research and Development (LDRD) Program. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.
Particulate emissions calculations from fall tillage operations using point and remote sensors.
Moore, Kori D; Wojcik, Michael D; Martin, Randal S; Marchant, Christian C; Bingham, Gail E; Pfeiffer, Richard L; Prueger, John H; Hatfield, Jerry L
2013-07-01
Soil preparation for agricultural crops produces aerosols that may significantly contribute to seasonal atmospheric particulate matter (PM). Efforts to reduce PM emissions from tillage through a variety of conservation management practices (CMPs) have been made, but the reductions from many of these practices have not been measured in the field. A study was conducted in California's San Joaquin Valley to quantify emission reductions from a fall tillage CMP. Emissions were measured from conventional tillage methods and from a "combined operations" CMP, which combines several implements to reduce tractor passes. Measurements were made of soil moisture, bulk density, meteorological profiles, filter-based total suspended PM (TSP), concentrations of PM with an equivalent aerodynamic diameter ≤10 μm (PM10) and PM with an equivalent aerodynamic diameter ≤2.5 μm (PM2.5), and aerosol size distribution. A mass-calibrated, scanning, three-wavelength light detection and ranging (LIDAR) procedure estimated PM concentrations through a series of algorithms. Emissions were calculated via inverse modeling with the mass concentration measurements and by applying a mass balance to the LIDAR data. Inverse modeling emission estimates were higher, often with statistically significant differences. Derived PM emissions for conventional operations generally agree with literature values. Sampling irregularities with a few filter-based samples prevented calculation of a complete set of emissions through inverse modeling; however, the LIDAR-based emissions dataset was complete. The CMP control effectiveness calculated from the LIDAR-derived emissions was 29 ± 2%, 60 ± 1%, and 25 ± 1% for the PM10, PM2.5, and TSP size fractions, respectively. Implementation of this CMP provides an effective method for the reduction of PM emissions. PMID:24216354
Pansuriya, Ruchir C; Singhal, Rekha S
2010-05-01
Serratiopeptidase (SRP), a 50 kDa metalloprotease produced by Serratia marcescens, is a drug with potent anti-inflammatory properties. In this study, a powerful statistical design, evolutionary operation (EVOP), was applied to optimize the medium composition for SRP production in shake-flask culture of Serratia marcescens NRRL B-23112. Initially, factors such as inoculum size, initial pH, carbon source and organic nitrogen source were optimized one factor at a time. The most significant medium components affecting the production of SRP were identified as maltose, soybean meal and K2HPO4. SRP production was found not to depend on whey protein; rather, it was notably induced by most of the organic nitrogen sources used in the study, and a protease inhibition study revealed the product to be free of concomitant contaminating proteases. Further experiments were performed using different sets of EVOP designs with each factor varied at three levels. The experimental data were analyzed with a standard set of statistical formulae. The EVOP-optimized medium (maltose 4.5%, soybean meal 6.5%, K2HPO4 0.8% and NaCl 0.5% w/v) gave an SRP production of 7,333 EU/ml, 17-fold higher than the unoptimized medium. The application of EVOP resulted in significant enhancement of SRP production. PMID:20519921
Robust optimal sensor placement for operational modal analysis based on maximum expected utility
NASA Astrophysics Data System (ADS)
Li, Binbin; Der Kiureghian, Armen
2016-06-01
Optimal sensor placement is essentially a decision problem under uncertainty. The maximum expected utility theory and a Bayesian linear model are used in this paper for robust sensor placement aimed at operational modal identification. To avoid nonlinear relations between modal parameters and measured responses, we choose to optimize the sensor locations relative to identifying modal responses. Since the modal responses contain all the information necessary to identify the modal parameters, the optimal sensor locations for modal response estimation provide at least a suboptimal solution for identification of modal parameters. First, a probabilistic model for sensor placement considering model uncertainty, load uncertainty and measurement error is proposed. The maximum expected utility theory is then applied with this model by considering utility functions based on three principles: quadratic loss, Shannon information, and K-L divergence. In addition, the prior covariance of modal responses under band-limited white-noise excitation is derived, and the nearest Kronecker product approximation is employed to accelerate evaluation of the utility function. As demonstration and validation examples, sensor placements in a 16-degree-of-freedom shear-type building and in the Guangzhou TV Tower under ground motion and wind load are considered. Placements of individual displacement meters, velocimeters and accelerometers, as well as placements of mixed sensor types, are illustrated.
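As a minimal illustration of utility-driven placement, the sketch below selects the two sensor rows of a mode-shape matrix that maximize a log-determinant (information-type) utility under a Bayesian linear model. The building, mode shapes, and prior are invented stand-ins, not the paper's formulation:

```python
import itertools, math

# Minimal sketch of utility-based sensor placement for modal identification:
# pick sensor locations maximizing a log-determinant utility of the Fisher
# information of the modal responses. Mode shapes and prior are illustrative.

# Mode-shape matrix for a 6-DOF shear building, 2 modes (rows = candidate
# sensor locations); crude sinusoidal shapes as a stand-in.
n_dof, n_modes = 6, 2
phi = [[math.sin((d + 1) * (m + 1) * math.pi / (n_dof + 1))
        for m in range(n_modes)]
       for d in range(n_dof)]

def log_det_info(rows, prior=1e-3):
    # Information matrix Phi_S^T Phi_S + prior*I for the selected rows (2x2).
    a = b = c = 0.0
    for r in rows:
        a += phi[r][0] ** 2
        b += phi[r][0] * phi[r][1]
        c += phi[r][1] ** 2
    return math.log((a + prior) * (c + prior) - b * b)

best = max(itertools.combinations(range(n_dof), 2), key=log_det_info)
print(best)   # → (1, 4)
```

The selected pair has large, nearly orthogonal mode-shape rows, which is exactly what a determinant-type utility rewards; the paper's quadratic-loss and K-L utilities generalize this under model and load uncertainty.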
NASA Astrophysics Data System (ADS)
Ibanez, Eduardo
Most U.S. energy usage is for electricity production and vehicle transportation, two interdependent infrastructures. The strength and number of the interdependencies will increase rapidly as hybrid electric transportation systems, including plug-in hybrid electric vehicles and hybrid electric trains, become more prominent. There are several new energy supply technologies reaching maturity, accelerated by public concern over global warming. The National Energy and Transportation Planning Tool (NETPLAN) is the implementation of the long-term investment and operation model for the transportation and energy networks. An evolutionary approach with an underlying fast linear optimization is used to determine the solutions with the best investment portfolios in terms of cost, resiliency and sustainability, i.e., the solutions that form the Pareto front. The popular NSGA-II algorithm is used as the base for the multiobjective optimization, and metrics are developed to evaluate the energy and transportation portfolios. An integrated approach to resiliency is presented, allowing the evaluation of high-consequence events, like hurricanes or widespread blackouts. A scheme to parallelize the multiobjective solver is presented, along with a decomposition method for the cost minimization program. The modular and data-driven design of the software is presented. The modeling tool is applied in a numerical example to optimize the national investment in energy and transportation over the next 40 years.
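The Pareto-front machinery underlying NSGA-II rests on the dominance test sketched below; the portfolio objective values are invented placeholders, not NETPLAN results:

```python
# Sketch of the Pareto-front idea at the heart of NSGA-II-style portfolio
# search: keep the investment portfolios not dominated in (cost, emissions),
# both to be minimized here. Portfolio numbers are illustrative.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost in $B, CO2 in Mt) for hypothetical energy/transport portfolios
portfolios = [(120, 900), (100, 1100), (150, 700), (130, 950), (140, 720)]
front = pareto_front(portfolios)
print(sorted(front))   # → [(100, 1100), (120, 900), (140, 720), (150, 700)]
```

NSGA-II repeatedly applies this non-dominated sorting (plus crowding distance) to rank candidate solutions between evolutionary generations; here only the final filtering step is shown.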
Is there an optimal resting velopharyngeal gap in operated cleft palate patients?
Yellinedi, Rajesh; Damalacheruvu, Mukunda Reddy
2013-01-01
Context: Videofluoroscopy in operated cleft palate patients. Aims: To determine the existence of an optimal resting velopharyngeal (VP) gap in operated cleft palate patients. Settings and Design: A retrospective analysis of lateral view videofluoroscopy of operated cleft palate patients. Materials and Methods: A total of 117 cases of operated cleft palate underwent videofluoroscopy between 2006 and 2011. The lateral view of videofluoroscopy was utilised in the study. A retrospective analysis of the lateral view of videofluoroscopy of these 117 patients was performed to analyse the resting VP gap and its relationship to VP closure. Statistical analysis used: None. Results: Of the 117 cases, 35 had a resting gap of less than 6 mm, 34 had a resting gap between 6 and 10 mm and 48 patients had a resting gap of more than 10 mm. Conclusions: The conclusive finding was that almost all the patients with a resting gap of <6 mm (group C) achieved radiological closure of the velopharynx with speech; thus, they had the least chance of VP insufficiency (VPI). Those patients with a resting gap of >10 mm (group A) did not achieve VP closure on phonation, thus having full-blown VPI. Therefore, it can be concluded that the ideal resting VP gap is approximately 6 mm so as to get the maximal chance of VP closure and thus prevent VPI. PMID:23960311
A Concept and Implementation of Optimized Operations of Airport Surface Traffic
NASA Technical Reports Server (NTRS)
Jung, Yoon C.; Hoang, Ty; Montoya, Justin; Gupta, Gautam; Malik, Waqar; Tobias, Leonard
2010-01-01
This paper presents a new concept of optimized surface operations at busy airports to improve the efficiency of taxi operations, as well as reduce environmental impacts. The suggested system architecture consists of the integration of two decoupled optimization algorithms. The Spot Release Planner provides sequence and timing advisories to tower controllers for releasing departure aircraft into the movement area to reduce taxi delay while achieving maximum throughput. The Runway Scheduler provides take-off sequence and arrival runway crossing sequence to the controllers to maximize the runway usage. The description of a prototype implementation of this integrated decision support tool for the airport control tower controllers is also provided. The prototype decision support tool was evaluated through a human-in-the-loop experiment, where both the Spot Release Planner and Runway Scheduler provided advisories to the Ground and Local Controllers. Initial results indicate the average number of stops made by each departure aircraft in the departure runway queue was reduced by more than half when the controllers were using the advisories, which resulted in reduced taxi times in the departure queue.
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Litt, Jonathan S.
2010-01-01
This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
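The filter's core logic, a sliding window whose mean is archived whenever its standard deviation drops below a limit, can be sketched as follows; window length, threshold, and the engine trace are illustrative, not the paper's tuned values:

```python
from collections import deque
from statistics import mean, pstdev

# Sketch of the generic steady-state detection idea described above: keep a
# sliding window of a monitored parameter, and when the window's standard
# deviation falls below a threshold, archive the window mean as a
# steady-state operating point. Window size and threshold are illustrative.

def steady_state_points(stream, window=5, std_limit=0.5):
    buf, archived = deque(maxlen=window), []
    for sample in stream:
        buf.append(sample)
        if len(buf) == window and pstdev(buf) < std_limit:
            archived.append(round(mean(buf), 2))
            buf.clear()          # avoid re-archiving the same dwell
    return archived

# Simulated engine-parameter trace: transient, dwell, transient, dwell
trace = [10, 14, 20, 25, 30.1, 30.0, 29.9, 30.1, 30.0,
         35, 42, 50.0, 50.1, 49.9, 50.0, 50.1]
print(steady_state_points(trace))   # → [30.02, 50.02]
```

A production version would, as the abstract notes, add domain-specific logic (e.g. per-parameter thresholds and outlier rejection) and run on-line over the streaming data rather than in batch.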
NASA Technical Reports Server (NTRS)
Quast, Peter; Tung, Frank; West, Mark; Wider, John
2000-01-01
The Chandra X-ray Observatory (CXO, formerly AXAF) is the third of the four NASA great observatories. It was launched from Kennedy Space Center on 23 July 1999 aboard the Space Shuttle Columbia and was successfully inserted into a 330 x 72,000 km orbit by the Inertial Upper Stage (IUS). Through a series of five Integral Propulsion System burns, CXO was placed in a 10,000 x 139,000 km orbit. After initial on-orbit checkout, Chandra's first light images were unveiled to the public on 26 August 1999. The CXO Pointing Control and Aspect Determination (PCAD) subsystem is designed to perform attitude control and determination functions in support of transfer orbit operations and the on-orbit science mission. After a brief description of the PCAD subsystem, the paper highlights the PCAD activities during the transfer orbit and initial on-orbit operations. These activities include: CXO/IUS separation, attitude and gyro bias estimation with earth sensor and sun sensor, attitude control and disturbance torque estimation for delta-v burns, momentum build-up due to gravity gradient and solar pressure, momentum unloading with thrusters, attitude initialization with star measurements, gyro alignment calibration, maneuvering and transition to normal pointing, and PCAD pointing and stability performance.
[Garbage incineration plants -- planning, organisation and operation from a health point of view].
Thriene, B
2004-12-01
The Waste Disposal Regulation, which became effective March 1, 2001, stipulates that from June 1, 2005 biodegradable residential household and commercial waste may only be deposited on landfills after thermal or mechanical-biological pre-treatment. The Regulation aims at preventing the generation of landfill gases that are detrimental to health and climate, and the discharge of pollutants from landfills into the groundwater. Waste calculations for the year 2005 predict a volume of 28 million tons. Existing incineration and mechanical-biological treatment plants cover volumes of 14 and 2.5 million tons, respectively. Consequently, their capacity does not meet the demand in Germany. Waste disposal plans have been prepared in the German Federal State of Saxony-Anhalt since 1996 and potential sites for garbage incineration plants have been identified. Energy and waste management companies have initiated application procedures for thermal waste treatment plants and utilization of energy. Health Departments and the Hygiene Institute contributed to the approval procedure by providing the required Health Impact Assessment. We recommended selecting sites in the vicinity of large cities and conurbations and, taking into account the main wind direction, preferably in the northeast. Long-distance transport should be avoided. Based on immission forecasts for territorial background pollution, additional noise and air pollution were examined for reasonableness. In addition, providing structural safety of plants and guaranteeing continuous monitoring of emission limit values of air pollutants was a prerequisite for strict observance of the 17th BImSchV (Federal Decree on the Prevention of Immissions). The paper informs about planning, construction and conditions for operating the combined garbage heating and power station in Magdeburg-Rothensee (600,000 t/a). Saxony-Anhalt's waste legislation requires non-recyclable waste to be disposed of at the place of its generation, if possible.
NASA Astrophysics Data System (ADS)
Popa, R.; Popa, F.; Popa, B.; Zachia-Zlatea, D.
2010-08-01
An optimization model based on genetic algorithms is presented for the operation of a multipurpose hydroelectric power development consisting of a pumped storage plant (PSP) with a weekly operation cycle. The lower reservoir of the PSP is supplied upstream by a peak hydropower plant (HPP) with a large reservoir, and supplies its own HPP, which provides the required discharges towards downstream. Under these conditions, the optimum operation of the assembly of three reservoirs and hydropower plants becomes a difficult problem once restrictions are considered regarding: the gradients allowed for reservoir filling/emptying, compliance with a long-term policy for the upper reservoir of the hydroelectric development and with the weekly cycle of the PSP upper reservoir, correspondence between the power output/consumption and the weekly load schedule, use of the water resource at maximum overall efficiency, etc. Maximization of the net energy value (generated minus consumed) was selected as the performance function of the model, considering the differentiated price of electric energy over the week (working or weekend days; peak, half-peak or base hours). The analysis time step was required to be 3 hours, resulting in a weekly horizon of 56 steps and 168 decision variables, respectively, for the 3 HPPs of the system. The decision variables were the flows turbined at the HPPs and the number of working hydro-units at the PSP in each time step. The numerical application considered the guiding data of the Fantanele-Tarnita-Lapustesti hydroelectric development. Results of various simulations proved the qualities of the proposed optimization model, which will allow its use within a decision-support program for such a development.
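The objective being maximized above, net energy value over differentiated prices under reservoir and cycle-closure constraints, can be shown on a toy single-reservoir PSP. At the paper's 168 decision variables a genetic algorithm is needed; at this toy size (8 three-hour steps) exhaustive enumeration of the same search space suffices. All prices, volumes, and the efficiency are invented:

```python
import itertools

# Toy version of the pumped-storage scheduling objective: maximize the value
# of generated minus consumed energy under differentiated prices, subject to
# reservoir limits and a cycle-closure constraint. Figures are illustrative.

PRICE = [30, 30, 80, 120, 120, 80, 30, 30]   # $/MWh per 3-h step (assumed)
E_STEP = 100.0        # MWh moved per step when generating or pumping
EFF = 0.75            # assumed round-trip efficiency of the pump-turbine cycle
V_MAX = 2             # upper-reservoir capacity, in "steps" of water
V_START = 1           # cycle constraint: end the horizon where we started

def revenue(plan):                    # plan[t] in {-1: pump, 0: idle, +1: gen}
    vol, total = V_START, 0.0
    for t, a in enumerate(plan):
        if a == 1:                    # generate: drain one step of water
            if vol < 1:
                return None           # infeasible: reservoir empty
            vol -= 1
            total += E_STEP * EFF * PRICE[t]
        elif a == -1:                 # pump: store one step of water
            if vol >= V_MAX:
                return None           # infeasible: reservoir full
            vol += 1
            total -= E_STEP * PRICE[t]
    return total if vol == V_START else None   # enforce cycle closure

best = max((p for p in itertools.product((-1, 0, 1), repeat=8)
            if revenue(p) is not None), key=revenue)
print(best, revenue(best))
```

The optimum pumps in the cheap valleys and generates in the two peak steps, which is the qualitative behaviour the genetic algorithm searches for at full scale.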
Research on Optimal Operation by Adjusting Blade Angle in Jiangdu No. 4 Pumping Station of China
NASA Astrophysics Data System (ADS)
Lihua, Zhang; Jilin, Chang; Rentian, Zhang; Yi, Gong
2010-06-01
A nonlinear programming model for the optimal day-operation of multiple pump units in one pumping station by adjusting blade angle is presented in this paper, with peak-valley electricity prices taken into account. The model takes the minimal operation cost of the pump assembly as the objective function. The periods are defined as stage variables, the blade angle and the number of working pumps are the decision variables, and the water volume pumped in one day is the constraint condition. The problem is very difficult to solve by conventional methods. This paper presents a new method which adopts an experimental optimization method for adjusting blade angle in different periods and a linear integer programming method to select the number of pumps. After applying the method to the optimal operation of Jiangdu No. 4 pumping station, the source pumping station of the Eastern Route Project of the South-to-North Water Diversion (with seven pumps and a single-unit design flow rate of 30.0 m³/s), we obtained the following results: (1) Under the constraint conditions of a typical tidal process (average tidal levels from December to February of the next year), a designed average pumping head of 7.8 m, and operation loads at 100%, 80% and 60% of full load (the water volume when the pumps work with a blade angle of 0 degrees and a speed of 150 r/min all day), the relative energy saving reaches 5.18%-33.02% compared with keeping the pumps operating at the designed blade angle of 0 degrees when peak-valley electricity prices are considered. When peak-valley electricity prices are not considered, the figure is 1.96%-9.71%, and lower loads correspond to greater cost savings. (2) The key factor in deciding the operation state of the pumps is the electricity price when peak-valley prices are considered. All the pumps should be working and the blade angle should be in the largest state when at the valley price, while the number
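A greatly simplified sketch of the cost-minimizing scheduling idea, selecting how many pumps run in each price period to meet a daily volume, is given below. The greedy cheapest-period-first rule and all prices, flows, and powers are assumptions for illustration, not Jiangdu No. 4 data (and the paper's actual method also optimizes blade angle, omitted here):

```python
# Toy sketch: given peak/valley electricity prices per 3-hour period, choose
# how many pumps run in each period to deliver a required daily volume at
# minimum energy cost. All figures are invented placeholders.

PERIODS = 8                 # eight 3-hour stages per day
PRICE = [0.3, 0.3, 0.3, 0.8, 1.2, 1.2, 0.8, 0.3]   # $/kWh per period
N_PUMPS = 7
FLOW_PER_PUMP = 30.0 * 3600 * 3 / 1e6    # Mm^3 pumped per pump per period
POWER_PER_PUMP = 4.0e3                   # kW per pump (assumed)

def cheapest_schedule(volume_Mm3):
    """Greedy: run full pump sets in the cheapest periods first."""
    schedule = [0] * PERIODS
    remaining = volume_Mm3
    for p in sorted(range(PERIODS), key=lambda p: PRICE[p]):
        if remaining <= 0:
            break
        pumps = min(N_PUMPS, int(-(-remaining // FLOW_PER_PUMP)))  # ceil
        schedule[p] = pumps
        remaining -= pumps * FLOW_PER_PUMP
    cost = sum(n * POWER_PER_PUMP * 3 * PRICE[p] for p, n in enumerate(schedule))
    return schedule, round(cost, 0)

sched, cost = cheapest_schedule(6.0)
print(sched, cost)   # → [7, 7, 5, 0, 0, 0, 0, 0] 68400.0
```

All pumping lands in the valley-price periods, mirroring the paper's finding that the electricity price is the key factor deciding the operating state once peak-valley tariffs are considered.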
Kurek, Wojciech; Ostfeld, Avi
2013-01-30
A multi-objective methodology utilizing the Strength Pareto Evolutionary Algorithm (SPEA2) linked to EPANET for trading off pumping costs, water quality, and tank sizing of water distribution systems is developed and demonstrated. The model integrates variable speed pumps for modeling pump operation, two water quality objectives (one based on chlorine disinfectant concentrations and one on water age), and tank sizing costs, which are assumed to vary with location and diameter. The water distribution system is subject to extended period simulations, variable energy tariffs, Kirchhoff's laws 1 and 2 for continuity of flow and pressure, tank water level closure constraints, and storage-reliability requirements. EPANET Example 3 is employed for demonstrating the methodology on two multi-objective models, which differ in the imposed water quality objective (i.e., either disinfectant or water age considerations). Three-fold Pareto optimal fronts are presented. Sensitivity analysis on the storage-reliability constraint and its influence on pumping cost, water quality, and tank sizing is explored. The contribution of this study is in tailoring design (tank sizing), pump operational costs, water quality of two types, and reliability through residual storage requirements in a single multi-objective framework. The model was found to be stable in generating multi-objective three-fold Pareto fronts, while producing explainable engineering outcomes. The model can be used as a decision tool for pump operation, water quality, storage required for reliability, and tank sizing decision-making. PMID:23262407
Optimizing operational water management with soil moisture data from Sentinel-1 satellites
NASA Astrophysics Data System (ADS)
Pezij, Michiel; Augustijn, Denie; Hendriks, Dimmie; Hulscher, Suzanne
2016-04-01
In the Netherlands, regional water authorities are responsible for the management and maintenance of regional water bodies. Due to socio-economic developments (e.g. agricultural intensification and ongoing urbanisation) and an increase in climate variability, the pressure on these water bodies is growing. Optimization of water availability, taking into account the needs of different users in both wet and dry periods, is crucial for sustainable development. To support timely and well-directed operational water management, accurate information on the current state of the system as well as reliable models to evaluate water management optimization measures are essential. Previous studies showed that the use of remote sensing data (for example, soil moisture data) in water management offers many opportunities (e.g. Wanders et al. (2014)). However, these data are not yet used in operational applications at a large scale. The Sentinel-1 satellite programme offers high spatiotemporal resolution soil moisture data (1 image per 6 days with a spatial resolution of 10 by 10 m) that are freely available. In this study, these data will be used to improve the Netherlands Hydrological Instrument (NHI). The NHI consists of coupled models for the unsaturated zone (MetaSWAP), groundwater (iMODFLOW) and surface water (Mozart and DM). The NHI is used for scenario analyses and operational water management in the Netherlands (De Lange et al., 2014). Due to the lack of soil moisture data, the unsaturated zone model has not yet been thoroughly validated and its output is not used by regional water authorities for decision-making. Therefore, the newly acquired remotely sensed soil moisture data will be used to improve the skill of the MetaSWAP model and the NHI as a whole. The research will focus, among other things, on the calibration of soil parameters by comparing model output (MetaSWAP) with the remotely sensed soil moisture data. Eventually, we want to apply data assimilation to improve
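The kind of data assimilation envisaged above can be sketched as a scalar Kalman-style update that pulls a modelled soil-moisture state toward a satellite observation, weighted by their error variances. All variances and values are illustrative, not Sentinel-1 or NHI figures:

```python
# Minimal scalar Kalman-style analysis step: blend a model forecast of
# volumetric soil moisture with a remotely sensed observation according to
# their assumed error variances. Numbers are invented placeholders.

def assimilate(model_value, obs_value, var_model, var_obs):
    gain = var_model / (var_model + var_obs)        # Kalman gain
    analysis = model_value + gain * (obs_value - model_value)
    var_analysis = (1.0 - gain) * var_model         # reduced uncertainty
    return analysis, var_analysis

# Model forecast vs. a hypothetical satellite retrieval
theta, var_theta = assimilate(model_value=0.30, obs_value=0.24,
                              var_model=0.004, var_obs=0.002)
print(round(theta, 3), round(var_theta, 5))   # → 0.26 0.00133
```

Because the model variance here is assumed larger than the observation variance, the analysis lands closer to the observation; a full ensemble scheme applies the same weighting field-by-field across the model grid.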
Optimal Technology Selection and Operation of Microgrids inCommercial Buildings
Marnay, Chris; Venkataramanan, Giri; Stadler, Michael; Siddiqui, Afzal; Firestone, Ryan; Chandran, Bala
2007-01-15
The deployment of small (<1-2 MW) clusters of generators, heat and electrical storage, efficiency investments, and combined heat and power (CHP) applications (particularly involving heat-activated cooling) in commercial buildings promises significant benefits but poses many technical and financial challenges, both in system choice and its operation; if successful, such systems may be precursors to widespread microgrid deployment. The presented optimization approach to choosing such systems and their operating schedules uses Berkeley Lab's Distributed Energy Resources Customer Adoption Model (DER-CAM), extended to incorporate electrical storage options. DER-CAM chooses annual energy bill minimizing systems in a fully technology-neutral manner. An illustrative example for a San Francisco hotel is reported. The chosen system includes two engines and an absorption chiller, providing an estimated 11 percent cost savings and 10 percent carbon emission reductions, under idealized circumstances.
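The bill-minimizing, technology-neutral selection idea can be caricatured by enumerating tiny candidate portfolios; every cost, load, and efficiency below is an invented placeholder, not DER-CAM data or its mixed-integer formulation:

```python
import itertools

# Toy sketch of bill-minimizing technology selection: enumerate small
# candidate DER portfolios and pick the one with the lowest annualized
# energy bill. All figures are invented placeholders.

TECHS = {                          # annualized capital cost ($/yr),
    "engine_60kW": (9000, 0.07),   #   fuel cost to serve 1 kWh ($/kWh)
    "abs_chiller": (4000, 0.02),   # serves cooling load from waste heat
}
GRID_PRICE = 0.15               # $/kWh (assumed tariff)
ELEC_LOAD = 300_000             # kWh/yr electric load
COOL_LOAD = 120_000             # kWh/yr cooling load

def annual_bill(portfolio):
    cost = sum(TECHS[t][0] for t in portfolio)
    # electric load: engine serves it if present, otherwise the grid
    elec_rate = TECHS["engine_60kW"][1] if "engine_60kW" in portfolio else GRID_PRICE
    cost += ELEC_LOAD * elec_rate
    # cooling: the absorption chiller only helps when engine waste heat exists
    if "abs_chiller" in portfolio and "engine_60kW" in portfolio:
        cost += COOL_LOAD * TECHS["abs_chiller"][1]
    else:
        cost += COOL_LOAD * GRID_PRICE
    return cost

options = [set(c) for r in range(3) for c in itertools.combinations(TECHS, r)]
best = min(options, key=annual_bill)
print(sorted(best), round(annual_bill(best)))
```

The coupling between the engine and the absorption chiller (the chiller only pays off when waste heat is available) is the kind of CHP interaction that makes the real problem a joint selection-and-scheduling optimization rather than a per-technology comparison.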
NASA Technical Reports Server (NTRS)
Smith, J. M.; Nichols, L. D.
1977-01-01
The values of seed percentage, oxygen-to-fuel ratio, combustion pressure, Mach number, and magnetic field strength which maximize either the electrical conductivity or the power density at the entrance of an MHD power generator were obtained. The working fluid is the combustion product of H2 and O2 seeded with CsOH. The ideal theoretical segmented Faraday generator, along with an empirical form found by correlating the data of many experimenters working with generators of different sizes, electrode configurations, and working fluids, is investigated. The conductivity and power densities optimize at a seed fraction of 3.5 mole percent and an oxygen-to-hydrogen weight ratio of 7.5. The optimum values of combustion pressure and Mach number depend on the operating magnetic field strength.
NASA Technical Reports Server (NTRS)
Leininger, G. G.; Lehtinen, B.; Riehl, J. P.
1972-01-01
A method is presented for designing optimal feedback controllers for systems having subsystem sensitivity constraints. Such constraints reflect the presence of subsystem performance indices which are in conflict with the performance index of the overall system. The key to the approach is the use of relative performance index sensitivity (a measure of the deviation of a performance index from its optimum value). The weighted sum of subsystem and/or operational-mode relative performance index sensitivities is defined as an overall performance index. A method is developed to handle linear systems with quadratic performance indices and either full or partial state feedback. The usefulness of this method is demonstrated by applying it to the design of a stability augmentation system (SAS) for a VTOL aircraft. A desirable VTOL SAS design is one that produces good VTOL transient response both with and without active pilot control. The system designed using this method is shown to effect a satisfactory compromise solution to this problem.
Optimizing operational efficiencies in early phase trials: The Pediatric Trials Network experience.
England, Amanda; Wade, Kelly; Smith, P Brian; Berezny, Katherine; Laughon, Matthew
2016-03-01
Performing drug trials in pediatrics is challenging. In support of the Best Pharmaceuticals for Children Act, the Eunice Kennedy Shriver National Institute of Child Health and Human Development funded the formation of the Pediatric Trials Network (PTN) in 2010. Since its inception, the PTN has developed strategies to increase both efficiency and safety of pediatric drug trials. Through use of innovative techniques such as sparse and scavenged blood sampling as well as opportunistic study design, participation in trials has grown. The PTN has also strived to improve consistency of adverse event reporting in neonatal drug trials through the development of a standardized adverse event table. We review how the PTN is optimizing operational efficiencies in pediatric drug trials to increase the safety of drugs in children. PMID:26968616
NASA Astrophysics Data System (ADS)
Kudryashov, Nikolay A.; Shilnikov, Kirill E.
2016-06-01
Numerical computation of the three-dimensional problem of freezing-interface propagation during cryosurgery is coupled with multi-objective optimization methods in order to improve the efficiency and safety of cryosurgery operations. Prostate cancer treatment and cutaneous cryosurgery are considered. The heat transfer in soft tissue during thermal exposure to low temperature is described by the Pennes bioheat model and is coupled with an enthalpy method for blurred phase-change computations. The finite volume method, combined with a control-volume approximation of the heat fluxes, is applied for the numerical modeling of cryosurgery on tumor tissue of quite arbitrary shape. The flux relaxation approach is used to improve the stability of the explicit finite difference schemes. Mounting additional heating elements is studied as an approach to control the propagation of the cellular necrosis front. The volumes of undestroyed tumor tissue and destroyed healthy tissue are taken as objective functions, while the locations of additional heating elements in cutaneous cryosurgery and of cryotips in prostate cancer cryotreatment are the decision variables of the multi-objective problem. A quasi-gradient method is proposed for finding segments of the Pareto front as solutions of the multi-objective optimization problem.
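The Pennes bioheat update at the core of such simulations can be sketched in one spatial dimension. The explicit finite-difference step below is a minimal illustration, not the paper's 3D finite-volume/enthalpy scheme: the phase-change (enthalpy) term and metabolic heating are omitted, and all material parameters are rough order-of-magnitude assumptions.

```python
# One explicit finite-difference step of a simplified 1D Pennes bioheat
# equation: rho*c * dT/dt = k * d2T/dx2 + w_b*rho_b*c_b * (T_art - T).

def pennes_step(T, dx, dt, k=0.5, rho_c=3.6e6, w_rho_c_b=2000.0, T_art=37.0):
    """Advance a temperature profile T (list of deg C values) by one time step.

    Boundary nodes are held fixed (Dirichlet conditions)."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        diffusion = k * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx**2
        perfusion = w_rho_c_b * (T_art - T[i])  # blood perfusion source term
        Tn[i] = T[i] + dt * (diffusion + perfusion) / rho_c
    return Tn

# A cryoprobe boundary at -40 C starts cooling tissue initially at 37 C.
T = [-40.0] + [37.0] * 9
T = pennes_step(T, dx=1e-3, dt=0.05)
```

The chosen dx and dt satisfy the explicit-scheme stability bound for these parameters; the flux-relaxation approach mentioned in the abstract is one way to loosen that restriction.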
Long Series Multi-objectives Optimal Operation of Water And Sediment Regulation
NASA Astrophysics Data System (ADS)
Bai, T.; Jin, W.
2015-12-01
A secondary suspended river has formed in the Inner Mongolia reaches, threatening the security of the reach and the ecological health of the river. Research on water-sediment regulation by cascade reservoirs is therefore urgent and necessary. Against this background, multi-objective water and sediment regulation is studied in this paper. Firstly, multi-objective optimal operation models of the Longyangxia and Liujiaxia cascade reservoirs are established. Secondly, based on constraint-handling and feasible-search-space techniques, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is substantially improved to solve the model. Thirdly, four different scenarios are set. It is demonstrated that: (1) scatter diagrams of the Pareto front show the optimal solutions for power generation maximization and sediment transport maximization, as well as the globally balanced solutions between the two; (2) the potential of water-sediment regulation by the Longyangxia and Liujiaxia cascade reservoirs is analyzed; (3) with increasing water supply in the future, conflict between water supply and water-sediment regulation will arise, and the sustainability of water and sediment regulation will suffer as transferable water in the cascade reservoirs decreases; (4) the transfer project has little benefit for water-sediment regulation. The results have important practical significance for water-sediment regulation by cascade reservoirs in the Upper Yellow River and for constructing a water and sediment control system in the whole Yellow River basin.
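The non-dominated (Pareto) filtering at the core of NSGA-II can be sketched for two maximization objectives such as power generation and sediment transport. The candidate points below are invented for illustration; a full NSGA-II adds crowding distance, selection, crossover, and mutation on top of this dominance test.

```python
# Pareto filtering for two maximization objectives.

def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly
    better in at least one (both objectives are maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (power, sediment) pairs; (7, 4) and (5, 5) are dominated by (8, 5).
solutions = [(10, 1), (8, 5), (6, 9), (7, 4), (5, 5)]
print(pareto_front(solutions))  # prints [(10, 1), (8, 5), (6, 9)]
```

This O(n^2) scan is the simplest form; NSGA-II's fast non-dominated sort organizes the same comparisons more efficiently into ranked fronts.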
Kainz, K; Prah, D; Ahunbay, E; Li, X
2014-06-01
Purpose: A novel modulated arc therapy technique, mARC, enables superposition of step-and-shoot IMRT segments upon a subset of the optimization points (OPs) of a continuous-arc delivery. We compare two approaches to mARC planning: one with the number of OPs fixed throughout optimization, and another where the planning system determines the number of OPs in the final plan, subject to an upper limit defined at the outset. Methods: Fixed-OP mARC planning was performed for representative cases using Panther v. 5.01 (Prowess, Inc.), while variable-OP mARC planning used Monaco v. 5.00 (Elekta, Inc.). All Monaco planning used an upper limit of 91 OPs; those OPs with minimal MU were removed during optimization. Plans were delivered, and delivery times recorded, on a Siemens Artiste accelerator using a flat 6MV beam with 300 MU/min rate. Dose distributions measured using ArcCheck (Sun Nuclear Corporation, Inc.) were compared with the plan calculation; the two were deemed consistent if they agreed to within 3.5% in absolute dose and 3.5 mm in distance-to-agreement among > 95% of the diodes within the direct beam. Results: Example cases included a prostate and a head-and-neck planned with a single arc and fraction doses of 1.8 and 2.0 Gy, respectively. Aside from slightly more uniform target dose for the variable-OP plans, the DVHs for the two techniques were similar. For the fixed-OP technique, the number of OPs was 38 and 39, and the delivery time was 228 and 259 seconds, respectively, for the prostate and head-and-neck cases. For the final variable-OP plans, there were 91 and 85 OPs, and the delivery time was 296 and 440 seconds, correspondingly longer than for fixed-OP. Conclusion: For mARC, both the fixed-OP and variable-OP approaches produced comparable-quality plans whose delivery was successfully verified. To keep delivery time per fraction short, a fixed-OP planning approach is preferred.
Starling, Melissa J.; Branson, Nicholas; Cody, Denis; Starling, Timothy R.; McGreevy, Paul D.
2014-01-01
Recent advances in animal welfare science used judgement bias, a type of cognitive bias, as a means to objectively measure an animal's affective state. It is postulated that animals showing heightened expectation of positive outcomes may be categorised optimistic, while those showing heightened expectations of negative outcomes may be considered pessimistic. This study pioneers the use of a portable, automated apparatus to train and test the judgement bias of dogs. Dogs were trained in a discrimination task in which they learned to touch a target after a tone associated with a lactose-free milk reward and abstain from touching the target after a tone associated with water. Their judgement bias was then probed by presenting tones between those learned in the discrimination task and measuring their latency to respond by touching the target. A Cox's Proportional Hazards model was used to analyse censored response latency data. Dog and Cue both had a highly significant effect on latency and risk of touching a target. This indicates that judgement bias both exists in dogs and differs between dogs. Test number also had a significant effect, indicating that dogs were less likely to touch the target over successive tests. Detailed examination of the response latencies revealed tipping points where average latency increased by 100% or more, giving an indication of where dogs began to treat ambiguous cues as predicting more negative outcomes than positive ones. Variability scores were calculated to provide an index of optimism using average latency and standard deviation at cues after the tipping point. The use of a mathematical approach to assessing judgement bias data in animal studies offers a more detailed interpretation than traditional statistical analyses. This study provides proof of concept for the use of an automated apparatus for measuring cognitive bias in dogs. PMID:25229458
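The tipping-point rule described above (the first probe cue at which mean response latency rises by 100% or more relative to the previous cue) can be sketched directly. The latency values in the example are invented for illustration.

```python
# Find the "tipping point": the first cue index whose mean latency is at
# least double the mean latency at the previous cue.

def tipping_point(mean_latencies):
    """Return the index of the first at-least-doubling step, or None."""
    for i in range(1, len(mean_latencies)):
        if mean_latencies[i] >= 2.0 * mean_latencies[i - 1]:
            return i
    return None

# Mean latencies (s) for cues ordered from most positive to most negative.
latencies = [1.2, 1.4, 1.5, 3.4, 6.0]
print(tipping_point(latencies))  # prints 3
```

Cues at or beyond the returned index would be treated as predicting predominantly negative outcomes under this rule.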
Optimization of Preprocessing and Densification of Sorghum Stover at Full-scale Operation
Neal A. Yancey; Jaya Shankar Tumuluru; Craig C. Conner; Christopher T. Wright
2011-08-01
Transportation costs can be a prohibitive step in bringing biomass to a preprocessing location or biofuel refinery. One alternative to transporting biomass in baled or loose form to a preprocessing location is to use a mobile preprocessing system that can be relocated to the various locations where biomass is stored, preprocess and densify the biomass, and then ship it to the refinery as needed. The Idaho National Laboratory (INL) has a full-scale Process Demonstration Unit (PDU), which includes a stage-1 grinder, hammer mill, drier, pellet mill, and cooler with the associated conveyance-system components. Testing at bench and pilot scale has been conducted to determine the effects of moisture and crop variety on preprocessing efficiency and product quality. The INL's PDU provides an opportunity to test the conclusions made at bench and pilot scale on full industrial-scale systems. Each component of the PDU is operated from a central operating station, where data are collected to determine power consumption rates for each step in the process. The power for each electrical motor in the system is monitored from the control station to watch for problems and determine optimal conditions for system performance. The data can then be examined to observe how changes in biomass input parameters (moisture and crop type, for example), mechanical changes (screen size, biomass drying, pellet size, grinding speed, etc.), or other variations affect the power consumption of the system. Sorghum in four-foot round bales was tested in the system using a series of six different screen sizes: 3/16 in., 1 in., 2 in., 3 in., 4 in., and 6 in. The effects on power consumption, product quality, and production rate were measured to determine optimal conditions.
Energetic optimization of a piezo-based touch-operated button for man-machine interfaces
NASA Astrophysics Data System (ADS)
Sun, Hao; de Vries, Theo J. A.; de Vries, Rene; van Dalen, Harry
2012-03-01
This paper discusses the optimization of a touch-operated button for man-machine interfaces (MMIs) based on piezoelectric energy-harvesting techniques. In the mechanical button, a common piezoelectric diaphragm is assembled to harvest ambient energy from the source, i.e. the operator's touch. Under a touch-force load, the integrated diaphragm undergoes a bending deformation. Its mechanical strain is then converted into the required electrical energy via the piezoelectric effect exhibited by the diaphragm. The structural design (i) makes the piezoceramic work under static compressive stress instead of static or dynamic tensile stress, (ii) achieves a satisfactory stress level and (iii) provides the diaphragm and the button with a fatigue lifetime in excess of millions of touch operations. To improve the button's function, the effects of some key properties, comprising dimension, boundary condition and load condition, on the electrical behavior of the piezoelectric diaphragm are evaluated by electromechanical coupling analysis in ANSYS. The finite element analysis (FEA) results indicate that modifying these properties could significantly enhance the diaphragm's performance. Based on their different contributions to the improvement of the diaphragm's electrical energy output, the key properties are incorporated into the redesign of the piezoelectric diaphragm and the structural design of the piezo-based button. Comparison of the original structure with the optimized result shows that the electrical energy stored in the diaphragm and the voltage output are increased by 1576% and 120%, respectively, while the volume of the piezoceramic is reduced to 33.6% of its original value. These results will be adopted to update the design of the self-powered button, enabling a large decrease in the energy consumption and lifetime cost of the MMI.
Eco-operation of co-generation systems optimized by environmental load value
Kato, Seizo; Nomura, Nobukazu; Maruyama, Naoki
1998-07-01
In this paper the authors introduce a life cycle assessment (LCA) scheme that uses the environmental load value (ELV) as a numerical measure of the quantitative load of any industrial activity on the environment. The value is calculated as the total sum of the respective environmental load indexes over the life cycle, from cradle to grave. An algorithm and software using a combined simplex and branch-and-bound technique were developed to compute the numerical ELV and its optimization. This ELV scheme is applied to co-generation energy systems consisting of gas turbines, waste-heat boilers, auxiliary boilers, steam turbines, electrically driven turbo refrigerators, steam absorption refrigerators and heat exchangers, which can easily be set up on the computer display in an icon-based Q&A style, including various kinds of parameters. Two kinds of environmental loads, fossil fuel depletion and CO2 global warming due to electricity generation from power stations in Japan, are chosen as the ELV criteria. The ELV optimization is calculated for the hourly energy demands for electricity, air cooling, air heating, and hot water of a district consisting of eight office buildings and four hotels. As a result, the ELV scheme constructed here is found to be an attractive and powerful tool for quantitatively estimating the LCA environmental loads of any industrial activity, such as co-generation energy systems, and for proposing eco-operation of the industrial activity of interest. Cost estimation can be made as well.
Design optimization of MR-compatible rotating anode x-ray tubes for stable operation
Shin, Mihye; Lillaney, Prasheel; Hinshaw, Waldo; Fahrig, Rebecca
2013-11-15
Purpose: Hybrid x-ray/MR systems can enhance the diagnosis and treatment of endovascular, cardiac, and neurologic disorders by using the complementary advantages of both modalities for image guidance during interventional procedures. Conventional rotating anode x-ray tubes fail near an MR imaging system, since MR fringe fields create eddy currents in the metal rotor which cause a reduction in the rotation speed of the x-ray tube motor. A new x-ray tube motor prototype has been designed and built to be operated close to a magnet. To ensure the stability and safety of the motor operation, dynamic characteristics must be analyzed to identify possible modes of mechanical failure. In this study a 3D finite element method (FEM) model was developed in order to explore possible modifications, and to optimize the motor design. The FEM provides a valuable tool that permits testing and evaluation using numerical simulation instead of building multiple prototypes. Methods: Two experimental approaches were used to measure resonance characteristics: the first obtained the angular speed curves of the x-ray tube motor employing an angle encoder; the second measured the power spectrum using a spectrum analyzer, in which the large amplitude of peaks indicates large vibrations. An estimate of the bearing stiffness is required to generate an accurate FEM model of motor operation. This stiffness depends on both the bearing geometry and adjacent structures (e.g., the number of balls, clearances, preload, etc.) in an assembly, and is therefore unknown. This parameter was set by matching the FEM results to measurements carried out with the anode attached to the motor, and verified by comparing FEM predictions and measurements with the anode removed. The validated FEM model was then used to sweep through design parameters [bearing stiffness (1×10^5–5×10^7 N/m), shaft diameter (0.372–0.625 in.), rotor diameter (2.4–2.9 in.), and total length of motor (5.66–7.36 in.)] to
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H.; Han, X.; Martinez, F.; Jimenez, M.; Manzano, J.; Chanzy, A.; Vereecken, H.
2013-12-01
Data assimilation (DA) techniques, like the local ensemble transform Kalman filter (LETKF), not only offer the opportunity to update model predictions by assimilating new measurement data in real time, but also provide an improved basis for real-time (DA-based) control. This study focuses on the optimization of real-time irrigation scheduling for fields of citrus trees near Picassent (Spain). For three selected fields the irrigation was optimized with DA-based control, and for other fields irrigation was optimized on the basis of a more traditional approach in which reference evapotranspiration for citrus trees was estimated using the FAO method. The performance of the two methods is compared for the year 2013. The DA-based real-time control approach is based on ensemble predictions of soil moisture profiles, using the Community Land Model (CLM). The uncertainty in the model predictions is introduced by feeding the model with weather predictions from an ensemble prediction system (EPS) and uncertain soil hydraulic parameters. The model predictions are updated daily by assimilating soil moisture data measured by capacitance probes. The measurement data are assimilated with the help of the LETKF. The irrigation need was calculated for each of the ensemble members and averaged, and logistic constraints (hydraulics, energy costs) were taken into account for the final assignment of irrigation in space and time. For the operational scheduling based on this approach, only model states and no model parameters were updated. Other, non-operational simulation experiments for the same period were carried out where (1) neither ensemble weather forecast nor DA was used (open loop), (2) only the ensemble weather forecast was used, (3) only DA was used, (4) soil hydraulic parameters were also updated in data assimilation and (5) both soil hydraulic and plant-specific parameters were updated. The FAO-based and DA-based real-time irrigation control are compared in terms of soil moisture
NASA Astrophysics Data System (ADS)
Kitwadkar, Amol Hanmant
Over 60% of the nation's total energy is supplied by oil and natural gas together, and this demand for energy will continue to grow in the future (Radler et al. 2012). The growing demand is pushing the exploration and exploitation of onshore oil and natural gas reservoirs. Hydraulic fracturing has proven not only to create jobs and achieve economic growth, but also to exert a lot of stress on natural resources such as water. As water is one of the most important factors in the world of hydraulic fracturing, proper fluids management during the development of a field of operation is perhaps the key element in addressing a lot of these issues. Almost 30% of the water used during hydraulic fracturing comes out of the well in the form of flowback water during the first month after the well is fractured (Bai et al. 2012). Handling this large amount of water coming out of the newly fractured wells is one of the major issues, as the volume of water after this period drops off and remains constant for a long time (Bai et al. 2012), and permanent facilities can be constructed to take care of the water over a longer period. This paper illustrates the development of a GIS-based tool for optimizing the location of a mobile produced-water treatment facility while development is still occurring. A methodology was developed based on a multi-criteria decision analysis (MCDA) to optimize the location of the mobile treatment facilities. The criteria for the MCDA include well density, ease of access (from roads, considering truck hauls), piping minimization if piping is used, and water volume produced. The area of study is 72 square miles east of Greeley, CO, in the Wattenberg Field in northeastern Colorado, which will be developed for oil and gas production starting in the year 2014. A quarterly analysis is done so that the effect of future development plans and current circumstances on the location can be observed from quarter to quarter. This will help the operators to
NASA Astrophysics Data System (ADS)
Adu, Stephen Aboagye
Laminated carbon fiber-reinforced polymer composites (CFRPs) possess very high specific strength and stiffness, which accounts for their wide use in structural applications, especially in the aerospace industry, where the trade-off between weight and strength is critical. Even though they possess a much larger strength ratio than metals like aluminum and lithium, damage in those metals is rather localized. CFRPs, however, generate complex damage zones at stress concentrations, with damage progressing in the form of matrix cracking, delamination and fiber fracture or fiber/matrix de-bonding. This thesis is aimed at performing stiffness degradation analysis on composite coupons containing embedded delamination, using the four-point bend test. A Lamb wave-based approach is used as a structural health monitoring (SHM) technique for damage detection in the composite coupons. Tests were carried out on unidirectional composite coupons obtained from panels manufactured with a pre-existing defect in the form of an embedded delamination, in a laminate of stacking sequence [06/904/06]T. The composite coupons were obtained from panels fabricated using vacuum assisted resin transfer molding (VARTM), a liquid composite molding (LCM) process. The discontinuity in the laminate structure, due to the de-bonding of the middle plies caused by the insertion of a 0.3 mm thick wax layer between the middle four (4) ninety-degree (90°) plies, is detected using Lamb waves generated by surface-mounted piezoelectric (PZT) actuators. From the surface-mounted piezoelectric sensors, responses for both the undamaged (coupon with no defect) and damaged (delaminated) coupons are obtained. A numerical study of the embedded crack propagation in the composite coupon under four-point and three-point bending was carried out using FEM. Model validation was then carried out by comparing the numerical results with the experimental. Here, surface-to-surface contact property was used to model the
NASA Astrophysics Data System (ADS)
Bensalah, W.; Feki, M.; De-Petris Wery, M.; Ayedi, H. F.
2015-02-01
The bending failure of aluminum anodized in a tartaric/sulphuric acid bath was modeled using a Doehlert design. Bath temperature, anodic current density, and sulphuric and tartaric acid concentrations were retained as variables. Thickness measurements and 3-point bending experiments were conducted. The deflection at failure (Df) and the maximum load (Fm) of each sample were then deduced from the corresponding flexural responses. The treatment of the experimental results established second-degree mathematical models reflecting the cause-and-effect relation between the factors and the studied properties. The optimum-path study of thickness, deflection at failure, and maximum load showed that the three optima did not coincide. Multicriteria optimization using the desirability function was carried out in order to maximize the three responses simultaneously. The optimum conditions were: Ctar = 18.2 g L-1, T = 17.3 °C, J = 2.37 A dm-2, Csul = 191 g L-1, while the estimated response values were e = 57.7 µm, Df = 5.6 mm, and Fm = 835 N. Using the established models, a mathematical correlation was found between the deflection at failure and the thickness of the anodic oxide layer. Before the bending tests, the aluminum oxide layer was examined by scanning electron microscopy (SEM) and atomic force microscopy. After the tests, the morphology and composition of the anodic oxide layer were inspected by SEM, optical microscopy, and glow-discharge optical emission spectroscopy.
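Desirability-based multicriteria optimization, as used above to maximize thickness, deflection at failure, and maximum load simultaneously, combines one desirability score per response into a single overall score via a geometric mean. The sketch below uses simple linear larger-is-better desirabilities; the response ranges are illustrative assumptions, not values from the paper.

```python
# Derringer-style desirability: map each response to [0, 1], then take the
# geometric mean so that any fully undesirable response (d = 0) zeroes the
# overall score.

def desirability(value, low, high):
    """Linear one-sided desirability for a larger-is-better response."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def overall(responses, ranges):
    """Geometric mean of the individual desirabilities."""
    ds = [desirability(v, lo, hi) for v, (lo, hi) in zip(responses, ranges)]
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Assumed acceptable ranges: thickness (um), deflection (mm), max load (N).
ranges = [(30, 60), (2, 6), (400, 900)]
score = overall([57.7, 5.6, 835], ranges)  # the paper's optimum responses
```

An optimizer would then search the factor space (bath temperature, current density, acid concentrations) for the settings that maximize this single score.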
Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira
2009-09-28
Simple and rapid preconcentration techniques, viz. cloud point extraction (CPE) and solid phase extraction (SPE), were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. Total iAs in the water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of the experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40. PMID:19733735
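The two figures of merit quoted above, recovery and concentration (preconcentration) factor, are simple ratios. The helper below makes them explicit; the sample and final volumes used in the example are illustrative assumptions, not the paper's actual values.

```python
# Figures of merit for a preconcentration step such as CPE or SPE.

def recovery_pct(found, added):
    """Recovery of a spiked analyte amount, in percent."""
    return 100.0 * found / added

def preconcentration_factor(sample_volume_ml, final_volume_ml):
    """Volume-reduction (enrichment) factor of the extraction step."""
    return sample_volume_ml / final_volume_ml

# e.g. 0.985 ug recovered from a 1.0 ug spike, 20 mL reduced to 0.5 mL:
r = recovery_pct(0.985, 1.0)
f = preconcentration_factor(20.0, 0.5)  # a factor of 40, as in the abstract
```

Recoveries near 100% together with a factor of 40 mean the method concentrates the analyte without significant loss.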
De Rosario, Helios; Page, Alvaro; Mata, Vicente
2014-05-01
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring׳s equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated to small and medium deformations, but it was successfully reduced by the POKE estimation. PMID:24650972
Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles
NASA Astrophysics Data System (ADS)
Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.
2016-03-01
A thermoelectric generator (TEG) system demonstrator for diesel-electric locomotives, designed with the objective of reducing the mechanical load on the thermoelectric modules (TEMs), is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and serves as the basis for the optimization of the TEG's geometry by a genetic multi-objective algorithm. The best solution has a maximum power output of approx. 2.7 kW and exceeds neither the maximum back pressure of the diesel engine nor the maximum TEM hot-side temperature. To maximize the reduction in fuel consumption, an operating strategy governing the system power output of the TEG is developed. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power-train and lateral-dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.
Hopf, T.; Vassilevski, K. V. Escobedo-Cousin, E.; King, P. J.; Wright, N. G.; O'Neill, A. G.; Horsfall, A. B.; Goss, J. P.; Wells, G. H.; Hunt, M. R. C.
2014-10-21
Top-gated graphene field-effect transistors (GFETs) have been fabricated using bilayer epitaxial graphene grown on the Si-face of 4H-SiC substrates by thermal decomposition of silicon carbide in high vacuum. Graphene films were characterized by Raman spectroscopy, Atomic Force Microscopy, Scanning Tunnelling Microscopy, and Hall measurements to estimate graphene thickness, morphology, and charge transport properties. A 27 nm thick Al₂O₃ gate dielectric was grown by atomic layer deposition with an e-beam evaporated Al seed layer. Electrical characterization of the GFETs has been performed at operating temperatures up to 100 °C limited by deterioration of the gate dielectric performance at higher temperatures. Devices displayed stable operation with the gate oxide dielectric strength exceeding 4.5 MV/cm at 100 °C. Significant shifting of the charge neutrality point and an increase of the peak transconductance were observed in the GFETs as the operating temperature was elevated from room temperature to 100 °C.
Design optimization for plasma performance and assessment of operation regimes in JT-60SA
NASA Astrophysics Data System (ADS)
Fujita, T.; Tamai, H.; Matsukawa, M.; Kurita, G.; Bialek, J.; Aiba, N.; Tsuchiya, K.; Sakurai, S.; Suzuki, Y.; Hamamatsu, K.; Hayashi, N.; Oyama, N.; Suzuki, T.; Navratil, G. A.; Kamada, Y.; Miura, Y.; Takase, Y.; Campbell, D.; Pamela, J.; Romanelli, F.; Kikuchi, M.
2007-11-01
The design of the modification of JT-60U, JT-60SA, has been optimized from the viewpoint of plasma performance, and operation regimes have been evaluated with the latest design. Upper and lower divertors with different geometries will be prepared for flexibility of the plasma shape, which will enable both low aspect ratio (A ~ 2.65) and ITER-shaped (A = 3.1) configurations. The beam lines of negative-ion neutral beam injection will be shifted downwards by ~0.6 m for off-axis current drive (CD), in order to obtain a weak/reversed shear plasma, while retaining the capability of heating the central region. The feedback control coils along the openings in the stabilizing plate are found effective in suppressing the resistive wall mode and sustaining high βN close to the ideal wall limit. Sustainment of a plasma current of 3-3.5 MA for 100 s will be possible in ELMy H-mode plasmas with moderate heating power, βN, and density within the available flux swing. It is also expected that higher-βN, high-density ELMy H-mode plasmas will be maintained for 100 s with higher heating power. The expected regime of full CD operation has been extended with upgraded heating and CD power. Full CD operation for 100 s with reactor-relevant high values of normalized beta and bootstrap current fraction (Ip = 2.4 MA, βN = 4.3, fBS = 0.69, n̄e/nGW = 0.86, HH98y2 = 1.3) is expected in a highly shaped low-aspect-ratio configuration (A = 2.65).
NASA Astrophysics Data System (ADS)
Wilkerson, Thomas D.; Bingham, Gail E.; Zavyalov, Vladimir V.; Marchant, Christian C.; Anderson, Jan M.; Andrew, Luke P.
2007-10-01
AGLITE is a multiwavelength lidar developed for the Agricultural Research Service of the United States Department of Agriculture and its program on particle emissions from animal production facilities. The lidar transmission system is a pulsed Nd:YAG laser (355, 532, 1064 nm) operating at a pulse rate of 10 kHz. We analyze and model lidar backscatter and extinction coefficients to extract aerosol physical properties. All wavelength channels operate simultaneously, day or night, using photon counting and high-speed data acquisition. The lidar housing is a transportable trailer suitable for all-weather operation at any accessible site. We direct the laser and telescope fields of view to targets of interest in both azimuth and elevation. Arrays of particle samplers and turbulence detectors were also deployed by colleagues specializing in those fields, and their results are compared with the lidar data. The value of multiwavelength, eyesafe lidars for agricultural aerosol measurements has been confirmed by the successful operation of AGLITE. In this paper, we demonstrate the ability of the lidar system to quantitatively characterize particulate emissions as mass concentration fields applicable for USEPA regulations. The combination of lidar with point characterization information allows the development of 3-D distributions of standard USEPA mass concentration fractions (PM10, PM2.5, and other groupings of interest such as PM10-PM2.5 and PM1). Lidar measurements also capture air motion, as seen in long-duration scans of the farm region. We demonstrate the ability to use "standoff" lidar methods to determine the movement and concentrations of emissions over an entire agricultural facility.
NASA Astrophysics Data System (ADS)
Mazaheri, K.; Nejati, A.; Chaharlang Kiani, K.; Taheri, R.
2015-08-01
A shock control bump (SCB) is a flow control method which uses local small deformations in a flexible wing surface to considerably reduce the strength of shock waves and the resulting wave drag in transonic flows. Most of the reported research is devoted to optimization in a single flow condition. Here, we have used a multi-point adjoint optimization scheme to optimize the shape and location of the SCB. Practically, this introduces transonic airfoils equipped with the SCB which are simultaneously optimized for different off-design transonic flight conditions. Here, we use this optimization algorithm to enhance and optimize the performance of SCBs in two benchmark airfoils, i.e., RAE-2822 and NACA-64A010, over a wide range of off-design Mach numbers. All results are compared with the usual single-point optimization. We use numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm to find the optimum location and shape of the SCB. We show that the application of SCBs may increase the aerodynamic performance of an RAE-2822 airfoil by 21.9% and that of a NACA-64A010 airfoil by 22.8% compared to the no-bump design in a particular flight condition. We have also investigated the simultaneous usage of two bumps for the upper and the lower surfaces of the airfoil. This has resulted in a 26.1% improvement for the RAE-2822 compared to the clean airfoil in one flight condition.
High-fidelity two-qubit gates via dynamical decoupling of local 1/f noise at the optimal point
NASA Astrophysics Data System (ADS)
D'Arrigo, A.; Falci, G.; Paladino, E.
2016-08-01
We investigate the possibility of achieving high-fidelity universal two-qubit gates by supplementing optimal tuning of individual qubits with dynamical decoupling (DD) of local 1/f noise. We consider simultaneous local pulse sequences applied during the gate operation and compare the efficiencies of periodic, Carr-Purcell, and Uhrig DD with hard π pulses along two directions (πz/y pulses). We present analytical perturbative results (Magnus expansion) in the quasistatic noise approximation combined with numerical simulations for realistic 1/f noise spectra. The gate efficiency is studied as a function of the gate duration, of the number n of pulses, and of the high-frequency roll-off. We find that the gate error is nonmonotonic in n, decreasing as n^-α in the asymptotic limit, with α ≥ 2 depending on the DD sequence. In this limit πz-Uhrig is the most efficient scheme for quasistatic 1/f noise, but it is highly sensitive to the soft UV cutoff. For a small number of pulses, πz control yields anti-Zeno behavior, whereas πy pulses minimize the error for a finite n. For the current noise figures in superconducting qubits, two-qubit gate errors of ~10^-6, meeting the requirements for fault-tolerant quantum computation, can be achieved. The Carr-Purcell-Meiboom-Gill sequence is the most efficient procedure, stable for 1/f noise with UV cutoff up to gigahertz.
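The pulse-sequence families compared above differ only in where the n π pulses are placed within the gate time T. A minimal sketch of the standard timing formulas (textbook definitions, not code from the paper):

```python
import math

def pdd_times(T, n):
    """Periodic DD: pulses equally spaced, t_j = j*T/(n+1)."""
    return [T * j / (n + 1) for j in range(1, n + 1)]

def cpmg_times(T, n):
    """Carr-Purcell(-Meiboom-Gill): t_j = T*(j - 1/2)/n."""
    return [T * (j - 0.5) / n for j in range(1, n + 1)]

def uhrig_times(T, n):
    """Uhrig DD: t_j = T*sin^2(pi*j/(2n + 2))."""
    return [T * math.sin(math.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]
```

For n = 1 all three reduce to a spin echo (single pulse at T/2); Uhrig crowds pulses toward the ends of the interval as n grows, which is why its performance depends strongly on the high-frequency (UV) behavior of the noise.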
Liu, H.; Kehne, D.; Benson, S.
1995-12-31
A high-charge CW FEL injector test stand is being built at CEBAF based on a 500 kV DC laser gun, a 1500 MHz room-temperature buncher, and a high-gradient (~10 MV/m) CEBAF cryounit containing two 1500 MHz CEBAF SRF cavities. Space-charge-dominated beam dynamics simulations show that this injector should be an excellent high-brightness electron beam source for CW UV FELs if the nominal parameters assigned to each component of the system are experimentally achieved. Extensive sensitivity and alternative operating point studies have been conducted numerically to establish tolerances on the parameters of various injector system components. The consequences of degraded injector performance, due to failure to establish and/or maintain the nominal system design parameters, on the performance of the main accelerator and the FEL itself are discussed.
NASA Astrophysics Data System (ADS)
Johnson, Nathan H.
This dissertation is concerned with several problems of instrumentation and data analysis encountered by the Apache Point Observatory Lunar Laser-ranging Operation. Chapter 2 considers crosstalk between elements of a single-photon avalanche photodiode detector. Experimental and analytic methods were developed to determine crosstalk rates, and empirical findings are presented. Chapter 3 details electronics developments that have improved the quality of data collected by detectors of the same type. Chapter 4 explores the challenges of estimating gravitational parameters on the basis of ranging data collected by this and other experiments and presents resampling techniques for the derivation of standard errors for estimates of such parameters determined by the Planetary Ephemeris Program (PEP), a solar-system model and data-fitting code. Possible directions for future work are discussed in Chapter 5. A manual of instructions for working with PEP is presented as an appendix.
Penney, Carla; Porter, Robert; O'Brien, Mary; Daley, Peter
2016-01-01
Background. Acute pharyngitis caused by Group A Streptococcus (GAS) is a common presentation to pediatric emergency departments (ED). Diagnosis with conventional throat culture requires 18-24 hours, which prevents point-of-care treatment decisions. Rapid antigen detection tests (RADT) are faster, but previous reports demonstrate significant operator influence on performance. Objective. To measure operator influence on the diagnostic accuracy of a RADT when performed by pediatric ED nurses and clinical microbiology laboratory technologists, using conventional culture as the reference standard. Methods. Children presenting to a pediatric ED with suspected acute pharyngitis were recruited. Three pharyngeal swabs were collected at once. One swab was used to perform the RADT in the ED, and two were sent to the clinical microbiology laboratory for RADT and conventional culture testing. Results. The RADT when performed by technologists compared to nurses had a 5.1% increased sensitivity (81.4% versus 76.3%) (p = 0.791) (95% CI for difference between technologists and nurses = -11% to +21%) but similar specificity (97.7% versus 96.6%). Conclusion. The performance of the RADT was similar between technologists and ED nurses, although adequate power was not achieved. RADT may be employed in the ED without clinically significant loss of sensitivity. PMID:27579047
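The sensitivity and specificity figures above come from a standard 2×2 comparison against the culture reference. A quick sketch of the computation, using hypothetical cell counts chosen only to reproduce the reported percentages (the study's actual counts are not given here):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported percentages:
# 48/59 = 81.4%, 45/59 = 76.3%, 86/88 = 97.7%, 85/88 = 96.6%.
tech_sens, tech_spec = sens_spec(48, 11, 86, 2)    # technologists
nurse_sens, nurse_spec = sens_spec(45, 14, 85, 3)  # ED nurses
```

The 5.1% sensitivity difference is then simply `tech_sens - nurse_sens`; the wide confidence interval on that difference (-11% to +21%) reflects the small number of culture-positive children.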
Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward
2016-06-10
Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95% of the data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
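The core of a Bayesian phase classifier is Bayes' rule over per-class likelihoods of the observables, which naturally yields a probability (i.e., uncertainty) per class rather than a single label. A toy sketch with two hypothetical features, radar reflectivity and Doppler-spectrum skewness; the priors, means and widths are invented for illustration, not the M-PACE training values:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian likelihood of one feature value."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical per-class priors and Gaussian parameters for
# reflectivity (dBZ) and Doppler-spectrum skewness.
classes = {
    "liquid": {"prior": 0.4, "refl": (-20.0, 8.0), "skew": (0.0, 0.3)},
    "ice":    {"prior": 0.4, "refl": (0.0, 8.0),   "skew": (0.3, 0.3)},
    "mixed":  {"prior": 0.2, "refl": (-5.0, 10.0), "skew": (0.6, 0.4)},
}

def posterior(refl, skew):
    """Bayes' rule with naive (independent-feature) likelihoods; the output
    is a normalized probability per class, carrying the uncertainty."""
    scores = {c: p["prior"] * gauss(refl, *p["refl"]) * gauss(skew, *p["skew"])
              for c, p in classes.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}
```

Adding higher Doppler-spectrum moments, as the study does, amounts to multiplying in further per-class likelihood factors.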
Dynamic emulation modelling for the optimal operation of water systems: an overview
NASA Astrophysics Data System (ADS)
Castelletti, A.; Galelli, S.; Giuliani, M.
2014-12-01
Despite the sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from a high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve highly resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real-world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs through erosion/sedimentation rebalancing in the operation of run-of-river power plants to salinity control in lakes and reservoirs.
Li, Kun; Wang, Jianxing; Liu, Jibao; Wei, Yuansong; Chen, Meixue
2016-05-01
Municipal sewage from an oxidation ditch was treated for reuse by nanofiltration (NF) in this study. The NF performance was optimized, and its fouling characteristics after different operational durations (i.e., 48 and 169 hr) were analyzed to investigate the applicability of nanofiltration for water reuse. The optimum performance was achieved at a transmembrane pressure of 12 bar, pH 4 and a flow rate of 8 L/min using a GE membrane. The permeate water quality could satisfy the requirements of water reclamation for different uses and local standards for water reuse in Beijing. Flux decline in the fouling experiments could be divided into a rapid flux decline and a quasi-steady state. The boundary flux theory was used to predict the evolution of permeate flux. The expected operational duration based on the 169-hr experiment was 392.6 hr, which is 175% longer than that of the 48-hr one. High molecular weight (MW) protein-like substances were suggested to be the dominant foulants after an extended period, based on the MW distribution and the fluorescence characteristics. The analyses of infrared spectra and extracellular polymeric substances revealed that the roles of both humic- and polysaccharide-like substances diminished, while that of protein-like substances strengthened, in the contribution to membrane fouling over time. Inorganic salts were found to have a marginal influence on membrane fouling. Additionally, alkali washing was more efficient at removing organic foulants in the long term, and a combination of water flushing and alkali washing was appropriate for NF fouling control in municipal sewage treatment. PMID:27155415
Operationally optimal vertex-based shape coding with arbitrary direction edge encoding structures
NASA Astrophysics Data System (ADS)
Lai, Zhongyuan; Zhu, Junhuan; Luo, Jiebo
2014-07-01
The intention of shape coding in the MPEG-4 is to improve the coding efficiency as well as to facilitate the object-oriented applications, such as shape-based object recognition and retrieval. These require both efficient shape compression and effective shape description. Although these two issues have been intensively investigated in data compression and pattern recognition fields separately, it remains an open problem when both objectives need to be considered together. To achieve high coding gain, the operational rate-distortion optimal framework can be applied, but the direction restriction of the traditional eight-direction edge encoding structure reduces its compression efficiency and description effectiveness. We present two arbitrary direction edge encoding structures to relax this direction restriction. They consist of a sector number, a short component, and a long component, which represent both the direction and the magnitude information of an encoding edge. Experiments on both shape coding and hand gesture recognition validate that our structures can reduce a large number of encoding vertices and save up to 48.9% bits. Besides, the object contours are effectively described and suitable for the object-oriented applications.
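The structure described above replaces the fixed eight-direction chain code with a (sector, short component, long component) triple. A simplified, invertible sketch of that idea, using the octant as the sector and the edge vector's magnitudes along its major and minor axes; the exact MPEG-4-style bitstream layout in the paper differs:

```python
def encode_edge(dx, dy):
    """Encode an edge vector as (sector, short, long): the sector (octant 0-7,
    counter-clockwise from the +x axis) carries the coarse direction; the
    long/short components carry magnitude and slope within the sector."""
    ax, ay = abs(dx), abs(dy)
    long_c, short_c = max(ax, ay), min(ax, ay)
    if ax >= ay:                                 # x is the long axis
        sector = (0 if dy >= 0 else 7) if dx >= 0 else (3 if dy >= 0 else 4)
    else:                                        # y is the long axis
        sector = (1 if dx >= 0 else 2) if dy >= 0 else (6 if dx >= 0 else 5)
    return sector, short_c, long_c

def decode_edge(sector, short_c, long_c):
    """Invert encode_edge: recover (dx, dy) from the triple."""
    if sector in (0, 3, 4, 7):                   # x is the long axis
        ax, ay = long_c, short_c
    else:
        ax, ay = short_c, long_c
    sx = 1 if sector in (0, 1, 6, 7) else -1
    sy = 1 if sector in (0, 1, 2, 3) else -1
    return sx * ax, sy * ay
```

Because the long component bounds the short one, the short component needs fewer bits, which is one source of the rate savings such structures can deliver over a fixed eight-direction code.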
Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, is beneficial to increasing its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on the time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual time of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or brought ahead of schedule, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative weight the LSI places on cost versus service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC. PMID:24715818
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once that is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to
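A fuzzy rule-based release policy of the kind described can be sketched with triangular membership functions and a zero-order Sugeno combination. The two rules and their consequents below are hypothetical illustrations, not the Jucar FRB systems:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def allocation(storage_frac):
    """Zero-order Sugeno inference with two hypothetical rules:
    IF storage is LOW  THEN allocate 50% of demand;
    IF storage is HIGH THEN allocate 100% of demand.
    storage_frac is the reservoir storage as a fraction of capacity (0-1)."""
    low = tri(storage_frac, -0.5, 0.0, 1.0)    # LOW membership
    high = tri(storage_frac, 0.0, 1.0, 1.5)    # HIGH membership
    return (low * 0.5 + high * 1.0) / (low + high)
```

A stochastic optimizer can then tune the membership breakpoints and rule consequents, which is the "transform optimal decisions into fuzzy rules" step the abstract refers to.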
NASA Astrophysics Data System (ADS)
Madani, Kaveh; Hooshyar, Milad
2014-11-01
Reservoir systems with multiple operators can benefit from coordination of operation policies. To maximize the total benefit of these systems the literature has normally used the social planner's approach. Based on this approach operation decisions are optimized using a multi-objective optimization model with a compound system's objective. While the utility of the system can be increased this way, fair allocation of benefits among the operators remains challenging for the social planner who has to assign controversial weights to the system's beneficiaries and their objectives. Cooperative game theory provides an alternative framework for fair and efficient allocation of the incremental benefits of cooperation. To determine the fair and efficient utility shares of the beneficiaries, cooperative game theory solution methods consider the gains of each party in the status quo (non-cooperation) as well as what can be gained through the grand coalition (social planner's solution or full cooperation) and partial coalitions. Nevertheless, estimation of the benefits of different coalitions can be challenging in complex multi-beneficiary systems. Reinforcement learning can be used to address this challenge and determine the gains of the beneficiaries for different levels of cooperation, i.e., non-cooperation, partial cooperation, and full cooperation, providing the essential input for allocation based on cooperative game theory. This paper develops a game theory-reinforcement learning (GT-RL) method for determining the optimal operation policies in multi-operator multi-reservoir systems with respect to fairness and efficiency criteria. As the first step to underline the utility of the GT-RL method in solving complex multi-agent multi-reservoir problems without a need for developing compound objectives and weight assignment, the proposed method is applied to a hypothetical three-agent three-reservoir system.
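Once the gains of every coalition have been estimated (e.g., by reinforcement learning, as above), cooperative game theory allocates the grand-coalition benefit among the operators. A sketch using the Shapley value for a hypothetical three-agent game; the coalition values are illustrative, not from the paper:

```python
from itertools import permutations

def shapley(agents, v):
    """Shapley value: each agent's average marginal contribution over all
    possible orders in which agents join the grand coalition."""
    phi = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = frozenset()
        for a in order:
            phi[a] += v[coalition | {a}] - v[coalition]
            coalition = coalition | {a}
    return {a: phi[a] / len(orders) for a in agents}

# Hypothetical coalition benefits for a three-reservoir system
# (e.g., annual operating gains in arbitrary units).
v = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
     frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
     frozenset("ABC"): 90}
```

The allocation is efficient by construction (the shares sum to v(ABC)), which is exactly the fairness-plus-efficiency property the GT-RL method targets.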
NASA Astrophysics Data System (ADS)
Bai, Tao; Chang, Jian-xia; Chang, Fi-John; Huang, Qiang; Wang, Yi-min; Chen, Guang-sheng
2015-04-01
The Yellow River, known as China's "mother river", originates from the Qinghai-Tibet Plateau and flows through nine provinces, with a basin area of 0.75 million km2 and an annual runoff of 53.5 billion m3. In recent decades, a series of reservoirs have been constructed and operated along the Upper Yellow River for hydropower generation, flood and ice control, and water resources management. However, these reservoirs are managed by different institutions, and the gains owing to the joint operation of reservoirs are neither clear nor recognized, which hinders the applicability of reservoir joint operation. To inspire the incentive for joint operation, the contribution of reservoirs to joint operation needs to be quantified. This study investigates the synergistic gains from the optimal joint operation of two pivotal reservoirs (i.e., Longyangxia and Liujiaxia) along the Upper Yellow River. Synergistic gains of optimal joint operation are analyzed based on three scenarios: (1) neither reservoir participates in flow regulation; (2) one reservoir (i.e., Liujiaxia) participates in flow regulation; and (3) both reservoirs participate in flow regulation. We develop a multi-objective optimal operation model of cascade reservoirs by implementing the Progressive Optimality Algorithm-Dynamic Programming Successive Approximation (POA-DPSA) method for estimating the gains of reservoirs based on long-series data (1987-2010). The results demonstrate that the optimal joint operation of both reservoirs can increase the amount of hydropower generation to 1.307 billion kW h/year (about 594 million USD) and increase the amount of water supply to 36.57 billion m3/year (about a 15% improvement). Furthermore, both pivotal reservoirs play an essential role in ensuring the safety of downstream regions through ice and flood management, and in significantly increasing the minimum flow in the Upper Yellow River during dry periods. Therefore, the synergistic gains of both reservoirs can be
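The dynamic-programming core of methods like POA-DPSA can be illustrated on a single reservoir with integer storage and release grids. The hydropower proxy (release × current storage, standing in for head-dependent generation) and all numbers are illustrative, not the Yellow River model:

```python
def reservoir_dp(inflows, s_max=4, r_max=4):
    """Backward dynamic programming over a discretized storage grid.
    Returns the optimal value-to-go for each starting storage level.
    Stage benefit: hypothetical hydropower proxy = release * storage."""
    states = range(s_max + 1)
    value = {s: 0.0 for s in states}            # terminal value
    for q in reversed(inflows):                 # backward over stages
        new = {}
        for s in states:
            candidates = []
            for r in range(min(r_max, s + q) + 1):
                s_next = min(s + q - r, s_max)  # excess inflow spills
                candidates.append(r * s + value[s_next])
            new[s] = max(candidates)
        value = new
    return value
```

Real cascade models refine this with continuous storages, multiple objectives and coupled reservoirs, which is where the successive-approximation and progressive-optimality layers come in.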
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Constrained parameter optimization was used to perform the optimal preliminary design of a medium range transport configuration. The impact of choosing a performance index was studied and the required fare for a 15 percent return-on-investment was proposed as a figure-of-merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. Additionally, a comparison is made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.
Applying operations research to optimize a novel population management system for cancer screening
Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J
2014-01-01
Objective To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. Materials and methods TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. Results TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Conclusions Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management. PMID:24043318
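A next-event time-advance simulation, as used to model TopCare, advances the clock from one scheduled event to the next rather than in fixed increments. A minimal single-phase, multiserver sketch with exponential interarrival and service times; all parameters and the single-queue structure are hypothetical simplifications of the multiphase model:

```python
import heapq
import random

def simulate(n_servers, arrival_rate, service_rate, n_patients, seed=1):
    """Next-event time-advance simulation of a multiserver queue.
    Returns the mean time patients wait before a server (e.g., a
    delegate or navigator) starts working on them."""
    rng = random.Random(seed)
    free = n_servers
    events = [(rng.expovariate(arrival_rate), "arrival")]  # (time, kind)
    queue, waits, arrived = [], [], 0
    while events:
        t, kind = heapq.heappop(events)          # advance to next event
        if kind == "arrival":
            arrived += 1
            queue.append(t)
            if arrived < n_patients:             # schedule next arrival
                heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
        else:                                    # departure frees a server
            free += 1
        if free and queue:                       # start the next service
            waits.append(t - queue.pop(0))
            free -= 1
            heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
    return sum(waits) / len(waits)
```

Sweeping `n_servers` in such a model is the analogue of the staffing experiments in the study: adding a server directly trades payroll against queueing delay (here, overdue screenings waiting for outreach).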
Meng, Lingguo; Lin, Zhaojun; Xing, Jianping; Liang, Zhihu; Liu, Chunliang
2010-05-10
We introduce the idea of a pressure-independent point (PIP) in a group of current-voltage curves for the coplanar electrode microplasma device (CEMPD) at neon pressures ranging from 15 to 95 kPa. We studied four samples of CEMPDs with different sizes of the microcavity and observed the PIP phenomenon for each sample. The PIP voltage depends on the area of the microcavity and is independent of the height of the microcavity. The PIP discharge current, I_PIP, is proportional to the volume (Vol) of the microcavity and can be expressed by the formula I_PIP = I_PIP0 + D×Vol. For our samples, I_PIP0 (the discharge current when Vol is zero) is about zero and D (the discharge current density) is about 3.95 mA mm^-3. The error in D is 0.411 mA mm^-3 (less than 11% of D). When the CEMPD operates at V_PIP, the discharge current is quite stable under different neon pressures.
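The reported linear relation makes the PIP current a one-line computation, using the paper's fitted constants (I_PIP0 ≈ 0 and D ≈ 3.95 mA mm^-3, with a stated error of 0.411 mA mm^-3 on D):

```python
def pip_current(volume_mm3, i_pip0=0.0, d=3.95):
    """I_PIP = I_PIP0 + D * Vol, in mA, for a microcavity volume in mm^3."""
    return i_pip0 + d * volume_mm3
```

For example, a 2 mm^3 microcavity would be expected to carry a PIP discharge current of about 7.9 mA (±0.8 mA from the quoted error in D).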
Tyree, Melvin T.; Sperry, John S.
1988-01-01
We discuss the relationship between the dynamically changing tension gradients required to move water rapidly through the xylem conduits of plants and the proportion of conduits lost through embolism as a result of water tension. We consider the implications of this relationship for the water relations of trees. We have compiled quantitative data on the water relations, hydraulic architecture, and vulnerability to embolism of four widely different species: Rhizophora mangle, Cassipourea elliptica, Acer saccharum, and Thuja occidentalis. Using these data, we modeled the dynamics of water flow and xylem blockage for these species. The model is specifically focused on the conditions required to generate 'runaway embolism', whereby the blockage of xylem conduits through embolism leads to reduced hydraulic conductance, causing increased tension in the remaining vessels and generating more embolism in a vicious circle. The model predicted that all species operate near the point of catastrophic xylem failure due to dynamic water stress. The model supports Zimmermann's plant segmentation hypothesis. Zimmermann suggested that plants are designed hydraulically to sacrifice highly vulnerable minor branches and thus improve the water balance of the remaining parts. The model results are discussed in terms of the morphology, hydraulic architecture, eco-physiology, and evolution of woody plants. PMID:16666351
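The runaway-embolism feedback loop described above (tension reduces conductance, which raises tension) can be sketched as a fixed-point iteration; the sigmoidal vulnerability curve and all parameter values below are hypothetical, not taken from the four species studied.

```python
import math

def xylem_tension(flow, k_max, t_half, slope, tol=1e-6, max_iter=1000):
    """Fixed-point iteration for the steady tension carrying a fixed flow.

    Hydraulic conductance declines sigmoidally with tension (a
    hypothetical vulnerability curve): higher tension embolizes more
    conduits, lowering conductance and raising tension further.
    Returns the stable tension, or None if the loop runs away.
    """
    tension = 0.0
    for _ in range(max_iter):
        x = slope * (tension - t_half)
        k = k_max / (1.0 + math.exp(min(x, 700.0)))  # remaining conductance
        new_tension = flow / k                       # tension needed for this flow
        if new_tension > 1e6:                        # diverging: runaway embolism
            return None
        if abs(new_tension - tension) < tol:
            return new_tension
        tension = new_tension
    return None
```

A modest flow settles to a stable tension; pushing the flow high enough leaves no stable operating point, the model's analogue of catastrophic xylem failure.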
Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana
2015-01-01
Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001
Haugh, Michael; Stewart, Richard
2010-10-01
This paper describes the operation and testing for a Vertical Johann Spectrometer (VJS) operating in the 13 keV range. The spectrometer is designed to use thin curved mica crystals or thick germanium crystals. The VJS must have a resolution E/ΔE=3000 or better to measure Doppler broadening of highly ionized krypton and operate at a small X-ray angle in order to be used as a diagnostic in a laser plasma target chamber. The VJS was aligned, tested, and optimized using a fluorescer type high energy X-ray (HEX) source located at National Security Technologies, LLC (NSTec), in Livermore, California. The HEX uses a 160 kV X-ray tube to excite fluorescence from various targets. Both rubidium and bismuth fluorescers were used for this effort. This presentation describes the NSTec HEX system and the methods used to optimize and characterize the VJS performance.
Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J
2013-04-16
In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and the exposure potential of bus routes can be exploited through optimization (e.g., how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from worst- to best-case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone, risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging. PMID:23477749
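The vehicle-to-route assignment idea can be sketched as a small exhaustive search over assignments; the cost matrix is hypothetical, and a real fleet would use an LP or assignment solver rather than brute force.

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive search for the bus-to-route assignment with minimum
    total cost, where cost[i][j] blends the health, climate, and
    operating cost of running bus i on route j (all values hypothetical).
    Brute force is fine for a toy fleet; real fleets need a proper
    assignment solver."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))
```

Re-weighting the health, climate, and cost terms inside `cost[i][j]` is how the trade-offs between objectives described above would show up as different optimal assignments.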
Chester, T L
2003-10-24
The goal of a separation can be defined in terms of business needs. One goal often used is to provide the required separation in minimum time, but many other goals are also possible. These include maximizing resolution within an analysis-time limit, or minimizing the overall cost. The remaining requirements of the separation can be applied as constraints in the optimization of the goal. We will present a flexible, business-objective-based approach for optimizing the operational parameters of high performance liquid chromatography (HPLC) methods. After selecting the stationary phase and the mobile-phase components, several isocratic experiments are required to build a retention model. Multivariate optimization is performed, within the model, to find the best combination of the parameters being varied so that the result satisfies the goal to the fullest extent possible within the constraints. Interdependencies of parameters can be revealed by plotting the loci of optimal variable values or the function being optimized against a constraint. We demonstrate the concepts with a model separation originally requiring a 54 min analysis time. Multivariate optimization reduces the predicted analysis time to as short as 8 min, depending on the goals and constraints specified. PMID:14601838
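The business-objective approach, with the goal as objective and the remaining requirements as constraints, can be sketched as a constrained search; the one-parameter retention and resolution models below are hypothetical stand-ins for a fitted retention model.

```python
def optimize_separation(time_model, resolution_model, grid, min_resolution):
    """Business-objective optimization sketch: minimize analysis time
    (the goal) over candidate operating points, applying the required
    resolution as a constraint. Returns (time, parameters) or None."""
    feasible = [(time_model(p), p) for p in grid
                if resolution_model(p) >= min_resolution]
    return min(feasible) if feasible else None

# Hypothetical one-parameter model: p is the organic-modifier fraction;
# more organic shortens the run but erodes resolution.
grid = [i / 10 for i in range(1, 10)]
time_model = lambda p: 60.0 * (1.0 - p)
resolution_model = lambda p: 3.0 - 2.0 * p
best = optimize_separation(time_model, resolution_model, grid, min_resolution=1.8)
```

Re-running the search while sweeping the constraint value is one way to plot the loci of optimal variable values against a constraint, as the abstract describes.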
NASA Astrophysics Data System (ADS)
Çetin, Füsun
2006-09-01
In a nuclear-pumped laser, the passage of energetic nuclear fragments through the gas causes non-uniform energy deposition. This spatial non-uniformity induces gas motion, which produces density and, hence, refractive-index gradients. Since the refractive-index gradient of the gas determines the degree of beam refraction as the beam propagates through the cavity, it adversely affects resonator stability and beam quality. Optimal gas parameters should therefore improve optical homogeneity in addition to output power. The refractive-index gradient is here taken as a measure of optical inhomogeneity, and its variation with tube parameters is examined to ensure the necessary optical quality of the supplied gas. Spatial and temporal variations of the normalized refractive-index gradient in 3He gas excited by 3He(n,p)3H reactions are calculated using the density field obtained from the previously reported dynamic model for energy deposition, for various operating pressures and tube radii. Additionally, the variation of power deposition per pulse with operating pressure and the variation of average power deposition density with tube diameter are calculated and used to determine optimal parameters, as a measure for improving the output power. The optimal operating pressure and tube size, from the point of view of power deposition and optical homogeneity, are determined for the present conditions. Results are obtained for a closed 3He-filled cylindrical laser tube, with a maximum thermal neutron flux of 8 × 10^16 n/(cm^2 s), using characteristics of the TRIGA Mark II Reactor at Istanbul Technical University (ITU).
Cost related sensitivity analysis for optimal operation of a grid-parallel PEM fuel cell power plant
NASA Astrophysics Data System (ADS)
El-Sharkh, M. Y.; Tanrioven, M.; Rahman, A.; Alam, M. S.
Fuel cell power plants (FCPPs) as a combined source of heat, power, and hydrogen (CHP&H) can be considered a potential option to supply both thermal and electrical loads. Hydrogen produced by the FCPP can be stored for future use or sold for profit. In such a system, tariff rates for purchasing or selling electricity, the fuel cost for the FCPP/thermal load, and the hydrogen selling price are the main factors that affect the operational strategy. This paper presents a hybrid evolutionary programming and hill-climbing-based approach to evaluate the impact of changes in the above-mentioned cost parameters on the optimal operational strategy of the FCPP. The optimal operational strategy of the FCPP for different tariffs is achieved through the estimation of the following: hourly generated power, the amount of thermal power recovered, power trade with the local grid, and the quantity of hydrogen that can be produced. Results show the importance of optimizing system cost parameters in order to minimize overall operating cost.
NASA Astrophysics Data System (ADS)
Bostan, Mohamad; Hadi Afshar, Mohamad; Khadem, Majed
2015-04-01
This article proposes a hybrid linear programming (LP-LP) methodology for the simultaneous optimal design and operation of groundwater utilization systems. The proposed model is an extension of an earlier LP-LP model proposed by the authors for the optimal operation of a set of existing wells. The proposed model can be used to optimally determine the number, configuration, and pumping rates of the operational wells out of potential wells with fixed locations to minimize the total cost of utilizing a two-dimensional confined aquifer under steady-state flow conditions. The model is able to take into account the well installation, piping, and pump installation costs in addition to the operational costs, including the cost of energy and maintenance. The solution to the problem is defined by well locations and their pumping rates, minimizing the total cost while satisfying a downstream demand, lower/upper bounds on the pumping rates, and lower/upper bounds on the water level drawdown at the wells. A discretized version of the differential equation governing the flow is first embedded into the model formulation as a set of additional constraints. The resulting mixed-integer, highly constrained nonlinear optimization problem is then decomposed into two subproblems with different sets of decision variables: one with the piezometric head and the other with the operational well locations and the corresponding pumping rates. The binary variables representing the well locations are approximated by a continuous variable, leading to two LP subproblems. Having started with random values for all decision variables, the two subproblems are solved iteratively until convergence is achieved. The performance and ability of the proposed method are tested against a hypothetical problem from the literature, and the results are presented and compared with those obtained using a mixed-integer nonlinear programming method. The results show the efficiency and effectiveness of the proposed method for the simultaneous optimal design and operation of groundwater utilization systems.
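The operational side of the problem can be caricatured by a single-constraint LP, allocating pumping among wells to meet a demand at minimum cost; for this stripped-down structure a greedy fill is provably optimal, unlike the full model, which also embeds the discretized flow equation and binary installation variables, hence the LP-LP decomposition.

```python
def allocate_pumping(costs, upper, demand):
    """Greedy solution of the toy LP:
        minimize sum(c[i] * q[i])
        s.t.     sum(q[i]) == demand,  0 <= q[i] <= upper[i].
    With a single coupling constraint, filling the cheapest wells
    first is optimal. All well data here are hypothetical."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    q = [0.0] * len(costs)
    remaining = demand
    for i in order:
        take = min(upper[i], remaining)  # pump the cheap well to its bound
        q[i] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 1e-12:
        raise ValueError("demand exceeds total well capacity")
    return q
```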
NASA Astrophysics Data System (ADS)
Mahmoodabadi, M. J.; Bagheri, A.; Nariman-zadeh, N.; Jamali, A.
2012-10-01
Particle swarm optimization (PSO) is a randomized, population-based optimization method inspired by the flocking behaviour of birds and by human social interactions. In this work, multi-objective PSO is modified in two stages. In the first stage, PSO is combined with convergence and divergence operators; this method is named CDPSO. In the second stage, two mechanisms are used to produce a set of Pareto optimal solutions with good convergence, diversity, and distribution. In the first mechanism, a new leader selection method is defined, which uses periodic iteration and the concept of the particle's neighbour number; this method is named the periodic multi-objective algorithm. In the second mechanism, an adaptive elimination method is employed to limit the number of non-dominated solutions in the archive, which influences computational time and the convergence and diversity of the solutions. Single-objective results show that CDPSO performs very well on complex test functions in terms of solution accuracy and convergence speed. Furthermore, some benchmark functions are used to evaluate the performance of periodic multi-objective CDPSO. This analysis demonstrates that the proposed algorithm performs better on three metrics in comparison with three well-known elitist multi-objective evolutionary algorithms. Finally, the algorithm is used for the Pareto optimal design of a two-degree-of-freedom vehicle vibration model. The conflicting objective functions are sprung-mass acceleration and relative displacement between the sprung mass and the tyre. The feasibility and efficiency of periodic multi-objective CDPSO are assessed in comparison with a multi-objective modified NSGAII.
NASA Astrophysics Data System (ADS)
Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc
2011-06-01
This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth functions, including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimization approach that performs well on ED problems. It mitigates premature convergence by re-initializing particle velocities whenever particles stagnate in the search space. TVAC is included to properly control both local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions, and the obtained results are compared with those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC finds higher-quality solutions for non-smooth ED problems than many other methods.
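The TVAC idea, typically a linear ramp of the cognitive coefficient down and the social coefficient up over the run, can be sketched as follows; the coefficient bounds and inertia weight are common defaults, not values taken from this paper, and the SPSO velocity re-initialization is not shown.

```python
import random

def tvac_coefficients(it, max_it, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Linear TVAC schedule: the cognitive coefficient c1 ramps down
    (favouring early exploration) while the social coefficient c2
    ramps up (favouring late convergence on the global best)."""
    c1 = (c1f - c1i) * it / max_it + c1i
    c2 = (c2f - c2i) * it / max_it + c2i
    return c1, c2

def update_velocity(v, x, pbest, gbest, it, max_it, w=0.7, rng=random):
    """One PSO velocity update using the TVAC coefficients."""
    c1, c2 = tvac_coefficients(it, max_it)
    return [w * vi + c1 * rng.random() * (pb - xi) + c2 * rng.random() * (gb - xi)
            for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
```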
Mauro, Marina; Radovic, Vladimir; Zhou, Pengfei; Wolfe, Melanie; Kamath, Markad; Bercik, Premsyl; Croitoru, Ken; Armstrong, David
2006-01-01
AIM: To determine the test characteristics and the optimal cut-off point for the 13C urea breath test (13C UBT) in a Canadian community laboratory setting. METHODS: Of 2232 patients (mean age ± SD: 51±21 years, 56% female) who completed a 13C UBT, 1209 were tested to evaluate the primary diagnosis of Helicobacter pylori infection and 1023 were tested for confirmation of eradication following treatment. Cluster analysis was performed on the 13C UBT data to determine the optimal cut-off point and the risk of false-positive and false-negative results. Additionally, 176 patients underwent endoscopic biopsy to allow validation of the sensitivity and specificity of the 13C UBT against histology and microbiology using the calculated cut-off point. RESULTS: The calculated cut-off points were 3.09 δ‰ for the whole study population (n=2232), 3.09 δ‰ for the diagnosis group (n=1209) and 2.88 δ‰ for the post-treatment group (n=1023). When replacing the calculated cut-off points with a practical cut-off point of 3.0 δ‰, the risk of false-positive and false-negative results was lower than 2.3%. The 13C UBT showed 100% sensitivity and 98.5% specificity compared with histology and microbiology (n=176) for the diagnosis of active H pylori infection. CONCLUSIONS: The 13C UBT is an accurate, noninvasive test for the diagnosis of H pylori infection and for confirmation of cure after eradication therapy. The present study confirms the validity of a cut-off point of 3.0 δ‰ for the 13C UBT when used in a large Canadian community population according to a standard protocol. PMID:17171195
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Chen, Neil; Ng, Hok K.
2010-01-01
There is increased awareness of anthropogenic factors affecting climate change and urgency to slow their negative impact. Greenhouse gases, oxides of nitrogen, and contrails resulting from aviation affect the climate in different and uncertain ways. This paper develops a flexible simulation and optimization software architecture to study the trade-offs involved in reducing emissions. The software environment is used to conduct analysis of two approaches for avoiding contrails, using the concepts of a contrail frequency index and optimal avoidance trajectories.
NASA Technical Reports Server (NTRS)
Abrahamson, Matthew J.; Oaida, Bogdan; Erkmen, Baris
2013-01-01
This paper will discuss the OPALS pointing strategy, focusing on incorporation of ISS trajectory and attitude models to build pointing predictions. Methods to extrapolate an ISS prediction based on past data will be discussed and will be compared to periodically published ISS predictions and Two-Line Element (TLE) predictions. The prediction performance will also be measured against GPS states available in telemetry. The performance of the pointing products will be compared to the allocated values in the OPALS pointing budget to assess compliance with requirements.
NASA Astrophysics Data System (ADS)
Klein, Abel
2013-11-01
We prove a unique continuation principle for spectral projections of Schrödinger operators. We consider a Schrödinger operator H = -Δ + V on L²(ℝᵈ), and let H_Λ denote its restriction to a finite box Λ with either Dirichlet or periodic boundary conditions. We prove unique continuation estimates of the type χ_I(H_Λ) W χ_I(H_Λ) ≥ κ χ_I(H_Λ) with κ > 0 for appropriate potentials W ≥ 0 and intervals I. As an application, we obtain optimal Wegner estimates at all energies for a class of non-ergodic random Schrödinger operators with alloy-type random potentials ('crooked' Anderson Hamiltonians). We also prove optimal Wegner estimates at the bottom of the spectrum with the expected dependence on the disorder (the Wegner estimate improves as the disorder increases), a new result even for the usual (ergodic) Anderson Hamiltonian. These estimates are applied to prove localization at high disorder for Anderson Hamiltonians in a fixed interval at the bottom of the spectrum.
Optimization of operation of a three-electrode gyrotron with the use of a flow-type calorimeter
Kharchev, Nikolay K.; Batanov, German M.; Kolik, Leonid V.; Malakhov, Dmitrii V.; Petrov, Aleksandr Ye.; Sarksyan, Karen A.; Skvortsova, Nina N.; Stepakhin, Vladimir D.; Belousov, Vladimir I.; Malygin, Sergei A.; Tai, Yevgenii M.
2013-01-15
Results are presented for measurements of microwave power of the Borets-75/0.8 gyrotron with recovery of residual electron energy, which were performed by a flow-type calorimeter. This gyrotron is a part of the ECR plasma heating complex put into operation in 2010 at the L-2M stellarator. The new calorimeter is capable of measuring microwave power up to 0.5 MW. Monitoring of the microwave power makes it possible to control the parameters of the gyrotron power supply unit (its voltage and current) and the magnetic field of the cryomagnet in order to optimize the gyrotron operation and arrive at maximum efficiency.
Zsigraiova, Zdena; Semiao, Viriato; Beijoco, Filipa
2013-04-01
This work proposes an innovative methodology for reducing the operation costs and pollutant emissions involved in waste collection and transportation. Its innovative feature lies in combining vehicle route optimization with waste collection scheduling. The latter uses historical data on the filling rate of each container individually to establish the daily circuits of collection points to be visited, which is more realistic than the usual assumption of a single average fill-up rate common to all the system's containers. Moreover, this allows planning the collection schedule ahead, which permits better system management. The optimization of the routes to be travelled makes use of Geographical Information Systems (GISs) and uses interchangeably two optimization criteria: total spent time and travelled distance. Furthermore, rather than using average values, the relevant parameters influencing fuel consumption and pollutant emissions, such as vehicle speed on different roads and loading weight, are taken into consideration. The established methodology is applied to the glass-waste collection and transportation system of Amarsul S.A., in Barreiro. Moreover, to isolate the influence of the dynamic load on fuel consumption and pollutant emissions, a sensitivity analysis of the vehicle loading process is performed. For that, two hypothetical scenarios are tested: one with the collected volume increasing exponentially along the collection path; the other assuming that the collected volume decreases exponentially along the same path. The results show clear beneficial impacts of the optimization on both the operation costs (labor, vehicle maintenance, and fuel consumption) and pollutant emissions, regardless of the optimization criterion used. Nonetheless, such impact is particularly relevant when optimizing for time, yielding substantial improvements to the existing system: potential reductions of 62% for the total
NASA Astrophysics Data System (ADS)
Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid
2011-10-01
In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
Efficiency of operation of wind turbine rotors optimized by the Glauert and Betz methods
NASA Astrophysics Data System (ADS)
Okulov, V. L.; Mikkelsen, R.; Litvinov, I. V.; Naumov, I. V.
2015-11-01
The models of two types of rotors with blades constructed using different optimization methods are compared experimentally. In the first case, the Glauert optimization by the pulsed method is used, which is applied independently for each individual blade cross section. This method remains the main approach in designing rotors of various duties. The construction of the other rotor is based on the Betz idea about optimization of rotors by determining a special distribution of circulation over the blade, which ensures the helical structure of the wake behind the rotor. It is established for the first time as a result of direct experimental comparison that the rotor constructed using the Betz method makes it possible to extract more kinetic energy from the homogeneous incoming flow.
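Both optimization methods are ultimately judged against the ideal actuator-disc bound; a quick numerical check recovers the Betz optimum, axial induction a = 1/3 and Cp = 16/27 ≈ 0.593.

```python
def power_coefficient(a):
    """Ideal actuator-disc power coefficient Cp = 4a(1-a)^2,
    where a is the axial induction factor."""
    return 4.0 * a * (1.0 - a) ** 2

# Numerical check of the Betz optimum: Cp peaks at a = 1/3, giving the
# 16/27 bound that real rotor designs are measured against.
best_a = max((i / 1000 for i in range(500)), key=power_coefficient)
```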
Mark O. McLinden; Arno Laesecke; Eric W. Lemmon; Joseph W. Magee; Richard A. Perkins
2002-08-30
The main goal of this project was to investigate and compare the performance of an R410A air conditioner to that of an R22 air conditioner, with specific interest in performance at high ambient temperatures, at which the condenser of the R410A system may be operating above the refrigerant's critical point. Part 1 of this project consisted of measuring thermodynamic properties of R125, R410A, and R507A, measuring viscosity and thermal conductivity of R410A and R507A, and comparing the data to mixture models in the NIST REFPROP database. For R125, isochoric (constant volume) heat capacity was measured over a temperature range of 305 to 397 K (32 to 124 C) at pressures up to 20 MPa. For R410A, isochoric heat capacity was measured along 8 isochores over a temperature range of 303 to 397 K (30 to 124 C) at pressures up to 18 MPa. Pressure-density-temperature data were also measured along 14 isochores over a temperature range of 200 to 400 K (-73 to 127 C) at pressures up to 35 MPa, and thermal conductivity along 6 isotherms over a temperature range of 301 to 404 K (28 to 131 C) with pressures to 38 MPa. For R507A, viscosity was measured along 5 isotherms over a temperature range of 301 to 421 K (28 to 148 C) at pressures up to 83 MPa, and thermal conductivity along 6 isotherms over a temperature range of 301 to 404 K (28 to 131 C) with pressures to 38 MPa. Mixture models were developed to calculate the thermodynamic properties of HFC refrigerant mixtures containing R32, R125, R134a, and/or R143a. The form of the model is the same for all the blends considered, but blend-specific mixing functions are required for the blends R32/125 (R410 blends) and R32/134a (a constituent binary of R407 blends). The systems R125/134a, R125/143a, R134a/143a, and R134a/152a share a common, generalized mixing function. The new equation of state for R125 is believed to be the most accurate and comprehensive formulation of the properties for that fluid. Likewise, the mixture model developed in this work is the
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2015-01-01
Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory- and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike traditional multithreaded OSs, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism that decreases both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared with that of a traditional multithreaded OS. Not only is the memory cost optimized; the energy cost is optimized as well, by using multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. With these approaches, the energy cost of LiveOS can be reduced by more than 30% compared with a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible on memory-constrained WSN nodes. PMID:25545264
NASA Astrophysics Data System (ADS)
Chanda, Sandip; De, Abhinandan
2015-07-01
This paper proposes a social welfare optimization technique, together with a state-space-based model and bifurcation analysis, to offer a substantial stability margin even in the most adverse states of power system networks. The restoration of the power market's dynamic price equilibrium is negotiated by forming the Jacobian of the sensitivity matrix to regulate the state variables, standardizing the quality of the solution under the worst possible contingencies of the network, even with the co-option of intermittent renewable energy sources. The model has been tested on the IEEE 30-bus system, and particle swarm optimization has assisted the fusion of the proposed model and methodology.
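The particle swarm optimization step mentioned in this abstract can be illustrated with a minimal, generic PSO in Python. The quadratic "dispatch cost" objective and all swarm parameters below are illustrative stand-ins, not the paper's social-welfare model or its actual settings:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds with a basic global-best particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy convex "dispatch cost" over two generating units (hypothetical objective).
cost = lambda p: (p[0] - 3) ** 2 + 2 * (p[1] - 1) ** 2
best, val = pso_minimize(cost, [(0, 10), (0, 10)])
```

On a smooth convex objective like this, the swarm converges to the analytic optimum (3, 1); real social-welfare objectives with network constraints are nonconvex, which is precisely where PSO's population-based search is attractive.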
Verreck, Devin; Groeseneken, Guido; Verhulst, Anne S.; Mocuta, Anda; Collaert, Nadine; Thean, Aaron; Van de Put, Maarten; Magnus, Wim; Sorée, Bart
2015-10-07
Efficient quantum mechanical simulation of tunnel field-effect transistors (TFETs) is indispensable for identifying optimal configurations. We therefore present a full-zone 15-band quantum mechanical solver based on the envelope function formalism, employing a spectral method to reduce computational complexity and handle spurious solutions. We demonstrate the versatility of the solver by simulating a 40 nm wide In0.53Ga0.47As lineTFET and comparing it to p-n-i-n configurations with various pocket and body thicknesses. We find that the lineTFET performance is not degraded compared to semi-classical simulations. Furthermore, we show that a suitably optimized p-n-i-n TFET can obtain performance similar to the lineTFET.
ERIC Educational Resources Information Center
Dunst, Carl J.; Raab, Melinda; Trivette, Carol M.; Wilson, Linda L.; Hamby, Deborah W.; Parkey, Cindy; Gatens, Mary; French, Jennie
2007-01-01
Findings from a study investigating the conditions under which contingency learning games were associated with optimal child and adult concomitant and social--emotional behavior benefits are reported. Participants were 41 preschool children with multiple disabilities and profound developmental delays and their parents or teachers. Results showed…
Optimization of cryogenic chilldown and loading operation using SINDA/FLUINT
NASA Astrophysics Data System (ADS)
Kashani, Ali; Luchinskiy, Dmitry G.; Ponizovskaya-Devine, Ekaterina; Khasin, Michael; Timucin, Dogan; Sass, Jared; Perotti, Jose; Brown, Barbara
2015-12-01
A cryogenic advanced propellant loading (APL) system is currently being developed at NASA. A wide range of applications and a variety of loading regimes call for the development of computer-assisted design and optimization methods that will reduce time and cost and improve the reliability of APL performance. A key aspect of developing such methods is the modeling and optimization of non-equilibrium two-phase cryogenic flow in the transfer line. Here we report on the development of such optimization methods using the commercial SINDA/FLUINT software. The model is based on the solution of the two-phase flow conservation equations in one dimension and a full set of correlations for flow patterns, losses, and heat transfer in the pipes, valves, and other system components. We validate this model using experimental data obtained from chilldown and loading of a cryogenic testbed at NASA Kennedy Space Center. We analyze the sensitivity of this model with respect to variation of the key control parameters, including pressure in the tanks, openings of the control and dump valves, and insulation. We discuss the formulation of a multi-objective optimization problem and provide an example of its solution.
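One common way to pose the multi-objective problem this abstract mentions is weighted-sum scalarization. The sketch below uses toy chilldown-time and boil-off-loss models over a single valve-opening control; these stand-in functions, the weights, and the grid search are assumptions for illustration, not the SINDA/FLUINT correlations or the authors' formulation:

```python
def weighted_sum_optimum(objectives, weights, grid):
    """Scalarize competing objectives with fixed weights; grid-search the control."""
    def scalar(u):
        return sum(w * f(u) for w, f in zip(weights, objectives))
    return min(grid, key=scalar)

# Toy trade-off (hypothetical): a wider valve opening u in (0, 1] speeds
# chilldown (time ~ 1/u) but increases boil-off loss (loss ~ u^2).
chill_time = lambda u: 1.0 / u
boiloff = lambda u: u ** 2
grid = [i / 100 for i in range(1, 101)]  # candidate valve openings
u_star = weighted_sum_optimum([chill_time, boiloff], [1.0, 1.0], grid)
```

With equal weights the trade-off settles near u ≈ 0.79 (where d/du of 1/u + u² vanishes); sweeping the weights traces out the Pareto front, which is how a multi-objective loading problem is often explored in practice.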
Insertion of operation-and-indicate instructions for optimized SIMD code
Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K
2013-06-04
Mechanisms are provided for inserting operation-and-indicate instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation and analyzed to identify non-speculative instructions performing designated non-speculative operations that are candidates for replacement by operation-and-indicate instructions, which perform the designated non-speculative operations and additionally indicate any exception conditions corresponding to special exception values present in the vector register inputs. The replacement is performed, and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.
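The deferred-exception idea behind this patent can be sketched in plain Python: a "speculative" vector operation records a sentinel value in the offending lane instead of raising, and a later indicate step reports which lanes hit an exception condition. This is only a scalar analogy to the SIMD mechanism; the sentinel choice (NaN) and function names are illustrative assumptions:

```python
import math

NAN = float("nan")  # stand-in for the "special exception value"

def vec_div(xs, ys):
    """Speculative element-wise divide: store a sentinel rather than raising."""
    return [x / y if y != 0 else NAN for x, y in zip(xs, ys)]

def indicate(vec):
    """The 'indicate' step: report lanes whose sentinel marks a deferred exception."""
    return [i for i, v in enumerate(vec) if math.isnan(v)]

out = vec_div([1.0, 2.0, 3.0], [1.0, 0.0, 3.0])
faulty = indicate(out)  # only lane 1 divided by zero
```

Exception handling is thus deferred from every lane of every vector operation to a single explicit check, which is the performance point of fusing the "operation" and the "indicate" in one instruction.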
Eilam, David
2010-01-01
Open-field behavior is a common tool for studying exploration and navigation, as well as emotions and motivations. However, it has been suggested that this behavior might be parsimoniously interpreted as directed at optimizing security, with no need to interpret the animal's mental state. This latter view was challenged here by providing voles with a presumed sense of optimal security. For this, voles were introduced into a dark open field inside a familiar shelter in which they had previously lived in their home cage. Voles then emerged either to locomote only in the vicinity of the shelter, or to travel further out to explore the entire arena and only later return to the shelter. While staying near the shelter confirms the notion of optimizing security, traveling further out along the perimeter negates this notion. This divergence of behavior under the same security conditions illustrates that open-field behavior, a multi-faceted and dynamic process, is also affected by an emotional component. That is, safety is a subjective emotional state dictated by various inputs, and therefore the resulting dynamic behavior, the ultimate output of the central nervous system, may vary beyond the possibility of being parsimoniously interpreted by only one factor. In a similar vein, we show that the impact of the start point on the paths of locomotion is not an intrinsic property of that point, but depends on its physical location. PMID:19744526
Bermúdez, Valmore; Martínez, María Sofía; Apruzzese, Vanessa; Chávez-Castillo, Mervin; Gonzalez, Robys; Torres, Yaquelín; Bello, Luis; Añez, Roberto; Chacín, Maricarmen; Toledo, Alexandra; Cabrera, Mayela; Mengual, Edgardo; Ávila, Raquel; López-Miranda, José
2014-01-01
Background. Mathematical models such as the Homeostasis Model Assessment have gained popularity in the evaluation of insulin resistance (IR). The purpose of this study was to estimate the optimal cut-off point for Homeostasis Model Assessment-2 Insulin Resistance (HOMA2-IR) in an adult population of Maracaibo, Venezuela. Methods. A descriptive, cross-sectional study with randomized, multistage sampling included 2,026 adult individuals. IR was evaluated through HOMA2-IR calculation in 602 metabolically healthy individuals. For cut-off point estimation, two approaches were applied: HOMA2-IR percentile distribution and construction of ROC curves using sensitivity and specificity for selection. Results. The HOMA2-IR arithmetic mean for the general population was 2.21 ± 1.42, with 2.18 ± 1.37 for women and 2.23 ± 1.47 for men (P = 0.466). When calculating HOMA2-IR for the healthy reference population, the resulting p75 was 2.00. Using ROC curves, the selected cut-off point was 1.95, with an area under the curve of 0.801, sensitivity of 75.3%, and specificity of 72.8%. Conclusions. We propose an optimal cut-off point of 2.00 for HOMA2-IR, offering sensitivity and specificity high enough for proper assessment of IR in the adult population of our city, Maracaibo. The determination of population-specific cut-off points is needed to evaluate risk for public health problems, such as obesity and metabolic syndrome. PMID:27379332
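ROC-based cut-off selection of the kind described in this abstract is commonly done by maximizing Youden's J statistic (sensitivity + specificity − 1) over candidate thresholds. A minimal sketch, using small hypothetical score sets rather than the Maracaibo data:

```python
def youden_cutoff(scores_pos, scores_neg):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    candidates = sorted(set(scores_pos) | set(scores_neg))
    best_c, best_j = None, -1.0
    for c in candidates:
        sens = sum(s >= c for s in scores_pos) / len(scores_pos)  # true positive rate
        spec = sum(s < c for s in scores_neg) / len(scores_neg)   # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Hypothetical HOMA2-IR-like scores: insulin-resistant vs metabolically healthy.
ir_scores = [2.4, 2.1, 3.0, 2.6, 1.9, 2.8]
healthy   = [1.2, 1.7, 1.0, 2.2, 1.5, 1.8]
cut, j = youden_cutoff(ir_scores, healthy)
```

In practice the same scan is applied to the full ROC curve (here it would pick 1.9 with J ≈ 0.83); choosing the percentile of a healthy reference population, as the study also does, is the complementary distribution-based approach.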
NASA Astrophysics Data System (ADS)
Sankar Sana, Shib
2016-01-01
The paper develops a production-inventory model of a two-stage supply chain, consisting of one manufacturer and one retailer, to study the production lot size/order quantity, reorder point, and sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that their optimum profits approach their target profits. This study helps the management of firms determine the optimal order quantity/production quantity, reorder point, and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples, with graphical presentation and sensitivity analysis of the key parameters, are presented to illustrate further insights of the model.
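The lot-size and reorder-point quantities this model optimizes can be illustrated with the classical economic order quantity (EOQ) and reorder-point formulas, a far simpler deterministic analogue of the paper's stochastic, promotion-dependent model. All parameter values below are made up for illustration:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classical economic order quantity: Q* = sqrt(2*D*S / H)."""
    return math.sqrt(2 * demand_rate * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, safety_stock=0.0):
    """Reorder when inventory falls to expected lead-time demand plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

# Hypothetical retailer: 1200 units/yr demand, $50/order, $2/unit/yr holding cost.
Q = eoq(demand_rate=1200, order_cost=50, holding_cost=2)
r = reorder_point(daily_demand=1200 / 365, lead_time_days=7)
```

The EOQ balances ordering against holding cost; the paper's contribution is to extend this kind of trade-off with random demand, a convex lot-size-dependent procurement cost, and negotiated promotion effort, which requires the analytical treatment the abstract describes.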