Science.gov

Sample records for optimal operating points

  1. Engineering to Control Noise, Loading, and Optimal Operating Points

    SciTech Connect

    Mitchell R. Swartz

    2000-11-12

Successful engineering of low-energy nuclear systems requires control of noise, loading, and optimal operating point (OOP) manifolds. The latter result from the biphasic response of low-energy nuclear reaction (LENR)/cold fusion systems, and of their ash production rate, to input electrical power. Knowledge of the OOP manifold can improve the reproducibility and efficacy of these systems in several ways. Improved control of noise, loading, and peak production rates is available through the study and use of OOP manifolds. Engineering systems toward the OOP-manifold drive-point peak may, with inclusion of geometric factors, permit more accurate and uniform determinations of the calibrated activity of these materials and systems.

  2. Optimal operating points of oscillators using nonlinear resonators

    PubMed Central

    Kenig, Eyal; Cross, M. C.; Villanueva, L. G.; Karabalin, R. B.; Matheny, M. H.; Lifshitz, Ron; Roukes, M. L.

    2013-01-01

We demonstrate an analytical method for calculating the phase sensitivity of a class of oscillators whose phase does not affect the time evolution of the other dynamic variables. We show that such oscillators admit the possibility of complete phase noise elimination. We apply the method to a feedback oscillator that employs a high-Q, weakly nonlinear resonator and provide explicit parameter values for which the feedback phase noise is completely eliminated, and others for which there is no amplitude-phase noise conversion. We then establish an operational mode of the oscillator that optimizes its performance by diminishing the feedback noise in both quadratures, thermal noise, and quality-factor fluctuations. We also study the spectrum of the oscillator and provide specific results for the case of 1/f noise sources. PMID:23214857

  3. Feature extraction and segmentation in medical images by statistical optimization and point operation approaches

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; King, Philip; Corona, Enrique; Wilson, Mark P.; Aydin, Kaan; Mitra, Sunanda; Soliz, Peter; Nutter, Brian S.; Kwon, Young H.

    2003-05-01

Feature extraction is a critical preprocessing step that influences the outcome of the entire process of developing significant metrics for medical image evaluation. The purpose of this paper is, first, to compare the effect of an optimized statistical feature extraction methodology with that of a well-designed combination of point operations for feature extraction at the preprocessing stage of retinal images, for developing useful diagnostic metrics for retinal diseases such as glaucoma and diabetic retinopathy. Segmentation of the extracted features allows us to investigate the effect of occlusion induced by these features on generating stereo disparity mapping and 3-D visualization of the optic cup/disc. Segmentation of blood vessels in the retina also has significant application in generating precise vessel-diameter metrics, in vascular diseases such as hypertension and diabetic retinopathy, for monitoring the progression of retinal diseases.

  4. Physical constraints, fundamental limits, and optimal locus of operating points for an inverted pendulum based actuated dynamic walker.

    PubMed

    Patnaik, Lalit; Umanand, Loganathan

    2015-10-26

    The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.
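
    The tangency construction described above can be mimicked numerically. The sketch below is a minimal grid search, assuming toy stand-in models for the average speed and mechanical power (the paper's actual energy-loss and internal-energy models are not reproduced here); for one target speed it finds the minimum-power operating point in the v0-phi plane.

```python
import math

# Toy stand-ins (assumptions, not the paper's model) for average forward
# speed and mechanical power as functions of the operating point
# (v0 = mid-stance speed, phi = step angle, in radians).
def vx_avg(v0, phi):
    return v0 * math.cos(phi)                     # speed drops as steps widen

def power(v0, phi):
    collision = v0**3 * math.tan(phi)**2 / (2.0 * math.sin(phi) + 1e-9)
    swing = 0.05 * v0 / phi                       # penalizes very short steps
    return collision + swing

def optimal_point(target_vx, tol=0.005):
    """Search the v0-phi plane for the minimum-power operating point
    that achieves the target average speed (within tol)."""
    best = None
    for i in range(1, 400):
        v0 = 0.01 * i
        for j in range(1, 150):
            phi = 0.005 * j
            if abs(vx_avg(v0, phi) - target_vx) < tol:
                p = power(v0, phi)
                if best is None or p < best[0]:
                    best = (p, v0, phi)
    return best

p, v0, phi = optimal_point(0.5)
print(f"optimum for vx=0.5: v0={v0:.2f}, phi={phi:.3f}, power={p:.4f}")
```

    Repeating the search over a range of target speeds traces out an optimal locus in the same spirit as the paper.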

  5. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
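
    The optimizations described depend on the reduction operator's mathematical properties, chiefly associativity. A minimal Python sketch (not Sisal) of the chunk-and-combine idea behind parallelized reduction:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

def parallel_reduce(op, values, identity, workers=4):
    """Reduce `values` with operator `op` by reducing chunks
    concurrently, then combining the partial results.
    Correctness relies on `op` being associative."""
    n = max(1, len(values) // workers)
    chunks = [values[i:i + n] for i in range(0, len(values), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: reduce(op, c, identity), chunks))
    return reduce(op, partials, identity)

data = list(range(1, 1001))
print(parallel_reduce(operator.add, data, 0), sum(data))  # both 500500
```

    A non-associative operator (e.g. subtraction) would give chunk-dependent results, which is why reduction classes matter for optimized concurrency.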

  6. Characterizations of fixed points of quantum operations

    SciTech Connect

    Li Yuan

    2011-05-15

Let Φ_A be a general quantum operation. An operator B is said to be a fixed point of Φ_A if Φ_A(B) = B. In this note, we show conditions under which B being a fixed point of Φ_A implies that B is compatible with the operation elements of Φ_A. In particular, we offer an extension of the generalized Lueders theorem.
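
    The fixed-point condition can be checked numerically. The sketch below (an illustrative example, not the note's proof) applies a quantum operation in Kraus form, Φ_A(B) = Σ_i A_i B A_i†, for a phase-damping channel, and verifies that an operator commuting with the operation elements is a fixed point while a non-commuting one is not:

```python
import numpy as np

def channel(kraus, B):
    """Apply the quantum operation Phi_A(B) = sum_i A_i B A_i^dagger."""
    return sum(A @ B @ A.conj().T for A in kraus)

p = 0.3
I = np.eye(2)
Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(p) * I, np.sqrt(1 - p) * Z]  # phase-damping channel

B_diag = np.diag([1.0, 2.0])        # commutes with every Kraus operator
B_off  = np.array([[1.0, 0.5],
                   [0.5, 2.0]])     # does not commute with Z

print(np.allclose(channel(kraus, B_diag), B_diag))  # True: fixed point
print(np.allclose(channel(kraus, B_off), B_off))    # False
```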

  7. Optimal localization by pointing off axis.

    PubMed

    Yovel, Yossi; Falk, Ben; Moss, Cynthia F; Ulanovsky, Nachum

    2010-02-01

    Is centering a stimulus in the field of view an optimal strategy to localize and track it? We demonstrated, through experimental and computational studies, that the answer is no. We trained echolocating Egyptian fruit bats to localize a target in complete darkness, and we measured the directional aim of their sonar clicks. The bats did not center the sonar beam on the target, but instead pointed it off axis, accurately directing the maximum slope ("edge") of the beam onto the target. Information-theoretic calculations showed that using the maximum slope is optimal for localizing the target, at the cost of detection. We propose that the tradeoff between detection (optimized at stimulus peak) and localization (optimized at maximum slope) is fundamental to spatial localization and tracking accomplished through hearing, olfaction, and vision.
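
    The information-theoretic point can be illustrated with a one-line model. Assuming a Gaussian beam profile and additive noise on the echo intensity (a simplification of the paper's calculation), the Fisher information for localizing the target is proportional to the squared slope of the beam, which peaks off axis at the inflection point rather than at the beam center:

```python
import math

sigma = 1.0   # beam width (assumed Gaussian beam profile)

def beam(theta):
    return math.exp(-theta**2 / (2 * sigma**2))

def slope(theta, h=1e-5):
    return (beam(theta + h) - beam(theta - h)) / (2 * h)

# With additive noise, localization Fisher information ~ slope^2, so the
# best aim offsets the beam until its maximum slope sits on the target.
thetas = [i * 0.001 for i in range(3001)]
best_aim = max(thetas, key=lambda t: slope(t) ** 2)
print(f"max-slope aim offset: {best_aim:.3f} (beam peak is at 0.0)")
# the analytic maximum of |d(beam)/d(theta)| lies at theta = sigma
```

    Detection, by contrast, is best at the beam peak (offset 0), which is exactly the tradeoff the paper describes.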

  8. Strategic operating indicators point to equity growth.

    PubMed

    Cleverley, W O

    1988-07-01

    As healthcare managers become more business-like in their behavior, they are becoming increasingly concerned with the equity growth rate of their organizations. Strong equity growth means a financially healthy organization. Equity growth can be expressed as a product of five financial ratios--the most important ratio being the operating margin. Improvements in operating margins will lead to improvements in equity growth. Thirty indicators, called strategic operating indicators, have been developed to monitor operating margins. These indicators, when compared with values from other peer groups, can help point to strategies for improvement of operating margins, and hence equity growth.
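
    Because equity growth is expressed as a product of ratios, any percentage improvement in one ratio scales the product by the same factor. The sketch below illustrates this arithmetic; the ratio names and values are hypothetical placeholders, since the abstract does not list the article's actual five ratios:

```python
# Hypothetical ratio values for illustration only; the article's actual
# five strategic ratios are not specified in the abstract.
ratios = {
    "operating_margin": 0.04,
    "asset_turnover":   1.10,
    "leverage":         1.80,
    "retention":        1.00,
    "price_adjustment": 0.98,
}

def equity_growth(r):
    """Equity growth expressed as the product of its component ratios."""
    out = 1.0
    for v in r.values():
        out *= v
    return out

base = equity_growth(ratios)
ratios["operating_margin"] *= 1.25        # a 25% margin improvement...
print(f"equity growth scales by {equity_growth(ratios) / base:.2f}x")
```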

  9. Automated design of image operators that detect interest points.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo

    2008-01-01

This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research. PMID:19053496

  11. Linearization: Students Forget the Operating Point

    ERIC Educational Resources Information Center

    Roubal, J.; Husek, P.; Stecha, J.

    2010-01-01

Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straightforward methodology, it is not applied correctly by many students, since they often forget to keep the operating point in mind. This paper explains the topic and…

  12. Optimization of Pilot Point Locations: an efficient and geostatistical perspective

    NASA Astrophysics Data System (ADS)

    Mehne, J.; Nowak, W.

    2012-04-01

The pilot point method is a widespread method for calibrating ensembles of heterogeneous aquifer models on available field data such as hydraulic heads. The pilot points are virtual measurements of conductivity, introduced as localized carriers of information in the inverse procedure. For each heterogeneous aquifer realization, the pilot point values are calibrated until all calibration data are honored. Adequate placement and numbers of pilot points are crucial both for accurate representation of heterogeneity and for keeping the computational costs of calibration at an acceptable level. Current placement methods for pilot points either rely solely on the expertise of the modeler, or they involve computationally costly sensitivity analyses. None of the existing placement methods directly addresses the geostatistical character of the placement and calibration problem. This study presents a new method for optimal selection of pilot point locations. We combine ideas from Ensemble Kalman Filtering and geostatistical optimal design with straightforward optimization. In a first step, we emulate the pilot point method with a modified Ensemble Kalman Filter for parameter estimation at drastically reduced computational costs. This avoids the costly evaluation of sensitivity coefficients often used for optimal placement of pilot points. Second, we define task-driven objective functions for the optimal placement of pilot points, based on ideas from geostatistical optimal design of experiments. These objective functions can be evaluated quickly, without carrying out the actual calibration process, requiring nothing but ensemble covariances that are available from step one. By formal optimization, we can find pilot point placement schemes that are optimal in representing the data for the task at hand with minimal numbers of pilot points. In small synthetic test applications, we demonstrate the promising computational performance and the geostatistically logical choice of pilot point locations.
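
    The ensemble update at the heart of the first step can be sketched in a few lines. The toy model below is an assumption for illustration (a linear forward map standing in for the aquifer simulator): the Kalman gain is built purely from ensemble covariances, exactly the quantities the abstract says are reused for the placement objectives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "aquifer": observations = G @ parameters + noise. The real
# pilot-point setting is nonlinear; this only illustrates the ensemble update.
n_par, n_obs, n_ens = 10, 8, 200
G = rng.normal(size=(n_obs, n_par))
p_true = rng.normal(size=n_par)
r = 0.1                                        # observation noise std
d = G @ p_true + rng.normal(scale=r, size=n_obs)

P = rng.normal(size=(n_par, n_ens))            # prior parameter ensemble
H = G @ P                                      # predicted observations

# Ensemble cross- and auto-covariances drive the Kalman gain
Pm, Hm = P.mean(1, keepdims=True), H.mean(1, keepdims=True)
Cph = (P - Pm) @ (H - Hm).T / (n_ens - 1)
Chh = (H - Hm) @ (H - Hm).T / (n_ens - 1)
K = Cph @ np.linalg.inv(Chh + r**2 * np.eye(n_obs))

D = d[:, None] + rng.normal(scale=r, size=(n_obs, n_ens))   # perturbed obs
P_post = P + K @ (D - H)

def err(E):
    return np.linalg.norm(E.mean(axis=1) - p_true)

print(f"prior error {err(P):.2f} -> posterior error {err(P_post):.2f}")
```

    No sensitivity coefficients are computed anywhere, which is the cost advantage the study exploits.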

  13. Operational equations for the five-point rectangle

    SciTech Connect

    Silver, G.L.

    1993-09-15

Two operational polynomials are demonstrated for the four-point rectangle with center point. The equations are exact on the points, and the surfaces they describe ordinarily fit known monotonic surfaces better than the standard five-point equation, as judged by the L2 norm test. Equations for fitting the five-point rectangle by sines and cosines are presented.

  14. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

... 47 Telecommunication 5 2010-10-01 2010-10-01 false Interconnection of private operational fixed point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  15. On the operating point of cortical computation

    NASA Astrophysics Data System (ADS)

    Martin, Robert; Stimberg, Marcel; Wimmer, Klaus; Obermayer, Klaus

    2010-06-01

    In this paper, we consider a class of network models of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map, as found in primary visual cortex (V1). We systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input in order to characterize different operating regimes of the network. We then compare the map-location dependence of the tuning in the networks with different parametrizations with the neuronal tuning measured in cat V1 in vivo. By considering the tuning of neuronal dynamic and state variables, conductances and membrane potential respectively, our quantitative analysis is able to constrain the operating regime of V1: The data provide strong evidence for a network, in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition, operating in vivo. Interestingly, this recurrent regime is close to a regime of "instability", characterized by strong, self-sustained activity. The firing rate of neurons in the best-fitting model network is therefore particularly sensitive to small modulations of model parameters, possibly one of the functional benefits of this particular operating regime.

  16. A Study on Optimal Operation of Power Generation by Waste

    NASA Astrophysics Data System (ADS)

    Sugahara, Hideo; Aoyagi, Yoshihiro; Kato, Masakazu

This paper proposes the optimal operation of power generation by waste. Refuse is treated as a new biomass energy resource. Although some refuse of fossil-fuel origin, such as plastic, may be mixed in, CO2 emissions are not counted under the Kyoto Protocol except for that fossil-fuel-origin refuse. Incineration is indispensable for refuse disposal, and power generation by waste is both environment-friendly and power-system-friendly through its use of synchronous generators. Optimal planning is the key to making the most of these merits. The optimal plan comprises a refuse incinerator operation plan with refuse collection, together with the maintenance scheduling of the refuse incinerator plant. In this paper, numerical simulations make clear that the former plan increases generated energy. Concerning the latter plan, a method to determine the maintenance schedule using a genetic algorithm has been established. In addition, when the environmental load of CO2 emissions is taken into account, larger merits are expected from the environmental and energy-resource points of view.

  17. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

Traditional power plants utilize only about 30 percent of the primary energy that they consume; the rest is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups as a way to improve the efficiency and reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering the waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects of designing a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.

  18. Optimal operation of multivessel batch distillation columns

    SciTech Connect

    Furlonge, H.I.; Pantelides, C.C.; Soerensen, E.

    1999-04-01

    Increased interest in unconventional batch distillation column configurations offers new opportunities for increasing the flexibility and energy efficiency of batch distillation. One configuration of particular interest is the multivessel column, which can be viewed as a generalization of all previously studied batch column configurations. A detailed dynamic model was used for comparing various optimal operating policies for a batch distillation column with two intermediate vessels. A wide variety of degrees of freedom including reflux ratios, product withdrawal rates, heat input to the reboiler, and initial feed distribution were considered. A mixture consisting of methanol, ethanol, n-propanol and n-butanol was studied using an objective function relating to the economics of the column operation. Optimizing the initial distribution of the feed among the vessels improved column performance significantly. For some separations, withdrawing product from the vessels into accumulators was better than total reflux operation in terms of energy consumption. Open-loop optimal operation was also compared to a recently proposed feedback control strategy where the controller parameters are optimized. The energy consumption of a regular column was about twice that of a multivessel column having the same number of stages.

  19. Optimization of the bank's operating portfolio

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.; Medvedev, M. A.

    2016-06-01

    The theory of efficient portfolios developed by Markowitz is used to optimize the structure of the types of financial operations of a bank (bank portfolio) in order to increase the profit and reduce the risk. The focus of this paper is to check the stability of the model to errors in the original data.
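
    The Markowitz machinery referred to here has a simple closed form for the global minimum-variance portfolio: w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below uses an assumed covariance matrix for four operation types (illustrative numbers, not the paper's data):

```python
import numpy as np

# Illustrative covariance of returns across four bank operation types
# (assumed data for demonstration).
Sigma = np.array([[0.10, 0.02, 0.01, 0.00],
                  [0.02, 0.08, 0.02, 0.01],
                  [0.01, 0.02, 0.12, 0.03],
                  [0.00, 0.01, 0.03, 0.09]])

# Global minimum-variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1)
ones = np.ones(4)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()

print("weights:", np.round(w, 3), "variance:", round(w @ Sigma @ w, 4))
```

    Perturbing Sigma and re-solving is one direct way to probe the stability-to-data-errors question the paper raises.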

  20. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  1. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  2. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  3. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  4. Optimizing robot placement for visit-point tasks

    SciTech Connect

    Hwang, Y.K.; Watterberg, P.A.

    1996-06-01

We present a manipulator placement algorithm for minimizing the length of the manipulator motion performing a visit-point task such as spot welding. Given a set of points for the tool of a manipulator to visit, our algorithm finds the shortest robot motion required to visit the points from each possible base configuration. The base configuration resulting in the shortest motion is selected as the optimal robot placement. The shortest robot motion required for visiting multiple points from a given base configuration is computed using a variant of the traveling salesman algorithm in the robot joint space and a point-to-point path planner that plans collision-free robot paths between two configurations. Our robot placement algorithm is expected to reduce the robot cycle time during visit-point tasks, as well as to speed up the robot set-up process when building a manufacturing line.
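
    The nested structure (an outer search over base placements, an inner traveling-salesman tour per placement) can be sketched with brute force for a handful of points. This toy version works in the plane with straight-line distances; the paper's version operates in joint space with a collision-free planner, and all coordinates below are made up for illustration:

```python
from itertools import permutations
import math

def motion_length(base, points):
    """Total move length to visit all points from `base` and return,
    minimized over visiting orders (brute force; fine for few points)."""
    def d(a, b):
        return math.dist(a, b)
    best = float("inf")
    for order in permutations(points):
        length = d(base, order[0]) + d(order[-1], base)
        length += sum(d(order[i], order[i + 1]) for i in range(len(order) - 1))
        best = min(best, length)
    return best

weld_points = [(1, 0), (2, 1), (1, 2), (0, 1)]      # toy task points
candidates = [(0, -2), (1, 1), (4, 4)]              # candidate base placements
base = min(candidates, key=lambda b: motion_length(b, weld_points))
print("best base placement:", base)
```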

  5. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it requires only the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
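
    The defining property, every iterate stays strictly feasible, is easiest to see in the simplest interior-point scheme, a logarithmic barrier. The sketch below is not the paper's primal-dual quasi-Newton algorithm; it only illustrates the interior-point idea on a one-dimensional problem:

```python
# Logarithmic-barrier sketch of the interior-point idea:
# minimize f(x) = x^2 subject to x >= 1.
# The barrier subproblem is  min  x^2 - (1/t) * log(x - 1),
# whose minimizer approaches the true solution x* = 1 as t grows.

def barrier_min(t, lo=1.0 + 1e-12, hi=10.0, iters=200):
    """Bisect on the derivative 2x - 1/(t*(x-1)), which is monotone
    increasing on (1, inf), to find the barrier minimizer."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if 2 * mid - 1.0 / (t * (mid - 1.0)) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for t in (1, 10, 100, 1000, 10000):
    x = barrier_min(t)
    print(f"t={t:>5}: x = {x:.5f}   (strictly feasible: {x > 1})")
```

    Every iterate is a feasible "design", which is the practical appeal the abstract highlights for engineering optimization.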

  6. Radar antenna pointing for optimized signal to noise ratio.

    SciTech Connect

    Doerry, Armin Walter; Marquette, Brandeis

    2013-01-01

    The Signal-to-Noise Ratio (SNR) of a radar echo signal will vary across a range swath, due to spherical wavefront spreading, atmospheric attenuation, and antenna beam illumination. The antenna beam illumination will depend on antenna pointing. Calculations of geometry are complicated by the curved earth, and atmospheric refraction. This report investigates optimizing antenna pointing to maximize the minimum SNR across the range swath.
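
    The maximin idea can be sketched directly: pick the pointing angle that maximizes the worst-case SNR over the swath. The model below is deliberately crude (flat earth, no refraction, a Gaussian beam, free-space R⁻⁴ spreading, made-up geometry numbers), whereas the report handles curved-earth geometry and atmospheric effects:

```python
import math

# Toy maximin pointing sketch (assumed flat earth, Gaussian beam,
# R^-4 spreading; the report's geometry is considerably more involved).
R0, swath = 10e3, 4e3                     # near range, swath depth (m)
h = 3e3                                   # platform altitude (m)
beamwidth = math.radians(3.0)

def snr(point_angle, R):
    depression = math.asin(h / R)
    gain = math.exp(-4 * math.log(2)
                    * ((depression - point_angle) / beamwidth) ** 2)
    return gain * (R0 / R) ** 4           # relative SNR across the swath

ranges = [R0 + swath * i / 50 for i in range(51)]
angles = [math.radians(10 + 0.02 * i) for i in range(800)]
best_point = max(angles, key=lambda a: min(snr(a, R) for R in ranges))
print(f"maximin pointing: {math.degrees(best_point):.2f} deg depression")
```

    The optimum lands between the near- and far-edge depression angles, biased toward the far edge to offset the range falloff.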

  7. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle the extensive sampling computations, a multithreading technique is introduced.
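
    Sample average approximation reduces the stochastic problem to a deterministic one over sampled scenarios. The toy below is an assumption-laden miniature (three aircraft, one resource, Gaussian arrival noise, invented separation and times), not the paper's 14-aircraft formulation, but it shows the mechanic: score each candidate sequence on the same scenario sample and keep the best average:

```python
import random
from itertools import permutations

random.seed(1)

# Sample-average-approximation sketch: pick the sequence that minimizes
# average delay over sampled arrival-time scenarios.
aircraft = {"A": 0.0, "B": 2.0, "C": 4.0}      # scheduled times (min), assumed
SEP = 1.5                                      # required separation (min)

def cost(seq, actual):
    t, total = 0.0, 0.0
    for a in seq:
        t = max(t + SEP, actual[a])            # earliest slot honoring SEP
        total += t - actual[a]                 # delay beyond readiness
    return total

scenarios = [{a: sched + random.gauss(0, 1.0)
              for a, sched in aircraft.items()}
             for _ in range(500)]

best = min(permutations(aircraft),
           key=lambda s: sum(cost(s, sc) for sc in scenarios) / len(scenarios))
print("best sequence under uncertainty:", best)
```

    Increasing the number of scenarios tightens the approximation, which is exactly the sample-size effect the paper's statistical analysis evaluates.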

  8. An Optimization Study of Hot Stamping Operation

    NASA Astrophysics Data System (ADS)

    Ghoo, Bonyoung; Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu; Averill, Ron

    2010-06-01

In the present study, 3-dimensional finite element analyses of hot-stamping processes for an Audi B-pillar product are conducted using JSTAMP/NV and HEEDS. Special attention is paid to optimization of the simulation technology coupled with thermal-mechanical formulations. Numerical simulation based on FEM technology and design optimization using the hybrid adaptive SHERPA algorithm are applied to the hot stamping operation to improve productivity. The robustness of the SHERPA algorithm is demonstrated by the results of the benchmark example. The SHERPA algorithm proves far superior to the genetic algorithm (GA) in terms of efficiency, requiring about one-seventh the calculation time of the GA. The SHERPA algorithm also showed high performance on a large-scale problem with a complicated design space and a long calculation time.

  9. On Motivating Operations at the Point of Online Purchase Setting

    ERIC Educational Resources Information Center

    Fagerstrom, Asle; Arntzen, Erik

    2013-01-01

    Consumer behavior analysis can be applied over a wide range of economic topics in which the main focus is the contingencies that influence the behavior of the economic agent. This paper provides an overview on the work that has been done on the impact from motivating operations at the point of online purchase situation. Motivating operations, a…

  10. Optimal Hedging Rule for Reservoir Refill Operation

    NASA Astrophysics Data System (ADS)

    Wan, W.; Zhao, J.; Lund, J. R.; Zhao, T.; Lei, X.; Wang, H.

    2015-12-01

This paper develops an optimal reservoir Refill Hedging Rule (RHR) for combined water supply and flood operation using mathematical analysis. A two-stage model is developed to formulate the trade-off between operations for conservation benefit and flood damage in the reservoir refill season. Based on the probability distribution of the maximum refill water availability at the end of the second stage, three zones are characterized according to the relationship among storage capacity, expected storage buffer (ESB), and maximum safety excess discharge (MSED). The Karush-Kuhn-Tucker conditions of the model show that an optimal refill operation makes the expected marginal loss of conservation benefit from unfilling (i.e., ending storage of the refill period less than storage capacity) as nearly equal to the expected marginal flood damage from levee overtopping downstream as possible, while maintaining all constraints. This principle follows and combines the hedging rules for water supply and flood management. An RHR curve is drawn analogously to water supply and flood hedging rules, showing the trade-off between the two objectives. The release decision has a linear relationship with the current water availability, implying the linearity of the RHR for a wide range of water conservation functions (linear, concave, or convex). A demonstration case shows the impacts of several factors. Larger downstream flood conveyance capacity and more empty reservoir capacity allow a smaller current release, so more water can be conserved. The economic indicators of conservation benefit and flood damage compete with each other over release: the greater the economic importance of flood damage, the more water should be released in the current stage, and vice versa. Below a critical value, improving forecasts yields less water release, but the opposite effect occurs beyond this critical value. Finally, the Danjiangkou Reservoir case study shows that the RHR together with a rolling
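
    The two-stage trade-off can be miniaturized as a one-variable expected-loss minimization. Everything numeric below (capacities, inflow distribution, the flood-damage weight) is an invented stand-in for the paper's model; the point is only the structure: the chosen release balances expected unfilled storage against expected spill damage over sampled second-stage inflows:

```python
import random

random.seed(0)

# Two-stage sketch: choose the current release r to balance expected
# conservation loss (unfilled storage) against expected flood damage
# (spill plus release beyond the safe channel capacity). Toy numbers.
CAP, SAFE = 100.0, 30.0          # storage capacity, safe release
storage0 = 70.0
inflows = [random.uniform(10, 80) for _ in range(2000)]  # stage-2 samples

def expected_loss(release, w_flood=5.0):
    total = 0.0
    for q in inflows:
        s = storage0 - release + q
        unfilled = max(0.0, CAP - s)           # lost conservation benefit
        spill = max(0.0, s - CAP) + max(0.0, release - SAFE)
        total += unfilled + w_flood * spill
    return total / len(inflows)

best_r = min((r * 0.5 for r in range(0, 121)), key=expected_loss)
print(f"hedging release: {best_r:.1f}")
```

    Raising w_flood pushes the chosen release up, mirroring the paper's observation that greater economic weight on flood damage means more water released now.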

  11. CMB Polarization Detector Operating Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Randle, Kirsten; Chuss, David; Rostem, Karwan; Wollack, Ed

    2015-04-01

Examining the polarization of the Cosmic Microwave Background (CMB) provides the only known way to probe the physics of inflation in the early universe. Gravitational waves produced during inflation are posited to produce a telltale pattern of polarization on the CMB which, if measured, would provide both tangible evidence for inflation and a measurement of inflation's energy scale. Leading the effort to detect and measure this phenomenon, Goddard Space Flight Center has been developing high-efficiency detectors. To optimize signal-to-noise ratios, sources such as the atmosphere and the instrumentation must be considered. In this work we examine operating parameters of these detectors such as optical power loading and photon noise. SPS Summer Internship at NASA Goddard Space Flight Center.
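
    The link between optical power loading and photon noise is captured by the standard photon-noise NEP expression, with a shot-noise term and a wave-bunching term. The band-center frequency, loading, and bandwidth below are illustrative values assumed for a CMB-style channel, not figures from this work:

```python
import math

h = 6.626e-34            # Planck constant (J s)

def photon_nep(nu, power, bandwidth):
    """Photon-noise NEP (W/sqrt(Hz)) for optical loading `power` at
    frequency `nu`: shot term 2*h*nu*P plus bunching term 2*P^2/dnu."""
    return math.sqrt(2 * h * nu * power + 2 * power**2 / bandwidth)

# Illustrative values (assumed): 150 GHz band, 5 pW loading, 40 GHz width.
nep = photon_nep(150e9, 5e-12, 40e9)
print(f"photon NEP ~ {nep:.2e} W/sqrt(Hz)")
```

    Reducing the loading lowers both terms, which is why minimizing atmospheric and instrumental power on the detector drives the operating-parameter optimization.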

  12. Improving Small Signal Stability through Operating Point Adjustment

    SciTech Connect

    Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Mittelstadt, William; Hauer, John F.; Dagle, Jeffery E.

    2010-09-30

    ModeMeter techniques for real-time small signal stability monitoring continue to mature, and more and more phasor measurements are available in power systems. The technology has reached the stage where modal information can be brought into real-time power system operation. This paper proposes to establish a procedure for Modal Analysis for Grid Operations (MANGO). Complementary to PSSs and other traditional modulation-based controls, MANGO aims to provide suggestions, such as increasing generation or decreasing load, for operators to mitigate low-frequency oscillations. Unlike modulation-based control, the MANGO procedure proactively maintains adequate damping at all times, instead of reacting to disturbances when they occur. The effect of operating points on small signal stability is presented in this paper, and implementation alongside existing operating procedures is discussed. Several approaches for modal sensitivity estimation are investigated to associate modal damping with operating parameters. The effectiveness of the MANGO procedure is confirmed through simulation studies of several test systems.
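
    One ingredient of the abstract, associating modal damping with an operating parameter, can be illustrated with a finite-difference sensitivity on a toy state matrix. The 2x2 system and the way the parameter enters it are invented for illustration; this is not the MANGO implementation:

```python
import numpy as np

# Illustrative sketch: estimate the sensitivity of a mode's damping ratio to
# an operating parameter p by central finite differences on the eigenvalues
# of a small-signal state matrix A(p). The system below is a toy oscillator
# whose damping term grows with p (a stand-in for, e.g., a dispatch change).

def state_matrix(p):
    return np.array([[0.0, 1.0],
                     [-25.0, -0.4 - 0.1 * p]])

def damping_ratio(A):
    lam = np.linalg.eigvals(A)
    mode = lam[np.argmax(lam.imag)]          # least-stable oscillatory mode
    return -mode.real / abs(mode)

p0, dp = 1.0, 1e-4
sens = (damping_ratio(state_matrix(p0 + dp)) -
        damping_ratio(state_matrix(p0 - dp))) / (2 * dp)
print(f"damping ratio at p0: {damping_ratio(state_matrix(p0)):.4f}")  # 0.0500
print(f"d(zeta)/dp estimate: {sens:.4f}")                             # 0.0100
```

    For this toy system the damping ratio is (0.4 + 0.1p)/10 analytically, so the finite-difference estimate of 0.01 per unit of p can be checked by hand; a positive sensitivity tells the operator that increasing p improves damping.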

  13. Multiple tipping points and optimal repairing in interacting networks

    PubMed Central

    Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo

    2016-01-01

    Systems composed of many interacting dynamical networks—such as the human body with its biological networks or the global economic network consisting of regional clusters—often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two ‘forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803

  14. Multiple tipping points and optimal repairing in interacting networks

    NASA Astrophysics Data System (ADS)

    Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo

    2016-03-01

    Systems composed of many interacting dynamical networks--such as the human body with its biological networks or the global economic network consisting of regional clusters--often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two `forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model.

  15. Optimal grid point selection for improved nonrigid medical image registration

    NASA Astrophysics Data System (ADS)

    Fookes, Clinton; Maeder, Anthony

    2004-05-01

    Non-rigid image registration is an essential tool for overcoming the inherent local anatomical variations that exist between medical images acquired from different individuals or atlases. This type of registration defines a deformation field that gives a translation or mapping for every pixel in the image. One popular local approach for estimating this deformation field, known as block matching, defines a grid of control points on an image and takes each as the centre of a small window. These windows are then translated in the second image to maximise a local similarity criterion. This generates two corresponding sets of control points for the two images, yielding a sparse deformation field. This sparse field can then be propagated to the entire image using well-known methods such as the thin-plate spline warp or simple Gaussian convolution. Previous block matching procedures all utilise uniformly distributed grid points, resulting in a sparse deformation field containing displacement estimates at uniformly spaced locations. This neglects the evidence that block matching results depend on the amount of local information content: results are better in regions of high information than in regions of low information. Consequently, this paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo (RJMCMC) statistical procedure to optimally select grid points of interest. These grid points have a greater concentration in regions of high information and a lower concentration in regions of low information. Results show that non-rigid registration can be improved by using optimally selected grid points of interest.
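
    A minimal block-matching sketch, using the uniform grid that serves as the paper's baseline, might look as follows; the synthetic images, window size, and search range are illustrative assumptions:

```python
import numpy as np

# Illustrative block matching (not the paper's RJMCMC method): for each
# control point, translate a small window over a search range in the second
# image and keep the shift minimizing the sum of squared differences (SSD),
# giving a sparse deformation field at the grid points.

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = np.roll(fixed, (2, 3), axis=(0, 1))   # second image = shifted copy

def match_block(fixed, moving, cy, cx, half=4, search=5):
    ref = fixed[cy - half:cy + half, cx - half:cx + half]
    best, best_ssd = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = moving[cy + dy - half:cy + dy + half,
                         cx + dx - half:cx + dx + half]
            ssd = np.sum((ref - win) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

# uniform grid of control points (the paper argues for information-weighted ones)
points = [(y, x) for y in range(16, 64, 16) for x in range(16, 64, 16)]
field = {p: match_block(fixed, moving, *p) for p in points}
print(field[(16, 16)])   # → (2, 3)
```

    With a pure-translation test image every grid point recovers the true shift; in real data, estimates at low-information points are unreliable, which is exactly the motivation for the paper's optimal point selection.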

  16. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.
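
    As a rough illustration of the idea, the sketch below runs a simple evolutionary search over a 2-D fractional-tune space, scoring candidates by their distance from low-order resonance lines m·Qx + n·Qy = integer. The objective is a simplified stand-in for a real dynamic-aperture or luminosity figure of merit, and none of the constants come from the MEIC design:

```python
import numpy as np

# Toy evolutionary search for a betatron working point (Qx, Qy): maximize
# the distance to low-order resonance lines m*Qx + n*Qy = k, |m|,|n| <= 3.

rng = np.random.default_rng(1)

def resonance_penalty(q):
    qx, qy = q
    worst = np.inf
    for m in range(-3, 4):
        for n in range(-3, 4):
            if m == 0 and n == 0:
                continue
            s = m * qx + n * qy
            worst = min(worst, abs(s - round(s)) / np.hypot(m, n))
    return -worst          # minimizing this maximizes resonance distance

pop = rng.uniform(0.05, 0.45, size=(40, 2))      # fractional tunes
for gen in range(60):
    fit = np.array([resonance_penalty(q) for q in pop])
    parents = pop[np.argsort(fit)[:20]]           # keep the best half
    children = np.clip(parents + rng.normal(0.0, 0.01, parents.shape),
                       0.01, 0.49)                # Gaussian mutation
    pop = np.vstack([parents, children])

best = min(pop, key=resonance_penalty)
print("working point (fractional tunes):", np.round(best, 3))
```

    A real working-point search would score candidates with tracking simulations rather than a geometric proxy, but the select-mutate loop is the same.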

  17. Attitude Control Optimization for ROCSAT-2 Operation

    NASA Astrophysics Data System (ADS)

    Chern, Jeng-Shing; Wu, A.-M.

    The second satellite of the Republic of China is named ROCSAT-2. It is a small satellite with a total mass of 750 kg for remote sensing and scientific purposes. The Remote Sensing Instrument (RSI) has resolutions of 2 m for panchromatic and 8 m for multi-spectral bands, respectively. It is mainly designed for disaster monitoring and rescue, environment and pollution monitoring, forest and agriculture planning, city and country planning, etc. for Taiwan and its surrounding islands and oceans. In order to monitor the Taiwan area constantly for a long time, the orbit is designed to be sun-synchronous with 14 revolutions per day. The scientific payload is the Imager of Sprites and Upper Atmospheric Lightning (ISUAL). Since it is a small satellite, the RSI, ISUAL, and solar panel are all body-fixed. Consequently, the satellite has to maneuver as a whole body so that the RSI, the ISUAL, or the solar panel can be pointed in the desired direction. When ROCSAT-2 rises above the horizon and catches the sunlight, it has to maneuver to face the sun for the battery to be charged. As soon as it flies over the Taiwan area, several maneuvers must be made to cover the whole area for the remote sensing mission. Since the swath of ROCSAT-2 is 24 km, it needs four stripes to form the mosaic of the Taiwan area. Usually, four maneuvers are required to fulfill the mission in one flight path. The sequence is very important from the point of view of saving energy. However, in some cases, we may need to sacrifice energy in order to obtain good remote sensing data at a particularly specified ground region. After that mission, its solar panel has to face the sun again. Then when ROCSAT-2 sets below the horizon, it has to maneuver to point the ISUAL in the specified direction for the sprite imaging mission. It is the direction where scientists predict sprites are most likely to occur. Further maneuvers may be required for downloading onboard data. When ROCSAT-2 rises above the horizon again, it completes

  18. Optimization-based multiple-point geostatistics: A sparse way

    NASA Astrophysics Data System (ADS)

    Kalantari, Sadegh; Abdollahifard, Mohammad Javad

    2016-10-01

    In multiple-point simulation the image should be synthesized consistent with the given training image and hard conditioning data. Existing sequential simulation methods usually lead to error accumulation which is hardly manageable in later steps. Optimization-based methods are capable of handling inconsistencies by iteratively refining the simulation grid. In this paper, the multiple-point stochastic simulation problem is formulated in an optimization-based framework using a sparse model. The sparse model allows each patch to be constructed as a superposition of a few atoms of a dictionary formed using training patterns, leading to a significant increase in the variability of the patches. To control the creativity of the model, a local histogram matching method is proposed. Furthermore, effective solutions are proposed for different issues arising in multiple-point simulation. In order to handle hard conditioning data, a weighted matching pursuit method is developed in this paper. Moreover, a simple and efficient thresholding method is developed which allows working with categorical variables. The experiments show that the proposed method produces acceptable realizations in terms of pattern reproduction, increases the variability of the realizations, and properly handles numerous conditioning data.
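
    The weighted matching pursuit idea, fitting hard-conditioning pixels more strictly by weighting them in the greedy atom selection and projection, can be sketched as below. The random dictionary and weights are illustrative assumptions; in the paper, the dictionary atoms come from training-image patterns:

```python
import numpy as np

# Sketch of a weighted matching pursuit: approximate a target patch as a
# sparse combination of dictionary atoms, with larger weights forcing a
# closer fit at hard-conditioning pixels.

rng = np.random.default_rng(2)
D = rng.normal(size=(64, 200))                  # dictionary: 200 patch atoms
D /= np.linalg.norm(D, axis=0)
target = D[:, [3, 17]] @ np.array([1.5, -0.8])  # a 2-sparse test patch

w = np.ones(64)
w[:8] = 10.0                                    # hard data at the first 8 pixels

def weighted_mp(D, y, w, n_atoms=4):
    residual, coef = y.copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        scores = (D * w[:, None]).T @ residual          # weighted correlations
        k = int(np.argmax(np.abs(scores)))
        wd = w * D[:, k]
        a = (wd @ residual) / (wd @ D[:, k])            # weighted projection
        coef[k] += a
        residual -= a * D[:, k]
    return coef

coef = weighted_mp(D, target, w)
print("weighted residual norm:", np.linalg.norm(w * (target - D @ coef)))
```

    Each greedy step strictly decreases the weighted residual, and the heavy weights make errors at conditioning pixels count ten times more than elsewhere.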

  19. Automatic parameter optimizer (APO) for multiple-point statistics

    NASA Astrophysics Data System (ADS)

    Bani Najar, Ehsanollah; Sharghi, Yousef; Mariethoz, Gregoire

    2016-04-01

    Multiple-point statistics (MPS) have gained popularity in recent years for generating stochastic realizations of complex natural processes. The main principle is that a training image (TI) is used to represent the spatial patterns to be modeled. One important feature of MPS is that the spatial model of the fields generated is made of 1) the chosen TI and 2) a set of algorithmic parameters that are specific to each MPS algorithm. While the choice of a training image can be guided by expert knowledge (e.g. for geological modeling) or by data acquisition methods (e.g. remote sensing), determining the algorithmic parameters can be more challenging. To date, only specific guidelines have been proposed for some simulation methods, and a general parameter inference methodology is still lacking, in particular for complex modeling settings such as when using multivariate training images. The common practice consists of carrying out an extensive parameter sensitivity analysis, which can be cumbersome. An additional complexity is that the algorithmic parameters influence CPU cost, and therefore finding optimal parameters is not only a modeling question but also a computational challenge. To overcome these issues, we propose the automatic parameter optimizer (MPS-APO), a generic method based on stochastic optimization to rapidly determine acceptable parameters, in different settings and for any MPS method. The MPS automatic parameter optimizer proceeds in a 2-step approach. In the first step, it considers the set of input parameters of a given MPS algorithm and formulates an objective function that quantifies the reproduction of spatial patterns. The Simultaneous Perturbation Stochastic Approximation (SPSA) optimization method is used to minimize the objective function. SPSA is chosen because it is able to deal with the stochastic nature of the objective function and for its computational efficiency. At each iteration, small gaps are randomly placed in the input image
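
    The appeal of SPSA is that each iteration estimates the full gradient from only two noisy objective evaluations, using one simultaneous random perturbation of all parameters. A minimal sketch on a noisy quadratic, which stands in for the stochastic pattern-reproduction objective, follows; the gain constants are generic textbook choices, not MPS-APO's settings:

```python
import numpy as np

# Minimal SPSA sketch: two evaluations per iteration approximate the
# gradient of a noisy objective via a simultaneous Rademacher perturbation.

rng = np.random.default_rng(3)

def noisy_objective(theta):
    # toy objective with optimum at (2, -1) plus evaluation noise
    return np.sum((theta - np.array([2.0, -1.0])) ** 2) + rng.normal(0, 0.01)

theta = np.zeros(2)
for k in range(1, 501):
    a_k = 0.1 / k ** 0.602                 # standard SPSA gain sequences
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=2)
    g_hat = (noisy_objective(theta + c_k * delta) -
             noisy_objective(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g_hat
print("estimate:", np.round(theta, 2))
```

    For a parameter vector of any dimension the cost per iteration stays at two evaluations, which is what makes SPSA attractive when each evaluation is an expensive MPS simulation.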

  20. Optimal periodic control for spacecraft pointing and attitude determination

    NASA Technical Reports Server (NTRS)

    Pittelkau, Mark E.

    1993-01-01

    A new approach to autonomous magnetic roll/yaw control of polar-orbiting, nadir-pointing momentum bias spacecraft is considered as the baseline attitude control system for the next Tiros series. It is shown that the roll/yaw dynamics with magnetic control are periodically time varying. An optimal periodic control law is then developed. The control design features a state estimator that estimates attitude, attitude rate, and environmental torque disturbances from Earth sensor and sun sensor measurements; no gyros are needed. The state estimator doubles as a dynamic attitude determination and prediction function. In addition to improved performance, the optimal controller allows a much smaller momentum bias than would otherwise be necessary. Simulation results are given.

  1. Optimization of wastewater treatment plant operation for greenhouse gas mitigation.

    PubMed

    Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C

    2015-11-01

    This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes the three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy the three different objectives. The complex nonlinear optimization problem was solved using the Nelder-Mead simplex optimization algorithm. A sensitivity analysis was performed to identify the operational parameters most influential on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation.
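
    The structure of such an optimization, a scalar weighted index over competing objectives minimized with the Nelder-Mead simplex, can be sketched as below. The three surrogate cost surfaces, the weights, and the two parameters are invented for illustration and are unrelated to the ASMN_G model; SciPy's simplex implementation stands in for the authors' algorithm:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical surrogates over two operational parameters
# (e.g. aeration intensity, recycle ratio) — not the ASMN_G model.
def ghg(x):       return (x[0] - 1.2) ** 2 + 0.5 * x[1]
def cost(x):      return 0.8 * x[0] + (x[1] - 0.6) ** 2
def effluent(x):  return (1.5 - x[0]) ** 2 + (1.0 - x[1]) ** 2

def index(x, w=(0.4, 0.3, 0.3)):
    # integrated performance index: weighted sum of the three objectives
    return w[0] * ghg(x) + w[1] * cost(x) + w[2] * effluent(x)

res = minimize(index, x0=[0.5, 0.5], method="Nelder-Mead")
print("optimal parameters:", np.round(res.x, 3), "index:", round(res.fun, 3))
```

    Because the index is a weighted sum, shifting the weights traces out the trade-off among the three conflicting objectives that the abstract describes.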

  2. Optimizing Photon Collection from Point Sources with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Hill, Alexander; Hervas, David; Nash, Joseph; Graham, Martin; Burgers, Alexander; Paudel, Uttam; Steel, Duncan; Kwiat, Paul

    2015-05-01

    Collection of light from point-like sources is typically poor due to the optical aberrations present with very high numerical-aperture optics. In the case of quantum dots, the emitted mode is nonisotropic and may be quite difficult to couple into single- or even few-mode fiber. Wavefront aberrations can be corrected using adaptive optics at the classical level by analyzing the wavefront directly (e.g., with a Shack-Hartmann sensor); however, these techniques are not feasible at the single-photon level. We present a new technique for adaptive optics with single photons using a genetic algorithm to optimize collection from point emitters with a deformable mirror. We first demonstrate our technique for improving coupling from a subwavelength pinhole, which simulates isotropic emission from a point source. We then apply our technique in situ to InAs/GaAs quantum dots, obtaining coupling increases of up to 50% even in the presence of an artificial source of drift.

  3. Classification and uptake of reservoir operation optimization methods

    NASA Astrophysics Data System (ADS)

    Dobson, Barnaby; Pianosi, Francesca; Wagener, Thorsten

    2016-04-01

    Reservoir operation optimization algorithms aim to improve the quality of reservoir release and transfer decisions. They achieve this by creating and optimizing the reservoir operating policy: a function that returns decisions based on the current system state. A range of mathematical optimization algorithms and techniques has been applied to the reservoir operation problem of policy optimization. In this work, we propose a classification of reservoir optimization approaches focused on the formulation of the water management problem rather than the type of optimization algorithm. We believe that decision makers and operators will find it easier to navigate a classification system based on the problem characteristics, something they can clearly define, rather than the optimization algorithm. Part of this study includes an investigation of the extent of algorithm uptake and the possible reasons that limit real-world application.

  4. Process Parameters Optimization in Single Point Incremental Forming

    NASA Astrophysics Data System (ADS)

    Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh

    2016-04-01

    This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of degrees of freedom. The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels each of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The process parameters influential on formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter with the greatest influence on both formability and surface roughness is lubrication. For formability, lubrication, tool rotational speed, feed rate, sheet thickness, step size and tool radius have influence in descending order; for surface roughness, the descending order is lubrication, feed rate, step size, tool radius, sheet thickness and tool rotational speed. The predicted optimal values for the wall angle and surface roughness are 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm, respectively.
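
    The main-effects analysis behind such a ranking is simple to reproduce: average the response at each level of each factor across the orthogonal-array runs and rank factors by the spread of those means. The tiny L4 array and response values below are made up for illustration (the study itself used an L18 array):

```python
import numpy as np

# Taguchi-style main-effects sketch on a made-up L4(2^3) experiment.
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])                       # levels of 3 two-level factors
response = np.array([1.10, 1.35, 0.95, 1.20])    # e.g. surface roughness (um)

effects = {}
for f in range(L4.shape[1]):
    means = [response[L4[:, f] == lvl].mean() for lvl in (1, 2)]
    effects[f] = abs(means[0] - means[1])        # main effect of factor f
    print(f"factor {f}: level means {np.round(means, 3)}, effect {effects[f]:.3f}")

ranking = sorted(effects, key=effects.get, reverse=True)
print("influence ranking (descending):", ranking)   # → [1, 0, 2]
```

    The response table in the paper is exactly this computation at larger scale, with ANOVA then attaching significance to each effect.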

  5. Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Tun, Min Thaw; Sakaguchi, Daisaku

    2016-06-01

    A high pressure ratio and a wide operating range are required for turbochargers in diesel engines. A recirculation flow type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves, one on the suction pipe and one on the shroud casing wall, are connected by an annular passage, and a stable recirculation flow forms at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design with improved adiabatic efficiency over a wide range of operating flow rates. A sensitivity analysis of efficiency with respect to the design parameters has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, for which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.

  6. Charcoal bed operation for optimal organic carbon removal

    SciTech Connect

    Merritt, C.M.; Scala, F.R.

    1995-05-01

    Historically, evaporation, reverse osmosis or charcoal-demineralizer systems have been used to remove impurities in liquid radwaste processing systems. At Nine Mile Point, we recently replaced our evaporators with charcoal-demineralizer systems to purify floor drain water. A comparison of the evaporator to the charcoal-demineralizer system has shown that the charcoal-demineralizer system is more effective in organic carbon removal. We also show the performance data of the Granulated Activated Charcoal (GAC) vessel as a mechanical filter. Actual data show that frequent backflushing and controlled flow rates through the GAC vessel dramatically increase Total Organic Carbon (TOC) removal efficiency. Recommendations are provided for operating the GAC vessel to ensure optimal performance.

  7. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Channels for point-to-point operation. 22.591... [snippet truncates a table of point-to-point channel frequencies in MHz, 72.12 through 75.88]

  8. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Channels for point-to-point operation. 22.591... [snippet truncates a table of point-to-point channel frequencies in MHz, 72.12 through 75.88]

  9. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false Channels for point-to-point operation. 22.591... [snippet truncates a table of point-to-point channel frequencies in MHz, 72.12 through 75.88]

  10. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Channels for point-to-point operation. 22.591... [snippet truncates a table of point-to-point channel frequencies in MHz, 72.12 through 75.88]

  11. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Channels for point-to-point operation. 22.591... [snippet truncates a table of point-to-point channel frequencies in MHz, 72.12 through 75.88]

  12. Optimizing emergency department front-end operations.

    PubMed

    Wiler, Jennifer L; Gentle, Christopher; Halfpenny, James M; Heins, Alan; Mehrotra, Abhi; Mikhail, Michael G; Fite, Diana

    2010-02-01

    As administrators evaluate potential approaches to improve cost, quality, and throughput efficiencies in the emergency department (ED), "front-end" operations become an important area of focus. Interventions such as immediate bedding, bedside registration, advanced triage (triage-based care) protocols, physician/practitioner at triage, a dedicated "fast track" service line, tracking systems and whiteboards, wireless communication devices, kiosk self-check-in, and personal health record technology ("smart cards") have been offered as potential solutions to streamline the front-end processing of ED patients, which becomes crucial during periods of full capacity, crowding, and surges. Although each of these operational improvement strategies has been described in the lay literature, varying reports exist in the academic literature about their effect on front-end operations. In this report, we present a review of the current body of academic literature, with the goal of identifying select high-impact front-end operational improvement solutions. PMID:19556030

  13. Optimizing Synchronization Operations for Remote Memory Communication Systems

    SciTech Connect

    Buntinas, Darius; Saify, Amina; Panda, Dhabaleswar K.; Nieplocha, Jarek; Bob Werner

    2003-04-22

    Synchronization operations, such as fence and locking, are used in many parallel operations accessing shared memory. However, a process which is blocked waiting for a fence operation to complete, or for a lock to be acquired, cannot perform useful computation. It is therefore critical that these operations be implemented as efficiently as possible to reduce the time a process waits idle. These operations also impact the scalability of the overall system: as system sizes get larger, the number of processes potentially requesting a lock increases. In this paper we describe the design and implementation of an optimized operation which combines a global fence operation and a barrier synchronization operation. We also describe our implementation of an optimized lock algorithm. The optimizations have been incorporated into the ARMCI communication library. The combined global fence and barrier operation gives a factor of improvement of up to 9 over the current implementation on a 16-node system, while the optimized lock implementation gives a factor of improvement of up to 1.25. These optimizations allow for more efficient and scalable applications.

  14. FCCU operating changes optimize octane catalyst use

    SciTech Connect

    Desai, P.H.

    1986-09-01

    The use of octane-enhancing catalysts in a fluid catalytic cracking unit (FCCU) requires changes in the operation of the unit to derive maximum benefits from the octane catalyst. In addition to the impressive octane gain achieved by the octane catalyst, the catalyst also affects the yield structure, the unit heat balance, and the product slate by reducing hydrogen transfer reactions. Catalyst manufacturers have introduced new product lines based upon ultrastable Y type (USY) zeolites which can result in 2 to 3 research octane number (RON) gains over the more traditional rare earth exchanged Y type (REY) zeolites. Here are some operating techniques for the FCCU and associated processes that will allow maximum benefits from octane catalyst use.

  15. Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.

    2013-01-01

    Collinear Earth-Moon libration points have emerged as locations with immediate applications. Libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to the relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the orbit of the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission as a platform, the fundamental behavior of the trajectories is explored using Poincare maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.

  16. How beam driven operations optimize ALICE efficiency and safety

    NASA Astrophysics Data System (ADS)

    Pinazza, Ombretta; Augustinus, André; Bond, Peter M.; Chochula, Peter C.; Kurepin, Alexander N.; Lechman, Mateusz; Rosinsky, Peter

    2012-12-01

    ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE DCS is responsible for the coordination and monitoring of the various detectors and of central systems, and for collecting and managing alarms, data and commands. Furthermore, it is the central tool to monitor and verify the beam status, with special emphasis on safety. In particular, it is important to ensure that the experiment's detectors are brought to and stay in a safe state, e.g. reduced voltages during the injection, acceleration, and adjusting phases of the LHC beams. Thanks to its central role, the DCS is the appropriate system to implement automatic actions that were previously left to the initiative of the shift leader, with decisions drawn from the combined knowledge of detector and beam statuses so as to fulfil the scientific requirements while keeping safety as a priority in all cases. This paper shows how the central DCS is interpreting the daily operations from a beam-driven point of view. A tool is being implemented through which automatic actions can be set and monitored via expert panels, with a customizable level of automation. Some routine operations are already automated when the LHC declares a beam mode that can represent a safety concern. This beam-driven approach is proving to be a tool for the shift crew to optimize the efficiency of data taking while improving the safety of the experiment.

  17. Applying Dynamical Systems Theory to Optimize Libration Point Orbit Stationkeeping Maneuvers for WIND

    NASA Technical Reports Server (NTRS)

    Brown, Jonathan M.; Petersen, Jeremy D.

    2014-01-01

    NASA's WIND mission has been operating in a large-amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically, these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or (delta)v, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the (delta)v in the direction of the local stable manifold found using dynamical systems theory. This paper presents an analysis of this new maneuver strategy, showing that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed, and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.

  18. Constrained genetic algorithms for optimizing multi-use reservoir operation

    NASA Astrophysics Data System (ADS)

    Chang, Li-Chiu; Chang, Fi-John; Wang, Kuo-Wei; Dai, Shin-Yi

    2010-08-01

    To derive an optimal strategy for reservoir operations to assist the decision-making process, we propose a methodology that incorporates a constrained genetic algorithm (CGA), in which ecological base flow requirements are treated as constraints on water release when optimizing the 10-day reservoir storage. Furthermore, a number of penalty functions designed for different types of constraints are integrated into the reservoir operational objectives to form the fitness function. To validate the applicability of the proposed methodology, the Shih-Men Reservoir and its downstream water demands are used as a case study. By implementing the proposed CGA to optimize the operational performance of the Shih-Men Reservoir over the last 20 years, we find that, compared with historical operations, this method performs much better in most years, achieving a smaller generalized shortage index (GSI) for human water demands and greater ecological base flows. We demonstrate that the CGA approach can significantly improve the efficiency and effectiveness of water supply to both human and ecological base flow requirements and thus optimize reservoir operations for multiple water users. The CGA can be a powerful tool in searching for the optimal strategy for multi-use reservoir operations in water resources management.
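
    The penalty-function construction, folding constraint violations into the fitness so a standard GA can handle them, can be sketched for a single release decision. All numbers and the bare-bones GA below are illustrative, not the paper's CGA or the Shih-Men system:

```python
import numpy as np

# Toy penalty-based constrained GA: fitness rewards meeting a human water
# demand, while penalty terms enforce an ecological base-flow constraint
# and the available storage.

rng = np.random.default_rng(4)
STORAGE, DEMAND, BASE_FLOW = 80.0, 50.0, 20.0

def fitness(release):
    supply = min(max(release - BASE_FLOW, 0.0), DEMAND)  # water left for humans
    shortage_cost = (DEMAND - supply) ** 2
    eco_penalty = 1e3 * max(BASE_FLOW - release, 0.0) ** 2   # base-flow violation
    overdraft = 1e3 * max(release - STORAGE, 0.0) ** 2       # more than available
    return -(shortage_cost + eco_penalty + overdraft)

pop = rng.uniform(0.0, STORAGE, 30)
for _ in range(80):
    fit = np.array([fitness(r) for r in pop])
    parents = pop[np.argsort(fit)[-15:]]              # best half survives
    children = parents + rng.normal(0.0, 2.0, 15)     # Gaussian mutation
    pop = np.concatenate([parents, children])
best = max(pop, key=fitness)
print(f"best release: {best:.1f} (base flow {BASE_FLOW}, human demand {DEMAND})")
```

    The large penalty coefficients make any constraint-violating individual uncompetitive, steering the population toward releases that honor the base flow before chasing supply benefit.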

  19. Synergy optimization and operation management on syndicate complementary knowledge cooperation

    NASA Astrophysics Data System (ADS)

    Tu, Kai-Jan

    2014-10-01

    The number of multi-enterprise knowledge cooperations has grown steadily as a result of global innovation competition. In this article, I have conducted research based on optimization and operation studies, and conclude that synergy management is an effective means to break through various management barriers and to resolve the chaotic dynamics of cooperation. Enterprises must communicate the system vision and access complementary knowledge. These are crucial considerations for enterprises seeking to exert their knowledge cooperation synergy to meet global marketing challenges.

  20. Operational optimization of large-scale parallel-unit SWRO desalination plant using differential evolution algorithm.

    PubMed

    Wang, Jian; Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. When the operating conditions change, these RO units no longer work at the optimal design points computed before the plant was built. The operational optimization problem (OOP) of the plant is to find an operating schedule that minimizes the total running cost when such changes occur. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem, and a two-stage differential evolution algorithm is proposed to solve it. Experimental results show that the proposed method delivers satisfactory solution quality. PMID:24701180
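
    The differential evolution step can be illustrated on a toy load-allocation problem. The DE/rand/1/bin scheme below is the standard textbook variant, and the per-unit cost coefficients, demand, and penalty weight are invented; the paper's actual model is a mixed-integer NLP with a two-stage structure this sketch omits:

```python
import random

# Allocate load across 3 hypothetical RO units: minimize sum(a_i * x_i^2)
# subject to sum(x_i) = DEMAND, enforced via a quadratic penalty.
# Known optimum: x = [4, 2, 1], cost = 28.
A = [1.0, 2.0, 4.0]
DEMAND = 7.0

def cost(x):
    gap = sum(x) - DEMAND
    return sum(a * xi * xi for a, xi in zip(A, x)) + 1000.0 * gap ** 2

def differential_evolution(dim=3, pop_size=30, gens=400, F=0.7, CR=0.9, seed=2):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, DEMAND) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = list(pop[i])
            jrand = rng.randrange(dim)          # ensure at least one mutated gene
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    trial[j] = min(DEMAND, max(0.0, v))
            if cost(trial) <= cost(pop[i]):     # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=cost)

best = differential_evolution()
```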

  3. 24 CFR 902.47 - Management operations portion of total PHAS points.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Management operations portion of... Operations § 902.47 Management operations portion of total PHAS points. Of the total 100 points available for a PHAS score, a PHA may receive up to 30 points based on the Management Operations Indicator....

  4. Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun

    2015-10-01

    Beam pointing angle (BPA) is one of the key parameters affecting the performance of a laser Doppler velocimetry (LDV) system. By jointly considering velocity sensitivity and echo power, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming a mounting error within ±1.0 deg, and reflectivity and roughness that vary across scenarios, the optimized BPA lies in the range of 29 to 43 deg. Accordingly, the velocity sensitivity is in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA remains greater than 53.49% of that at 0 deg. Laboratory experiments on a rotating table are conducted at BPAs of 10, 35, and 66 deg, and the results agree with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg is conducted, with a microwave radar (accuracy of ±0.5% full-scale output) for comparison. The root-mean-square error of the LDV results (0.0202 m/s) is smaller than that of the Microstar II (0.1495 m/s), and the mean velocity discrepancy is 0.032 m/s. This confirms that, at the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.
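
    The quoted sensitivity range is consistent with a Doppler shift that scales as 2·v·sin(θ)/λ. Both that geometric scaling and the ~775 nm source wavelength below are assumptions chosen because they reproduce the abstract's 1.25 to 1.76 MHz/(m/s) figures, not values taken from the paper:

```python
import math

# Velocity sensitivity of an LDV versus beam pointing angle (BPA),
# under the assumed model f_D = 2 * v * sin(theta) / lambda.
WAVELENGTH = 775e-9  # m, hypothetical source wavelength

def sensitivity_hz_per_mps(bpa_deg):
    """Doppler shift per unit vehicle speed, in Hz/(m/s)."""
    return 2.0 * math.sin(math.radians(bpa_deg)) / WAVELENGTH

sens_29 = sensitivity_hz_per_mps(29)   # ~1.25 MHz/(m/s)
sens_35 = sensitivity_hz_per_mps(35)   # optimized BPA from the experiment
sens_43 = sensitivity_hz_per_mps(43)   # ~1.76 MHz/(m/s)
```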

  5. Improvements in floating point addition/subtraction operations

    DOEpatents

    Farmwald, P.M.

    1984-02-24

    Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.

  6. Nickel-Cadmium Battery Operation Management Optimization Using Robust Design

    NASA Technical Reports Server (NTRS)

    Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador

    1996-01-01

    In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries is very important for maintaining nominal space flight battery performance. The optimization of existing flight battery operational performance was a novel application of Taguchi Methods.

  7. Implementation of an Interior-Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.
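
    The central idea of interior-point methods can be illustrated with the simpler log-barrier variant on a toy linear program. This sketch is not the primal-dual algorithm implemented in G-OPT; the problem, feasible start, and schedule for the barrier parameter are all invented for illustration:

```python
import numpy as np

# Barrier method for:  minimize x + y  s.t.  x >= 0, y >= 0, x + y >= 1
# (optimal value 1). Constraints are written as A x <= b.
c = np.array([1.0, 1.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])

x = np.array([0.6, 0.6])                 # strictly feasible start
t = 1.0
while t < 1e6:                           # tighten the barrier parameter
    for _ in range(50):                  # Newton steps on t*c'x - sum(log s)
        s = b - A @ x                    # slacks, must stay > 0
        grad = t * c + A.T @ (1.0 / s)
        hess = A.T @ np.diag(1.0 / s**2) @ A
        step = np.linalg.solve(hess, grad)
        alpha = 1.0
        while np.any(b - A @ (x - alpha * step) <= 0):
            alpha *= 0.5                 # backtrack to stay strictly interior
        x = x - alpha * step
    t *= 10.0

value = float(c @ x)                     # approaches the optimum 1
```

    As t grows, the minimizer of the barrier problem traces the central path toward the true constrained optimum, with a duality gap on the order of (number of constraints)/t.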

  8. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  9. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Zhao, Yi-Yuan

    1997-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate one-engine-inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, continued takeoff (CTO), rejected takeoff (RTO), balked landing (BL), and continued landing (CL) are investigated for both vertical-takeoff-and-landing (VTOL) and short-takeoff-and-landing (STOL) terminal-area operations. The formulation of the non-linear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  10. Gradient-based multiobjective optimization using a distance constraint technique and point replacement

    NASA Astrophysics Data System (ADS)

    Sato, Yuki; Izui, Kazuhiro; Yamada, Takayuki; Nishiwaki, Shinji

    2016-07-01

    This paper proposes techniques to improve the diversity of the searching points during the optimization process in an Aggregative Gradient-based Multiobjective Optimization (AGMO) method, so that well-distributed Pareto solutions are obtained. First to be discussed is a distance constraint technique, applied among searching points in the objective space when updating design variables, that maintains a minimum distance between the points. Next, a scheme is introduced that deals with updated points that violate the distance constraint, by deleting the offending points and introducing new points in areas of the objective space where searching points are sparsely distributed. Finally, the proposed method is applied to example problems to illustrate its effectiveness.
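
    The two techniques described above — a minimum-distance constraint among searching points and replacement of offending points in sparse regions — can be sketched generically. The greedy thinning pass, the candidate pool, and the farthest-point re-seeding rule below are illustrative choices, not the AGMO update rule from the paper:

```python
import numpy as np

# Enforce a minimum spacing among 2-D objective-space points, then replace
# each deleted point with one placed where coverage is sparsest.
def enforce_spacing(points, d_min, rng):
    kept = []
    for p in points:                       # greedy thinning pass
        if all(np.linalg.norm(p - q) >= d_min for q in kept):
            kept.append(p)
    n_removed = len(points) - len(kept)
    for _ in range(n_removed):             # re-seed deletions in sparse areas
        candidates = rng.random((64, points.shape[1]))
        # pick the candidate farthest from its nearest kept point
        dists = [min(np.linalg.norm(c - q) for q in kept) for c in candidates]
        kept.append(candidates[int(np.argmax(dists))])
    return np.array(kept)

rng = np.random.default_rng(0)
pts = rng.random((20, 2))
spaced = enforce_spacing(pts, d_min=0.15, rng=rng)
```

    The point count is preserved by construction, so diversity is improved without shrinking the population.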

  11. Fuzzy multiobjective models for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system are developed and evaluated in this study using new fuzzy multiobjective mathematical programming. The models (i) use mixed-integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used to evaluate reservoir downstream impairment. The Reardon method, used in solving genetic algorithm optimization problems, forms the basis for a new fuzzy multiobjective hydropower system optimization model built on Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations, avoiding the computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) avoid the local optimal solutions obtained with traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. The results provide insight into the compromise operating rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
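
    The flavor of fuzzy multiobjective compromise can be shown with a classic max-min (Bellman-Zadeh style) aggregation of two membership functions. The membership shapes and all numbers are invented for illustration; they are not the paper's Reardon-type functions:

```python
# Max-min compromise between two conflicting fuzzy objectives for a single
# release decision r in [0, 12]: energy prefers large releases, downstream
# water quality prefers small ones.
def mu_energy(release):
    """Satisfaction with energy production, clipped to [0, 1]."""
    return max(0.0, min(1.0, release / 10.0))

def mu_quality(release):
    """Satisfaction with downstream water quality, clipped to [0, 1]."""
    return max(0.0, min(1.0, (12.0 - release) / 8.0))

grid = [i * 0.1 for i in range(0, 121)]
best = max(grid, key=lambda r: min(mu_energy(r), mu_quality(r)))
best_mu = min(mu_energy(best), mu_quality(best))
```

    The compromise release sits where the two memberships intersect (here near r = 6.67), which is exactly the point that maximizes the least-satisfied objective.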

  12. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to such optimization problems with integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we explore the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity or differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  13. Optimization of pocket milling operation of rectangular shapes

    SciTech Connect

    Andijani, A.

    1994-12-31

    An optimization model for setting up machine parameters (feed, speed, width, and depth of cut) for pocket milling on a vertical mill is developed. We propose an approach to determine the optimal set of operating conditions that minimizes the total milling cost. The part to be milled has a square or rectangular shape. The pocket milling model in this paper has an explicit, multi-variable, nonlinear objective function with nonlinear equality and inequality constraints. We study several optimization algorithms suitable for the pocket milling operation and describe the general and relative features of each. The final choice of the best algorithm, however, depends upon individual preference, experience, and the case being investigated. An illustrative example is presented.

  14. Optimal investment and operation plans for Kenya's electricity industry

    SciTech Connect

    Murathe-Muthee, A.

    1983-01-01

    The research sought to determine optimal investment and operation plans for Kenya's electricity industry. A multi-period linear programming model was used to select construction, generation and transmission programs that will minimize the present value of electricity investment and operation costs (PVC) while meeting forecasted demand for the years 1982 through 2000. The basic optimal construction plan was designed to provide capability for meeting demand under dry-year conditions. Out of a total of 804 MW of new generation capacity indicated, 36% would be from hydro, 27% from geothermal and 37% from coal and oil resources. In a dry year, optimal operation of the system would generate 59% of the energy from hydro, 14% from geothermal and 27% from coal and oil sources. In average years a 14% increase in hydroenergy makes it possible to reduce fuel use by 23% and decrease the PVC by 11%.

  15. Optimal qudit operator bases for efficient characterization of quantum gates

    NASA Astrophysics Data System (ADS)

    Reich, Daniel M.; Gualdi, Giulia; Koch, Christiane P.

    2014-09-01

    For target unitary operations which preserve the basis of measurement operators, the average fidelity of the corresponding N-qubit gate can be determined efficiently. That is, the number of required experiments is independent of system size and the classical computational resources scale only polynomially in the number N of qubits. Here we address the question of how to optimally choose the measurement basis for fidelity estimation when replacing two-level qubits by d-level qudits. We define optimality in terms of the maximal number of unitaries that preserve the measurement basis. Our definition allows us to construct the optimal measurement basis in terms of their spectra and eigenbases: the measurement operators are unitaries with d-nary spectrum and partition into d+1 Abelian groups whose eigenbases are mutually unbiased.
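
    The "unitaries with d-nary spectrum" mentioned above can be made concrete with the generalized Pauli (Weyl) clock and shift operators of a single qudit. This only illustrates their basic algebra and spectra; it is not the paper's full construction of the partitioned measurement basis:

```python
import numpy as np

# Clock (Z) and shift (X) operators for a qudit of dimension d = 3.
# Both are unitaries whose spectrum is exactly the d-th roots of unity.
d = 3
omega = np.exp(2j * np.pi / d)
Z = np.diag([omega**j for j in range(d)])
X = np.zeros((d, d), dtype=complex)
for j in range(d):
    X[(j + 1) % d, j] = 1.0               # X|j> = |j+1 mod d>

# Weyl commutation relation: Z X = omega * X Z
assert np.allclose(Z @ X, omega * X @ Z)

# d-nary spectrum: eigenvalues of X are the d-th roots of unity
eig_angles = np.sort(np.angle(np.linalg.eigvals(X)))
root_angles = np.sort(np.angle([omega**j for j in range(d)]))
assert np.allclose(eig_angles, root_angles, atol=1e-7)
```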

  16. The Optimal Cloner for Mixed States as a Quantum Operation

    NASA Astrophysics Data System (ADS)

    Gardiner, John G.; van Huele, Jean-Francois S.

    2012-10-01

    The no-cloning theorem in quantum information says that it is impossible to produce two copies of an arbitrary quantum state. This precludes the possibility of a perfect universal quantum cloner, a process that could copy any quantum state perfectly. It is possible, however, to find optimal approximations of such a cloner. Using the formalism of quantum operations we obtain the optimal quantum cloner for arbitrary mixed states of a given purity and find that it is equivalent to the Buzek-Hillery optimal cloner for pure states. We also find the fidelity of this cloner as a function of the chosen purity.

  17. Optimization of the phase advance between RHIC interaction points.

    SciTech Connect

    TOMAS, R.; FISCHER, W.

    2005-05-16

    The authors consider a scenario with two identical Interaction Points (IPs) in the Relativistic Heavy Ion Collider (RHIC). The strengths of beam-beam resonances depend strongly on the phase advance between these two IPs, and therefore certain phase advances could improve beam lifetime and luminosity. The authors compute the dynamic aperture (DA) as a function of the phase advance between these IPs to find the optimum settings. The beam-beam interaction is treated in the weak-strong approximation and a non-linear model of the lattice is used. For the current RHIC proton working point (0.69, 0.685) [1], the design lattice is found to have the optimum phase advance; however, this is not the case for other working points.

  18. Point-process principal components analysis via geometric optimization.

    PubMed

    Solo, Victor; Pasha, Syed Ahmed

    2013-01-01

    There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach. PMID:23020106
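
    The Stiefel-manifold ascent used in the maximization step can be sketched on a toy objective. Here the function being maximized is tr(XᵀAX), whose optimum over orthonormal n×p frames is the sum of the p largest eigenvalues of A; the step size, iteration count, and QR retraction are generic choices, not the paper's point-process likelihood machinery:

```python
import numpy as np

# Steepest ascent on the Stiefel manifold {X in R^{n x p} : X'X = I}
# via tangent-space projection and QR retraction.
def stiefel_ascent(A, p, steps=4000, lr=0.01, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random start on manifold
    for _ in range(steps):
        G = 2.0 * A @ X                    # Euclidean gradient of tr(X' A X)
        xi = G - X @ (X.T @ G)             # project onto the tangent space
        X, _ = np.linalg.qr(X + lr * xi)   # QR retraction back to the manifold
    return X

A = np.diag([10.0, 8.0, 3.0, 2.0, 1.0, 0.5])
X = stiefel_ascent(A, p=2)
value = float(np.trace(X.T @ A @ X))       # approaches 10 + 8 = 18
```

    The QR retraction keeps the iterate exactly orthonormal at every step, which is the same role it plays in steepest ascent on a Stiefel manifold generally.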

  20. Energy-optimal programming and scheduling of the manufacturing operations

    NASA Astrophysics Data System (ADS)

    Badea, N.; Frumuşanu, G.; Epureanu, A.

    2016-08-01

    The shop floor energy system covers the energy consumed both for air conditioning and for manufacturing processes. At the same time, most of the energy consumed in manufacturing processes is converted into heat released into the shop floor interior, which has a significant influence on the microclimate. Both components of the energy consumption have a time variation that can be realistically assessed. Moreover, the consumed energy decisively determines the environmental sustainability of manufacturing operations, while the expenditure for running the shop floor energy system is a significant component of manufacturing cost. Last but not least, the energy consumption can be fundamentally influenced by proper programming and scheduling of the manufacturing operations. In this paper, we present a method for modeling and energy-optimal programming and scheduling of manufacturing operations. For this purpose, we first identified two optimization targets, namely environmental sustainability and economic efficiency. We then defined three optimization criteria that assess the degree to which these targets are achieved. Finally, we modeled the relationship between the optimization criteria and the programming and scheduling parameters. This reveals that by adjusting these parameters one can significantly improve the sustainability and efficiency of manufacturing operations. A numerical simulation has proved the feasibility and efficiency of the proposed method.

  1. 76 FR 60733 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-30

    ... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY AGENCY... the Smith Point Bridge, 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. The.... SUPPLEMENTARY INFORMATION: The Smith Point Bridge, across Narrow Bay, mile 6.1, between Smith Point and...

  2. Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization

    NASA Astrophysics Data System (ADS)

    Civit Sabate, Carles

    In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The optimization is based on the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs, and provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied through simulation with historical data.
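
    The demand-shifting economics behind TES operation can be shown with a deliberately simple price-sorting heuristic: charge the tank in the cheapest hours and discharge it in the priciest ones. The prices and tank size are made up, and the thesis formulates this properly as a linear program with tank-state dynamics, which this sketch ignores apart from ordering the cheap hours before the expensive ones:

```python
# Toy TES arbitrage: an 8-hour horizon with low prices first, high prices
# later, and a tank that holds 4 hours of chilled-water production.
prices = [18, 20, 21, 22, 30, 35, 38, 40]   # $/MWh-equivalent, hypothetical
tank_hours = 4

charge_hours = sorted(prices)[:tank_hours]                   # cheapest hours
discharge_hours = sorted(prices, reverse=True)[:tank_hours]  # priciest hours
savings = sum(discharge_hours) - sum(charge_hours)           # cost avoided
```

    With losses and tank-state constraints ignored, this price-sorting rule coincides with the LP optimum; the full formulation is needed once round-trip efficiency and capacity dynamics enter.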

  3. Optimizing and controlling the operation of heat-exchanger networks

    SciTech Connect

    Aguilera, N.; Marchetti, J.L.

    1998-05-01

    A procedure was developed for the on-line optimization and control of heat-exchanger networks, featuring a two-level control structure: one level for a constant-configuration control system and the other for a supervisory on-line optimizer. Coordination between the levels is achieved by adjusting the formulation of the optimization problem to meet the requirements of the adopted control system. The general goal is always to operate without missing stream temperature targets while maintaining the highest energy integration. The operation constraints used for heat-exchanger and utility units emphasize the computation of heat-exchanger duties rather than intermediate stream temperatures, which simplifies the modeling task and provides clear links to the limits of the manipulated variables. The optimal condition is determined using LP or NLP, depending on the final problem formulation. Degrees of freedom for optimization and equality constraints for handling simple and multiple bypasses are rigorously discussed. An example shows how the optimization problem can be adjusted to a specific network design, its expected operating space, and the control configuration. Dynamic simulations also show the benefits and limitations of this procedure.

  4. Trajectory optimization for intra-operative nuclear tomographic imaging.

    PubMed

    Vogel, Jakob; Lasser, Tobias; Gardiazabal, José; Navab, Nassir

    2013-10-01

    Diagnostic nuclear imaging modalities like SPECT typically employ gantries to ensure a densely sampled geometry of detectors in order to keep the inverse problem of tomographic reconstruction as well-posed as possible. In an intra-operative setting with mobile freehand detectors the situation changes significantly, and having an optimal detector trajectory during acquisition becomes critical. In this paper we propose an incremental optimization method based on the numerical condition of the system matrix of the underlying iterative reconstruction method to calculate optimal detector positions during acquisition in real-time. The performance of this approach is evaluated using simulations. A first experiment on a phantom using a robot-controlled intra-operative SPECT-like setup demonstrates the feasibility of the approach. PMID:23706624
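
    The incremental, condition-driven view selection can be sketched as a greedy loop: from a pool of candidate detector poses, repeatedly add the one that minimizes the condition number of the stacked system matrix. The candidate generation (random unit projection rows) and all sizes are invented; the paper does this incrementally in real time for a freehand SPECT-like detector:

```python
import numpy as np

# Greedy condition-number-based selection of k measurement rows from a pool.
def greedy_views(candidates, k):
    rows = [candidates[0]]                 # start from an arbitrary first pose
    pool = list(range(1, len(candidates)))
    for _ in range(k - 1):
        conds = [np.linalg.cond(np.vstack(rows + [candidates[i]])) for i in pool]
        best = pool[int(np.argmin(conds))] # pose that best conditions the system
        rows.append(candidates[best])
        pool.remove(best)
    return np.array(rows), pool            # selected matrix, unused indices

rng = np.random.default_rng(1)
cands = rng.standard_normal((40, 6))
cands /= np.linalg.norm(cands, axis=1, keepdims=True)   # unit "detector" rows
M, unused = greedy_views(cands, k=8)
```

    By construction, the final row is the best conditioning choice among all poses still in the pool, which is what the last test below checks.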

  5. A fixed point theorem for certain operator valued maps

    NASA Technical Reports Server (NTRS)

    Brown, D. R.; Omalley, M. J.

    1978-01-01

    In this paper, we develop a family of Neuberger-like results to find points z ∈ H satisfying L(z)z = z and P(z) = z. This family includes Neuberger's theorem and has the additional property that most of the sequences q_n converge to idempotent elements of B_1(H).

  6. Driving external chemistry optimization via operations management principles.

    PubMed

    Bi, F Christopher; Frost, Heather N; Ling, Xiaolan; Perry, David A; Sakata, Sylvie K; Bailey, Simon; Fobian, Yvette M; Sloan, Leslie; Wood, Anthony

    2014-03-01

    Confronted with the need to significantly raise the productivity of remotely located chemistry CROs, Pfizer embraced a commitment to continuous improvement that leveraged tools from both Lean Six Sigma and queue management theory to deliver positive, measurable outcomes. During 2012, cycle times were reduced by 48% by optimizing the work in progress and conducting a detailed workflow analysis to identify and address pinch points. Compound flow was increased by 29% by optimizing the request process and de-risking the chemistry. Underpinning both achievements was the development of close working relationships and productive communication between Pfizer and CRO chemists.

  7. Optimization of Pilot Point Locations for Conditional Simulation of Heterogeneous Aquifers

    NASA Astrophysics Data System (ADS)

    Mehne, J.; Nowak, W.

    2011-12-01

    Spatial variability of geological media, in conjunction with scarce data, introduces parameter and prediction uncertainties into simulations of flow and transport. Conditional simulation methods use the Monte Carlo framework combined with inverse modeling techniques to incorporate available field data and thus reduce these uncertainties. The pilot point method is a widespread conditional simulation method for conditioning multiple realizations of heterogeneous conductivity fields on available field data. Virtual direct measurements of conductivity, placed at so-called pilot points, are introduced and their values optimized until all available field data are honored. Adequate placement and numbers of pilot points are crucial both for accurate representation of heterogeneity and for controlling computational costs. Current placement methods for pilot points depend solely on the expertise and experience of the modeler or involve computationally costly sensitivity analyses. This study presents a new method for optimal pilot point placement that combines ideas from geostatistical optimal design and ensemble Kalman filters. The proposed method emulates the pilot point method at drastically reduced computational costs by avoiding the evaluation of sensitivity coefficients. A task-driven measure for optimal design of pilot point placement patterns is evaluated without carrying out the actual conditioning process. This makes it possible to efficiently compare a large number of possible pilot point placement patterns. By formal optimization of this measure, placement schemes are found that represent the data with minimal numbers of pilot points. Small synthetic test applications of the proposed method showed promising computational performance and a geostatistically logical choice of pilot point locations. In comparison with a regularly spaced pilot point grid, equally good calibration results were achieved by a

  8. 77 FR 8904 - Entergy Nuclear Indian Point 3, LLC.; Entergy Nuclear Operations, Inc., Indian Point Nuclear...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ... locally to start 31 AFW pump. 4 7A Locally operate the bypass valve for Flow Control Valve (FCV)- 1121 in... AFW flow to Steam Generators (SGs). 9 60A Locally open valve 227 to establish charging makeup flowpath... 4 is available to restore recirculation flow by locally operating the bypass valve for FCV-1121...

  9. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At transmission level, we investigate the problem where an investor-owned, independently operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters, hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider
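
    The non-parametric chance-constraint idea can be sketched with empirical quantiles: find the largest discharge power p such that P(node voltage ≤ v_max) ≥ 0.95, estimated directly from sampled scenarios with no distributional assumption. The linear voltage model, noise level, and all numbers below are invented for illustration, not the thesis's distribution-network model:

```python
import numpy as np

# Empirical chance constraint: voltage scenarios at discharge power p must
# satisfy v <= V_MAX with probability at least 0.95.
rng = np.random.default_rng(3)
noise = rng.normal(0.0, 0.01, size=5000)     # scenario samples, p.u.
V_MAX = 1.05

def feasible(p):
    v = 1.0 + 0.004 * p + noise              # hypothetical voltage response
    return np.quantile(v, 0.95) <= V_MAX     # empirical 95% chance constraint

powers = np.arange(0.0, 20.0, 0.25)
best_p = max(p for p in powers if feasible(p))
```

    Because the constraint is checked against the empirical quantile of the scenario set, no Gaussian (or any other parametric) assumption on the noise is needed.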

  10. A Transmittance-optimized, Point-focus Fresnel Lens Solar Concentrator

    NASA Technical Reports Server (NTRS)

    Oneill, M. J.

    1984-01-01

    The development of a point-focus Fresnel lens solar concentrator for high-temperature solar thermal energy system applications is discussed. The concentrator utilizes a transmittance-optimized, short-focal-length, dome-shaped refractive Fresnel lens as the optical element. This concentrator combines both good optical performance and a large tolerance for manufacturing, deflection, and tracking errors. The conceptual design of an 11-meter diameter concentrator which should provide an overall collector efficiency of about 70% at an 815 C (1500 F) receiver operating temperature and a 1500X geometric concentration ratio (lens aperture area/receiver aperture area) was completed. Results of optical and thermal analyses of the collector, a discussion of manufacturing methods for making the large lens, and an update on the current status and future plans of the development program are included.

  11. Using information Theory in Optimal Test Point Selection for Health Management in NASA's Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Mehr, Ali Farhang; Tumer, Irem

    2005-01-01

    In this paper, we present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLVs that has been partially developed at the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be formulated as a constrained optimization problem in which the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
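
    One common way to quantify a sensor's information-theoretic worth is the mutual information between system health and the sensor reading. The sketch below uses a hypothetical joint distribution; it illustrates the kind of quantity involved, not the paper's specific RLV model.

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information I(H; S) in bits between system health H
    (rows) and a sensor reading S (columns), from their joint
    probability table."""
    p_h = p_joint.sum(axis=1, keepdims=True)   # marginal of H
    p_s = p_joint.sum(axis=0, keepdims=True)   # marginal of S
    mask = p_joint > 0
    return (p_joint[mask] * np.log2(p_joint[mask] / (p_h @ p_s)[mask])).sum()

# Hypothetical joint table for a fairly informative binary sensor.
p = np.array([[0.45, 0.05],
              [0.05, 0.45]])
print(mutual_information(p))
```

Candidate test points can then be ranked by this worth, subject to the reliability and safety constraints the abstract mentions.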

  12. Selection of optimal threshold to construct recurrence plot for structural operational vibration measurements

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Ren, Wei-Xin; Hu, Yi-Ding; Li, Dan

    2015-08-01

    Structural health monitoring (SHM) involves sampling operational vibration measurements over time so that structural features can be extracted accordingly. The recurrence plot (RP) and the corresponding recurrence quantification analysis (RQA) have become useful tools in various fields due to their efficiency. Threshold selection is one of the key issues in making sure that the constructed recurrence plot contains enough recurrence points, and different signals naturally have different threshold values. This paper presents an approach to determine the optimal threshold for the operational vibration measurements of civil engineering structures. The surrogate technique and the Taguchi loss function are proposed to generate reliable data and to achieve the optimal discrimination power point where the threshold is optimum. The impact of selecting recurrence thresholds on different signals is discussed. It is demonstrated that the proposed method of identifying the optimal threshold is applicable to operational vibration measurements and provides a way to find the optimal threshold for the best RP construction of structural vibration measurements under operational conditions.
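
    The basic RP construction that the threshold feeds into can be sketched as follows. The signal, embedding parameters, and threshold here are illustrative, not the paper's optimized values.

```python
import numpy as np

def recurrence_plot(x, threshold, dim=3, delay=1):
    """Binary recurrence matrix of a scalar signal after time-delay
    embedding: R[i, j] = 1 if embedded states i and j are closer than
    `threshold` in Euclidean distance."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d <= threshold).astype(int)

def recurrence_rate(R):
    """RQA recurrence rate: the fraction of recurrent points in R."""
    return R.mean()

t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t)                       # a clean periodic "measurement"
R = recurrence_plot(x, threshold=0.2)
print(recurrence_rate(R))
```

The threshold directly controls the recurrence rate, which is why an optimal, signal-dependent choice matters for the downstream RQA statistics.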

  13. Variability of "optimal" cut points for mild, moderate, and severe pain: neglected problems when comparing groups.

    PubMed

    Hirschfeld, Gerrit; Zernikow, Boris

    2013-01-01

    Defining cut points for mild, moderate, and severe pain intensity on the basis of differences in functional interference has an intuitive appeal. The statistical procedure for deriving them, proposed in 1995 by Serlin et al., has been widely used. Contrasting cut points between populations have been interpreted as meaningful differences between different chronic pain populations. We explore the variability associated with optimally defined cut points in a large sample of chronic pain patients and in homogeneous subsamples. Ratings of maximal pain intensity (0-10 numeric rating scale, NRS) and pain-related disability were collected in a sample of 2249 children with chronic pain managed in a tertiary pain clinic. First, the "optimal" cut points for the whole sample were determined. Second, the variability of these cut points was quantified by the bootstrap technique. Third, this variability was also assessed in homogeneous subsamples of 650 children with constant pain, 430 children with chronic daily headache, and 295 children with musculoskeletal pain. Our study revealed 3 main findings: (1) The optimal cut points for mild, moderate, and severe pain in the whole sample were 4 and 8 (0-10 NRS). (2) The variability of these cut points within the whole sample was very high; the bootstrap samples reproduced the optimal cut points only 40% of the time. (3) Similarly large variability was found in subsamples of patients with a homogeneous pain etiology. Optimal cut points are strongly influenced by random fluctuations within a sample. Differences in optimal cut points between study groups may be explained by chance variation; no other substantial explanation is required. Future studies that aim to interpret differences between groups need to include measures of variability for optimal cut points.
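
    The cut-point procedure and its bootstrap variability can be sketched on synthetic data. This is a simplified stand-in for the Serlin-style analysis (a one-way ANOVA F statistic over three pain categories on purely illustrative data), not the study's actual method or sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pain ratings (0-10 NRS) and a disability score that
# loosely increases with pain; purely illustrative data.
pain = rng.integers(0, 11, size=600)
disability = pain + rng.normal(0.0, 2.5, size=600)

def f_stat(groups):
    """One-way ANOVA F statistic across groups."""
    all_x = np.concatenate(groups)
    grand, k, n = all_x.mean(), len(groups), len(all_x)
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

def optimal_cuts(pain, dis):
    """Grid-search the (c1, c2) pair that best separates mild /
    moderate / severe disability, in the spirit of Serlin et al."""
    best, best_f = None, -np.inf
    for c1 in range(1, 9):
        for c2 in range(c1 + 1, 10):
            groups = [dis[pain <= c1],
                      dis[(pain > c1) & (pain <= c2)],
                      dis[pain > c2]]
            if min(len(g) for g in groups) < 5:
                continue
            f = f_stat(groups)
            if f > best_f:
                best, best_f = (c1, c2), f
    return best

# Bootstrap: how stable are the "optimal" cut points?
cuts, n = [], len(pain)
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    cuts.append(optimal_cuts(pain[idx], disability[idx]))
modal_share = max(cuts.count(c) for c in set(cuts)) / len(cuts)
print(modal_share)   # fraction of resamples agreeing on one cut pair
```

A modal share well below 1 illustrates the paper's central point: the "optimal" pair moves around under resampling.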

  14. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas: the anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge, and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium-vapour-based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state, are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  15. 77 FR 14690 - Drawbridge Operation Regulation; New Jersey Intracoastal Waterway (NJICW), Point Pleasant Canal, NJ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-13

    ... (NJICW), Point Pleasant Canal, NJ AGENCY: Coast Guard, DHS. ACTION: Notice of temporary deviation from... regulations governing the operation of the Route 88/Veterans Memorial Bridge across Point Pleasant Canal... Bridge across Point Pleasant Canal along the NJICW, in Point Pleasant, NJ. The bridge has a...

  16. 78 FR 23845 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY AGENCY... the Smith Point Bridge, mile 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. The deviation is necessary to facilitate the Smith Point Triathlon. This deviation allows the...

  17. Long-term optimal operation of hydrothermal power systems

    NASA Astrophysics Data System (ADS)

    Ardekaaniaan, Rezaa

    When new construction projects are postponed or cancelled because of socio-economic concerns, greater emphasis is placed on enhanced operational planning: getting the most, at the least cost, from existing projects. One approach that has made a significant improvement in the operation of energy production systems is the co-ordination between hydro and thermal power plants. In this research, the problem of "Long-term Optimal Operation of Hydro-Thermal Power Systems" is addressed. Considering the uncertainty in reservoir inflows, the problem is defined as a "two-stage stochastic linear network programming with recourse" problem. To avoid the dimensionality problem generally associated with the employment of dynamic programming in large-scale applications, Benders' decomposition is employed as the basis of the solution algorithm. Using the "General Algebraic Modelling System", a modelling code, the "Hydro-Thermal Co-ordinating Model (HTCOM)" is developed. In HTCOM, each sequence of hydrologic inflows generates a subproblem which is solved deterministically. The solutions of all subproblems are then co-ordinated by a master problem to determine a single feasible optimal policy for the original problem. This policy includes optimal reservoir releases as well as the allocation of energy generation to different power plants in the subsequent time period. The objective minimizes the expected total cost of meeting the energy demands while satisfying the system constraints over a long-term horizon of one to three years. To demonstrate the applicability of HTCOM, a real-world case study, the "Khozestan Water and Power Authority (KWPA)" in Iran, is employed as a system of two multipurpose reservoirs with five hydro-thermal power plants and energy transactions. The KWPA system components and operating policies are simulated as a network flow model, and an integrated solution procedure is planned to determine the optimal operation policies. 
This procedure

  18. An adaptive immune optimization algorithm with dynamic lattice searching operation for fast optimization of atomic clusters

    NASA Astrophysics Data System (ADS)

    Wu, Xia; Wu, Genhua

    2014-08-01

    Geometrical optimization of atomic clusters is performed by a development of the adaptive immune optimization algorithm (AIOA) with a dynamic lattice searching (DLS) operation (the AIOA-DLS method). By a cycle of construction and searching of the dynamic lattice (DL), the DLS algorithm rapidly makes the clusters more regular and greatly reduces the potential energy. DLS can thus be used as an operation acting on the new individuals after the mutation operation in AIOA to improve its performance. The AIOA-DLS method combines the merits of evolutionary algorithms with the idea of a dynamic lattice. The performance of the proposed method is investigated in the optimization of Lennard-Jones clusters of up to 250 atoms and of silver clusters, described by the many-body Gupta potential, of up to 150 atoms. Results reported in the literature are reproduced, and the motif of the Ag61 cluster is found to be stacking-fault face-centered cubic, whose energy is lower than that of the previously obtained icosahedron.
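
    The objective such global methods minimize is the cluster's total Lennard-Jones energy, and each global move is followed by a local relaxation. The sketch below shows both ingredients on a toy 4-atom cluster; the starting geometry and the use of L-BFGS are illustrative choices, not the AIOA-DLS procedure itself.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(coords):
    """Total Lennard-Jones energy in reduced units (epsilon = sigma = 1)."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(coords), k=1)   # unique pairs only
    r = r[iu]
    return 4.0 * (r ** -12 - r ** -6).sum()

# Sanity check: the LJ dimer minimum sits at r = 2**(1/6) with energy -1.
dimer = np.array([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]])

# One local-refinement step of the kind a global optimizer iterates:
# relax a perturbed tetrahedron (the LJ4 global minimum has energy -6).
rng = np.random.default_rng(2)
tetra = 2 ** (1 / 6) * np.array([[0.0, 0.0, 0.0],
                                 [1.0, 0.0, 0.0],
                                 [0.5, 0.8660, 0.0],
                                 [0.5, 0.2887, 0.8165]])
x0 = (tetra + rng.normal(0.0, 0.05, tetra.shape)).ravel()
res = minimize(lambda v: lj_energy(v.reshape(4, 3)), x0, method="L-BFGS-B")
print(res.fun)
```

Global schemes like AIOA-DLS differ in how they propose new starting geometries, not in this local-relaxation step.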

  19. AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM

    NASA Technical Reports Server (NTRS)

    Miko, J.

    1994-01-01

    Scientists at Goddard have developed an efficient and powerful program-- An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform-- which combines the performance of real and complex valued one-dimensional Fast Fourier Transforms (FFT's) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 mSec. 
Documentation is included
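
    The row-column decomposition and sum-of-squares steps described above can be sketched with NumPy (on a host machine rather than the DSP board):

```python
import numpy as np

def power_spectrum_2d(frame):
    """64x64 power spectrum via row-column decomposition: a 1-D FFT on
    each row, then a 1-D FFT on each column of the row results, then
    the sum of squares of the real and imaginary parts."""
    rows = np.fft.fft(frame, axis=1)    # FFT of every row
    full = np.fft.fft(rows, axis=0)     # FFT of every column of that
    return full.real ** 2 + full.imag ** 2

rng = np.random.default_rng(3)
frame = rng.standard_normal((64, 64))
ps = power_spectrum_2d(frame)

# Row-column decomposition agrees with a direct 2-D FFT.
print(np.allclose(ps, np.abs(np.fft.fft2(frame)) ** 2))   # → True
```

The equivalence printed at the end is exactly why the program can build its 2-D transform out of fast 64-point 1-D FFTs.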

  20. Stabilizing operation point technique based on the tunable distributed feedback laser for interferometric sensors

    NASA Astrophysics Data System (ADS)

    Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu

    2016-02-01

    We describe a technique for stabilizing the operation point of interferometric sensors, based on a tunable distributed feedback (DFB) laser, for quadrature demodulation. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into the interferometric system, the operation point remains stable when the system suffers various environmental perturbations. To demonstrate the feasibility of this technique, experiments have been performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that good tracking of the quadrature point (Q-point) was effectively realized.

  1. The optimization of operating parameters on microalgae upscaling process planning.

    PubMed

    Ma, Yu-An; Huang, Hsin-Fu; Yu, Chung-Chyi

    2016-03-01

    The upscaling process planning developed in this study primarily involved optimizing operating parameters, i.e., dilution ratios, during process design. Minimal variable cost was used as an indicator for selecting the optimal combination of dilution ratios. The upper and lower mean confidence intervals obtained from the actual cultured cell density data were used as the final cell-density stability indicator after the operating parameters, or dilution ratios, were selected. The process planning method and results were demonstrated through three case studies of batch culture simulation: (1) adjusting the final objective cell densities, (2) using high and low light intensities for intermediate-scale cultures, and (3) expressing the number of culture days as integers for the intermediate-scale culture.

  2. Optimization of operating conditions in tunnel drying of food

    SciTech Connect

    Dong Sun Lee (Dept. of Food Engineering); Yu Ryang Pyun (Dept. of Food Engineering)

    1993-01-01

    A food drying process in a tunnel dryer was modeled from Keey's drying model and an experimental drying curve, and optimized with respect to operating conditions consisting of inlet air temperature, air recycle ratio, and air flow rate. Radish was chosen as a typical food material to be dried, because it has the typical drying characteristics of food and the quality indexes of ascorbic acid destruction and browning during drying. Optimization results for cocurrent and countercurrent tunnel drying showed higher inlet air temperature, lower recycle ratio, and higher air flow rate with shorter total drying time. Compared with cocurrent operation, countercurrent drying used lower air temperature, lower recycle ratio, and lower air flow rate, and appeared to be more efficient in energy usage. Most of the consumed energy was shown to be used for air heating and then escaped from the dryer in the form of exhaust air.

  3. Bayesian stochastic optimization of reservoir operation using uncertain forecasts

    NASA Astrophysics Data System (ADS)

    Karamouz, Mohammad; Vasiliadis, Haralambos V.

    1992-05-01

    The operation of reservoir systems using stochastic dynamic programming (SDP) and Bayesian decision theory (BDT) is investigated in this study. The proposed model, called Bayesian stochastic dynamic programming (BSDP), includes inflow, storage, and forecast as state variables, describes streamflows with a discrete lag-1 Markov process, and uses BDT to incorporate new information by updating prior probabilities to posterior probabilities; it is used to generate optimal reservoir operating rules. This continuous updating can significantly reduce the effects of natural and forecast uncertainties in the model. In order to test the value of the BSDP model for generating optimal operating rules, real-time reservoir operation simulation models are constructed using 95 years of monthly historical inflows of the Gunpowder River to Loch Raven reservoir in Maryland. The rules generated by the BSDP model are applied in an operation simulation model and their performance is compared with an alternative stochastic dynamic programming (ASDP) model and a classical stochastic dynamic programming (SDP) model. BSDP differs from the other two models in the selection of state variables and in the way the transition probabilities are formed and updated.
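
    The prior-to-posterior updating step at the heart of BSDP is ordinary Bayes' rule. The numbers below are hypothetical (three inflow classes and an invented forecast-reliability row), chosen only to show the mechanics:

```python
import numpy as np

# Prior probabilities of three inflow classes (dry, normal, wet),
# e.g. one row of the lag-1 Markov transition matrix.
prior = np.array([0.3, 0.5, 0.2])

# Likelihood of receiving a "wet" forecast given each true class,
# i.e. one row of a hypothetical forecast-reliability matrix.
likelihood = np.array([0.1, 0.3, 0.8])

# Bayesian decision theory step: posterior is proportional to
# likelihood times prior, normalized to sum to one.
posterior = likelihood * prior / (likelihood * prior).sum()

# The expectation in the BSDP recursion is then taken under the
# posterior rather than the prior (illustrative future values).
value_next = np.array([0.0, 1.0, 1.5])
expected_value = posterior @ value_next
print(posterior, expected_value)
```

Receiving a wet forecast shifts probability mass toward the wet class, which is exactly how the model lets forecasts reduce the effect of inflow uncertainty.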

  4. Optimal recovery of linear operators in non-Euclidean metrics

    SciTech Connect

    Osipenko, K Yu

    2014-10-31

    The paper looks at problems concerning the recovery of operators from noisy information in non-Euclidean metrics. A number of general theorems are proved and applied to recovery problems for functions and their derivatives from the noisy Fourier transform. In some cases, a family of optimal methods is found, from which the methods requiring the least amount of original information are singled out. Bibliography: 25 titles.

  5. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    NASA Astrophysics Data System (ADS)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival, and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems, and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method that models the integrated operations with a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer linear programming scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof of concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft-mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. 
Additionally, a data driven analysis is

  6. 77 FR 7184 - Entergy Nuclear Indian Point 2, LLC; Entergy Nuclear Operations, Inc.; Indian Point Nuclear...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10


  7. Robust optimal sun-pointing control of a large solar power satellite

    NASA Astrophysics Data System (ADS)

    Wu, Shunan; Zhang, Kaiming; Peng, Haijun; Wu, Zhigang; Radice, Gianmarco

    2016-10-01

    The robust optimal sun-pointing control strategy for a large geostationary solar power satellite (SPS) is addressed in this paper. The SPS is considered as a huge rigid body, and the sun-pointing dynamics are first formulated in a state-space representation. The perturbation effects caused by gravity gradient, solar radiation pressure, and microwave reaction are investigated. To perform sun-pointing maneuvers, a periodically time-varying robust optimal LQR controller is designed to assess the pointing accuracy and the control inputs. It should be noted that, to reduce the pointing errors, a disturbance rejection technique is incorporated into the proposed LQR controller. A recursive algorithm is then proposed to solve for the optimal LQR control gain. Simulation results are finally provided to illustrate the performance of the proposed closed-loop system.
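
    The core LQR gain computation can be sketched on a toy discrete-time double-integrator pointing model; this stand-in system and its weights are illustrative, not the paper's periodically time-varying SPS dynamics.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double-integrator pointing model:
# state = [pointing error, error rate], control = torque command.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
Q = np.diag([10.0, 1.0])   # weight pointing error heavily
R = np.array([[1.0]])      # penalize control effort

# The discrete algebraic Riccati equation yields the optimal gain
# K = (R + B'PB)^{-1} B'PA for the feedback law u = -Kx.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop eigenvalues must lie strictly inside the unit circle.
eig = np.linalg.eigvals(A - B @ K)
print(np.abs(eig))
```

For the time-varying case in the paper, this single Riccati solve is replaced by the recursive algorithm the abstract describes.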

  8. Multi-objective nested algorithms for optimal reservoir operation

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Solomatine, Dimitri

    2016-04-01

    The optimal reservoir operation is in general a multi-objective problem, meaning that multiple objectives are to be considered at the same time. For solving multi-objective optimization problems there exists a large number of optimization algorithms, which generate a Pareto set of optimal solutions (typically containing a large number of them), or more precisely, its approximation. At the same time, due to the complexity and computational costs of solving full-fledged multi-objective optimization problems, some authors use a simplified approach generically called "scalarization". Scalarization transforms the multi-objective optimization problem into a single-objective optimization problem (or several of them), for example by (a) single-objective aggregated weighted functions, or (b) formulating some objectives as constraints. We are using approach (a). A user can decide how many single-search solutions will be generated, depending on the practical problem at hand, by choosing a particular number of weight vectors that are used to weigh the objectives. It is not guaranteed that these solutions are Pareto optimal, but they can be treated as a reasonably good and practically useful approximation of a Pareto set, albeit a small one. It has to be mentioned that the weighted-sum approach has known shortcomings, because linear scalar weights will fail to find Pareto-optimal policies that lie in the concave region of the Pareto front. In this context the considered approach is implemented as follows: there are m sets of weights {w1i, …wni} (i runs from 1 to m), and n objectives applied to single-objective aggregated weighted-sum functions of nested dynamic programming (nDP), nested stochastic dynamic programming (nSDP) and nested reinforcement learning (nRL). 
By employing this approach of multi-objective optimization via a sequence of single-objective optimization searches, these algorithms acquire the multi-objective properties
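
    The weighted-sum scalarization loop can be sketched with two toy quadratic objectives standing in for the reservoir objectives; the objectives, weights, and solver here are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Two competing objectives (illustrative quadratic stand-ins):
# f1 penalizes deviation from an ideal release of 2, f2 from 5.
f1 = lambda x: (x - 2.0) ** 2
f2 = lambda x: (x - 5.0) ** 2

def scalarized_solution(w1, w2):
    """Solve the single-objective weighted-sum problem for one weight
    vector; sweeping many weight vectors approximates the Pareto set."""
    res = minimize(lambda x: w1 * f1(x[0]) + w2 * f2(x[0]), x0=[0.0])
    return res.x[0]

weights = [(w, 1.0 - w) for w in np.linspace(0.05, 0.95, 7)]
pareto = [scalarized_solution(w1, w2) for w1, w2 in weights]
print(pareto)  # solutions sweep between the two single-objective optima
```

As the abstract cautions, this linear-weights sweep cannot reach Pareto points on a concave portion of the front; here the front is convex, so the sweep covers it.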

  9. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate-model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control, toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system, is proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and specifies whether invariant manifolds or low-thrust control dominates in each segment. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with those obtained using direct multi-objective optimization methods, while its computational workload is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  10. Optimal reservoir operation policies using novel nested algorithms

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to (1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, (2) deviations from the target minimum and maximum reservoir water levels, and (3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which, in combination with the release discretization required for meeting the demands of downstream users, leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: (1) DP, SDP or RL and (2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: (1) the Simplex method for linear allocation problems, and (2) the quadratic Knapsack method in the case of nonlinear problems. The novel idea is to include the nested
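
    The inner (nested) allocation step for the linear case can be sketched as a small LP: split a fixed release among downstream users to maximize priority-weighted deliveries (equivalently, minimize weighted deficits). The demands, priorities, and release value are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative nested allocation step: split a fixed release among
# three users -- the linear case the text solves with Simplex.
demands = np.array([30.0, 20.0, 10.0])
weights = np.array([3.0, 2.0, 1.0])   # user priorities
release = 45.0

# Variables: allocations a_i with 0 <= a_i <= demand_i and
# sum(a) <= release; minimizing -w.a maximizes weighted delivery,
# i.e. minimizes the priority-weighted deficits.
res = linprog(c=-weights,
              A_ub=np.ones((1, 3)),
              b_ub=[release],
              bounds=[(0.0, d) for d in demands],
              method="highs")
alloc = res.x
print(alloc)   # high-priority users are served first
```

In the nested algorithms this solve sits inside every state transition of the outer DP/SDP/RL loop, which is what keeps the outer state space small.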

  11. Optimization of Maneuver Execution for Landsat-7 Routine Operations

    NASA Technical Reports Server (NTRS)

    Cox, E. Lucien, Jr.; Bauer, Frank H. (Technical Monitor)

    2000-01-01

    Multiple mission constraints were satisfied during a lengthy, strategic ascent phase. Once routine operations begin, the ongoing concern of maintaining mission requirements becomes an immediate priority. The Landsat-7 mission has a tight longitude control box, and its Earth imaging requires sub-satellite descending nodal equator crossing times to occur in a narrow 30-minute range fifteen (15) times daily. Operationally, spacecraft maneuvers must be executed properly to maintain mission requirements. The paper will discuss the importance of optimizing the altitude-raising and plane-change maneuvers, amidst known constraints, to satisfy requirements throughout the mission lifetime. Emphasis will be placed not only on maneuver size and frequency but also on changes in orbital elements that impact maneuver execution decisions. Any associated trade-offs arising from operations contingencies will be discussed as well. Results of actual altitude and plane-change maneuvers are presented to clarify actions taken.

  12. Parametric Optimization of Some Critical Operating System Functions--An Alternative Approach to the Study of Operating Systems Design

    ERIC Educational Resources Information Center

    Sobh, Tarek M.; Tibrewal, Abhilasha

    2006-01-01

    Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…

  13. 76 FR 79066 - Drawbridge Operation Regulation; Escatawpa River, Moss Point, MS

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-21

    ... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulation; Escatawpa River, Moss Point, MS... of the Mississippi Export Railroad Company swing bridge across the Escatawpa River, mile 3.0, at Moss... operating schedule for the swing span bridge across Escatawpa River, mile 3.0, at Moss Point, Jackson...

  14. Determination of the Optimal Operating Parameters for Jefferson Laboratory's Cryogenic Cold Compressor Systems

    SciTech Connect

    Joe D. Wilson, Jr.

    2003-04-01

    The technology of Jefferson Laboratory's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) and Free Electron Laser (FEL) requires cooling from one of the world's largest 2K helium refrigerators known as the Central Helium Liquefier (CHL). The key characteristic of CHL is the ability to maintain a constant low vapor pressure over the large liquid helium inventory using a series of five cold compressors. The cold compressor system operates with a constrained discharge pressure over a range of suction pressures and mass flows to meet the operational requirements of CEBAF and FEL. The research topic is the prediction of the most thermodynamically efficient conditions for the system over its operating range of mass flows and vapor pressures with minimum disruption to JLab operations. The research goal is to find the operating points for each cold compressor for optimizing the overall system at any given flow and vapor pressure.

  15. Robust Optimization of Fixed Points of Nonlinear Discrete Time Systems with Uncertain Parameters

    NASA Astrophysics Data System (ADS)

    Kastsian, Darya; Monnigmann, Martin

    2010-01-01

    This contribution extends the normal vector method for the optimization of parametrically uncertain dynamical systems to a general class of nonlinear discrete time systems. Essentially, normal vectors are used to state constraints on dynamical properties of fixed points in the optimization of discrete time dynamical systems. In a typical application of the method, a technical dynamical system is optimized with respect to an economic profit function, while the normal vector constraints are used to guarantee the stability of the optimal fixed point. We derive normal vector systems for flip, fold, and Neimark-Sacker bifurcation points, because these bifurcation points constitute the stability boundary of a large class of discrete time systems. In addition, we derive normal vector systems for a related type of critical point that can be used to ensure a user-specified disturbance rejection rate in the optimization of parametrically uncertain systems. We illustrate the method by applying it to the optimization of a discrete time supply chain model and a discretized fermentation process model.

  16. 75 FR 66308 - Drawbridge Operation Regulation; New Jersey Intracoastal Waterway (NJICW), Point Pleasant Canal, NJ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-28

    ... (NJICW), Point Pleasant Canal, NJ AGENCY: Coast Guard, DHS. ACTION: Notice of temporary deviation from... regulations governing the operation of the Route 88/Veterans Memorial Bridge across Point Pleasant Canal, at... Canal along the NJICW, in Point Pleasant, NJ. The bridge has a vertical clearance in the closed...

  17. Point-to-Point! Validation of the Small Aircraft Transportation System Higher Volume Operations Concept

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.

    2006-01-01

    Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four phase building-block validation and verification process included multiple elements ranging from formal analysis of HVO procedures to flight test, to full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).

  18. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
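The annealing loop described in the abstract can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the candidate grid is hypothetical, and a simple mean-nearest-distance coverage objective replaces the kriging-based prediction error used in the paper.

```python
import math
import random

def coverage_cost(points, chosen):
    """Mean distance from every candidate point to its nearest retained point
    (a stand-in for the spatial prediction error used in the paper)."""
    return sum(min(math.dist(p, points[c]) for c in chosen) for p in points) / len(points)

def anneal_sample_points(points, k, steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing over k-subsets of candidate sample points."""
    rng = random.Random(seed)
    chosen = set(rng.sample(range(len(points)), k))
    cost = coverage_cost(points, chosen)
    t = t0
    for _ in range(steps):
        # neighbour move: swap one retained point for one discarded point
        out = rng.choice(sorted(chosen))
        cand = rng.choice([i for i in range(len(points)) if i not in chosen])
        new = (chosen - {out}) | {cand}
        new_cost = coverage_cost(points, new)
        # always accept improvements; accept worse moves with Boltzmann probability
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / t):
            chosen, cost = new, new_cost
        t *= cooling
    return chosen, cost

# hypothetical 6x6 grid of candidate soil sample locations
grid = [(x, y) for x in range(6) for y in range(6)]
chosen, cost = anneal_sample_points(grid, k=4)
```

The swap move keeps the subset size fixed, mirroring the paper's goal of retaining a reduced but well-distributed set of monitoring points.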

  19. Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.

    2006-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.

  20. Optimal Cutoff Points of Anthropometric Parameters to Identify High Coronary Heart Disease Risk in Korean Adults

    PubMed Central

    2016-01-01

    Several published studies have reported the need to change the cutoff points of anthropometric indices for obesity. We therefore conducted a cross-sectional study to estimate anthropometric cutoff points predicting high coronary heart disease (CHD) risk in Korean adults. We analyzed the Korean National Health and Nutrition Examination Survey data from 2007 to 2010. A total of 21,399 subjects aged 20 to 79 yr were included in this study (9,204 men and 12,195 women). We calculated the 10-yr Framingham coronary heart disease risk score for all individuals. We then estimated receiver-operating characteristic (ROC) curves for body mass index (BMI), waist circumference, and waist-to-height ratio to predict a 10-yr CHD risk of 20% or more. For sensitivity analysis, we conducted the same analysis for a 10-yr CHD risk of 10% or more. For a CHD risk of 20% or more, the area under the curve of waist-to-height ratio was the highest, followed by waist circumference and BMI. The optimal cutoff points in men and women were 22.7 kg/m2 and 23.3 kg/m2 for BMI, 83.2 cm and 79.7 cm for waist circumference, and 0.50 and 0.52 for waist-to-height ratio, respectively. In sensitivity analysis, the results were the same as those reported above except for BMI in women. Our results support the re-classification of anthropometric indices and suggest the clinical use of waist-to-height ratio as a marker for obesity in Korean adults. PMID:26770039

  1. Optimal Cutoff Points of Anthropometric Parameters to Identify High Coronary Heart Disease Risk in Korean Adults.

    PubMed

    Kim, Sang Hyuck; Choi, Hyunrim; Won, Chang Won; Kim, Byung-Sung

    2016-01-01

    Several published studies have reported the need to change the cutoff points of anthropometric indices for obesity. We therefore conducted a cross-sectional study to estimate anthropometric cutoff points predicting high coronary heart disease (CHD) risk in Korean adults. We analyzed the Korean National Health and Nutrition Examination Survey data from 2007 to 2010. A total of 21,399 subjects aged 20 to 79 yr were included in this study (9,204 men and 12,195 women). We calculated the 10-yr Framingham coronary heart disease risk score for all individuals. We then estimated receiver-operating characteristic (ROC) curves for body mass index (BMI), waist circumference, and waist-to-height ratio to predict a 10-yr CHD risk of 20% or more. For sensitivity analysis, we conducted the same analysis for a 10-yr CHD risk of 10% or more. For a CHD risk of 20% or more, the area under the curve of waist-to-height ratio was the highest, followed by waist circumference and BMI. The optimal cutoff points in men and women were 22.7 kg/m(2) and 23.3 kg/m(2) for BMI, 83.2 cm and 79.7 cm for waist circumference, and 0.50 and 0.52 for waist-to-height ratio, respectively. In sensitivity analysis, the results were the same as those reported above except for BMI in women. Our results support the re-classification of anthropometric indices and suggest the clinical use of waist-to-height ratio as a marker for obesity in Korean adults.
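The cutoff estimation above can be illustrated with a small sketch: given marker values and binary high-risk labels, choose the ROC operating point that maximizes Youden's J (sensitivity + specificity − 1). The waist-to-height ratios and labels below are invented numbers for illustration, not the KNHANES sample.

```python
def youden_cutoff(values, labels):
    """Return (best_cutoff, best_j) for binary labels (1 = high CHD risk).
    A subject is called positive when value >= cutoff."""
    best_cutoff, best_j = None, -1.0
    positives = sum(labels)
    negatives = len(labels) - positives
    for cutoff in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 0)
        sens = tp / positives
        spec = tn / negatives
        j = sens + spec - 1.0            # Youden's J at this operating point
        if j > best_j:
            best_cutoff, best_j = cutoff, j
    return best_cutoff, best_j

# hypothetical waist-to-height ratios with high-risk labels
whtr = [0.42, 0.45, 0.48, 0.50, 0.52, 0.55, 0.58, 0.61]
risk = [0,    0,    0,    0,    1,    1,    1,    1]
cutoff, j = youden_cutoff(whtr, risk)
```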

  2. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs, and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg/L after methanol addition, influent C/N = 5.10.

  3. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs, and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg/L after methanol addition, influent C/N = 5.10. PMID:25826934

  4. Optimizing Watershed Management by Coordinated Operation of Storing Facilities

    NASA Astrophysics Data System (ADS)

    Anghileri, Daniela; Castelletti, Andrea; Pianosi, Francesca; Soncini-Sessa, Rodolfo; Weber, Enrico

    2013-04-01

Water storing facilities in a watershed are very often operated independently of one another to meet specific operating objectives, with no information sharing among the operators. This uncoordinated approach might result in upstream-downstream disputes and conflicts among different water users, or in inefficiencies in watershed management when viewed from the standpoint of an ideal central decision-maker. In this study, we propose a two-step approach to designing coordination mechanisms at the watershed scale, with the ultimate goal of enlarging the space for negotiated agreements between competing uses and improving overall system efficiency. First, we compute the multi-objective centralized solution to assess the maximum potential benefits of a shift from a sector-by-sector to an ideal fully coordinated perspective. Then, we analyze the Pareto-optimal operating policies to gain insight into suitable strategies to foster cooperation or impose coordination among the involved agents. The approach is demonstrated on an Alpine watershed in Italy where a long-lasting conflict exists between upstream hydropower production and downstream irrigation water users. Results show that a coordination mechanism can be designed that drives the current uncoordinated structure towards the performance of the ideal centralized operation.
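Step one of the approach, computing the centralized multi-objective solution, can be sketched by weighted-sum scalarization over a single release decision. The two concave benefit functions below are hypothetical stand-ins for the hydropower and irrigation objectives, not the paper's watershed model.

```python
import math

hydro = lambda x: math.sqrt(x)           # benefit of releasing fraction x for hydropower
irrigation = lambda x: math.sqrt(1 - x)  # benefit of keeping fraction 1-x for irrigation

def pareto_front(weights, grid_n=101):
    """Trace Pareto-optimal trade-offs by maximizing w*J1 + (1-w)*J2 over a
    grid of release fractions, one point per weight."""
    xs = [i / (grid_n - 1) for i in range(grid_n)]
    front = []
    for w in weights:
        best = max(xs, key=lambda x: w * hydro(x) + (1 - w) * irrigation(x))
        front.append((hydro(best), irrigation(best)))
    return front

front = pareto_front([k / 10 for k in range(11)])
```

Sweeping the weight from 0 to 1 moves the solution from the pure-irrigation to the pure-hydropower extreme, tracing the trade-off curve a central decision-maker would negotiate over.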

  5. Optimization of shared autonomy vehicle control architectures for swarm operations.

    PubMed

    Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R

    2010-08-01

    The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations.

  6. Excited meson radiative transitions from lattice QCD using variationally optimized operators

    SciTech Connect

    Shultz, Christian J.; Dudek, Jozef J.; Edwards, Robert G.

    2015-06-02

We explore the use of 'optimized' operators, designed to interpolate only a single meson eigenstate, in three-point correlation functions with a vector-current insertion. These operators are constructed as linear combinations in a large basis of meson interpolating fields using a variational analysis of matrices of two-point correlation functions. After performing such a determination at both zero and non-zero momentum, we compute three-point functions and are able to study radiative transition matrix elements featuring excited-state mesons. The required two- and three-point correlation functions are efficiently computed using the distillation framework, in which there is a factorization between quark propagation and operator construction, allowing a large number of meson operators of definite momentum to be considered. We illustrate the method with a calculation using anisotropic lattices having three flavors of dynamical quark, all tuned to the physical strange quark mass, considering form factors and transitions of pseudoscalar and vector meson excitations. Finally, the dependence on photon virtuality for a number of form factors and transitions is extracted, and some discussion of excited-state phenomenology is presented.

  7. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, so comparing two CADe systems involves multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods both in terms of FWER and power.
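For reference, the two classical baseline procedures compared in the abstract can be sketched as below. The correlation-adjusted step-up method itself is not reproduced, and the p-values are invented for illustration.

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni: reject H0_i when p_i <= alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def hochberg_step_up(p_values, alpha=0.05):
    """Hochberg's step-up: scan from the largest p-value down; the first
    hypothesis whose p-value clears its threshold is rejected together with
    all hypotheses having smaller p-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])   # ascending p-values
    reject = [False] * m
    for rank in range(m, 0, -1):          # rank = 1..m in ascending order
        idx = order[rank - 1]
        if p_values[idx] <= alpha / (m - rank + 1):
            for r in range(rank):
                reject[order[r]] = True
            break
    return reject

# invented p-values from sensitivity comparisons at four operating points
pvals = [0.012, 0.013, 0.014, 0.800]
bonf = bonferroni(pvals)
hoch = hochberg_step_up(pvals)
```

On this example Bonferroni rejects only the smallest p-value, while the step-up procedure rejects three of the four, illustrating the power difference the study quantifies.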

  8. Applications of Optimal Building Energy System Selection and Operation

    SciTech Connect

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy

    2011-04-01

Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimization problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by describing three ongoing projects. The first is a public-access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.

  9. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
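Since the prediction method hinges on HMMs trained from tele-operation data, a minimal forward-algorithm sketch may help. The two-state model and its probabilities below are invented for illustration; they are not Robonaut's trained models.

```python
def forward(obs, pi, A, B):
    """Total probability of observation sequence `obs` under an HMM
    (pi: initial state distribution, A: transition matrix, B: emission
    matrix), summing over all hidden state paths."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(len(pi))) * B[s][o]
                 for s in range(len(pi))]
    return sum(alpha)

# hypothetical two-state model over a binary observation alphabet
pi = [0.6, 0.4]
A  = [[0.7, 0.3],
      [0.4, 0.6]]
B  = [[0.9, 0.1],     # state 0 mostly emits symbol 0
      [0.2, 0.8]]     # state 1 mostly emits symbol 1
p = forward([0, 0, 1], pi, A, B)
```

In a predictive interface, scores like `p` computed for each candidate task model indicate which task the operator is most likely performing.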

  10. Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.

    PubMed

    Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin

    2016-01-01

Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, due to its high energy content, environmentally friendly technology, and zero carbon emissions. In particular, conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production using an anaerobic mixed culture with different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H2/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H2/mol glucose) was obtained at an initial pH value of 6.4, T = 37 °C, and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield at all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H2 production) to lactic acid production (where simultaneous H2 production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates. PMID:26970800

  11. Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.

    PubMed

    Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin

    2016-01-01

Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, due to its high energy content, environmentally friendly technology, and zero carbon emissions. In particular, conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production using an anaerobic mixed culture with different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H2/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H2/mol glucose) was obtained at an initial pH value of 6.4, T = 37 °C, and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield at all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H2 production) to lactic acid production (where simultaneous H2 production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates.

  12. Optimizing and controlling earthmoving operations using spatial technologies

    NASA Astrophysics Data System (ADS)

    Alshibani, Adel

This thesis presents a model designed for optimizing, tracking, and controlling earthmoving operations. The proposed model utilizes Genetic Algorithms (GA), Linear Programming (LP), and spatial technologies, including Global Positioning Systems (GPS) and Geographic Information Systems (GIS), to support the management functions of the developed model. The model assists engineers and contractors in selecting near-optimum crew formations in the planning phase and during construction, using GA and LP supported by the Pathfinder Algorithm developed in a GIS environment. GA is used in conjunction with a set of rules developed to accelerate the optimization process and to avoid generating and evaluating hypothetical and unrealistic crew formations. LP is used to determine the quantities of earth to be moved from different borrow pits and placed at different landfill sites to meet project constraints and to minimize the cost of these earthmoving operations. GPS is used for onsite data collection and for tracking construction equipment in near real-time, while GIS is employed to automate data acquisition and to analyze the collected spatial data. The model is also capable of reconfiguring crew formations dynamically during the construction phase while site operations are in progress. The optimization of the crew formation considers: (1) construction time, (2) construction direct cost, or (3) construction total cost. The model is also capable of generating crew formations to meet, as closely as possible, specified time and/or cost constraints. In addition, the model supports tracking and reporting of project progress utilizing the earned-value concept and the project ratio method, with modifications that allow for more accurate forecasting of project time and cost at set future dates and at completion. The model is capable of generating graphical and tabular reports. The developed model has been implemented in prototype software, using Object…
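The borrow-pit allocation described above is a classic transportation-type linear program. A toy version (illustrative assumption, not the thesis model) with two pits and two fill sites is shown below; with balanced supply and demand the problem has one degree of freedom, so it can be solved by enumeration rather than a full LP solver.

```python
supply = [30, 20]          # m^3 available at pits 1 and 2 (hypothetical)
demand = [25, 25]          # m^3 required at fill sites A and B (hypothetical)
cost = [[4, 6],            # unit haul cost, pit -> site
        [5, 3]]

def best_plan():
    """Enumerate feasible haul plans; x11 (pit 1 -> site A) determines the
    other three quantities via the supply/demand balance constraints."""
    best = None
    for x11 in range(0, supply[0] + 1):
        x12 = supply[0] - x11          # pit 1 -> site B
        x21 = demand[0] - x11          # pit 2 -> site A
        x22 = demand[1] - x12          # pit 2 -> site B
        if min(x12, x21, x22) < 0:
            continue                   # infeasible allocation
        c = (cost[0][0] * x11 + cost[0][1] * x12
             + cost[1][0] * x21 + cost[1][1] * x22)
        if best is None or c < best[0]:
            best = (c, ((x11, x12), (x21, x22)))
    return best

total_cost, plan = best_plan()
```

The optimum sends pit 2's material to its cheap destination (site B) and covers site A from pit 1, which is the behaviour an LP solver would reproduce at larger scale.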

  13. Optimization of Hydroacoustic Equipment Deployments at Lookout Point and Cougar Dams, Willamette Valley Project, 2010

    SciTech Connect

    Johnson, Gary E.; Khan, Fenton; Ploskey, Gene R.; Hughes, James S.; Fischer, Eric S.

    2010-08-18

    The goal of the study was to optimize performance of the fixed-location hydroacoustic systems at Lookout Point Dam (LOP) and the acoustic imaging system at Cougar Dam (CGR) by determining deployment and data acquisition methods that minimized structural, electrical, and acoustic interference. The general approach was a multi-step process from mount design to final system configuration. The optimization effort resulted in successful deployments of hydroacoustic equipment at LOP and CGR.

  14. Two Point Exponential Approximation Method for structural optimization of problems with frequency constraints

    NASA Technical Reports Server (NTRS)

    Fadel, G. M.

    1991-01-01

The two point exponential approximation method was introduced by Fadel et al. (1990) and tested on structural optimization problems with stress and displacement constraints. Results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
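A hedged one-dimensional sketch of the two point exponential approximation follows; the form is reconstructed from the published method (symbols and details may differ from the paper). A Taylor-like expansion is built at the current design point, with a per-variable exponent fitted so the approximation also matches the gradient at the previous design point.

```python
import math

def tpea_1d(f, df, x0, x1):
    """Return an approximation of f built at the current point x1, with the
    exponent p fitted from the gradient history at the previous point x0
    (both points must be positive)."""
    p = 1.0 + math.log(df(x0) / df(x1)) / math.log(x0 / x1)
    return lambda x: f(x1) + df(x1) * (x1 / p) * ((x / x1) ** p - 1.0)

# For f(x) = x^a the fitted exponent comes out as p = a, making the
# approximation exact; stress-type constraints (~ 1/area) behave this way.
f  = lambda x: 1.0 / x          # hypothetical constraint, a = -1
df = lambda x: -1.0 / x ** 2
approx = tpea_1d(f, df, x0=2.0, x1=3.0)
```

This recovery of reciprocal behaviour is why the method can track stress and frequency constraints more faithfully than a linear Taylor expansion between design iterations.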

  15. Optimal control problems with switching points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    An overview is presented of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with active dynamic pressure limit, which in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of the trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector valued control functions.

  16. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
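The MPP search referred to above is commonly carried out with the Hasofer-Lind/Rackwitz-Fiessler iteration; a minimal sketch is given below (an assumed textbook form, not the paper's code). In standard normal space, the iteration converges to the point on the limit state g(u) = 0 closest to the origin, whose distance is the reliability index beta.

```python
import math

def grad(g, u, h=1e-6):
    """Central-difference gradient of g at u."""
    return [(g([*u[:i], u[i] + h, *u[i + 1:]]) - g([*u[:i], u[i] - h, *u[i + 1:]])) / (2 * h)
            for i in range(len(u))]

def find_mpp(g, n, iters=50):
    """HL-RF iteration: repeatedly project onto the linearized limit state."""
    u = [0.0] * n
    for _ in range(iters):
        gu = g(u)
        dg = grad(g, u)
        norm2 = sum(d * d for d in dg)
        scale = (sum(d * x for d, x in zip(dg, u)) - gu) / norm2
        u = [scale * d for d in dg]
    beta = math.sqrt(sum(x * x for x in u))
    return u, beta

# hypothetical linear limit state: failure when 3 - (u1 + u2)/sqrt(2) < 0
g = lambda u: 3.0 - (u[0] + u[1]) / math.sqrt(2.0)
mpp, beta = find_mpp(g, 2)
```

With beta in hand, FORM approximates the failure probability as Phi(-beta), and the derivatives of the optimal u* with respect to design parameters yield the reliability sensitivities the paper derives.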

  17. Strategies for optimal operation of the tellurium electrowinning process

    SciTech Connect

    Broderick, G.; Handle, B.; Paschen, P.

    1999-02-01

Empirical models predicting the purity of electrowon tellurium have been developed using data from 36 pilot-plant trials. Based on these models, a numerical optimization of the process was performed to identify conditions which minimize the total contamination in Pb and Se while reducing electrical consumption per kilogram of electrowon tellurium. Results indicate that product quality can be maintained and even improved while operating at the much higher electroplating production rates obtained at high current densities. Using these same process settings, the electrical consumption of the process can be reduced by up to 10% by operating at midrange temperatures close to 50 °C. This is particularly attractive when waste heat is available at the plant to help preheat the electrolyte feed. When both Pb and Se are present as contaminants, the most energy-efficient strategy involves the use of a high current density, at a moderate temperature with high flow, for low concentrations of TeO2. If Pb is removed prior to the electrowinning process, the use of a low current density and low electrolyte feed concentration, while operating at a low temperature and moderate flow rates, provides the most significant reduction in Se codeposition.

  18. 78 FR 39018 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-28

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3 AGENCY: Nuclear Regulatory Commission. ACTION: Supplement to Final Supplement 38 to the Generic...

  19. Optimization of site layout for change of plant operation

    SciTech Connect

    Reuwer, S.M.; Kasperski, E.; Joseph, T.D.

    1995-12-31

Several of the Florida Power & Light (FPL) operating fossil power plants have undergone significant site layout changes as well as changes in plant operation. The FPL Fort Lauderdale Plant was repowered in 1992, using four (4) Westinghouse 501F combustion turbines rated at 158 MW each to repower two (2) existing steam turbines rated at 143 MW each. In 1991, a physical security fence separation occurred between Turkey Point Plant's fossil-fueled Units 1&2 and its nuclear-fueled Units 3&4. As a result of this separation, certain facilities common to both the nuclear side and fossil side of the plant required relocating. Also, the Sanford and Manatee Plants were evaluated for the use of a new fuel as an alternative source. The Manatee Plant is currently in the licensing process for modifications to burn a new fuel, requiring expansion of back-end clean-up equipment, with additional staff to operate this equipment. In order to address these plant changes, site development studies were prepared for each plant to determine the suitability of the existing ancillary facilities to support the operational changes, and to make recommendations for facility improvement if found inadequate. A standardized process was developed for all of the site studies. This proved to be a comprehensive process and approach that gave FPL a successful result that all the various stakeholders bought into. The process was objectively based and focused, and got us to where we needed to be as quickly as possible. As a result, this paper details the outline and various methods developed to prepare a study following this process, which will ultimately provide the optimum site development plan for the changing plant operations.

  20. MANGO – Modal Analysis for Grid Operation: A Method for Damping Improvement through Operating Point Adjustment

    SciTech Connect

    Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.

    2010-10-18

    Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
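The damping estimation that ModeMeter-style tools perform can be illustrated on a synthetic ringdown signal. The sketch below is not the MANGO/ModeMeter algorithm itself (which also handles ambient PMU data); it simply recovers the damping ratio of a single invented 0.25 Hz inter-area-style mode from the logarithmic decrement of its oscillation peaks:

```python
import numpy as np

def damping_ratio_from_ringdown(signal):
    """Estimate the damping ratio of a single decaying mode from a
    ringdown record via the logarithmic decrement of its peaks."""
    # Simple three-point local-maximum test to locate the crest peaks.
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
    n_cycles = len(peaks) - 1
    # Log decrement per cycle, then convert to a damping ratio.
    delta = np.log(signal[peaks[0]] / signal[peaks[-1]]) / n_cycles
    return delta / np.sqrt(4 * np.pi**2 + delta**2)

# Synthetic mode: 0.25 Hz with 5% damping (invented parameters).
zeta_true, f = 0.05, 0.25
wn = 2 * np.pi * f
t = np.arange(0.0, 60.0, 0.02)
x = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)
zeta_est = damping_ratio_from_ringdown(x)
```

A low `zeta_est` (e.g., below a few percent) would be the trigger for MANGO-style operating-point adjustments such as generation or load changes.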

  1. Decision Support Systems to Optimize the Operational Efficiency of Dams and Maintain Regulatory Compliance Criteria

    NASA Astrophysics Data System (ADS)

    Parkinson, S.; Morehead, M. D.; Conner, J. T.; Frye, C.

    2012-12-01

Increasing demand for water and electricity, increasing variability in weather and climate, and stricter requirements for riverine ecosystem health have put ever more stringent demands on hydropower operations. Dam operators are being impacted by these constraints and are looking for methods to meet these requirements while retaining the benefits hydropower offers. Idaho Power owns and operates 17 hydroelectric plants in Idaho and Oregon, which have both Federal and State compliance requirements. Idaho Power has started building Decision Support Systems (DSS) to aid the hydroelectric plant operators in maximizing hydropower operational efficiency while meeting regulatory compliance constraints. Regulatory constraints on dam operations include minimum in-stream flows, maximum ramp rate of river stage, reservoir volumes, and reservoir ramp rate for draft and fill. From the hydroelectric standpoint, the desire is to vary the plant discharge (ramping) such that generation matches electricity demand (load following), but ramping is limited by the regulatory requirements. Idaho Power desires DSS that integrate real-time and historic data, simulate the river's behavior from the hydroelectric plants downstream to the compliance measurement point, and present the information in an easily understandable display that allows the operators to make informed decisions. Creating DSS like these poses a number of scientific and technical challenges. Real-time data are inherently noisy, and automated data-cleaning routines are required to filter the data. The DSS must inform the operators when incoming data are outside of predefined bounds. Complex river morphologies can make the timing and shape of a discharge change traveling downstream from a power plant nearly impossible to represent with a predefined lookup table. These complexities require very fast hydrodynamic models of the river system that simulate river characteristics (e.g., stage, discharge) at the downstream compliance point.
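The automated data-cleaning step mentioned in the abstract can be sketched as a rolling-median despike filter. The function name, window size, and the synthetic stage record below are hypothetical; an operational system would also flag out-of-bounds values for the operators:

```python
import numpy as np

def despike(series, window=5, k=3.0):
    """Replace samples deviating more than k robust sigmas from a
    rolling median; a simple automated cleaning pass for gauge data."""
    x = np.asarray(series, float)
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    # Rolling median centered on each sample.
    med = np.array([np.median(padded[i:i + window]) for i in range(len(x))])
    resid = x - med
    # Robust scale via the median absolute deviation (fallback if zero).
    mad = np.median(np.abs(resid)) or 1.0
    bad = np.abs(resid) > k * 1.4826 * mad
    cleaned = x.copy()
    cleaned[bad] = med[bad]       # substitute the local median for spikes
    return cleaned, bad

# Synthetic river-stage trace with one sensor spike.
stage = np.sin(np.linspace(0, 6, 200))
spiked = stage.copy()
spiked[50] += 6.0
cleaned, bad = despike(spiked)
```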

  2. An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification

    ERIC Educational Resources Information Center

    Wang, Jun; Samal, Ashok; Rong, Panying; Green, Jordan R.

    2016-01-01

    Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of…

  3. Optimization of a point-focusing, distributed receiver solar thermal electric system

    NASA Technical Reports Server (NTRS)

    Pons, R. L.

    1979-01-01

    This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.

  4. Operational optimization and real-time control of fuel-cell systems

    NASA Astrophysics Data System (ADS)

    Hasikos, J.; Sarimveis, H.; Zervas, P. L.; Markatos, N. C.

Fuel cells are a rapidly evolving technology with applications in many industries, including transportation and both portable and stationary power generation. The viability, efficiency and robustness of fuel-cell systems depend strongly on optimization and control of their operation. This paper presents the development of an integrated optimization and control tool for Proton Exchange Membrane Fuel-Cell (PEMFC) systems. Using a detailed simulation model, a database is generated first, which contains steady-state values of the manipulated and controlled variables over the full operational range of the fuel-cell system. In a second step, the database is utilized for producing Radial Basis Function (RBF) neural network "meta-models". In the third step, a Non-Linear Programming (NLP) problem is formulated that takes into account the constraints and limitations of the system and minimizes the consumption of hydrogen for a given value of power demand. Based on the formulation and solution of the NLP problem, a look-up table is developed, containing the optimal values of the system variables for any possible value of power demand. In the last step, a Model Predictive Control (MPC) methodology is designed for the optimal control of the system response to successive set-point changes of power demand. The efficiency of the produced MPC system is illustrated through a number of simulations, which show that a successful dynamic closed-loop behaviour can be achieved while at the same time the consumption of hydrogen is minimized.
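The database-to-lookup-table pipeline can be sketched with a toy steady-state model. The stack equations below are invented stand-ins (the paper uses a detailed PEMFC simulation plus RBF meta-models and an NLP solver); the sketch only shows how a minimum-hydrogen operating point is tabulated from steady-state data for a given power demand:

```python
import numpy as np

# Toy steady-state stack model (hypothetical): hydrogen consumption and
# net power as functions of current density j and air stoichiometry s.
def h2_consumption(j, s):
    return j * (1.0 + 0.05 * s)                        # rises with both inputs

def net_power(j, s):
    return 100 * j * (0.9 - 0.1 * j) * (1 - 0.5 / s)   # concave in j

# Steps 1-2: database of steady states over the operating envelope.
J, S = np.meshgrid(np.linspace(0.1, 4.0, 200), np.linspace(1.5, 3.0, 50))
P, H = net_power(J, S), h2_consumption(J, S)

# Steps 3-4: for a given power demand, pick the feasible steady state
# with the lowest hydrogen consumption -> one row of the look-up table.
def lookup(p_demand, tol=2.0):
    feasible = np.abs(P - p_demand) < tol
    if not feasible.any():
        raise ValueError("demand outside operating range")
    k = np.argmin(np.where(feasible, H, np.inf))
    return J.flat[k], S.flat[k]

j_opt, s_opt = lookup(150.0)
```

The MPC layer would then track these tabulated set points as the power demand changes.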

  5. 47 CFR 90.473 - Operation of internal transmitter control systems through licensed fixed control points.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

§ 90.473 Operation of internal transmitter control systems through licensed fixed control points (Subpart: Transmitter Control — Internal Transmitter Control Systems). An internal transmitter control system may be...

  6. To the point: teaching the obstetrics and gynecology medical student in the operating room.

    PubMed

    Hampton, Brittany S; Craig, LaTasha B; Abbott, Jodi F; Buery-Joyner, Samantha D; Dalrymple, John L; Forstein, David A; Hopkins, Laura; McKenzie, Margaret L; Page-Ramsey, Sarah M; Pradhan, Archana; Wolf, Abigail; Graziano, Scott C

    2015-10-01

    This article, from the "To the Point" series that is prepared by the Association of Professors of Gynecology and Obstetrics Undergraduate Medical Education Committee, is a review of considerations for teaching the medical student in the operating room during the obstetrics/gynecology clerkship. The importance of the medical student operating room experience and barriers to learning in the operating room are discussed. Specific considerations for the improvement of medical student learning and operating room experience, which include the development of operating room objectives and specific curricula, an increasing awareness regarding role modeling, and faculty development, are reviewed.

  7. Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy

    NASA Astrophysics Data System (ADS)

    Magee, T. M.; Clement, M. A.; Zagona, E. A.

    2012-12-01

Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example, target environmental flow levels may not be satisfied if that would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before optimizing power generation. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, it also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent with increased wind penetration levels require an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location. Wind factors include wind penetration level

  8. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    NASA Astrophysics Data System (ADS)

    Sue-Ann, Goh; Ponnambalam, S. G.

This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply Chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyer. All of these parameters depend upon the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
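A minimal global-best PSO of the kind proposed can be sketched on a stand-in objective. The real channel-profit function couples sales quantities, prices and revenue sharing; the concave quadratic surrogate below (one peak sales quantity per buyer) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_profit(q):
    """Hypothetical concave channel-profit surrogate for 3 buyers:
    each buyer's contribution peaks at a different sales quantity."""
    peaks = np.array([40.0, 60.0, 80.0])
    return -np.sum((q - peaks) ** 2, axis=-1)

# Standard global-best PSO with inertia and cognitive/social terms.
n_particles, dim, iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5
x = rng.uniform(0, 100, (n_particles, dim))    # sales-quantity candidates
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), channel_profit(x)
g = pbest[np.argmax(pbest_val)].copy()         # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, 0, 100)                 # keep quantities in bounds
    val = channel_profit(x)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    g = pbest[np.argmax(pbest_val)].copy()
```

On this surrogate the swarm should settle near the per-buyer peak quantities.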

  9. Performing a scatterv operation on a hierarchical tree network optimized for collective operations

    SciTech Connect

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Performing a scatterv operation on a hierarchical tree network optimized for collective operations including receiving, by the scatterv module installed on the node, from a nearest neighbor parent above the node a chunk of data having at least a portion of data for the node; maintaining, by the scatterv module installed on the node, the portion of the data for the node; determining, by the scatterv module installed on the node, whether any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child; and sending, by the scatterv module installed on the node, those portions of data to the nearest neighbor child if any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child.
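The claimed scatterv logic can be sketched by simulating the tree in memory: each node keeps its own portion and forwards to each child only the portions destined for that child's subtree. The node numbering and dict-based "network" are stand-ins for real compute nodes and links:

```python
# Hypothetical 6-node tree: node -> list of nearest-neighbor children.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}

def subtree(node):
    """All nodes at or below `node` in the tree."""
    out = [node]
    for c in children[node]:
        out.extend(subtree(c))
    return out

received = {}

def scatterv(node, chunk):
    # Maintain the portion of the data addressed to this node...
    received[node] = chunk[node]
    # ...and send each child only the portions for its subtree.
    for c in children[node]:
        scatterv(c, {n: chunk[n] for n in subtree(c)})

# The root starts with every node's (possibly unequal) portion.
data = {n: f"portion-{n}" for n in range(6)}
scatterv(0, data)
```

After the call, every node holds exactly its own portion, and no link ever carried data for nodes outside the receiving subtree.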

  10. Multiplicative approximations, optimal hypervolume distributions, and the choice of the reference point.

    PubMed

    Friedrich, Tobias; Neumann, Frank; Thyssen, Christian

    2015-01-01

    Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing [Formula: see text] points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations. PMID:24654679
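For the bi-objective case studied here, the hypervolume with respect to a reference point reduces to a sweep over the sorted front. A minimal sketch (minimization convention assumed; the front values are invented), which also shows how the choice of reference point changes the indicator value:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a bi-objective point set (minimization),
    relative to the reference point `ref`."""
    pts = sorted(points)                 # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                   # point contributes a new strip
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))
hv_far = hypervolume_2d(front, ref=(10.0, 10.0))
```

Moving the reference point outward (`hv_far`) inflates the contribution of the extreme points, which is exactly why the paper examines the reference-point choice.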

  11. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
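Christianson-style reverse accumulation can be demonstrated on a scalar fixed-point problem: after the forward solve, the adjoint is itself a fixed-point iteration that uses only the converged state (the "identical, as opposed to varying, matrices" property noted above). The contraction map below is invented for illustration:

```python
import math

def f(x, theta):                 # contraction in x (|df/dx| <= 0.5)
    return 0.5 * math.cos(x) + theta

def f_x(x, theta):               # partial derivative w.r.t. the state
    return -0.5 * math.sin(x)

def f_theta(x, theta):           # partial derivative w.r.t. the parameter
    return 1.0

def fixed_point(theta, tol=1e-12):
    x = 0.0
    while True:
        x_new = f(x, theta)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

def grad_reverse_accumulation(theta, w, tol=1e-12):
    """Reverse accumulation: iterate the adjoint fixed point
    lam = w + f_x(x*)^T lam, linearized at the converged x* only,
    then push the result through f_theta."""
    x = fixed_point(theta)
    lam = 0.0
    while True:
        lam_new = w + f_x(x, theta) * lam
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return f_theta(x, theta) * lam, x

theta = 0.3
x_star = fixed_point(theta)
g, _ = grad_reverse_accumulation(theta, w=1.0)   # d x*/d theta
```

Because only the converged state enters the adjoint loop, nothing from the forward iteration history needs to be stored, which is the source of the memory savings reported in the paper.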

  12. Road centerline extraction from airborne LiDAR point cloud based on hierarchical fusion and optimization

    NASA Astrophysics Data System (ADS)

    Hui, Zhenyang; Hu, Youjian; Jin, Shuanggen; Yevenyo, Yao Ziggah

    2016-08-01

    Road information acquisition is an important part of city informatization construction. Airborne LiDAR provides a new means of acquiring road information. However, the existing road extraction methods using LiDAR point clouds always decide the road intensity threshold based on experience, which cannot obtain the optimal threshold to extract a road point cloud. Moreover, these existing methods are deficient in removing the interference of narrow roads and several attached areas (e.g., parking lot and bare ground) to main roads extraction, thereby imparting low completeness and correctness to the city road network extraction result. Aiming at resolving the key technical issues of road extraction from airborne LiDAR point clouds, this paper proposes a novel method to extract road centerlines from airborne LiDAR point clouds. The proposed approach is mainly composed of three key algorithms, namely, Skewness balancing, Rotating neighborhood, and Hierarchical fusion and optimization (SRH). The skewness balancing algorithm used for the filtering was adopted as a new method for obtaining an optimal intensity threshold such that the "pure" road point cloud can be obtained. The rotating neighborhood algorithm on the other hand was developed to remove narrow roads (corridors leading to parking lots or sidewalks), which are not the main roads to be extracted. The proposed hierarchical fusion and optimization algorithm caused the road centerlines to be unaffected by certain attached areas and ensured the road integrity as much as possible. The proposed method was tested using the Vaihingen dataset. The results demonstrated that the proposed method can effectively extract road centerlines in a complex urban environment with 91.4% correctness and 80.4% completeness.
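The skewness-balancing idea for choosing an intensity threshold can be sketched as peeling off the highest intensities until the sample skewness drops to zero or below. This is a simplification of the paper's filtering algorithm, run here on synthetic intensities (a symmetric road-like bulk plus a bright tail):

```python
import numpy as np

def skewness(a):
    a = np.asarray(a, float)
    m, s = a.mean(), a.std()
    return ((a - m) ** 3).mean() / s ** 3

def skewness_balancing_threshold(intensity):
    """Remove the largest values one by one until the sample skewness
    is no longer positive; the maximum of the retained sample is the
    intensity threshold (a sketch, not the paper's exact algorithm)."""
    vals = np.sort(intensity)
    while len(vals) > 3 and skewness(vals) > 0:
        vals = vals[:-1]
    return vals[-1]

rng = np.random.default_rng(1)
bulk = rng.normal(50.0, 5.0, 2000)       # near-symmetric main population
tail = rng.uniform(100.0, 150.0, 100)    # high-intensity outliers
thresh = skewness_balancing_threshold(np.concatenate([bulk, tail]))
```

The threshold lands at the top of the symmetric bulk rather than at an experience-based cutoff, which is the point of replacing manual threshold selection.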

  13. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    SciTech Connect

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; Luskin, Mitchell

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  14. Optimizing Wellfield Operation in a Variable Power Price Regime.

    PubMed

    Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus

    2016-01-01

Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets, and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated the EFP of pumped water as a function of wellfield pumping rate (EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a Stochastic Dynamic Programming (SDP) framework to minimize the total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period, based on a time series of observed prices on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between the scenario and a constant-rate pumping benchmark. Minor savings of up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings of up to 40%. Key factors determining the potential cost savings obtainable by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate and the capacity of available storage facilities.
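The cost-minimizing dispatch can be sketched as a small backward dynamic program over discretized storage levels. Prices, demand, the flat EFP and all limits below are toy numbers (the study uses stochastic DP over a 2-year horizon); the sketch shows how pumping is shifted into cheap hours subject to storage and demand-fulfilment constraints:

```python
import numpy as np

prices = np.array([30.0, 10.0, 50.0, 20.0])   # $/MWh for four hours
demand = np.array([60.0, 60.0, 60.0, 60.0])   # m^3 withdrawn each hour
efp = 0.002                                    # MWh per m^3 (flat EFP-Q)
pump_max, store_max = 120.0, 200.0             # m^3/h, m^3

levels = np.arange(0.0, store_max + 1, 20.0)   # discretized storage states

# Backward recursion: cost-to-go after the last hour is zero.
cost_to_go = np.zeros(len(levels))
policy = []
for t in reversed(range(len(prices))):
    new_cost = np.full(len(levels), np.inf)
    best_q = np.zeros(len(levels))
    for i, s in enumerate(levels):
        for q in np.arange(0.0, pump_max + 1, 20.0):   # pumping decisions
            s_next = s + q - demand[t]
            if not (0.0 <= s_next <= store_max):       # storage constraint
                continue
            j = int(round(s_next / 20.0))
            c = prices[t] * efp * q + cost_to_go[j]
            if c < new_cost[i]:
                new_cost[i], best_q[i] = c, q
    cost_to_go, policy = new_cost, [best_q] + policy

# Simulate the optimal policy starting from half-full storage.
s, total_cost, schedule = 100.0, 0.0, []
for t in range(len(prices)):
    q = policy[t][int(round(s / 20.0))]
    total_cost += prices[t] * efp * q
    s += q - demand[t]
    schedule.append(float(q))
```

The optimal schedule concentrates pumping in the cheapest hours and lets storage carry the expensive ones, which is the mechanism behind the reported savings.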

  15. Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology

    NASA Astrophysics Data System (ADS)

    Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu

    2013-08-01

From the perspective of global warming suppression and depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are getting attention in distribution systems. Additionally, all-electric apartment houses and residences such as the DC smart house have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, balancing supply and demand in the power system becomes problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for optimal operation of a smart grid to minimize interconnection-point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as batteries and heat pumps. By minimizing the interconnection-point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, heat pump, battery, solar collector, and load. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.
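The interconnection-point smoothing idea can be sketched with a battery that absorbs deviations of the net load from its daily mean. All profiles and power limits are invented, and battery state of charge, heat-pump scheduling and losses are ignored; the point is only how a controllable load flattens the flow at the interconnection point:

```python
import numpy as np

# Net load seen at the interconnection point: demand minus PV output.
t = np.arange(96)                                          # 15-min steps
pv = np.maximum(0.0, np.sin((t - 24) * np.pi / 48)) * 3.0  # kW, daytime only
load = 2.0 + 0.5 * np.sin(t * np.pi / 48)                  # kW
net = load - pv

# Battery charges/discharges against deviations from the daily mean so
# the interconnection power flow stays (nearly) flat.
target = net.mean()
battery_power = np.clip(net - target, -2.5, 2.5)   # kW converter limits
flow = net - battery_power                          # smoothed flow
```

In the paper this dispatch would come from the optimization rather than a fixed mean target, but the fluctuation-reduction mechanism is the same.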

  16. A systematic approach: optimization of healthcare operations with knowledge management.

    PubMed

    Wickramasinghe, Nilmini; Bali, Rajeev K; Gibbons, M Chris; Choi, J H James; Schaffer, Jonathan L

    2009-01-01

Effective decision making is vital in all healthcare activities. While this decision making is typically complex and unstructured, it requires the decision maker to gather multispectral data and information in order to make an effective choice when faced with numerous options. Unstructured decision making in dynamic and complex environments is challenging, and in almost every situation the decision maker is undoubtedly faced with information inferiority. The need for germane knowledge, pertinent information and relevant data is critical, and hence the value of harnessing knowledge and embracing the tools, techniques, technologies and tactics of knowledge management is essential to ensuring efficiency and efficacy in the decision making process. The systematic approach and application of knowledge management (KM) principles and tools can provide the necessary foundation for improving the decision making processes in healthcare. A combination of Boyd's OODA Loop (Observe, Orient, Decide, Act) and the Intelligence Continuum provides an integrated, systematic and dynamic model for ensuring that the healthcare decision maker is always provided with the appropriate and necessary knowledge elements that will help to ensure that healthcare decision making process outcomes are optimized for maximal patient benefit. The example of orthopaedic operating room processes illustrates the application of the integrated model to support effective decision making in the clinical environment.

  17. Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.

    PubMed

    López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio

    2016-01-01

    Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550
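The symmetry point itself (before constructing the paper's confidence intervals) is straightforward to compute from data: it is the cutoff at which sensitivity equals specificity. A sketch on simulated marker values, using bisection on the empirical difference; the two normal populations are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 5000)     # marker values, healthy group
diseased = rng.normal(2.0, 1.0, 5000)    # marker values, diseased group

def sens_spec(c):
    sens = np.mean(diseased > c)         # correctly classified diseased
    spec = np.mean(healthy <= c)         # correctly classified healthy
    return sens, spec

# sens - spec is nonincreasing in the cutoff, so bisection locates the
# symmetry point where the two correct-classification rates cross.
lo, hi = -5.0, 7.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    sens, spec = sens_spec(mid)
    if sens > spec:      # cutoff too low: raise it
        lo = mid
    else:
        hi = mid
c_sym = 0.5 * (lo + hi)
```

For these two unit-variance normals the theoretical symmetry point is the midpoint of the means (1.0), so the empirical estimate should land close to it.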

  18. Pointing calibration of the MKIVA DSN antennas Voyager 2 Uranus encounter operations support

    NASA Technical Reports Server (NTRS)

    Stevens, R.; Riggs, R. L.; Wood, B.

    1986-01-01

    The MKIVA DSN introduced significant changes to the pointing systems of the 34-meter and 64-meter diameter antennas. To support the Voyager 2 Uranus Encounter, the systems had to be accurately calibrated. Reliable techniques for use of the calibrations during intense mission support activity had to be provided. This article describes the techniques used to make the antenna pointing calibrations and to demonstrate their operational use. The results of the calibrations are summarized.

  19. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

    2012-10-23

    Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.

  20. Mission to the Sun-Earth L5 Lagrangian Point: An Optimal Platform for Space Weather Research

    NASA Astrophysics Data System (ADS)

    Vourlidas, Angelos

    2015-04-01

The Sun-Earth Lagrangian L5 point is a uniquely advantageous location for space weather research and monitoring. It covers the "birth-to-impact" travel of solar transients; it enables imaging of solar activity at least 3 days prior to a terrestrial viewpoint and measures the solar wind conditions 4-5 days ahead of Earth impact. These observations, especially magnetograms of regions behind the east limb, will be a boon for background solar wind models, which are essential for coronal mass ejection (CME) and shock propagation forecasting. From an operational perspective, the L5 orbit is the space weather equivalent of the geosynchronous orbit for weather satellites. Optimal for both research and monitoring, an L5 mission is ideal for developing a Research-to-Operations capability in Heliophysics.

  1. Optimal integration of gravity in trajectory planning of vertical pointing movements.

    PubMed

    Crevecoeur, Frédéric; Thonnard, Jean-Louis; Lefèvre, Philippe

    2009-08-01

    The planning and control of motor actions requires knowledge of the dynamics of the controlled limb to generate the appropriate muscular commands and achieve the desired goal. Such planning and control imply that the CNS must be able to deal with forces and constraints acting on the limb, such as the omnipresent force of gravity. The present study investigates the effect of hypergravity induced by parabolic flights on the trajectory of vertical pointing movements to test the hypothesis that motor commands are optimized with respect to the effect of gravity on the limb. Subjects performed vertical pointing movements in normal gravity and hypergravity. We use a model based on optimal control to identify the role played by gravity in the optimal arm trajectory with minimal motor costs. First, the simulations in normal gravity reproduce the asymmetry in the velocity profiles (the velocity reaches its maximum before half of the movement duration), which typically characterizes the vertical pointing movements performed on Earth, whereas the horizontal movements present symmetrical velocity profiles. Second, according to the simulations, the optimal trajectory in hypergravity should present an increase in the peak acceleration and peak velocity despite the increase in the arm weight. In agreement with these predictions, the subjects performed faster movements in hypergravity with significant increases in the peak acceleration and peak velocity, which were accompanied by a significant decrease in the movement duration. This suggests that movement kinematics change in response to an increase in gravity, which is consistent with the hypothesis that motor commands are optimized and the action of gravity on the limb is taken into account. The results provide evidence for an internal representation of gravity in the central planning process and further suggest that an adaptation to altered dynamics can be understood as a reoptimization process.

  2. Phase-operation for conduction electron by atomic-scale scattering via single point-defect

    SciTech Connect

    Nagaoka, Katsumi Yaginuma, Shin; Nakayama, Tomonobu

    2014-03-17

In order to propose a phase-operation technique for conduction electrons in solids, we have investigated, using scanning tunneling microscopy, an atomic-scale electron-scattering phenomenon on a 2D subband state formed in Si. In particular, we have examined a single surface point defect around which a standing-wave pattern is created, and measured the dispersion of the scattering phase shift induced by the defect potential as a function of electron energy. The behavior is well explained with appropriate scattering parameters: the potential height and radius. This result experimentally proves that atomic-scale potential scattering via the point defect enables phase operation for conduction electrons.

  3. The Apache Point Observatory Lunar Laser-ranging Operation: Instrument Description and First Detections

    SciTech Connect

    Murphy, TW; Adelberger, Eric G.; Battat, J.; Carey, LN; Hoyle, Charles D.; LeBlanc, P.; Michelsen, EL; Nordtvedt, K.; Orin, AE; Strasburg, Jana D.; Stubbs, CW; Swanson, HE; Williams, E.

    2008-01-01

A next-generation lunar laser ranging apparatus using the 3.5 m telescope at the Apache Point Observatory in southern New Mexico has begun science operation. APOLLO (the Apache Point Observatory Lunar Laser-ranging Operation) has achieved one-millimeter range precision to the Moon, which should lead to approximately one-order-of-magnitude improvements in the precision of several tests of fundamental properties of gravity. We briefly motivate the scientific goals and then give a detailed discussion of the APOLLO instrumentation.

  4. Research on the modeling of the missile's disturbance motion and the initial control point optimization

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Dalin; Tang, Shengjing

    2012-11-01

    The initial trajectory design of a missile is an important part of the overall design, but it is often a tedious calculation and analysis process owing to the high-dimensional nonlinear differential equations involved and the traditional statistical analysis methods used. To improve on the traditional design methods, a robust optimization concept and method are introduced in this paper to determine the initial control point. First, a Gaussian Radial Basis Network is adopted to establish an approximate model of the missile's disturbance motion, based on an analysis of the disturbance motion and the disturbance factors. Then, a direct analytical relationship between the disturbance inputs and the statistical results is deduced from the Gaussian Radial Basis Network model. Subsequently, a robust optimization model is established for the initial control point design problem, and a niche Pareto genetic algorithm for multi-objective optimization is adopted to solve it. An integrated design example is given at the end, and the simulation results verify the validity of this method.
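
    The surrogate step described above (a Gaussian Radial Basis Network mapping disturbance factors to trajectory statistics) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the centers, the kernel width, and the synthetic "response" are made-up, and the responses are generated from a known RBF so the fit can be verified.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial basis design matrix between samples X and centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width):
    """Least-squares fit of RBF output weights to sampled responses."""
    return np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)[0]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))      # sampled disturbance-factor combinations
centers = np.array([[-0.5, -0.5], [0.0, 0.3], [0.6, -0.2]])
w_true = np.array([1.0, -2.0, 0.5])
y = rbf_design(X, centers, 0.5) @ w_true  # synthetic "trajectory statistic" responses
w = fit_rbf(X, y, centers, 0.5)           # recovered weights
y_new = rbf_design(np.array([[0.1, 0.1]]), centers, 0.5) @ w  # surrogate prediction
```

    Once fitted, the cheap surrogate replaces the full nonlinear trajectory integration inside the robust-optimization loop.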

  5. Sensitivity derivatives and optimization of nodal point locations for vibration reduction

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Haftka, Raphael T.

    1987-01-01

    A method is developed for sensitivity analysis and optimization of nodal point locations in connection with vibration reduction. A straightforward derivation of the expression for the derivative of nodal locations is given, and the role of the derivative in assessing design trends is demonstrated. An optimization process is developed which uses added lumped masses on the structure as design variables to move the node to a preselected location; for example, where low response amplitude is required or to a point which makes the mode shape nearly orthogonal to the force distribution, thereby minimizing the generalized force. The optimization formulation leads to values for added masses that adjust a nodal location while minimizing the total amount of added mass required to do so. As an example, the node of the second mode of a cantilever box beam is relocated to coincide with the centroid of a prescribed force distribution, thereby reducing the generalized force substantially without adding excessive mass. A comparison with an optimization formulation that directly minimizes the generalized force indicates that nodal placement gives essentially a minimum generalized force when the node is appropriately placed.
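
    The optimization above (added lumped masses as design variables, a target node position as the goal) can be sketched on a toy structure. The following is a minimal stand-in, assuming a fixed-free spring-mass chain in place of the cantilever box beam; the DOF receiving the added mass and the target node location are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize_scalar

def second_mode(masses, k=1.0):
    """Second vibration mode of a fixed-free spring-mass chain (toy structure)."""
    n = len(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k              # spring on the wall side of mass i
        if i + 1 < n:             # spring between mass i and mass i+1
            K[i, i] += k
            K[i, i + 1] -= k
            K[i + 1, i] -= k
    _, V = eigh(K, np.diag(masses))
    return V[:, 1]                # modes sorted by frequency; second has one node

def node_location(shape):
    """Interpolated zero crossing (node) of a mode shape, in DOF units."""
    i = np.where(np.sign(shape[:-1]) * np.sign(shape[1:]) < 0)[0][0]
    return i + shape[i] / (shape[i] - shape[i + 1])

base = np.ones(10)
target = 3.0                      # arbitrary desired node position

def cost(dm):
    m = base.copy()
    m[5] += dm                    # lumped mass added at DOF 5
    return (node_location(second_mode(m)) - target) ** 2

res = minimize_scalar(cost, bounds=(0.0, 5.0), method='bounded')
```

    A single added mass at a single DOF is the simplest case; the paper's formulation spreads the added mass over several locations while minimizing its total.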

  6. Sensitivity analysis and optimization of nodal point placement for vibration reduction

    NASA Technical Reports Server (NTRS)

    Pritchard, J. I.; Adelman, H. M.; Haftka, R. T.

    1986-01-01

    A method is developed for sensitivity analysis and optimization of nodal point locations in connection with vibration reduction. A straightforward derivation of the expression for the derivative of nodal locations is given, and the role of the derivative in assessing design trends is demonstrated. An optimization process is developed which uses added lumped masses on the structure as design variables to move the node to a preselected location - for example, where low response amplitude is required or to a point which makes the mode shape nearly orthogonal to the force distribution, thereby minimizing the generalized force. The optimization formulation leads to values for added masses that adjust a nodal location while minimizing the total amount of added mass required to do so. As an example, the node of the second mode of a cantilever box beam is relocated to coincide with the centroid of a prescribed force distribution, thereby reducing the generalized force substantially without adding excessive mass. A comparison with an optimization formulation that directly minimizes the generalized force indicates that nodal placement gives essentially a minimum generalized force when the node is appropriately placed.

  7. Operationally optimal maneuver strategy for spacecraft injected into sub-geosynchronous transfer orbit

    NASA Astrophysics Data System (ADS)

    Kiran, B. S.; Singh, Satyendra; Negi, Kuldeep

    The GSAT-12 spacecraft provides communication services from the INSAT/GSAT system in the Indian region; the spacecraft carries 12 extended C-band transponders. GSAT-12 was launched by ISRO’s PSLV from Sriharikota into a sub-geosynchronous transfer orbit (sub-GTO) of 284 x 21000 km with an inclination of 18 deg. This mission successfully accomplished a combined optimization of launch vehicle and satellite capabilities to maximize the operational life of the spacecraft. This paper describes the mission analysis carried out for GSAT-12, comprising the launch window, an orbital events study, and orbit-raising maneuver strategies under various mission operational constraints. GSAT-12 is equipped with two earth sensors (ES), three gyroscopes, and a digital sun sensor. The launch window was generated considering the mission requirement of a minimum of 45 minutes of ES data for calibration of the gyros in a roll-sun-pointing orientation in the transfer orbit (T.O.). Since the T.O. period was a rather short 6.1 hr, the required pitch biases were worked out to meet the gyro-calibration requirement. A 440 N Liquid Apogee Motor (LAM) is used for orbit raising. The objective of the maneuver strategy is to achieve the desired drift orbit while satisfying mission constraints and minimizing propellant expenditure. For a sub-GTO, the optimal strategy is to first perform an in-plane maneuver at perigee to raise the apogee to the synchronous level and then perform combined maneuvers at the synchronous apogee to achieve the desired drift orbit. The perigee burn opportunities were examined considering the ground station visibility requirement for monitoring the burns. Two maneuver strategies were proposed: an optimal five-burn strategy with two perigee burns centered around perigee #5 and perigee #8 with partial ground station visibility and three apogee burns with dual-station visibility, and a near-optimal five-burn strategy with two off-perigee burns at perigee #5 and perigee #8 with single ground station visibility and three apogee burns with dual-station visibility.

  8. 75 FR 3856 - Drawbridge Operation Regulations; Great Egg Harbor Bay, Between Beesleys Point and Somers Point, NJ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-25

    ... Bridge over Great Egg Harbor Bay, at mile 3.5, between Beesleys Point and Somers Point, NJ. This rule....S. Route 9 Bridge, at mile 3.5, across Great Egg Harbor Bay, between Beesleys Point and Somers Point... follows: Sec. 117.722 Great Egg Harbor Bay. The draw of the U.S. Route 9/Beesleys Point Bridge, mile...

  9. A PERFECT MATCH CONDITION FOR POINT-SET MATCHING PROBLEMS USING THE OPTIMAL MASS TRANSPORT APPROACH

    PubMed Central

    CHEN, PENGWEN; LIN, CHING-LONG; CHERN, I-LIANG

    2013-01-01

    We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
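
    The L2 transport cost matching studied above has a simple discrete counterpart: an optimal one-to-one assignment under squared Euclidean cost. A minimal sketch, using SciPy's `linear_sum_assignment` as the discrete solver; the point sets and the size of the displacement are made-up stand-ins for the vascular branch points.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_point_sets(P, Q):
    """Optimal one-to-one matching of two point sets under L2 (squared
    Euclidean) transport cost, the discrete analogue of L2 mass transport."""
    cost = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return cols, cost[rows, cols].sum()

rng = np.random.default_rng(1)
P = rng.normal(size=(30, 3))               # e.g., branch points before deformation
Q = P + 0.01 * rng.normal(size=P.shape)    # small, nearly curl-free displacement
perm, total_cost = match_point_sets(P, Q)  # perm[i] is the index in Q matched to P[i]
```

    For a displacement this small relative to the point spacing, the recovered permutation is the identity, consistent with the perfect-match condition described above.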

  10. Li/CFx Cells Optimized for Low-Temperature Operation

    NASA Technical Reports Server (NTRS)

    Smart, Marshall C.; Whitacre, Jay F.; Bugga, Ratnakumar V.; Prakash, G. K. Surya; Bhalla, Pooja; Smith, Kiah

    2009-01-01

    Some developments reported in prior NASA Tech Briefs articles on primary electrochemical power cells containing lithium anodes and fluorinated carbonaceous (CFx) cathodes have been combined to yield a product line of cells optimized for relatively-high-current operation at low temperatures at which commercial lithium-based cells become useless. These developments have involved modifications of the chemistry of commercial Li/CFx cells and batteries, which are not suitable for high-current and low-temperature applications because they are current-limited and their maximum discharge rates decrease with decreasing temperature. One of two developments that constitute the present combination is, itself, a combination of developments: (1) the use of sub-fluorinated carbonaceous (CFx wherein x<1) cathode material, (2) making the cathodes thinner than in most commercial units, and (3) using non-aqueous electrolytes formulated especially to enhance low-temperature performance. This combination of developments was described in more detail in "High-Energy-Density, Low-Temperature Li/CFx Primary Cells" (NPO-43219), NASA Tech Briefs, Vol. 31, No. 7 (July 2007), page 43. The other development included in the present combination is the use of an anion receptor as an electrolyte additive, as described in the immediately preceding article, "Additive for Low-Temperature Operation of Li-(CF)n Cells" (NPO-43579). A typical cell according to the present combination of developments contains an anion-receptor additive solvated in an electrolyte that comprises LiBF4 dissolved at a concentration of 0.5 M in a mixture of four volume parts of 1,2 dimethoxyethane with one volume part of propylene carbonate. The proportion, x, of fluorine in the cathode in such a cell lies between 0.5 and 0.9. The best of such cells fabricated to date have exhibited discharge capacities as large as 0.6 A h per gram at a temperature of -50 °C when discharged at a rate of C/5 (where C is the magnitude of the

  11. Turbine Reliability and Operability Optimization through the use of Direct Detection Lidar Final Technical Report

    SciTech Connect

    Johnson, David K; Lewis, Matthew J; Pavlich, Jane C; Wright, Alan D; Johnson, Kathryn E; Pace, Andrew M

    2013-02-01

    The goal of this Department of Energy (DOE) project is to increase wind turbine efficiency and reliability with the use of a Light Detection and Ranging (LIDAR) system. The LIDAR provides wind speed and direction data that can be used to help mitigate the fatigue stress on the turbine blades and internal components caused by wind gusts, sub-optimal pointing and reactionary speed or RPM changes. This effort will have a significant impact on the operation and maintenance costs of turbines across the industry. During the course of the project, Michigan Aerospace Corporation (MAC) modified and tested a prototype direct detection wind LIDAR instrument; the resulting LIDAR design considered all aspects of wind turbine LIDAR operation from mounting, assembly, and environmental operating conditions to laser safety. Additionally, in co-operation with our partners, the National Renewable Energy Lab and the Colorado School of Mines, progress was made in LIDAR performance modeling as well as LIDAR feed forward control system modeling and simulation. The results of this investigation showed that using LIDAR measurements to change between baseline and extreme event controllers in a switching architecture can reduce damage equivalent loads on blades and tower, and produce higher mean power output due to fewer overspeed events. This DOE project has led to continued venture capital investment and engagement with leading turbine OEMs, wind farm developers, and wind farm owner/operators.

  12. 76 FR 65118 - Drawbridge Operation Regulation; Bear Creek, Sparrows Point, MD

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ... SECURITY Coast Guard 33 CFR Part 117 RIN 1625-AA09 Drawbridge Operation Regulation; Bear Creek, Sparrows... Avenue) highway toll drawbridge across Bear Creek, mile 1.5, Sparrows Point, MD was replaced with a fixed... Bear Creek, mile 1.5 was removed and replaced with a fixed bridge in 1998. Prior to 1998, a...

  13. 77 FR 56115 - Drawbridge Operation Regulations; Fort Point Channel, Boston, MA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Fort Point Channel, Boston, MA AGENCY: Coast Guard, DHS. ACTION: Notice of temporary deviation from regulations. SUMMARY: The...

  14. 78 FR 26248 - Drawbridge Operation Regulation; York River, between Yorktown and Gloucester Point, VA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-06

    ... draw of the US 17/George P. Coleman Memorial Swing Bridge across the York River, at mile 7.0, between.... Coleman Memorial Swing Bridge. This deviation allows the drawbridge to remain in the closed to navigation... regular operating schedule, the Coleman Memorial Bridge, at mile 7.0, between Gloucester Point...

  15. 78 FR 21064 - Drawbridge Operation Regulations; York River, between Yorktown and Gloucester Point, VA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-09

    ... draw of the US 17/George P. Coleman Memorial Swing Bridge across the York River, at mile 7.0, between.... Coleman Memorial Swing Bridge. This temporary deviation allows the drawbridge to remain in the closed- to... regular operating schedule, the Coleman Memorial Bridge, at mile 7.0, between Gloucester Point...

  16. 77 FR 40091 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-06

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3 AGENCY: Nuclear... statement for license renewal of nuclear plants; availability. SUMMARY: The U.S. Nuclear...

  17. Existence and data dependence of fixed points for multivalued operators on gauge spaces

    NASA Astrophysics Data System (ADS)

    Espínola, Rafael; Petrusel, Adrian

    2005-09-01

    The purpose of this note is to present some fixed point and data dependence theorems in complete gauge spaces and in hyperconvex metric spaces for the so-called Meir-Keeler multivalued operators and admissible multivalued α-contractions. Our results extend and generalize several theorems of Espínola and Kirk [R. Espínola, W.A. Kirk, Set-valued contractions and fixed points, Nonlinear Anal. 54 (2003) 485-494] and Rus, Petrusel, and Sîntamarian [I.A. Rus, A. Petrusel, A. Sîntamarian, Data dependence of the fixed point set of some multivalued weakly Picard operators, Nonlinear Anal. 52 (2003) 1947-1959].

  18. Two Point Eigenvalue Correlation for a Class of Non-Selfadjoint Operators Under Random Perturbations

    NASA Astrophysics Data System (ADS)

    Vogel, Martin

    2016-09-01

    We consider a non-selfadjoint h-differential model operator P_h in the semiclassical limit (h → 0) subject to random perturbations with a small coupling constant δ. Assume that e^{-1/Ch} < δ ≪ h^κ for constants C, κ > 0 suitably large. Let Σ be the closure of the range of the principal symbol. We study the 2-point intensity measure of the random point process of eigenvalues of the randomly perturbed operator P_h^δ and prove an h-asymptotic formula for the average 2-point density of eigenvalues. With this we show that two eigenvalues of P_h^δ in the interior of Σ exhibit close-range repulsion and long-range decoupling.

  20. Building Restoration Operations Optimization Model Beta Version 1.0

    SciTech Connect

    2007-05-31

    The Building Restoration Operations Optimization Model (BROOM), developed by Sandia National Laboratories, is a software product designed to aid in the restoration of large facilities contaminated by a biological material. BROOM’s integrated data collection, data management, and visualization software improves the efficiency of cleanup operations, minimizes facility downtime, and provides a transparent basis for reopening the facility. Secure remote access to building floor plans: Floor plan drawings and knowledge of the HVAC system are critical to the design and implementation of effective sampling plans. In large facilities, access to these data may be complicated by the sheer abundance of the drawings and the disorganized state in which they are often stored. BROOM avoids potentially costly delays by providing a means of organizing and storing mechanical and floor plan drawings in a secure remote database that is easily accessed. Sampling design tools: BROOM provides an array of tools to answer the question of where to sample and how many samples to take. In addition to simple judgmental and random sampling plans, the software includes two sophisticated methods of adaptively developing a sampling strategy. Both tools strive to choose sampling locations that best satisfy a specified objective (e.g., minimizing kriging variance) but use numerically different strategies to do so. Surface samples are collected early in the restoration process to characterize the extent of contamination and then again later to verify that the facility is safe to reenter. BROOM supports sample collection using a ruggedized PDA equipped with a barcode scanner and laser range finder. The PDA displays building floor drawings, sampling plans, and electronic forms for data entry. Barcodes are placed on sample containers for the purpose of tracking the specimen and linking acquisition data (e.g., location, surface type, texture) to laboratory results. Sample location is determined by activating the integrated laser
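
    The "minimizing kriging variance" objective mentioned above can be sketched as a greedy design rule: sample next wherever the kriging predictive variance is largest. This is a toy illustration with a made-up Gaussian covariance model and grid, not BROOM's implementation.

```python
import numpy as np

def kriging_variance(candidates, sampled, sill=1.0, corr_len=5.0):
    """Simple-kriging predictive variance under a Gaussian covariance model."""
    def cov(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sill * np.exp(-d2 / corr_len ** 2)
    K = cov(sampled, sampled) + 1e-9 * np.eye(len(sampled))  # jitter for stability
    k = cov(candidates, sampled)
    return sill - np.einsum('ij,ij->i', k @ np.linalg.inv(K), k)

def next_sample(candidates, sampled):
    """Greedy adaptive design: sample where predictive variance is largest."""
    return candidates[int(np.argmax(kriging_variance(candidates, sampled)))]

grid = np.array([[x, y] for x in range(0, 21, 2) for y in range(0, 21, 2)], float)
sampled = np.array([[0.0, 0.0], [20.0, 20.0]])   # two existing sample locations
nxt = next_sample(grid, sampled)                 # lands far from both samples
```

    Repeating the rule after each measurement yields an adaptive sampling plan that spreads samples into the least-characterized parts of the floor plan.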

  1. The Hubble Space Telescope fine guidance system operating in the coarse track pointing control mode

    NASA Technical Reports Server (NTRS)

    Whittlesey, Richard

    1993-01-01

    The Hubble Space Telescope (HST) Fine Guidance System has set new standards in pointing control capability for earth orbiting spacecraft. Two precision pointing control modes are implemented in the Fine Guidance System; one being a Coarse Track Mode which employs a pseudo-quadrature detector approach and the second being a Fine Mode which uses a two axis interferometer implementation. The Coarse Track Mode was designed to maintain FGS pointing error to within 20 milli-arc seconds (rms) when guiding on a 14.5 Mv star. The Fine Mode was designed to maintain FGS pointing error to less than 3 milli-arc seconds (rms). This paper addresses the HST FGS operating in the Coarse Track Mode. An overview of the implementation, the operation, and both the predicted and observed on orbit performance is presented. The discussion includes a review of the Fine Guidance System hardware which uses two beam steering Star Selector servos, four photon counting photomultiplier tube detectors, as well as a 24 bit microprocessor, which executes the control system firmware. Unanticipated spacecraft operational characteristics are discussed as they impact pointing performance. These include the influence of spherically aberrated star images as well as the mechanical shocks induced in the spacecraft during and following orbital day/night terminator crossings. Computer modeling of the Coarse Track Mode verifies the observed on orbit performance trends in the presence of these optical and mechanical disturbances. It is concluded that the coarse track pointing control function is performing as designed and is providing a robust pointing control capability for the Hubble Space Telescope.

  2. Cove Point liquefied natural gas operations: a preliminary review of the risk

    SciTech Connect

    Margulies, T.S.

    1980-12-01

    In response to a request from Calvert County, Maryland the Energy and Coastal Zone Administration has made an effort to evaluate the impacts associated with the transport of LNG to Cove Point. This report discusses a study that has been performed to provide a preliminary review of the risk to the public. Several tasks included in the study were: (1) Review of safety and preventive measures currently being used to prevent a hazardous release of LNG; (2) Review of the calculated risk associated with tanker accidents including a discussion of the probabilistic ship collision and vapor cloud dispersion models used in a risk assessment of the Cove Point operation by Science Applications, Inc.; (3) provide an overview of risk assessment techniques applicable to marine transportation and facility problems in the event that further expansion of the Cove Point facility or a new facility is proposed; and (4) develop information on the population distribution surrounding Cove Point.

  3. A bioeconomic model for comparing beef cattle genotypes at their optimal economic slaughter end point.

    PubMed

    Amer, P R; Kemp, R A; Buchanan-Smith, J G; Fox, G C; Smith, C

    1994-01-01

    A bioeconomic model of a feedlot was developed for the comparison of beef cattle genotypes under specified management and marketing conditions. The optimization behavior of commercial feedlot managers is incorporated into the model using optimum economic rotation theory. The days spent in the feedlot (rotation) by a group of animals are derived using this theory so as to maximize an objective function. Differences among breeds in the present value of profits from a single rotation, expressed per animal, represent the expected price premium paid for a feeder animal of a particular breed. Feed requirements and growth rates for a genotype are predicted over time for a specified diet from estimated mature size. Estimates of carcass fatness over time as a function of the energy content of the diet and estimates of dressing percentage over time are used for each genotype. A base model is described that incorporates biological parameters estimated for 11 breeds from a major breed comparison experiment and uses prices of inputs and outputs for Ontario feedlots. Sensitivity of the model to these biological and economic assumptions is shown. When breeds are compared at constant days fed, weight, or fat depth slaughter points, rankings are inconsistent, relative to those when each breed is slaughtered at its optimal economic point. The model can be used to establish appropriate slaughter end points for comparing beef cattle breeds and crosses and to evaluate breeding objectives for feedlot traits in genetic improvement programs.
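
    The optimum-rotation idea above, days on feed chosen to maximize the discounted profit from a feeding period, can be sketched numerically. All parameter values below (growth curve, prices, discount rate) are made-up placeholders, not the paper's estimates.

```python
import numpy as np

def optimal_days_on_feed(mature_wt=650.0, k=0.0045, w0=300.0,
                         carcass_price=3.0, feed_cost=2.2,
                         rate=0.08 / 365, horizon=400):
    """Scan feedlot days and return the profit-maximizing slaughter day.
    Growth follows a simple asymptotic curve toward mature weight (toy model)."""
    days = np.arange(1, horizon + 1)
    weight = mature_wt - (mature_wt - w0) * np.exp(-k * days)
    margin = carcass_price * (weight - w0) - feed_cost * days  # per-animal profit
    pv = margin * np.exp(-rate * days)                         # discount to day 0
    best = int(days[np.argmax(pv)])
    return best, float(pv.max())

best_day, best_pv = optimal_days_on_feed()
```

    Rerunning the scan with genotype-specific growth and carcass parameters gives each breed its own optimal economic end point, which is what makes the cross-breed comparisons consistent.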

  4. Optimal Design and Operation of In-Situ Chemical Oxidation Using Stochastic Cost Optimization Toolkit

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.; Borden, R. C.

    2014-12-01

    In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxygen demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, time to begin performance monitoring, groundwater and/or soil contaminant concentrations to trigger reinjection or terminate ISCO, number of monitoring wells or geoprobe locations per treatment zone, number of samples per sampling event and location, and monitoring frequency. Design variables for each treatment zone may be optimized to minimize expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. Monitoring soil concentration changes during ISCO
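
    The design-variable optimization over Monte Carlo simulations described above can be sketched for a single variable, the injected oxidant concentration. The cost and response models below are made-up toys, not SCOToolkit's.

```python
import numpy as np

def expected_cost(conc, n_mc=2000, seed=0):
    """Monte Carlo expected remediation cost for one design variable,
    the injected oxidant concentration (all models here are toys)."""
    rng = np.random.default_rng(seed)
    nod = rng.lognormal(mean=1.0, sigma=0.5, size=n_mc)  # uncertain oxidant demand
    n_inj = np.ceil(3.0 * nod / conc)        # reinjections needed (toy response)
    fixed, per_injection, chemical = 50.0, 10.0, 2.0
    return fixed + (per_injection + chemical * conc) * n_inj.mean()

concs = np.linspace(0.5, 10.0, 40)
costs = [expected_cost(c) for c in concs]
best = concs[int(np.argmin(costs))]          # concentration minimizing expected cost
```

    The tradeoff is visible in the toy model: dilute injections save chemical but require many reinjection events, while concentrated ones do the reverse, so the expected-cost minimum sits in between.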

  5. Optimizing water supply and hydropower reservoir operation rule curves: An imperialist competitive algorithm approach

    NASA Astrophysics Data System (ADS)

    Afshar, Abbas; Emami Skardi, Mohammad J.; Masoumi, Fariborz

    2015-09-01

    Efficient reservoir management requires the implementation of generalized optimal operating policies that manage storage volumes and releases while optimizing a single objective or multiple objectives. Reservoir operating rules stipulate the actions that should be taken under the current state of the system. This study develops a set of piecewise linear operating rule curves for water supply and hydropower reservoirs, employing an imperialist competitive algorithm in a parameterization-simulation-optimization approach. The adaptive penalty method is used for constraint handling and proved to work efficiently in the proposed scheme. Its performance is tested by deriving an operating rule for the Dez reservoir in Iran. The proposed modelling scheme converged efficiently to near-optimal solutions in the case example. It was shown that the proposed optimum piecewise linear rule may perform quite well in reservoir operation optimization as the operating period extends from very short to fairly long periods.
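
    A piecewise linear operating rule of the kind parameterized above can be evaluated by simple interpolation inside a mass-balance simulation; an outer optimizer (the imperialist competitive algorithm in the paper) would tune the breakpoints. The breakpoints and inflows below are illustrative assumptions, not the Dez reservoir rule.

```python
import numpy as np

def release_rule(storage, s_breaks, r_breaks):
    """Piecewise linear operating rule: release as a function of storage."""
    return np.interp(storage, s_breaks, r_breaks)

def simulate(inflows, s0, s_max, s_breaks, r_breaks):
    """Mass-balance simulation of one reservoir operated by the rule curve."""
    storage, releases = s0, []
    for q in inflows:
        r = min(release_rule(storage, s_breaks, r_breaks), storage + q)
        storage = min(storage + q - r, s_max)   # overflow spill not tracked here
        releases.append(r)
    return np.array(releases)

# Illustrative rule: hedge releases when storage is low, release more when full
s_breaks = [0.0, 200.0, 800.0, 1000.0]
r_breaks = [0.0, 40.0, 60.0, 120.0]
inflows = 50 + 30 * np.sin(np.linspace(0, 2 * np.pi, 24))   # seasonal inflow
releases = simulate(inflows, s0=500.0, s_max=1000.0,
                    s_breaks=s_breaks, r_breaks=r_breaks)
```

    In the parameterization-simulation-optimization approach, the objective (e.g., supply deficit or hydropower shortfall) is computed from a simulation like this one, and the breakpoint coordinates are the decision variables.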

  6. An Efficient and Optimal Filter for Identifying Point Sources in Millimeter/Submillimeter Wavelength Sky Maps

    NASA Astrophysics Data System (ADS)

    Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.

    2013-07-01

    A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.
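
    The filter described above, matched filtering with inverse-noise weighting carried out with FFTs, can be sketched in one dimension. The beam shape, noise level, and source parameters below are made-up; real maps are 2-D with nonuniform coverage, which is the complication the paper's filter handles.

```python
import numpy as np

def optimal_filter(data, psf, noise_psd):
    """Matched filter with inverse-noise-PSD weighting, applied via FFTs (1-D sketch)."""
    D, P = np.fft.fft(data), np.fft.fft(psf)
    W = np.conj(P) / noise_psd                        # whiten, correlate with the beam
    norm = (np.abs(P) ** 2 / noise_psd).sum() / len(data)
    return np.real(np.fft.ifft(W * D)) / norm         # per-sample amplitude estimate

n = 256
x = np.arange(n)
psf = np.exp(-0.5 * (np.minimum(x, n - x) / 3.0) ** 2)  # wrapped Gaussian beam at 0
rng = np.random.default_rng(2)
noise_sigma = 0.1
data = 5.0 * np.roll(psf, 100) + rng.normal(0, noise_sigma, n)  # source at sample 100
noise_psd = np.full(n, noise_sigma ** 2 * n)            # white-noise FFT power per bin
flux = optimal_filter(data, psf, noise_psd)             # peaks near sample 100
```

    With the normalization above, the filtered map reads directly in source amplitude, so the peak value estimates the source flux as well as its position.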

  7. Planned LMSS propagation experiment using ACTS: Preliminary antenna pointing results during mobile operations

    NASA Technical Reports Server (NTRS)

    Rowland, John R.; Goldhirsh, Julius; Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1991-01-01

    An overview and a status description of the planned LMSS mobile K band experiment with ACTS is presented. As a precursor to the ACTS mobile measurements at 20.185 GHz, measurements at 19.77 GHz employing the Olympus satellite were originally planned. However, because of the demise of Olympus in June of 1991, the efforts described here are focused towards the ACTS measurements. In particular, we describe the design and testing results of a gyro-controlled mobile-antenna pointing system. Preliminary pointing measurements during mobile operations indicate that the present system is suitable for measurements employing a 15 cm aperture (beamwidth of approximately 7 deg) receiving antenna operating with ACTS in the high gain transponder mode. This should enable measurements with pattern losses smaller than plus or minus 1 dB over more than 95 percent of the driving distance. Measurements with the present mount system employing a 60 cm aperture (beamwidth of approximately 1.7 deg) resulted in pattern losses smaller than plus or minus 3 dB for 70 percent of the driving distance. Acceptable propagation measurements may still be made with this system by employing developed software to flag out bad data points due to extreme pointing errors. The receiver system, including associated computer control software, has been designed and assembled. Plans are underway to integrate the antenna mount with the receiver on the University of Texas mobile receiving van and repeat the pointing tests on highways employing a recently designed radome system.

  8. Polarizable six-point water models from computational and empirical optimization.

    PubMed

    Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul

    2014-02-13

    Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to
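
    The role of the massless negative charge points can be illustrated by checking that a candidate site layout reproduces the experimental gas-phase dipole (about 1.855 D), which the models above hold fixed. The charges and site positions below are illustrative assumptions chosen to land near that value; they are not the published TL6P parameters.

```python
import numpy as np

# Rigid water geometry (angstroms): experimental r(OH) and HOH angle
r_oh, theta = 0.9572, np.deg2rad(104.52)
h_x = r_oh * np.sin(theta / 2)
h_y = r_oh * np.cos(theta / 2)

# Illustrative six-site charge layout (NOT the published TL6P parameters):
# +0.5 e on each hydrogen, three massless -1/3 e sites near the HOH bisector,
# and an uncharged oxygen at the origin (omitted from the charge list).
sites = np.array([
    [ h_x, h_y, 0.0], [-h_x, h_y, 0.0],                    # hydrogens
    [0.0, 0.2, 0.2], [0.0, 0.2, -0.2], [0.0, 0.2, 0.0],    # negative charge points
])
charges = np.array([0.5, 0.5, -1/3, -1/3, -1/3])

dipole_eA = charges @ sites                  # dipole vector in e * angstrom
dipole_debye = np.linalg.norm(dipole_eA) * 4.8032  # 1 e*angstrom = 4.8032 D
```

    Fixing the dipole this way leaves the site positions free to tune the higher multipole moments, which is exactly the freedom the DFT/PMM optimization exploits.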

  9. Polarizable six-point water models from computational and empirical optimization.

    PubMed

    Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul

    2014-02-13

    Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to
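The electrostatics being tuned here reduce to point charges and their positions; as a minimal illustration, the permanent dipole of a rigid point-charge water model can be computed directly as mu = sum_i q_i r_i. The geometry below uses the experimental gas-phase bond length and angle, but the charge values and negative-charge-site offset are illustrative placeholders, not the optimized TL6P parameters.

```python
import numpy as np

# Rigid water geometry (angstrom): O at the origin, H atoms at the
# experimental gas-phase bond length and angle. Charges are examples only.
r_OH = 0.9572                      # O-H bond length, angstrom
half = np.deg2rad(104.52) / 2.0    # half the H-O-H angle
pos = {
    "H1": np.array([ r_OH * np.sin(half), r_OH * np.cos(half), 0.0]),
    "H2": np.array([-r_OH * np.sin(half), r_OH * np.cos(half), 0.0]),
    "M":  np.array([0.0, 0.15, 0.0]),  # massless negative charge point off the O
}
q = {"H1": 0.52, "H2": 0.52, "M": -1.04}   # elementary charges, net zero

mu = sum(q[s] * pos[s] for s in q)          # dipole in e * angstrom
mu_debye = float(np.linalg.norm(mu)) * 4.803  # 1 e*angstrom = 4.803 Debye
```

With these example charges the dipole comes out near the 2-2.5 D range typical of fixed-charge water models; the actual model instead fits charge positions to DFT/PMM liquid-phase data.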

  10. Performance of FORTRAN floating-point operations on the Flex/32 multicomputer

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1987-01-01

A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
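The timing methodology described above (time a loop, subtract the overhead of an equivalent empty loop, divide by the iteration count) can be sketched as follows; this is a Python stand-in for the original FORTRAN harness, so the absolute numbers are illustrative only.

```python
import time

def per_op_seconds(n=1_000_000):
    """Estimate the cost of one floating-point multiply: time a multiply
    loop, subtract the overhead of an empty loop of the same length,
    and divide by the iteration count."""
    x, acc = 1.0000001, 1.0
    t0 = time.perf_counter()
    for _ in range(n):
        acc = acc * x          # the operation being measured
    t1 = time.perf_counter()
    for _ in range(n):
        pass                   # loop overhead only
    t2 = time.perf_counter()
    return ((t1 - t0) - (t2 - t1)) / n, acc

cost, acc = per_op_seconds()
```

The same harness, pointed at arrays placed in different memory regions, is how a common-vs-local access comparison would be measured.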

  11. Zero-point energies, the uncertainty principle, and positivity of the quantum Brownian density operator.

    PubMed

    Tameshtit, Allan

    2012-04-01

    High-temperature and white-noise approximations are frequently invoked when deriving the quantum Brownian equation for an oscillator. Even if this white-noise approximation is avoided, it is shown that if the zero-point energies of the environment are neglected, as they often are, the resultant equation will violate not only the basic tenet of quantum mechanics that requires the density operator to be positive, but also the uncertainty principle. When the zero-point energies are included, asymptotic results describing the evolution of the oscillator are obtained that preserve positivity and, therefore, the uncertainty principle.

  12. Development of optimization program for single point mooring floating production system

    SciTech Connect

Nakagawa, H.; Kanda, M.; Mikami, T.; Kamishohara, A.; Kojima, T.; Yoshizawa, M.

    1995-12-31

Floating Production Systems (FPS) and Floating, Storage and Offloading Systems (FPSO) have been applied to marginal oil fields and early production systems since the late 1970s in view of their lower capital cost and shorter delivery time. There are two types of floating production systems: the semi-submersible-based spread mooring type and the tanker-based single point mooring type. This paper describes the analysis method, the technical development of optimization programs, and the model experiments for Single Point Mooring (SPM) systems for FPSOs, which have been carried out as a joint research project by Japan National Oil Corporation, Akishima Laboratories Inc. and MODEC Inc. The detailed analysis program is developed based on the constraint matrix method in the frequency and time domains and is capable of calculating the motions and constraint forces of an SPM system in wind, current, and random waves. An outline of the calculation method is presented.

  13. Optimization of a catchment-scale coupled surface-subsurface hydrological model using pilot points

    NASA Astrophysics Data System (ADS)

    Danapour, Mehrdis; Stisen, Simon; Lajer Højberg, Anker

    2016-04-01

Transient coupled surface-subsurface models are usually complex and contain a large amount of spatio-temporal information. In the traditional calibration approach, model parameters are adjusted against only a few spatially aggregated observations of discharge or individual point observations of groundwater head. However, this approach does not enable an assessment of spatially explicit predictive model capabilities at the intermediate scale relevant for many applications. The overall objective of this project is to develop a new model calibration and evaluation framework by combining distributed model parameterization and regularization with new types of objective functions focusing on optimizing spatial patterns rather than individual points or catchment-scale features. Inclusion of detailed observed spatial patterns of hydraulic head gradients or relevant information obtained from remote sensing data in the calibration process could allow for a better representation of the spatial variability of hydraulic properties. Pilot points, as an alternative to classical parameterization approaches, introduce great flexibility when calibrating heterogeneous systems without neglecting expert knowledge (Doherty, 2003). A highly parameterized optimization of complex distributed hydrological models at catchment scale is challenging due to the computational burden that comes with it. In this study the physically-based coupled surface-subsurface model MIKE SHE is calibrated for the 8,500 km2 area of central Jylland (Denmark), which is characterized by heterogeneous geology and considerable groundwater flow across topographical catchment boundaries. The calibration of the distributed conductivity fields is carried out with a pilot point-based approach, implemented using the PEST parameter estimation tool. 
To reduce the high number of calibration parameters, PEST's advanced singular value decomposition combined with regularization was utilized and a reduction of the model's complexity was
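The pilot-point idea above, calibrating a sparse set of adjustable values that are spread onto the model grid, can be sketched with inverse-distance weighting standing in for the kriging typically used alongside PEST; all locations and conductivity values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pilot points: (x, y) locations and log10 hydraulic
# conductivity values that an optimizer such as PEST would adjust.
pilots_xy = rng.uniform(0, 100, size=(12, 2))
pilots_logk = rng.normal(-4.0, 0.5, size=12)

def interpolate_logk(grid_xy, pilots_xy, pilots_logk, power=2.0):
    """Spread pilot-point values to grid cells by inverse-distance
    weighting (a simple stand-in for kriging)."""
    d = np.linalg.norm(grid_xy[:, None, :] - pilots_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)      # weights form a convex combination
    return w @ pilots_logk

gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
field = interpolate_logk(grid_xy, pilots_xy, pilots_logk)
```

Calibration then adjusts only the 12 pilot values rather than the 400 cell values, which is the dimensionality reduction that makes regularized inversion tractable.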

  15. Building optimal regression tree by ant colony system-genetic algorithm: application to modeling of melting points.

    PubMed

    Hemmateenejad, Bahram; Shamsipur, Mojtaba; Zare-Shahabadi, Vali; Akhond, Morteza

    2011-10-17

The classification and regression trees (CART) possess the advantage of being able to handle large data sets and yield readily interpretable models. A conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. Ant colony system (ACS), a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for modeling of melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation operators) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was done by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of approximately 4173 structures and their melting points was used (3000 compounds as a training set and 1173 as a validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure. PMID:21907021
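The recursive-partitioning baseline that the authors improve on reduces to a greedy split search at each node; a minimal sketch of one such step (not the ACS-GA procedure itself) on toy data:

```python
import numpy as np

def best_split(x, y):
    """One greedy step of recursive partitioning: scan candidate
    thresholds and pick the split minimizing the summed squared error
    around the two child means. Repeating this recursively yields a
    good -- but, as the abstract notes, not globally optimal -- tree."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = ((x[i - 1] + x[i]) / 2, sse)
    return best

# Toy data: a noisy step function; the recovered threshold should be near 0.5.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.where(x > 0.5, 10.0, 0.0) + rng.normal(0, 0.1, 200)
threshold, sse = best_split(x, y)
```

Because each split is chosen myopically, the overall tree can be suboptimal, which is the gap the ACS search is meant to close.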

  16. Operating point stabilization of fiber-based line detectors for photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Felbermayer, Karoline; Grün, Hubert; Berer, Thomas; Burgholzer, Peter

    2011-07-01

Photoacoustic imaging is an emerging technique in the field of biomedical imaging. Our group introduced fiber-based line detectors, which are used to acquire broad-band ultrasonic signals, several years ago. Until now, operating point stabilization of fiber-based line detectors was realized by tuning the wavelength of the detection laser, which, because of the high costs, is not applicable for parallel detection. An alternative stabilization method, changing the optical path length, is presented in this paper. The change of the optical path length is realized by stretching the fiber with piezoelectric tubes. Fringe patterns and operating point stabilization of both stabilization schemes are compared. Next, signal detection utilizing a polymer optical fiber in a Mach-Zehnder and a Fabry-Perot interferometer is demonstrated, and the influence of the detection wavelength (633 nm and 1550 nm) is examined. Finally, two-dimensional imaging utilizing a perfluorinated polymer fiber is demonstrated.
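The stabilization loop described above, actuating the optical path length so the interferometer sits at its most sensitive operating point, can be sketched as proportional feedback on the normalized fringe signal; the gain, step count, and starting phase are illustrative, not measured parameters.

```python
import numpy as np

def stabilize(phi0, gain=0.5, steps=200):
    """Drive the normalized interferometer output I = 0.5*(1 + cos(phi))
    to the quadrature point phi = pi/2, where the fringe slope -- and
    hence the acoustic sensitivity -- is maximal. The phase correction
    models stretching the fiber with a piezoelectric tube."""
    phi = phi0
    for _ in range(steps):
        intensity = 0.5 * (1.0 + np.cos(phi))
        error = intensity - 0.5      # zero exactly at quadrature
        phi += gain * error          # actuate the piezo fiber stretcher
    return phi

phi_final = stabilize(0.3)           # start well below quadrature
```

The same loop, re-run continuously, also tracks slow drifts of the operating point caused by temperature changes in the fiber.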

  17. Design optimization of composite structures operating in acoustic environments

    NASA Astrophysics Data System (ADS)

    Chronopoulos, D.

    2015-10-01

    The optimal mechanical and geometric characteristics for layered composite structures subject to vibroacoustic excitations are derived. A Finite Element description coupled to Periodic Structure Theory is employed for the considered layered panel. Structures of arbitrary anisotropy as well as geometric complexity can thus be modelled by the presented approach. Damping can also be incorporated in the calculations. Initially, a numerical continuum-discrete approach for computing the sensitivity of the acoustic wave characteristics propagating within the modelled periodic composite structure is exhibited. The first- and second-order sensitivities of the acoustic transmission coefficient expressed within a Statistical Energy Analysis context are subsequently derived as a function of the computed acoustic wave characteristics. Having formulated the gradient vector as well as the Hessian matrix, the optimal mechanical and geometric characteristics satisfying the considered mass, stiffness and vibroacoustic performance criteria are sought by employing Newton's optimization method.
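Once the gradient vector and Hessian matrix are assembled, the Newton iteration used in the final optimization stage is standard; a minimal sketch on a stand-in quadratic objective (not the vibroacoustic cost itself):

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method: at each step solve H(x) dx = -g(x) and update.
    Requires the gradient and Hessian of the objective, exactly the
    quantities the sensitivity analysis provides."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)
    return x

# Stand-in objective f(x, y) = (x - 1)^2 + 10*(y + 2)^2, minimum at (1, -2).
grad = lambda v: np.array([2.0 * (v[0] - 1.0), 20.0 * (v[1] + 2.0)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 20.0]])
x_opt = newton(grad, hess, [5.0, 5.0])
```

For a quadratic objective the iteration converges in a single step; on the real nonconvex design problem, second-order information buys fast local convergence near a design optimum.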

  18. The Optimized Operation of Gas Turbine Combined Heat and Power Units Oriented for the Grid-Connected Control

    NASA Astrophysics Data System (ADS)

    Xia, Shu; Ge, Xiaolin

    2016-04-01

In this study, according to various grid-connected demands, optimization scheduling models of Combined Heat and Power (CHP) units are established with three scheduling modes: tracking the total generation schedule, tracking a steady output schedule, and tracking a peaking curve. In order to reduce the solution difficulty, linearizing techniques based on integer-algebra principles are developed to handle the complex nonlinear constraints of the variable conditions, and the optimized operation problem of CHP units is converted into a mixed-integer linear programming problem. Finally, with specific examples, the 96-point day-ahead heat and power supply plans of the systems are optimized. The results show that the proposed models and methods can develop appropriate coordinated heat and power optimization programs according to different grid-connected control requirements.
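The structure of the resulting problem (on/off and output decisions per unit, linear cost, coupled heat and power demand constraints) can be illustrated on a toy two-unit instance; exhaustive enumeration over a coarse output grid replaces the MILP solver so the sketch stays self-contained, and all unit data and demands are invented.

```python
import itertools

# Toy single-period unit commitment: choose outputs for two CHP units so
# that heat and power demands are both covered at minimum fuel cost.
units = [  # (min_mw, max_mw, heat_per_mw, cost_per_mwh) -- illustrative
    (10, 50, 1.2, 30.0),
    (20, 80, 0.8, 25.0),
]
demand_power, demand_heat = 90.0, 80.0

def output_options(lo, hi):
    return [0.0] + [float(p) for p in range(lo, hi + 1, 5)]  # 0.0 = unit off

best_cost, best_plan = float("inf"), None
for plan in itertools.product(*(output_options(u[0], u[1]) for u in units)):
    power = sum(plan)
    heat = sum(p * u[2] for p, u in zip(plan, units))
    if power >= demand_power and heat >= demand_heat:
        cost = sum(p * u[3] for p, u in zip(plan, units))
        if cost < best_cost:
            best_cost, best_plan = cost, plan
```

A real 96-point day-ahead model repeats this structure per time step with ramping and grid-tracking constraints, which is why a MILP solver is needed in practice.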

  19. Design, Performance and Optimization for Multimodal Radar Operation

    PubMed Central

    Bhat, Surendra S.; Narayanan, Ram M.; Rangaswamy, Muralidhar

    2012-01-01

This paper describes the underlying methodology behind an adaptive multimodal radar sensor that is capable of progressively optimizing its range resolution depending upon the target scattering features. It consists of a test-bed that enables the generation of linear frequency modulated waveforms of various bandwidths. This paper discusses a theoretical approach to optimizing the bandwidth used by the multimodal radar, as well as the various experimental results obtained from measurements. The resolution predicted from theory agrees quite well with that obtained from experiments for different target arrangements.
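The bandwidth-resolution trade-off driving the adaptation is the standard chirp relation delta_R = c / (2B): doubling the swept bandwidth halves the range resolution cell. A two-line sketch:

```python
# Range resolution of a linear frequency modulated (chirp) waveform.
C_LIGHT = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """delta_R = c / (2 * B): finer resolution needs wider bandwidth."""
    return C_LIGHT / (2.0 * bandwidth_hz)

res = range_resolution_m(150e6)  # a 150 MHz chirp resolves ~1 m
```

Progressively widening B until the target's scattering centers separate is the adaptation loop the sensor implements.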

  20. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  1. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  2. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  3. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  4. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  5. An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification

    PubMed Central

    Samal, Ashok; Rong, Panying; Green, Jordan R.

    2016-01-01

    Purpose The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. Results When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (including T2: the tongue-body front; and T3: the tongue-body front). Conclusion We identified a 4-sensor set—that is, T1, T4, UL, LL—that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements. PMID:26564030
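The evaluation loop above, classifying the same stimuli from different sensor subsets and comparing accuracies, can be sketched on synthetic data; a nearest-centroid rule stands in for the paper's support-vector machine so the example needs no external libraries, and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for articulograph data: 3 classes ("phonemes"),
# 6 sensors x 2 coordinates = 12 features.
n_per, n_feat = 60, 12
means = rng.normal(0, 2, size=(3, n_feat))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per, n_feat)) for c in range(3)])
y = np.repeat(np.arange(3), n_per)

def centroid_accuracy(X, y, feature_idx):
    """Classify by nearest class centroid using only the given features."""
    Xs = X[:, feature_idx]
    cents = np.stack([Xs[y == c].mean(axis=0) for c in range(3)])
    pred = np.argmin(((Xs[:, None, :] - cents[None]) ** 2).sum(axis=2), axis=1)
    return (pred == y).mean()

acc_all = centroid_accuracy(X, y, list(range(12)))
acc_subset = centroid_accuracy(X, y, [0, 1, 6, 7])  # e.g. two sensors' coordinates
```

Sweeping `feature_idx` over all sensor combinations and ranking the accuracies is the subset-selection comparison the study performs with its SVM.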

  6. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.

  7. An optimal operational advisory system for a brewery's energy supply plant

    SciTech Connect

Ito, K.; Shiba, T.; Yokoyama, R. (Dept. of Energy Systems Engineering); Sakashita, S. (Mayekawa Energy Management Research Center)

    1994-03-01

An optimal operational advisory system is proposed to operate a brewery's energy supply plant rationally from an economic viewpoint. A mixed-integer linear programming problem is formulated so as to minimize the daily operational cost subject to constraints such as equipment performance characteristics, energy supply-demand relations, and some practical operational restrictions. This problem includes many unknown variables, and a hierarchical approach is adopted to derive numerical solutions. The optimal solution obtained by this method is presented to the plant operators to support their decision making. Through a numerical study for a real brewery plant, the possibility of reducing the operational cost is ascertained.

  8. Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Paiva, Kleber; Mantelli, Marcia

    2011-01-01

Loop heat pipes (LHPs) have been used for thermal control of several NASA and commercial orbiting spacecraft. The LHP operating temperature is governed by the saturation temperature of its compensation chamber (CC). Most LHPs use the CC temperature for feedback control of the operating temperature. There exists a thermal resistance between the heat source to be cooled by the LHP and the LHP's CC. Even if the CC set point temperature is controlled precisely, the heat source temperature will still vary with its heat output. For most applications, controlling the heat source temperature is of primary interest. A logical question to ask is: "Can the heat source temperature be used for feedback control of the LHP operation?" A test program has been implemented to answer the above question. The objective is to investigate the LHP performance using the CC temperature and the heat source temperature for feedback control

  9. Science Operations for the 2008 NASA Lunar Analog Field Test at Black Point Lava Flow, Arizona

    NASA Technical Reports Server (NTRS)

Garry, W. D.; Horz, F.; Lofgren, G. E.; Kring, D. A.; Chapman, M. G.; Eppler, D. B.; Rice, J. W., Jr.; Nelson, J.; Gernhardt, M. L.; Walheim, R. J.

    2009-01-01

Surface science operations on the Moon will require merging lessons from Apollo with new operation concepts that exploit the Constellation Lunar Architecture. Prototypes of lunar vehicles and robots are already under development and will change the way we conduct science operations compared to Apollo. To prepare for future surface operations on the Moon, NASA, along with several supporting agencies and institutions, conducted a high-fidelity lunar mission simulation with prototypes of the small pressurized rover (SPR) and unpressurized rover (UPR) (Fig. 1) at Black Point lava flow (Fig. 2), 40 km north of Flagstaff, Arizona, from Oct. 19-31, 2008. This field test was primarily intended to evaluate and compare the surface mobility afforded by unpressurized and pressurized rovers, the latter critically depending on the innovative suit-port concept for efficient egress and ingress. The UPR vehicle transports two astronauts who remain in their EVA suits at all times, whereas the SPR concept enables astronauts to remain in a pressurized shirt-sleeve environment during long translations and while making contextual observations, and enables rapid (less than or equal to 10 minutes) transfer to and from the surface via suit-ports. A team of field geologists provided realistic science scenarios for the simulations and served as crew members, field observers, and operators of a science backroom. Here, we present a description of the science team's operations and lessons learned.

  10. Optimization strategy integrity for watershed agricultural non-point source pollution control based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yu, Y. J.; Zhang, W. Y.

    2016-08-01

This study has established a set of methodological systems for the optimization of watershed non-point source pollution control by simulating loads and analyzing optimization strategy integrity. First, the sources of watershed agricultural non-point source pollution are divided into four aspects: agricultural land, natural land, livestock breeding, and rural residential land. Secondly, different pollution control measures at the source, midway and ending stages are chosen. Thirdly, the optimization effect of pollution load control in the three stages is simulated, based on Monte Carlo simulation. The method described above is applied to the Ashi River watershed in Heilongjiang Province of China. Case study results indicate that the combined three types of control measures can be implemented only if the government promotes the optimized plan and gradually improves implementation efficiency. This method for optimizing strategy integrity for watershed non-point source pollution control has significant reference value.
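The staged Monte Carlo idea above can be sketched as repeated sampling of uncertain removal efficiencies for the source, midway, and ending control stages; the baseline load and efficiency ranges below are invented, not the Ashi River values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each control stage removes an uncertain fraction of the remaining load.
baseline_load = 1000.0  # t/yr, hypothetical
stages = {"source": (0.20, 0.40), "midway": (0.10, 0.30), "ending": (0.15, 0.35)}

n = 10_000                               # Monte Carlo realizations
remaining = np.full(n, baseline_load)
for lo, hi in stages.values():
    remaining *= 1.0 - rng.uniform(lo, hi, size=n)

mean_final = remaining.mean()            # expected residual load
p90 = np.percentile(remaining, 90)       # pessimistic (90th percentile) case
```

Comparing `mean_final` and `p90` across candidate measure combinations is how a robust, rather than merely average-case, control strategy would be ranked.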

  11. Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Paiva, Kleber; Mantelli, Marcia

    2011-01-01

The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly accomplished by cold biasing the reservoir and using electrical heaters to provide the required control power. Using this method, the loop operating temperature can be controlled within +/- 0.5K. However, because of the thermal resistance that exists between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if the LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of primary interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP has been carried out to investigate the effects on the LHP operation when the control temperature sensor is placed on the heat source instead of the reservoir. In these tests, the LHP reservoir is cold-biased and is heated by a control heater. Test results show that it is feasible to use the heat source temperature for feedback control of the LHP operation. Using this method, the heat source temperature can be maintained within a tight range for moderate and high powers. At low powers, however, temperature oscillations may occur due to interactions among the reservoir control heater power, the heat source mass, and the heat output from the heat source. In addition, the heat source temperature could temporarily deviate from its set point during fast thermal transients. The implication is that more sophisticated feedback control algorithms need to be implemented for LHP transient operation when the heat source temperature is used for feedback control.
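The scheme under test, placing the control sensor on the heat source and trimming the cold-biased reservoir with a control heater, can be sketched with a first-order thermal model and a proportional-integral controller; every parameter below is illustrative, not from the miniature LHP.

```python
# Toy model: heat source (capacitance C, dissipation Q) rejects heat to the
# loop through a fixed resistance R; the reservoir (CC) temperature T_cc is
# trimmed by a PI controller acting on the HEAT SOURCE temperature T.
C, R = 200.0, 0.05           # J/K, K/W
Q, T_set = 400.0, 310.0      # W, K
dt, n = 0.1, 20000           # explicit Euler time step and step count

T = 300.0                    # heat source temperature, K
T_cc = 295.0                 # reservoir saturation temperature, K
kp, ki = 0.5, 0.02           # illustrative controller gains
integ = 0.0
for _ in range(n):
    err = T_set - T
    integ += err * dt
    T_cc = 295.0 + kp * err + ki * integ   # control heater biases the CC
    T += dt / C * (Q - (T - T_cc) / R)     # source energy balance
```

At steady state the integral term settles so that T_cc = T_set - Q*R, i.e. the controller automatically absorbs the conductive temperature drop the abstract describes; at low gains or large source mass the same loop reproduces the oscillation risk noted in the tests.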

  12. Existence, stability and optimality for optimal control problems governed by maximal monotone operators

    NASA Astrophysics Data System (ADS)

    Briceño-Arias, Luis M.; Hoang, Nguyen Dinh; Peypouquet, Juan

    2016-01-01

We study optimal control problems governed by maximal monotone differential inclusions with mixed control-state constraints in infinite dimensional spaces. We obtain some existence results for this kind of dynamics and construct discrete approximations that allow us to strongly approximate optimal solutions of the continuous-type optimal control problems by their discrete counterparts. Our approach allows us to apply our results to a wide class of mappings arising in mechanics and materials science.
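A minimal concrete instance of approximating a maximal monotone dynamic by its discrete counterpart is the proximal-point (implicit Euler) scheme for the inclusion x'(t) in -∂|x(t)|, whose step is the soft-thresholding map; this is a toy illustration of the discretization idea, not the constrained infinite-dimensional setting of the paper.

```python
def soft_threshold(x, h):
    """prox_{h|.|}(x): the implicit Euler step for x' in -d|x|,
    shrinking x toward 0 by h and stopping exactly at 0."""
    return max(abs(x) - h, 0.0) * (1.0 if x > 0 else -1.0)

h = 0.1                 # discretization step
traj = [2.05]           # initial state
for _ in range(30):
    traj.append(soft_threshold(traj[-1], h))
```

The iterates decrease in magnitude by h per step and reach the equilibrium x = 0 in finitely many steps, mirroring the strong convergence of discrete trajectories to the continuous monotone flow.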

  13. Interactive method for planning constrained, fuel-optimal orbital proximity operations

    NASA Technical Reports Server (NTRS)

    Abramovitz, Adrian; Grunwald, Arthur J.

    1993-01-01

An interactive graphical method for planning fuel-efficient rendezvous trajectories in the multi-spacecraft environment of the space station is presented. The method allows the operator to compose a multi-burn transfer trajectory between arbitrary initial chaser and target trajectories. The available task time of the mission is limited, and the maneuver is subject to various operational constraints, such as departure, arrival, plume impingement, and spatial constraints. The maneuvers are described in terms of the relative motion experienced in a space-station-centered coordinate system. The optimization method is based on the primer vector and its extension to non-optimal trajectories. The visual feedback of trajectory shapes, operational constraints, and optimization functions, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes which rapidly converge to fuel-efficient solutions. The optimization functions are presented. A variety of simple design examples is presented to demonstrate the usefulness of the method. In many cases the addition of a properly positioned intermediate waypoint resulted in fuel savings of up to 30%. Furthermore, due to the counter-intuitive character of the optimization functions, most fuel-optimal solutions could not have been found without the aid of the optimization tools. Operating the system was found to be very easy and did not require any previous in-depth knowledge of orbital dynamics or trajectory planning. The planning tool is an example of operator-assisted optimization of nonlinear cost functions.

  14. TH-C-19A-11: Toward An Optimized Multi-Point Scintillation Detector

    SciTech Connect

    Duguay-Drouin, P; Delage, ME; Therriault-Proulx, F; Beddar, S; Beaulieu, L

    2014-06-15

Purpose: The purpose of this work is to characterize the optical chain of 2-point mPSDs using a spectral analysis to help select the optimal components for the detector. Methods: Twenty different 2-point mPSD combinations were built using 4 plastic scintillators (BCF10, BCF12, BCF60, BC430; St-Gobain) and quantum dots (QDs). The scintillator is said to be proximal when near the photodetector, and distal otherwise. A 15m optical fiber (ESKA GH-4001) was coupled to the scintillating component and connected to a spectrometer (Shamrock, Andor and QEPro, OceanOptics). These scintillation components were irradiated at 125kVp; a spectrum for each scintillator was obtained by irradiating the individual scintillator while shielding the second component, thus taking into account light propagation in all components and interfaces. The combined total spectrum was also acquired and involved simultaneous irradiation of the two scintillators for each possible combination. The shape and intensity were characterized. Results: QDs in the proximal position absorb almost all the light signal from distal plastic scintillators and emit at their own emission wavelength, with 100% of the signal in the QD range (625-700nm) for the combination BCF12/QD. However, discrimination is possible when the QD is in the distal position in combination with blue scintillators, the total signal being 73% in the blue range (400-550nm) and 27% in the QD range. Similar results are obtained with the orange scintillator (BC430). For optimal signal intensity, BCF12 should always be in the proximal position, e.g. having 50% more intensity when coupled with BCF60 in the distal position (BCF12/BCF60) compared to the BCF60/BCF12 combination. Conclusion: Different combinations of plastic scintillators and QDs were built and their emission spectra were studied. We established a preferential order for the scintillating components in the context of an optimized 2-point mPSD. In short, the components with higher wavelength emission spectrum
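The spectral discrimination the detector relies on, separating the two scintillators' contributions from one measured spectrum, amounts to linear unmixing; the Gaussian emission shapes below are illustrative stand-ins for the measured BCF/QD spectra, and the mixing weights are invented.

```python
import numpy as np

# Wavelength grid (nm) and two illustrative emission spectra: a blue
# plastic scintillator near 450 nm and a QD emitter near 650 nm.
wl = np.linspace(400, 700, 301)

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

S = np.column_stack([gaussian(450, 25), gaussian(650, 20)])  # basis spectra
true_w = np.array([0.73, 0.27])                              # dose-proportional weights
measured = S @ true_w + np.random.default_rng(4).normal(0, 0.005, wl.size)

# Least-squares unmixing recovers each scintillator's contribution.
w_hat, *_ = np.linalg.lstsq(S, measured, rcond=None)
```

Unmixing only works when the component spectra are distinguishable, which is exactly why the proximal-QD configurations that absorb and re-emit all distal light defeat the discrimination.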

  15. Optimization of the Nano-Dust Analyzer (NDA) for operation under solar UV illumination

    NASA Astrophysics Data System (ADS)

    O'Brien, L.; Grün, E.; Sternovsky, Z.

    2015-12-01

    The performance of the Nano-Dust Analyzer (NDA) instrument is analyzed for close pointing to the Sun, finding the optimal field-of-view (FOV), arrangement of internal baffles, and measurement requirements. The laboratory version of the NDA instrument was recently developed (O'Brien et al., 2014) for the detection and elemental composition analysis of nano-dust particles. These particles are generated near the Sun by the collisional breakup of interplanetary dust particles (IDP), and delivered to Earth's orbit through interaction with the magnetic field of the expanding solar wind plasma. NDA operates on the basis of impact ionization of the particle, collecting the generated ions in a time-of-flight fashion. The challenge in the measurement is that nano-dust particles arrive from a direction close to that of the Sun, and thus the instrument is exposed to intense ultraviolet (UV) radiation. The performed optical ray-tracing analysis shows that it is possible to suppress the number of UV photons scattering into NDA's ion detector to levels that allow both high signal-to-noise ratio measurements and long-term instrument operation. Analysis results show that by avoiding direct illumination of the target, the photon flux reaching the detector is reduced by a factor of about 10³. Furthermore, by avoiding the target and also implementing a low-reflectivity coating, as well as an optimized instrument geometry consisting of an internal baffle system and a conical detector housing, the photon flux can be reduced by a factor of 10⁶, bringing it well below the operation requirement. The instrument's FOV is optimized for the detection of nano-dust particles while excluding the Sun. With the Sun in the FOV, the instrument can operate with reduced sensitivity and for a limited duration. The NDA instrument is suitable for future space missions to provide the unambiguous detection of nano-dust particles, to understand the conditions in the inner heliosphere and its temporal

  16. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud on the XY plane was carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to its independence from the scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. 
That point cloud comprises 8,000,000 points and represents a
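
A minimal sketch of the rasterization, thresholding, and morphology steps described above, run on an invented synthetic "street" (road at z = 0 m, sidewalk at z = 0.15 m) rather than on real MLS data; the cell size, threshold, and scene are illustrative assumptions only:

```python
import numpy as np

def rasterize(points, cell=0.1):
    """Project a 3D cloud onto the XY plane, keeping the max Z per cell."""
    ij = np.round(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    grid = np.zeros(ij.max(axis=0) + 1)
    for (i, j), z in zip(ij, points[:, 2]):
        grid[i, j] = max(grid[i, j], z)
    return grid

def dilate(mask):
    """3x3 binary dilation via padded shifted windows (no SciPy required)."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

# Synthetic cross-section: road surface at z = 0, sidewalk at z = 0.15 m.
xy = np.mgrid[0:40, 0:40].reshape(2, -1).T * 0.1
z = np.where(xy[:, 1] > 2.0, 0.15, 0.0)
height = rasterize(np.column_stack([xy, z]))

curb_mask = height > 0.05                 # threshold above road level
edge = dilate(curb_mask) & ~curb_mask     # morphological gradient: curb line
print("candidate curb cells:", int(edge.sum()))
```

The morphological gradient (dilation minus the mask) isolates the one-cell-wide boundary between road and sidewalk, i.e. the curb-line candidates that a later classification stage would refine.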

  17. Field-scale operation of methane biofiltration systems to mitigate point source methane emissions.

    PubMed

    Hettiarachchi, Vijayamala C; Hettiaratchi, Patrick J; Mehrotra, Anil K; Kumar, Sunil

    2011-06-01

    Methane biofiltration (MBF) is a novel low-cost technique for reducing low volume point source emissions of methane (CH₄). MBF uses a granular medium, such as soil or compost, to support the growth of methanotrophic bacteria responsible for converting CH₄ to carbon dioxide (CO₂) and water (H₂O). A field research program was undertaken to evaluate the potential to treat low volume point source engineered CH₄ emissions using an MBF at a natural gas monitoring station. A new comprehensive three-dimensional numerical model was developed incorporating advection-diffusive flow of gas, biological reactions, and heat and moisture flow. The one-dimensional version of this model was used as a guiding tool for designing and operating the MBF. The long-term monitoring results of the field MBF are also presented. The field MBF, operated with no control of precipitation, evaporation, or temperature, provided more than 80% CH₄ oxidation throughout the spring, summer, and fall seasons. The numerical model was able to predict the CH₄ oxidation behavior of the field MBF with high accuracy. Numerical model simulations are presented for estimating CH₄ oxidation efficiencies under various operating conditions, including different filter bed depths and CH₄ flux rates. The field observations as well as the numerical model simulations indicated that the long-term performance of MBFs is strongly dependent on environmental factors, such as ambient temperature and precipitation. PMID:21414700

  18. Optimal operating frequency in wireless power transmission for implantable devices.

    PubMed

    Poon, Ada S Y; O'Driscoll, Stephen; Meng, Teresa H

    2007-01-01

    This paper examines short-range wireless powering for implantable devices and shows that existing analysis techniques are not adequate to conclude the characteristics of power transfer efficiency over a wide frequency range. It shows, theoretically and experimentally, that the optimal frequency for power transmission in biological media can be in the GHz-range while existing solutions exclusively focus on the MHz-range. This implies that the size of the receive coil can be reduced by a factor of 10⁴, which enables the realization of fully integrated implantable devices. PMID:18003300

  19. Optimization of a greener method for removal phenol species by cloud point extraction and spectrophotometry

    NASA Astrophysics Data System (ADS)

    Zain, N. N. M.; Abu Bakar, N. K.; Mohamad, S.; Saleh, N. Md.

    2014-01-01

    A greener method based on cloud point extraction was developed for removing phenol species including 2,4-dichlorophenol (2,4-DCP), 2,4,6-trichlorophenol (2,4,6-TCP) and 4-nitrophenol (4-NP) in water samples by using the UV-Vis spectrophotometric method. The non-ionic surfactant DC193C was chosen as an extraction solvent due to its low water content in a surfactant rich phase and it is well-known as an environmentally-friendly solvent. The parameters affecting the extraction efficiency such as pH, temperature and incubation time, concentration of surfactant and salt, amount of surfactant and water content were evaluated and optimized. The proposed method was successfully applied for removing phenol species in real water samples.

  20. Optimization of a greener method for removal phenol species by cloud point extraction and spectrophotometry.

    PubMed

    Zain, N N M; Abu Bakar, N K; Mohamad, S; Saleh, N Md

    2014-01-24

    A greener method based on cloud point extraction was developed for removing phenol species including 2,4-dichlorophenol (2,4-DCP), 2,4,6-trichlorophenol (2,4,6-TCP) and 4-nitrophenol (4-NP) in water samples by using the UV-Vis spectrophotometric method. The non-ionic surfactant DC193C was chosen as an extraction solvent due to its low water content in a surfactant rich phase and it is well-known as an environmentally-friendly solvent. The parameters affecting the extraction efficiency such as pH, temperature and incubation time, concentration of surfactant and salt, amount of surfactant and water content were evaluated and optimized. The proposed method was successfully applied for removing phenol species in real water samples.

  1. Optimizing the rotating point spread function by SLM aided spiral phase modulation

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Bouchal, Z.

    2014-12-01

    We demonstrate the vortex point spread function (PSF) whose shape and the rotation sensitivity to defocusing can be controlled by a phase-only modulation implemented in the spatial or frequency domains. Rotational effects are studied in detail as a result of the spiral modulation carried out in discrete radial and azimuthal sections with different topological charges. As the main result, a direct connection between properties of the PSF and the parameters of the spiral mask is found and subsequently used for an optimal shaping of the PSF and control of its defocusing rotation rate. Experiments on the PSF rotation verify a good agreement with theoretical predictions and demonstrate potential of the method for applications in microscopy, tracking of particles and 3D imaging.

  2. Optimizing operational flexibility and enforcement liability in Title V permits

    SciTech Connect

    McCann, G.T.

    1997-12-31

    Now that most states have interim or full approval of the portions of their state implementation plans (SIPs) implementing Title V (40 CFR Part 70) of the Clean Air Act Amendments (CAAA), most sources which require a Title V permit have submitted or are well on the way to submitting a Title V operating permit application. Numerous hours have been spent preparing applications to ensure the administrative completeness of the application and operational flexibility for the facility. Although much time and effort has been spent on Title V permit applications, the operating permit itself is the final goal. This paper outlines the major Federal requirements for Title V permits as given in the CAAA at 40 CFR 70.6, Permit Content. These Federal requirements and how they will effect final Title V permits and facilities will be discussed. This paper will provide information concerning the Federal requirements for Title V permits and suggestions on how to negotiate a Title V permit to maximize operational flexibility and minimize enforcement liability.

  3. Cost optimization for series-parallel execution of a collection of intersecting operation sets

    NASA Astrophysics Data System (ADS)

    Dolgui, Alexandre; Levin, Genrikh; Rozin, Boris; Kasabutski, Igor

    2016-05-01

    A collection of intersecting sets of operations is considered. These sets of operations are performed successively. The operations of each set are activated simultaneously. Operation durations can be modified. The cost of each operation decreases with the increase in operation duration. In contrast, the additional expenses for each set of operations are proportional to its time. The problem of selecting the durations of all operations that minimize the total cost under constraint on completion time for the whole collection of operation sets is studied. The mathematical model and method to solve this problem are presented. The proposed method is based on a combination of Lagrangian relaxation and dynamic programming. The results of numerical experiments that illustrate the performance of the proposed method are presented. This approach was used for optimization multi-spindle machines and machining lines, but the problem is common in engineering optimization and thus the techniques developed could be useful for other applications.
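
The cost/time trade-off above can be illustrated on a toy instance. Because operation cost decreases with duration, every operation in a set optimally stretches to the set's makespan, so the problem reduces to choosing one makespan per set; the brute-force enumeration below merely stands in for the paper's Lagrangian-relaxation and dynamic-programming method, and all numbers are invented:

```python
import itertools

durations = [1, 2, 3]                  # admissible operation durations
op_cost = lambda d: 6.0 / d            # operation cost falls as duration grows
n_ops = [2, 3]                         # operations in each (non-intersecting) set
overhead = [1.0, 2.0]                  # per-time-unit expense of each set
T = 5                                  # completion-time budget for the series

best = (float("inf"), None)
for makespans in itertools.product(durations, repeat=len(n_ops)):
    if sum(makespans) > T:             # sets run successively: times add up
        continue
    # set cost = operation costs at the set's makespan + time-proportional expense
    cost = sum(n * op_cost(m) + a * m
               for n, a, m in zip(n_ops, overhead, makespans))
    best = min(best, (cost, makespans))

print("minimal cost %.1f with set makespans %s" % best)
```

Even this tiny instance shows the tension the paper formalizes: stretching a set lowers its operation costs but spends the shared time budget and raises the set's proportional expense.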

  4. Dynamical-decoupling noise spectroscopy at an optimal working point of a qubit

    NASA Astrophysics Data System (ADS)

    Cywiński, Łukasz

    2014-10-01

    I present a theory of environmental noise spectroscopy via dynamical decoupling of a qubit at an optimal working point. Considering a sequence of n pulses and pure dephasing due to quadratic coupling to Gaussian distributed noise ξ(t), I use the linked-cluster (cumulant) expansion to calculate the coherence decay. Solutions allowing for reconstruction of the spectral density of noise are given. For noise with correlation time shorter than the time scale on which coherence decays, the noise filtered by the dynamical decoupling procedure can be treated as effectively Gaussian at large n, and well-established methods of noise spectroscopy can be used to reconstruct the spectrum of the ξ²(t) noise. On the other hand, for noise of dominant low-frequency character (1/f^β noise with β > 1), an infinite-order resummation of the cumulant expansion is necessary, and it leads to an analytical formula for coherence decay having a power-law tail at long times. In this case, the coherence at time t depends both on the spectral density of the ξ(t) noise at ω = nπ/t, and on the effective low-frequency cutoff of the noise spectrum, which is typically given by the inverse of the data acquisition time. Simulations of decoherence due to purely transverse noise show that the analytical formulas derived in this paper apply in this often encountered case of an optimal working point, provided that the number of pulses is not very large, and that the longitudinal qubit splitting is much larger than the transverse noise amplitude.
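
For context, the standard pure-dephasing filter-function relation that underlies this kind of spectroscopy can be written as follows (common literature notation, not reproduced from the abstract itself):

```latex
W(t) = e^{-\chi(t)}, \qquad
\chi(t) = \frac{1}{\pi}\int_{0}^{\infty}\!\mathrm{d}\omega\,
          \frac{S(\omega)\,F_{n}(\omega t)}{\omega^{2}},
```

where $W(t)$ is the qubit coherence, $S(\omega)$ the noise spectral density, and $F_{n}$ the filter function of the $n$-pulse sequence. For CPMG-like sequences $F_{n}$ peaks near $\omega t = n\pi$, which is why the coherence at time $t$ probes the spectrum at $\omega = n\pi/t$, as stated in the abstract.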

  5. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

    The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FLO67 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free-stream Mach number, M∞, of 2.55 as well as the design Mach number, M∞ = 2.4. Data over a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Transonic and low supersonic Mach numbers, M∞ = 0.6 to 1.2, were covered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.

  6. Critical Point Facility (CPE) Group in the Spacelab Payload Operations Control Center (SL POCC)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPF) group in the SL POCC during STS-42, the IML-1 mission.

  7. Critical Point Facility (CPF) Team in the Spacelab Payload Operations Control Center (SL POCC)

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPF) team in the SL POCC during the IML-1 mission.

  8. Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS

    DOE Data Explorer

    Carlos A. Fernandez

    2014-09-15

    EGS field projects have not sustained production at rates greater than ½ of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project, to study and optimize the physicochemical properties of these fracturing fluids over a range of reservoir conditions. Barrier G is the primary GTO barrier addressed by this project, which also supports addressing barriers D, E, and I.

  9. Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS

    DOE Data Explorer

    Fernandez, Carlos A.

    2013-09-25

    EGS field projects have not sustained production at rates greater than ½ of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project, to study and optimize the physicochemical properties of these fracturing fluids over a range of reservoir conditions. Barrier G is the primary GTO barrier addressed by this project, which also supports addressing barriers D, E, and I.

  10. Optimization of the thermogauge furnace for realizing high temperature fixed points

    SciTech Connect

    Wang, T.; Dong, W.; Liu, F.

    2013-09-11

    The thermogauge furnace is commonly used in many NMIs as a blackbody source for calibration of radiation thermometers. It can also be used for realizing high temperature fixed points (HTFPs). In our experience, when realizing an HTFP the furnace must provide relatively good temperature uniformity to avoid possible damage to the HTFP. To improve temperature uniformity in the furnace, the furnace tube was machined near the tube ends, guided by a simulation analysis in ANSYS Workbench. Temperature distributions before and after optimization were measured and compared at 1300 °C, 1700 °C, and 2500 °C, which roughly correspond to Co-C (1324 °C), Pt-C (1738 °C), and Re-C (2474 °C), respectively. The results clearly indicate that machining the tube remarkably improves the temperature uniformity of the thermogauge furnace. A Pt-C high temperature fixed point was subsequently realized in the modified thermogauge furnace; the plateaus were compared with those obtained using the old heater, and the results are presented in this paper.

  11. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers

    NASA Astrophysics Data System (ADS)

    Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément

    2015-07-01

    3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
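
As a small illustration of the feature-extraction component, the sketch below computes three widely used eigenvalue-based geometric features (linearity, planarity, scattering) from a neighborhood's 3D covariance matrix; the synthetic line- and plane-shaped neighborhoods are invented test data, and this is not the paper's exact feature set:

```python
import numpy as np

def eigen_features(neighbors):
    """Linearity, planarity and scattering from the covariance eigenvalues
    (l1 >= l2 >= l3) of a 3D point neighborhood."""
    l1, l2, l3 = sorted(np.linalg.eigvalsh(np.cov(neighbors.T)), reverse=True)
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

rng = np.random.default_rng(0)
# Synthetic neighborhoods: 50 points along a line and 50 points on a plane,
# each with a little measurement noise.
line = np.column_stack([np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)])
line += rng.normal(0, 1e-3, line.shape)
plane = np.column_stack([rng.uniform(size=(50, 2)), np.zeros(50)])
plane += rng.normal(0, 1e-3, plane.shape)

print("line:  L=%.2f P=%.2f S=%.2f" % eigen_features(line))
print("plane: L=%.2f P=%.2f S=%.2f" % eigen_features(plane))
```

The three features sum to one by construction, so they act as a soft label of the neighborhood's dimensionality; this is also why the choice of neighborhood (component i of the framework) directly drives feature distinctiveness.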

  12. Optimal control of a spinning double-pyramid Earth-pointing tethered formation

    NASA Astrophysics Data System (ADS)

    Williams, Paul

    2009-06-01

    The dynamics and control of a tethered satellite formation for Earth-pointing observation missions is considered. For most practical applications in Earth orbit, a tether formation must be spinning in order to maintain tension in the tethers. It is possible to obtain periodic spinning solutions for a triangular formation whose initial conditions are close to the orbit normal. However, these solutions contain significant deviations of the satellites on a sphere relative to the desired Earth-pointing configuration. To maintain a plane of satellites spinning normal to the orbit plane, it is necessary to utilize "anchors". Such a configuration resembles a double-pyramid. In this paper, control of a double-pyramid tethered formation is studied. The equations of motion are derived in a floating orbital coordinate system for the general case of an elliptic reference orbit. The motion of the satellites is derived assuming inelastic tethers that can vary in length in a controlled manner. Cartesian coordinates in a rotating reference frame attached to the desired spin frame provide a simple means of expressing the equations of motion, together with a set of constraint equations for the tether tensions. Periodic optimal control theory is applied to the system to determine sets of controlled periodic trajectories by varying the lengths of all interconnecting tethers (nine in total), as well as retrieval and simple reconfiguration trajectories. A modal analysis of the system is also performed using a lumped mass representation of the tethers.

  13. 77 FR 66641 - In the Matter of Entergy Nuclear Operations, Inc.; Entergy Nuclear Indian Point 2, LLC; Entergy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY... the Matter of Entergy Nuclear Operations, Inc.; Entergy Nuclear Indian Point 2, LLC; Entergy Nuclear Indian Point 3, LLC; Indian Point Nuclear Generating, Units 1, 2, and 3; Director's Decision...

  14. Optimal two-point static calibration of measurement systems with quadratic response

    SciTech Connect

    Pallas-Areny, Ramon; Jordana, Josep; Casas, Oscar

    2004-12-01

    Measurement devices and instruments must be calibrated after manufacture to correct for component and assembly tolerances, and periodically to correct for drift and aging effects. The number of reference inputs needed for calibration depends on the actual transfer characteristic and the desired accuracy. Often, a linear characteristic is assumed for simplicity, either for the overall input range (global linearization) or for successive input subranges (piecewise linearization). Thus, only two reference inputs are needed for each straight line. This two-point static calibration can be easily implemented in any system having some basic computation capability and allows for the correction of zero and gain errors, and of their drifts if the system is periodically calibrated. Often, the reference inputs for that calibration are the end values of the measurement range (or subrange). However, this is not always the optimal selection because the calibration error is minimal for those reference inputs only, which are not necessarily the most relevant inputs for the system being considered. This article proposes three optimization criteria for the selection of calibration points: limiting the maximal error (LME), minimizing the integral square error (ISE), and minimizing the integral absolute error (IAE). Each of these criteria needs reference inputs whose values are symmetrical with respect to the midrange input (x_c), have the form x_c ± Δx/(2√n) when the measurand has a uniform probability distribution function, Δx being the measurement span, and do not depend on the nonlinearity of the actual response, provided this is quadratic. The factor n depends on the particular criterion selected: n=2 for LME, n=3 for ISE, and n=4 for IAE. These three criteria give parallel calibration lines and can also be applied to other nonlinear responses by dividing the measurement span into convenient intervals. The application of those criteria to the
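
A numerical check of the placement rule quoted above, run on an invented quadratic response over a 0-10 span (the coefficients are illustrative, not from the article):

```python
import numpy as np

# Quadratic response y = a + b*x + c*x**2; calibration inputs sit at
# x_c ± Δx/(2*sqrt(n)), with n = 2 (LME), 3 (ISE) or 4 (IAE).
a, b, c = 0.1, 2.0, 0.05
f = lambda x: a + b * x + c * x ** 2

x_lo, x_hi = 0.0, 10.0
x_c, dx = (x_lo + x_hi) / 2, x_hi - x_lo   # midrange input and span

def calibration_line(n):
    """Straight line through the two optimal calibration points for factor n."""
    x1 = x_c - dx / (2 * np.sqrt(n))
    x2 = x_c + dx / (2 * np.sqrt(n))
    gain = (f(x2) - f(x1)) / (x2 - x1)
    return lambda x: f(x1) + gain * (x - x1)

x = np.linspace(x_lo, x_hi, 1001)
for n, name in [(2, "LME"), (3, "ISE"), (4, "IAE")]:
    err = f(x) - calibration_line(n)(x)
    print(f"{name}: max |error| over span = {np.abs(err).max():.3f}")
```

Because the points are symmetric about x_c, every choice of n yields the same gain b + 2c·x_c, reproducing the article's observation that the three criteria give parallel calibration lines; for LME the residual error c(x - x1)(x - x2) reaches the same magnitude c·Δx²/8 at both ends and at midrange.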

  15. Optimal two-point static calibration of measurement systems with quadratic response

    NASA Astrophysics Data System (ADS)

    Pallàs-Areny, Ramon; Jordana, Josep; Casas, Óscar

    2004-12-01

    Measurement devices and instruments must be calibrated after manufacture to correct for component and assembly tolerances, and periodically to correct for drift and aging effects. The number of reference inputs needed for calibration depends on the actual transfer characteristic and the desired accuracy. Often, a linear characteristic is assumed for simplicity, either for the overall input range (global linearization) or for successive input subranges (piecewise linearization). Thus, only two reference inputs are needed for each straight line. This two-point static calibration can be easily implemented in any system having some basic computation capability and allows for the correction of zero and gain errors, and of their drifts if the system is periodically calibrated. Often, the reference inputs for that calibration are the end values of the measurement range (or subrange). However, this is not always the optimal selection because the calibration error is minimal for those reference inputs only, which are not necessarily the most relevant inputs for the system being considered. This article proposes three optimization criteria for the selection of calibration points: limiting the maximal error (LME), minimizing the integral square error (ISE), and minimizing the integral absolute error (IAE). Each of these criteria needs reference inputs whose values are symmetrical with respect to the midrange input (xc), have the form xc±Δx/(2√n) when the measurand has a uniform probability distribution function, Δx being the measurement span, and do not depend on the nonlinearity of the actual response, provided this is quadratic. The factor n depends on the particular criterion selected: n=2 for LME, n=3 for ISE, and n=4 for IAE. These three criteria give parallel calibration lines and can also be applied to other nonlinear responses by dividing the measurement span into convenient intervals. The application of those criteria to the linearization of a type

  16. Enabling a viable technique for the optimization of LNG carrier cargo operations

    NASA Astrophysics Data System (ADS)

    Alaba, Onakoya Rasheed; Nwaoha, T. C.; Okwu, M. O.

    2016-09-01

    In this study, we optimize the loading and discharging operations of the Liquefied Natural Gas (LNG) carrier. First, we identify the required precautions for LNG carrier cargo operations. Next, we prioritize these precautions using the analytic hierarchy process (AHP) and experts' judgments, in order to optimize the operational loading and discharging exercises of the LNG carrier, prevent system failure and human error, and reduce the risk of marine accidents. Thus, the objective of our study is to increase the level of safety during cargo operations.

  17. Enabling a viable technique for the optimization of LNG carrier cargo operations

    NASA Astrophysics Data System (ADS)

    Alaba, Onakoya Rasheed; Nwaoha, T. C.; Okwu, M. O.

    2016-07-01

    In this study, we optimize the loading and discharging operations of the Liquefied Natural Gas (LNG) carrier. First, we identify the required precautions for LNG carrier cargo operations. Next, we prioritize these precautions using the analytic hierarchy process (AHP) and experts' judgments, in order to optimize the operational loading and discharging exercises of the LNG carrier, prevent system failure and human error, and reduce the risk of marine accidents. Thus, the objective of our study is to increase the level of safety during cargo operations.

  18. Methods and devices for optimizing the operation of a semiconductor optical modulator

    DOEpatents

    Zortman, William A.

    2015-07-14

    A semiconductor-based optical modulator includes a control loop to control and optimize the modulator's operation for relatively high data rates (above 1 GHz) and/or relatively high voltage levels. Both the amplitude of the modulator's driving voltage and the bias of the driving voltage may be adjusted using the control loop. Such adjustments help to optimize the operation of the modulator by reducing the number of errors present in a modulated data stream.

  19. Rod pumping optimization program reduces equipment failures and operating costs

    SciTech Connect

    Allen, L.F.; Svinos, J.G.

    1984-09-01

    In 1975, an intensive program was initiated by Gulf Oil E&P Central Area to reduce rod and tubing failure rates in the fields of the northwest corner of Crane County, Texas. Chronologically, the program steps were: the replacement of rod strings experiencing three failures in three months; the replacement of tubing strings experiencing two failures in three months; the use of inspected, classified, and plastic-coated new or used grade "C" rods; the use of inspected, classified, and internally plastic-coated used or new tubing; the exclusive use of high working stress rods; and the exclusive use of specially designed fiberglass sucker rod systems with improved sinker bar design. This program reduced rod failure rates from 16% to 4% and tubing failures from 7% to 3% per month. The lighter rod design reduced lifting costs by $2 MM per year on 880 active wells. Of the 219 wells equipped with fiberglass sucker rods in the last two years, there have been no operational body breaks or tubing leaks.

  20. Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point.

    PubMed

    Sánchez, Joan-Pau; McInnes, Colin R

    2015-01-01

    Within the context of anthropogenic climate change, but also considering the Earth's natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth's radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth's climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes. PMID:26309047
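The scale of shading involved can be sanity-checked with a zero-dimensional energy-balance model, a far cruder tool than the paper's globally resolved model; the effective emissivity below is an assumed value tuned to a ~288 K present-day baseline.

```python
# Zero-dimensional energy-balance sketch of sunshade geoengineering.
S0 = 1361.0        # solar constant, W/m^2
alpha = 0.3        # planetary albedo
sigma = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
eps = 0.61         # effective emissivity (greenhouse effect lumped in; assumed)

def mean_temperature(shade_fraction):
    """Global mean surface temperature with a fraction of sunlight occulted."""
    absorbed = (1.0 - shade_fraction) * S0 * (1.0 - alpha) / 4.0
    return (absorbed / (eps * sigma)) ** 0.25

# Occulting ~1.7% of incident sunlight cools this crude model by roughly 1.2 K.
delta_t = mean_temperature(0.0) - mean_temperature(0.017)
```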

  3. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  4. System and method of cylinder deactivation for optimal engine torque-speed map operation

    DOEpatents

    Sujan, Vivek A; Frazier, Timothy R; Follen, Kenneth; Moon, Suk-Min

    2014-11-11

    This disclosure provides a system and method for determining cylinder deactivation in a vehicle engine to optimize fuel consumption while providing the desired or demanded power. In one aspect, data indicative of terrain variation is utilized in determining a vehicle target operating state. An optimal active-cylinder distribution and corresponding fueling are determined from a recommendation, by a supervisory agent monitoring the vehicle's operating state, of a subset of the total number of cylinders, together with a determination of which number of cylinders provides the optimal fuel consumption. Once the optimal cylinder number is determined, a transmission gear shift recommendation is provided in view of the determined active cylinder distribution and target operating state.
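The core decision, which number of active cylinders minimizes fuel at the demanded power, can be sketched as a one-dimensional search over cylinder counts. The fuel model below is assumed for illustration, not the disclosure's calibrated engine maps.

```python
# Illustrative fuel model (assumed): with k active cylinders delivering total
# power P, each cylinder runs at higher load where brake-specific fuel
# consumption (BSFC) is better, but parasitic losses grow with k.
def fuel_rate(k, power_kw, n_total=6):
    """Fuel use in kg/h for k active cylinders at a demanded power."""
    per_cyl = power_kw / k
    if per_cyl > 40.0:                         # per-cylinder power limit
        return float("inf")
    bsfc = 200.0 + 2.0 * abs(per_cyl - 30.0)   # g/kWh, best near 30 kW/cyl
    parasitic = 0.5 * k                        # kW-equivalent friction loss
    return bsfc * (power_kw + parasitic) / 1000.0

def best_cylinder_count(power_kw, n_total=6):
    return min(range(1, n_total + 1),
               key=lambda k: fuel_rate(k, power_kw, n_total))
```

At low demand the search deactivates most cylinders to keep the active ones near their efficiency sweet spot; at high demand the per-cylinder power limit forces more cylinders on.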

  5. A three-stage Stirling pulse tube cryocooler operating below the critical point of helium-4

    NASA Astrophysics Data System (ADS)

    Qiu, L. M.; Cao, Q.; Zhi, X. Q.; Gan, Z. H.; Yu, Y. B.; Liu, Y.

    2011-10-01

    Precooled phase shifters can significantly enhance the phase shift effect and further improve the performance of pulse tube cryocoolers. A separate three-stage Stirling pulse tube cryocooler (SPTC) with a cold inertance tube was designed and fabricated. Helium-4, instead of the rare helium-3, was used as the working fluid. The cryocooler reached a bottom temperature of 4.97 K with a net cooling power of 25 mW at 6.0 K. The operating frequency was 29.9 Hz and the charging pressure was 0.91 MPa. This is the first time that a refrigeration temperature below the critical point of helium-4 has been obtained in a three-stage Stirling pulse tube cryocooler.

  6. Point-of-care ultrasonography during rescue operations on board a Polish Medical Air Rescue helicopter.

    PubMed

    Darocha, Tomasz; Gałązkowski, Robert; Sobczyk, Dorota; Żyła, Zbigniew; Drwiła, Rafał

    2014-12-01

    Point-of-care ultrasound examination has been increasingly widely used in pre-hospital care. The use of ultrasound in rescue medicine allows for quick differential diagnosis, identification of the most important medical emergencies and immediate introduction of targeted treatment. Performing and interpreting a pre-hospital ultrasound examination can improve the accuracy of diagnosis and thus reduce mortality. This paper presents the authors' own experience of using a portable, hand-held ultrasound apparatus during rescue operations on board a Polish Medical Air Rescue helicopter. The availability of an ultrasound apparatus during helicopter rescue service allows for a full professional evaluation of the patient's health condition and enables the patient to be brought to a center with the most appropriate facilities for their condition. PMID:26674604

  7. On the fixed points of monotonic operators in the critical case

    NASA Astrophysics Data System (ADS)

    Engibaryan, N. B.

    2006-10-01

    We consider the problem of constructing positive fixed points x of monotonic operators φ acting on a cone K in a Banach space E. We assume that ‖φx‖ ≤ ‖x‖ + γ, γ > 0, for all x ∈ K. In the case when φ has a so-called non-trivial dissipation functional we construct a solution in an extension of E, which is a Banach space or a Fréchet space. We consider examples in which we prove the solubility of a conservative integral equation on the half-line with a sum-difference kernel, and of a non-linear integral equation of Urysohn type in the critical case.
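The method of successive approximations behind such fixed-point constructions is easy to demonstrate on a discretized Urysohn-type equation; the kernel and nonlinearity below are illustrative choices, not the paper's.

```python
import numpy as np

# Monotone operator on the cone of non-negative functions: a discretized
# Urysohn-type equation on [0, 1],
#   x(t) = 0.1 + integral of k(t, s) * sqrt(x(s)) ds,
# with k(t, s) = 0.5 * exp(-|t - s|)  (illustrative kernel/nonlinearity).
n = 200
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
K = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]))

def phi(x):
    return 0.1 + (K @ np.sqrt(x)) * h

# Successive approximations from the cone's vertex: the iterates increase
# toward the fixed point because phi is monotone and (here) contracting.
x = np.zeros(n)
for _ in range(200):
    x_new = phi(x)
    if np.max(np.abs(x_new - x)) < 1e-12:
        x = x_new
        break
    x = x_new

residual = np.max(np.abs(x - phi(x)))
```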

  8. Using Interior Point Method Optimization Techniques to Improve 2- and 3-Dimensional Models of Earth Structures

    NASA Astrophysics Data System (ADS)

    Zamora, A.; Gutierrez, A. E.; Velasco, A. A.

    2014-12-01

    2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can highly impact the resulting model. Through the implementation of an interior-point method constrained optimization technique, we improve the 2-D and 3-D models of Earth structures representing known density contrasts mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by getting rid of unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible) given the reduction of the solution space.
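The flavor of bound-constrained gravity inversion can be sketched with SciPy's trust-constr solver, which handles inequality constraints with an interior-point-style approach: recover an anomalous body's density contrast and depth from synthetic gravity data, with bounds keeping the solution geologically feasible. The buried-sphere geometry and all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, Bounds

G = 6.674e-11                          # gravitational constant, SI
R = 50.0                               # sphere radius, m (assumed known)
x = np.linspace(-500.0, 500.0, 101)    # profile coordinates, m

def gz_microgal(rho, z):
    """Vertical gravity of a buried sphere along the profile, in microGal."""
    mass = 4.0 / 3.0 * np.pi * R**3 * rho
    return 1e8 * G * mass * z / (x**2 + z**2) ** 1.5

obs = gz_microgal(500.0, 200.0)        # synthetic "observed" data

def misfit(p):
    rho, z = p
    return np.sum((gz_microgal(rho, z) - obs) ** 2)

# Bounds act as the feasibility constraints: density contrast and depth
# must stay in plausible ranges.
res = minimize(misfit, x0=[300.0, 150.0], method="trust-constr",
               bounds=Bounds([0.0, 50.0], [2000.0, 1000.0]))
rho_est, z_est = res.x
```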

  9. Leveraging Data Fusion Strategies in Multireceptor Lead Optimization MM/GBSA End-Point Methods.

    PubMed

    Knight, Jennifer L; Krilov, Goran; Borrelli, Kenneth W; Williams, Joshua; Gunn, John R; Clowes, Alec; Cheng, Luciano; Friesner, Richard A; Abel, Robert

    2014-08-12

    Accurate and efficient affinity calculations are critical to enhancing the contribution of in silico modeling during the lead optimization phase of a drug discovery campaign. Here, we present a large-scale study of the efficacy of data fusion strategies to leverage results from end-point MM/GBSA calculations in multiple receptors to identify potent inhibitors among an ensemble of congeneric ligands. The retrospective analysis of 13 congeneric ligand series curated from publicly available data across seven biological targets demonstrates that in 90% of the individual receptor structures MM/GBSA scores successfully identify subsets of inhibitors that are more potent than a random selection, and data fusion strategies that combine MM/GBSA scores from each of the receptors significantly increase the robustness of the predictions. Among nine different data fusion metrics based on consensus scores or receptor rankings, the SumZScore (i.e., converting MM/GBSA scores into standardized Z-Scores within a receptor and computing the sum of the Z-Scores for a given ligand across the ensemble of receptors) is found to be a robust and physically meaningful metric for combining results across multiple receptors. Perhaps most surprisingly, even with relatively low to modest overall correlations between SumZScore and experimental binding affinities, SumZScore tends to reliably prioritize subsets of inhibitors that are at least as potent as those that are prioritized from a "best" single receptor identified from known compounds within the congeneric series. PMID:26588291
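The SumZScore metric itself is straightforward to reproduce; the MM/GBSA scores below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical MM/GBSA scores (kcal/mol; more negative = stronger predicted
# binding) for five congeneric ligands in three receptor structures.
scores = np.array([
    [-42.1, -38.5, -40.2],
    [-35.0, -33.8, -36.1],
    [-47.3, -44.0, -45.9],
    [-30.2, -31.5, -29.8],
    [-41.8, -40.1, -39.5],
])

# SumZScore: standardize within each receptor (column), then sum the
# Z-scores for each ligand across the receptor ensemble.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
sum_z = z.sum(axis=1)
ranking = np.argsort(sum_z)   # most negative SumZScore = predicted most potent
```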

  10. Optimal point of insertion of the needle in neuraxial blockade using a midline approach: study in a geometrical model

    PubMed Central

    Vogt, Mark; van Gerwen, Dennis J; van den Dobbelsteen, John J; Hagenaars, Martin

    2016-01-01

    Performance of neuraxial blockade using a midline approach can be technically difficult. It is therefore important to optimize factors that are under the influence of the clinician performing the procedure. One of these factors might be the chosen point of insertion of the needle. Surprisingly few data exist on where between the tips of two adjacent spinous processes the needle should be introduced. A geometrical model was adopted to gain more insight into this issue. Spinous processes were represented by parallelograms. The length, the steepness relative to the skin, and the distance between the parallelograms were varied. The influence of the chosen point of insertion of the needle on the range of angles at which the epidural and subarachnoid space could be reached was studied. The optimal point of insertion was defined as the point where this range is the widest. The geometrical model clearly demonstrated that the range of angles at which the epidural or subarachnoid space can be reached depends on the point of insertion between the tips of the adjacent spinous processes. The steeper the spinous processes run, the more cranial the point of insertion should be. Assuming that the model is representative of patients, the performance of neuraxial blockade using a midline approach might be improved by choosing the optimal point of insertion. PMID:27570462
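A stripped-down version of the geometric idea can be computed directly: the usable range of insertion angles from a skin entry point is the angle subtended there by the gap between the adjacent spinous-process tips, and the optimal entry point maximizes it. The coordinates are illustrative, reducing the paper's parallelogram model to two tip points.

```python
import math

# Needle enters the skin at (x, 0) and must pass between the adjacent
# spinous-process tips A and B (coordinates in cm, invented).
A = (1.0, 4.0)   # caudal tip of the upper spinous process
B = (2.5, 4.0)   # cranial tip of the lower spinous process

def subtended_angle(x):
    """Usable range of insertion angles from entry point (x, 0), radians."""
    a = math.atan2(A[1], A[0] - x)
    b = math.atan2(B[1], B[0] - x)
    return abs(a - b)

# The optimal insertion point maximizes the subtended angle; scan the skin.
best_x = max((i / 100.0 for i in range(0, 401)), key=subtended_angle)
```

With both tips at the same depth, the widest window is directly between them, consistent with the paper's finding that the optimum shifts as the processes steepen.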

  11. Operational point of neural cardiovascular regulation in humans up to 6 months in space.

    PubMed

    Verheyden, B; Liu, J; Beckers, F; Aubert, A E

    2010-03-01

    Entering weightlessness affects central circulation in humans by enhancing venous return and cardiac output. We tested whether the operational point of neural cardiovascular regulation in space adjusts accordingly, adopting a level close to that found in the ground-based horizontal position. Heart rate (HR), finger and brachial blood pressure (BP), and respiratory frequency were collected in 11 astronauts from nine space missions. Recordings were made in supine and standing positions at least 10 days before launch and during spaceflight (days 5-19, 45-67, 77-116, 146-180). Cross-correlation analyses of HR and systolic BP were used to measure three complementary aspects of cardiac baroreflex modulation: 1) baroreflex sensitivity, 2) number of effective baroreflex estimates, and 3) baroreflex time delay. A fixed breathing protocol was performed to measure respiratory sinus arrhythmia and low-frequency power of systolic BP variability. We found that HR and mean arterial pressure did not differ from preflight supine values for up to 6 mo in space. Respiration frequency tended to decrease during prolonged spaceflight. Concerning neural markers of cardiovascular regulation, we observed in-flight adaptations toward homeostatic conditions similar to those found in the ground-based supine position. Surprisingly, this was not the case for baroreflex time delay distribution, which had somewhat longer latencies in space. Except for this finding, our results confirm that the operational point of neural cardiovascular regulation in space settles at a level close to that of an Earth-based supine position. This adaptation level suggests that circulation is chronically relaxed for at least 6 mo in space. PMID:20075261

  12. Receiver operating characteristic (ROC) to determine cut-off points of biomarkers in lung cancer patients.

    PubMed

    Weiss, Heidi L; Niwas, Santosh; Grizzle, William E; Piyathilake, Chandrika

    The role of biomarkers in disease prognosis continues to be an important investigation in many cancer studies. In order for these biomarkers to have practical application in clinical decision making regarding patient treatment and follow-up, it is common to dichotomize patients into those with low vs. high expression levels. In this study, receiver operating characteristic (ROC) curves, area under the curve (AUC) of the ROC, sensitivity, specificity, as well as likelihood ratios were calculated to determine levels of growth factor biomarkers that best differentiate lung cancer cases versus control subjects. Selected cut-off points for p185(erbB-2) and EGFR membrane appear to have good discriminating power to differentiate control tissues versus uninvolved tissues from patients with lung cancer (AUC = 89% and 90%, respectively); while AUC increased to at least 90% for selected cut-off points for p185(erbB-2) membrane, EGFR membrane, and FASE when comparing between control versus carcinoma tissues from lung cancer cases. Using data from control subjects compared to patients with lung cancer, we presented a simple and intuitive approach to determine dichotomized levels of biomarkers and validated the value of these biomarkers as surrogate endpoints for cancer outcome.
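Cut-off selection of this kind can be reproduced by scanning candidate thresholds and scoring each with Youden's J (sensitivity + specificity - 1); the biomarker values below are invented, not the study's growth-factor data.

```python
import numpy as np

# Hypothetical biomarker levels for controls vs. lung-cancer cases.
controls = np.array([1.1, 1.4, 0.9, 1.3, 1.0, 1.2, 0.8, 1.5])
cases = np.array([2.0, 1.6, 2.4, 1.8, 2.2, 1.7, 2.6, 1.9])

levels = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(controls.size), np.ones(cases.size)])

best_j, best_cut = -1.0, None
for cut in np.unique(levels):
    pred = levels >= cut                     # "high expression" call
    sens = pred[labels == 1].mean()          # true-positive rate
    spec = (~pred[labels == 0]).mean()       # true-negative rate
    j = sens + spec - 1.0                    # Youden's J statistic
    if j > best_j:
        best_j, best_cut = j, cut
```

Each scanned threshold corresponds to one point on the ROC curve; the Youden-optimal cut is the point farthest above the diagonal.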

  13. The Spectrum of a Harmonic Oscillator Operator Perturbed by Point Interactions

    NASA Astrophysics Data System (ADS)

    Mityagin, Boris S.

    2015-11-01

    We consider the operator Ly = −y″ + x²y + w(x)y, y ∈ L²(ℝ), where w(x) = s δ(x − b) + t δ(x + b), b ≠ 0 real, s, t ∈ ℂ. This operator has a discrete spectrum: eventually the eigenvalues are simple. Their asymptotics are given. In particular, if s = −t, then λₙ = (2n + 1) + s² κ(n)/n + ρ(n), where κ(n) = (1/2π)[(−1)^{n+1} sin(2b√(2n)) − (1/2) sin(4b√(2n))] and |ρ(n)| ≤ C (log n)/n^{3/2}. If s̄ = −t, the number T(s) of non-real eigenvalues is finite, and T(s) ≤ (C(1 + |s|) log(e + |s|))². The analogue of the above asymptotic is given in the case of any two-point interaction perturbation.

  14. Tethered Balloon Operations at ARM AMF3 Site at Oliktok Point, AK

    NASA Astrophysics Data System (ADS)

    Dexheimer, D.; Lucero, D. A.; Helsel, F.; Hardesty, J.; Ivey, M.

    2015-12-01

    Oliktok Point has been the home of the Atmospheric Radiation Measurement Program's (ARM) third ARM Mobile Facility, or AMF3, since October 2013. The AMF3 is operated through Sandia National Laboratories and hosts instrumentation collecting continuous measurements of clouds, aerosols, precipitation, energy, and other meteorological variables. The Arctic region is warming more quickly than any other region due to climate change, and Arctic sea ice is declining to record lows. Sparsity of atmospheric data from the Arctic leads to uncertainty in process comprehension, and atmospheric general circulation models (AGCMs) are understood to underestimate low cloud presence in the Arctic. Increased vertical resolution of meteorological properties and cloud measurements will improve process understanding and help AGCMs better characterize Arctic clouds. SNL is developing a tethered balloon system capable of regular operation at AMF3 in order to provide atmospheric data at increased vertical resolution. The tethered balloon can be operated within clouds at altitudes up to 7,000' AGL within DOE's R-2204 restricted area. Pressure, relative humidity, temperature, wind speed, and wind direction are recorded at multiple altitudes along the tether. These data were validated against stationary met tower data in Albuquerque, NM. The altitudes of the sensors were determined by GPS, calculated using a line counter and clinometer, and the two estimates compared. Wireless wetness sensors and supercooled liquid water content sensors have also been deployed and their data have been compared with those from other sensors. This presentation provides an overview of the balloons, sensors, and test flights flown, and gives a preliminary look at data from sensor validation campaigns and test flights.
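The line-counter-plus-clinometer altitude estimate mentioned above reduces to simple trigonometry; a minimal version, assuming a straight, taut tether (values illustrative):

```python
import math

def altitude_from_line(line_out_m, elevation_angle_deg):
    """Sensor height above the winch from payed-out line and elevation angle,
    assuming the tether is straight and taut (no catenary sag)."""
    return line_out_m * math.sin(math.radians(elevation_angle_deg))

# e.g. 100 m of line at a 30 degree elevation puts the sensor ~50 m up,
# which can then be compared against the GPS altitude.
est = altitude_from_line(100.0, 30.0)
```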

  15. Optimizing transformations of stencil operations for parallel cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.

    1999-06-28

    This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for 1-D tiling on a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
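The tiling transformation can be illustrated in miniature: a 5-point Jacobi sweep computed plainly and with 1-D row tiling produces identical results, since tiling only reorders the traversal for cache reuse. This NumPy sketch stands in for the authors' source-to-source C transformation.

```python
import numpy as np

def jacobi_step(u, f, h2):
    """One 5-point Jacobi sweep for the 2-D Poisson problem (plain)."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:] + h2 * f[1:-1, 1:-1])
    return new

def jacobi_step_tiled(u, f, h2, tile=16):
    """Same sweep, traversed in 1-D row tiles to improve cache reuse."""
    new = u.copy()
    n = u.shape[0]
    for i0 in range(1, n - 1, tile):
        i1 = min(i0 + tile, n - 1)
        new[i0:i1, 1:-1] = 0.25 * (u[i0-1:i1-1, 1:-1] + u[i0+1:i1+1, 1:-1] +
                                   u[i0:i1, :-2] + u[i0:i1, 2:] +
                                   h2 * f[i0:i1, 1:-1])
    return new

rng = np.random.default_rng(1)
u = rng.random((66, 66))
f = rng.random((66, 66))
ref = jacobi_step(u, f, 1e-4)
tiled = jacobi_step_tiled(u, f, 1e-4)
```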

  16. A hybrid-algorithm-based parallel computing framework for optimal reservoir operation

    NASA Astrophysics Data System (ADS)

    Li, X.; Wei, J.; Li, T.; Wang, G.

    2012-12-01

    To date, various optimization models have been developed to offer optimal operating policies for reservoirs. Each optimization model has its own merits and limitations, and no general algorithm exists even today. At times, some optimization models have to be combined to obtain desired results. In this paper, we present a parallel computing framework that combines various optimization models in a different way from traditional serial computing. This framework consists of three functional processor types: master processor, slave processor and transfer processor. The master processor has a full computation scheme that allocates optimization models to slave processors; slave processors perform the allocated optimization models; the transfer processor is in charge of solution communication among all slave processors. On this basis, the proposed framework can perform various optimization models in parallel. Because of the solution communication, the framework can also integrate the merits of the optimization models involved during iteration, so the performance of each optimization model can be improved. Moreover, the framework can effectively improve solution quality and increase solution speed by making full use of the computing power of parallel computers.
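The three-role structure can be caricatured in a few lines: a master loop allocates two slave optimizers, and a transfer step shares the incumbent solution between rounds. Everything below (the toy reservoir objective, the hill-climbing slaves) is illustrative, not the paper's framework.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def objective(r):
    """Release benefit with diminishing returns, minus a penalty when the
    total release exceeds the available storage of 2.0 units (toy model)."""
    return sum(x ** 0.5 for x in r) - 10.0 * max(0.0, sum(r) - 2.0)

def hill_climb(seed, start, iters=300):
    """A 'slave' optimizer: random-perturbation hill climbing."""
    rng = random.Random(seed)
    best = list(start)
    for _ in range(iters):
        cand = [min(1.0, max(0.0, x + rng.uniform(-0.2, 0.2))) for x in best]
        if objective(cand) > objective(best):
            best = cand
    return best

# 'Master' allocates work each round; 'transfer' shares the best solution.
incumbent = [0.1, 0.1, 0.1, 0.1]
for round_no in range(3):
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(lambda s: hill_climb(s, incumbent),
                                [round_no, round_no + 100]))
    incumbent = max(results, key=objective)
```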

  18. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  19. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie

    2016-04-01

    Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected. Error accumulation during simulations decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined in GOSIM to represent the dissimilarity between a realization and the training image (TI), which is minimized by a multi-scale EM-like iterative method that contains an E-step and M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data. A modified PatchMatch algorithm is used to accelerate the search process in the E-step. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. During categorical data simulation, k-means clustering is used to transform the obtained continuous realization into a categorical realization. The qualitative and quantitative comparison results of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has a better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts the time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain good simulation quality. The study shows that large iteration numbers at coarser scales increase simulation quality, and small iteration numbers at finer scales significantly reduce simulation time.
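The E-step/M-step structure can be shown on a tiny 1-D analogue; everything below (the signal standing in for a training image, the window size, the iteration count) is an illustrative reduction of the 3-D algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
ti = np.sin(np.linspace(0.0, 6.0 * np.pi, 300))    # "training image" (1-D)
real = rng.standard_normal(100)                     # initial realization
w = 11                                              # pattern (window) size

# All training-image windows, for brute-force nearest-pattern search
# (the paper accelerates this step with a modified PatchMatch).
ti_win = np.lib.stride_tricks.sliding_window_view(ti, w)

for _ in range(10):
    acc = np.zeros_like(real)
    cnt = np.zeros_like(real)
    for i in range(real.size - w + 1):
        patch = real[i:i + w]
        # E-step: most similar training window for this realization patch.
        j = int(np.argmin(((ti_win - patch) ** 2).sum(axis=1)))
        # M-step: accumulate matched windows, averaging over overlaps.
        acc[i:i + w] += ti_win[j]
        cnt[i:i + w] += 1.0
    real = acc / cnt
```

After a few iterations the realization is built entirely from averaged training-image patterns, so its values stay within the training image's range.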

  20. Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2014-10-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of a global ECV (essential climate variable) monitoring architecture is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and will hopefully elicit responses from the audience and climate science stakeholders.

  1. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.

  2. Experimental optimization of pivot point height for swing-arm type rear suspensions in off-road bicycles.

    PubMed

    Karchin, Ari; Hull, M L

    2002-02-01

    Towards the ultimate goal of designing dual suspension off-road bicycles which decouple the suspension motion from the pedaling action, this study focused on determining experimentally the optimum pivot point height for a swing-arm type rear suspension such that the suspension motion was minimized. Specific objectives were (1) to determine the effect of interaction between the front and rear suspensions on the optimal pivot point height, (2) to investigate the sensitivity of the optimal height to the pedaling mechanics of the rider in both the seated and standing postures, and (3) to determine the dependence of the optimal height on the rider posture. Eleven experienced subjects rode a custom-built adjustable dual suspension off-road bicycle [Needle, S., and Hull, M. L., 1997, "An Off-Road Bicycle With Adjustable Suspension Kinematics," Journal of Mechanical Design, 119, pp. 370-375] on an inclined treadmill. The treadmill was set to a constant 6 percent grade at a constant velocity of 24.8 km/hr. With the bicycle in a fixed gear combination of 38 x 14, the corresponding cadence was 84 rpm. For each subject, the pivot point height was varied randomly while the motions across both the front and rear suspension elements were measured. Subjects rode in both the seated and standing postures and with the front suspension active and inactive. It was found that the power loss from the rear suspension at the optimal pivot point height was not significantly dependent on the interaction between the front and rear suspensions. In the seated posture, the optimal pivot point height was 9.8 cm on average and ranged from 8.0 to 12.3 cm. The average optimal pivot point height for the seated posture corresponded to an average power loss for the rear suspension that was within 10 percent of the minimum power loss for 8 of the 11 subjects. In the standing posture, the average height was 5.9 cm and ranged from 5.1 to 7.2 cm. The average height for the standing posture was

  3. Optimizing long-term reservoir operation through multi-tier interactive genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, K.-W.; Chang, L.-C.; Chang, F.-J.

    2012-04-01

    For long-term reservoir planning and management problems, the reservoir optimal operation in each period is commonly searched year by year. The search domain for the initial reservoir storage of each year is limited to certain ranges, over-year conditions cannot be adequately carried over time, and therefore such operation fails to integrate the conditions of all the considered years as a whole. In this study, a multi-tier interactive genetic algorithm (MIGA) was applied to searching for the long-term reservoir optimal solution. MIGA decomposes a large-scale task into several small-scale sub-tasks with GAs applied to each sub-task, where the multi-tier optimal solutions mutually interact among individual sub-tasks to produce the optimal solution for the original task. In this way, the long-term reservoir operation task can be divided into several independent single-year tasks, and the difficulty of the optimal search over a great number of decision variables can be dramatically reduced. The Shihmen Reservoir in northern Taiwan was used as a case study, and the long-term optimal reservoir storages (decision variables) were investigated. The objective was to best satisfy water demands in the downstream area, and a 10-day period, the traditional time frame in Chinese agricultural society, was used as a time step. According to this time scale, there were two cases with different numbers of variables: Case I, five consecutive relatively dry years (2001 to 2006) with 180 variables (i.e. 36×5=180); and Case II, twenty consecutive years (1986 to 2006) with 720 variables (i.e. 36×20=720). For the purpose of comparison, a simulation based on the reservoir operating rule curves and a sole GA search were implemented to find the solutions. In Case I, despite the 180 decision variables, the sole GA could still find the optimal solution. In Case II (720 variables), the sole GA could not reach the optimal solution

  4. 78 FR 4879 - Nine Mile Point 3 Nuclear Project, LLC and UniStar Nuclear Operating Services, LLC Combined...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-23

    ... as described in Federal Register Notice (FRN) 76 FR 32994 (June 7, 2011). The NRC is currently... COMMISSION Nine Mile Point 3 Nuclear Project, LLC and UniStar Nuclear Operating Services, LLC Combined... Nuclear Project, LLC, and UniStar Nuclear Operating Services, LLC (UniStar), submitted a Combined...

  5. Methodological approach for the optimization of drinking water treatment plants' operation: a case study.

    PubMed

    Sorlini, Sabrina; Collivignarelli, Maria Cristina; Castagnola, Federico; Crotti, Barbara Marianna; Raboni, Massimo

    2015-01-01

    Critical barriers to safe and secure drinking water may include sources (e.g. groundwater contamination), treatments (e.g. treatment plants not properly operating) and/or contamination within the distribution system (infrastructure not properly maintained). The performance assessment of these systems, based on monitoring, process parameter control and experimental tests, is a viable tool for process optimization and water quality control. The aim of this study was to define a procedure for evaluating the performance of full-scale drinking water treatment plants (DWTPs) and for defining optimal solutions for plant upgrading in order to optimize operation. The protocol is composed of four main phases (routine and intensive monitoring programmes - Phases 1 and 2; experimental studies - Phase 3; plant upgrade and optimization - Phase 4). The protocol suggested in this study was tested in a full-scale DWTP located in northern Italy (Mortara, Pavia). The results outline some critical aspects of the plant operation and permit the identification of feasible solutions for upgrading the DWTP in order to optimize water treatment operation.

  6. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines a B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
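    A compact illustration of the model-selection logic, using a polynomial basis as a stand-in for B-spline control points: the parameter count plays the role of the number of control points, and the Gaussian-error forms of AIC and BIC are assumed.

```python
import numpy as np

def aic_bic(n_obs, rss, n_par):
    """Gaussian-error AIC/BIC computed from the residual sum of squares."""
    aic = n_obs * np.log(rss / n_obs) + 2 * n_par
    bic = n_obs * np.log(rss / n_obs) + n_par * np.log(n_obs)
    return aic, bic

def pick_model_order(x, y, max_par=10):
    """Return the parameter counts minimizing AIC and BIC respectively."""
    scores = []
    for p in range(1, max_par + 1):
        coef = np.polyfit(x, y, p - 1)          # p coefficients
        rss = float(((np.polyval(coef, x) - y) ** 2).sum())
        scores.append((p, *aic_bic(len(x), rss, p)))
    best_aic = min(scores, key=lambda s: s[1])[0]
    best_bic = min(scores, key=lambda s: s[2])[0]
    return best_aic, best_bic
```

    Both criteria trade goodness of fit against a complexity penalty; BIC penalizes extra parameters more strongly, which is why the paper contrasts them with the VC-dimension-based alternative.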

  7. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
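    The full D-OPF is a MILP over voltages, losses and discrete device states; as a hedged stand-in, here is the single-bus special case where merit-order (cheapest-first) dispatch is provably optimal, since the only constraints are capacity limits and a demand balance.

```python
import numpy as np

def merit_order_dispatch(cost, cap, demand):
    """Greedy merit-order dispatch: fill cheapest units first.
    For a single-bus problem with only capacity limits and a demand
    balance this greedy rule is optimal; the paper's MILP adds voltage,
    loss, power-factor and integer terms that this sketch omits."""
    order = np.argsort(cost)
    out = np.zeros_like(cap, dtype=float)
    remaining = float(demand)
    for i in order:
        out[i] = min(cap[i], remaining)   # dispatch up to capacity
        remaining -= out[i]
        if remaining <= 0:
            break
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return out
```

    With marginal costs [30, 10, 20] $/MWh, capacities [50, 40, 30] MW and a 60 MW demand, the cheapest two units serve the load and the expensive unit stays off.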

  8. Tuning operating point of extrinsic Fabry-Perot interferometric fiber-optic sensors using microstructured fiber and gas pressure.

    PubMed

    Tian, Jiajun; Zhang, Qi; Fink, Thomas; Li, Hong; Peng, Wei; Han, Ming

    2012-11-15

    Intensity-based demodulation of extrinsic Fabry-Perot interferometric (EFPI) fiber-optic sensors requires the light wavelength to be on the quadrature point of the interferometric fringes for maximum sensitivity. In this Letter, we propose a novel and remote operating-point tuning method for EFPI fiber-optic sensors using microstructured fibers (MFs) and gas pressure. We demonstrated the method using a diaphragm-based EFPI sensor with a microstructured lead-in fiber. The holes in the MF were used as gas channels to remotely control the gas pressure inside the Fabry-Perot cavity. Because of the deformation of the diaphragm with gas pressure, the cavity length and consequently the operating point can be remotely tuned for maximum sensitivity. The proposed operating-point tuning method has the advantage of reduced complexity and cost compared to previously reported methods. PMID:23164875

  9. Estimating the operating point of the cochlear transducer using low-frequency biased distortion products

    PubMed Central

    Brown, Daniel J.; Hartsock, Jared J.; Gill, Ruth M.; Fitzgerald, Hillary E.; Salt, Alec N.

    2009-01-01

    Distortion products in the cochlear microphonic (CM) and in the ear canal in the form of distortion product otoacoustic emissions (DPOAEs) are generated by nonlinear transduction in the cochlea and are related to the resting position of the organ of Corti (OC). A 4.8 Hz acoustic bias tone was used to displace the OC, while the relative amplitude and phase of distortion products evoked by a single tone [most often 500 Hz, 90 dB SPL (sound pressure level)] or two simultaneously presented tones (most often 4 kHz and 4.8 kHz, 80 dB SPL) were monitored. Electrical responses recorded from the round window, scala tympani and scala media of the basal turn, and acoustic emissions in the ear canal were simultaneously measured and compared during the bias. Bias-induced changes in the distortion products were similar to those predicted from computer models of a saturating transducer with a first-order Boltzmann distribution. Our results suggest that biased DPOAEs can be used to non-invasively estimate the OC displacement, producing a measurement equivalent to the transducer operating point obtained via Boltzmann analysis of the basal turn CM. Low-frequency biased DPOAEs might provide a diagnostic tool to objectively diagnose abnormal displacements of the OC, as might occur with endolymphatic hydrops. PMID:19354389
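    The operating-point logic can be illustrated numerically: a logistic (first-order Boltzmann) transducer produces even-order distortion only when its operating point sits away from the symmetric point, so a bias-induced displacement shows up as a second harmonic. The slope, stimulus level and frequencies below are arbitrary choices for the sketch, not the study's parameters.

```python
import numpy as np

def harmonic2(bias, f0=500.0, fs=44100.0, dur=0.05, amp=0.3):
    """Second-harmonic level of a saturating (logistic / first-order
    Boltzmann) transducer as its operating point is displaced by `bias`.
    At the symmetric point (bias = 0) even-order distortion vanishes,
    which is the basis of the operating-point estimate."""
    t = np.arange(int(fs * dur)) / fs
    x = bias + amp * np.sin(2 * np.pi * f0 * t)   # tone riding on a bias
    y = 1.0 / (1.0 + np.exp(-4.0 * x))            # logistic transducer
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    bin0 = int(round(f0 * dur))                   # f0 falls on an exact FFT bin
    return spec[2 * bin0] / spec[bin0]            # 2nd harmonic re: fundamental
```

    Sweeping `bias` and tracking this ratio mimics how the low-frequency bias tone modulates the distortion products used to infer the organ of Corti's resting position.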

  10. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    SciTech Connect

    He, Yi; Scheraga, Harold A.; Liwo, Adam

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves is applied to optimize the Nucleic Acid united-RESidue 2-Point (NARES-2P) model for coarse-grained simulations of nucleic acids, recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  11. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-12-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves is applied to optimize the Nucleic Acid united-RESidue 2-Point (NARES-2P) model for coarse-grained simulations of nucleic acids, recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  12. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques

    DOEpatents

    Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.

    2009-09-01

    A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.

  13. Short-term optimal operation of water systems using ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Raso, L.; Schwanenberg, D.; van de Giesen, N. C.; van Overloop, P. J.

    2014-09-01

    Short-term water system operation can be realized using Model Predictive Control (MPC). MPC is a method for the operational management of complex dynamic systems. Applied to open water systems, MPC provides integrated, optimal, and proactive management when forecasts are available. Notwithstanding these properties, if forecast uncertainty is not properly taken into account, system performance can deteriorate critically. Ensemble forecasts are a way to represent short-term forecast uncertainty: an ensemble forecast is a set of possible future trajectories of a meteorological or hydrological system. The growing availability and accuracy of ensemble forecasts raise the question of how to use them for operational management. The theoretical innovation presented here is the use of ensemble forecasts for optimal operation. Specifically, we introduce a tree-based approach, which we call Tree-Based Model Predictive Control (TB-MPC). In TB-MPC, a tree is used to set up a multistage stochastic program, which finds a different optimal strategy for each branch and enhances adaptivity to forecast uncertainty. Adaptivity reduces the sensitivity to wrong forecasts and improves operational performance. TB-MPC is applied to the operational management of the Salto Grande reservoir, located at the border between Argentina and Uruguay, and compared to other methods.
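    A toy two-stage illustration of the tree idea: the first-stage release is shared across all inflow branches (non-anticipativity), while each branch keeps its own second-stage release. All numbers, the quadratic storage penalty and the grid search are hypothetical simplifications of the multistage stochastic program used in the paper.

```python
def tree_mpc_release(s0, inflow_tree, probs, target=50.0, grid=range(0, 21, 5)):
    """Tiny two-stage tree-based MPC sketch (illustrative toy):
    stage-1 release u1 is common to all branches, stage-2 release u2
    may differ per inflow branch. Minimizes the expected squared
    deviation of storage from a target by exhaustive grid search."""
    best = None
    for u1 in grid:
        exp_cost = 0.0
        for (q1, q2), p in zip(inflow_tree, probs):
            s1 = s0 + q1 - u1                 # storage after stage 1
            # Each branch optimizes its own u2 given the shared u1.
            cost_b = min((s1 + q2 - u2 - target) ** 2 + (s1 - target) ** 2
                         for u2 in grid)
            exp_cost += p * cost_b
        if best is None or exp_cost < best[0]:
            best = (exp_cost, u1)
    return best[1], best[0]
```

    The shared u1 is the hedge against forecast uncertainty; once a branch of the tree is revealed, the branch-specific u2 adapts, which is the adaptivity the abstract refers to.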

  14. Characteristic matrix operation for finding global solution of one-time ray-tracing optimization method.

    PubMed

    Tsai, Ko-Fan; Chu, Shu-Chun

    2016-09-19

    The one-time ray-tracing optimization method is a fast way to design LED illumination systems [Opt. Express 22, 5357 (2014)]. The method optimizes the performance of LED illumination systems by modifying the LEDs' luminous intensity distribution curve (LIDC) with a freeform lens, instead of modifying the illumination system structure. In finding the LIDC that optimizes the illumination system's performance, a general gradient descent method can become trapped in a local solution. This study develops a matrix operation method to directly find the global solution of the LEDs' LIDC for optimizing the illumination system's performance from any initial design of the illumination system structure. Compared with the gradient descent method, using the proposed characteristic matrix operation method to find the best LIDC reduces the time cost by several orders of magnitude. The proposed characteristic matrix operation method ensures that the one-time ray-tracing optimization method is an efficient and reliable method for designing LED illumination systems. PMID:27661876

  15. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
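    A numpy-only sketch of the screening step: first-order Sobol indices estimated with a Saltelli-type Monte-Carlo design. The study applies Sobol's method to reservoir decision variables; the additive test function in the usage example below is an arbitrary stand-in.

```python
import numpy as np

def sobol_first_order(f, d, n=4096, rng=None):
    """Monte-Carlo first-order Sobol indices (Saltelli-style estimator).
    f maps an (n, d) array of inputs in [0, 1]^d to n outputs. Variables
    with indices near zero are candidates for screening out, which is
    the dimension-reduction step described in the abstract."""
    rng = rng or np.random.default_rng(0)
    a = rng.random((n, d))
    b = rng.random((n, d))
    fa, fb = f(a), f(b)
    var = np.var(np.concatenate([fa, fb]))   # total output variance
    s = np.empty(d)
    for i in range(d):
        ab = a.copy()
        ab[:, i] = b[:, i]                   # resample only variable i
        s[i] = np.mean(fb * (f(ab) - fa)) / var
    return s
```

    For an additive model such as f(x) = 2 x1 + 0.5 x2 the indices sum to one and rank the variables by their variance contribution, so the small-index variables can be fixed before the full multi-objective search.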

  17. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management scheme for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear program (MILP) whose objective is to minimize the overall cost of the integrated DHS while satisfying the operation constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
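    The two cost approximations named in the abstract can be sketched directly; the breakpoints and start-up steps below are illustrative values, not figures from the paper.

```python
import numpy as np

def heat_cost(q, q_bp, c_bp, off_hours, startup_steps):
    """Piecewise-linear production cost plus stairwise start-up cost.
    q_bp/c_bp are heat-output breakpoints and their costs (piecewise
    linear); startup_steps is an ascending list of (downtime hours,
    start-up cost) pairs (stairwise)."""
    prod = float(np.interp(q, q_bp, c_bp))   # piecewise-linear segment
    startup = 0.0
    for hours, cost in startup_steps:        # stairwise: last satisfied step
        if off_hours >= hours:
            startup = cost
    return prod + startup
```

    In the MILP these shapes become SOS2/binary constraints; evaluating them as plain functions, as here, is enough to see the cost the optimizer trades against heat transport and storage.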

  18. Evaluating Operational Specifications of Point-of-Care Diagnostic Tests: A Standardized Scorecard

    PubMed Central

    Lehe, Jonathan D.; Sitoe, Nádia E.; Tobaiwa, Ocean; Loquiha, Osvaldo; Quevedo, Jorge I.; Peter, Trevor F.; Jani, Ilesh V.

    2012-01-01

    The expansion of HIV antiretroviral therapy into decentralized rural settings will increasingly require simple point-of-care (POC) diagnostic tests that can be used without laboratory infrastructure and technical skills. New POC test devices are becoming available but decisions around which technologies to deploy may be biased without systematic assessment of their suitability for decentralized healthcare settings. To address this, we developed a standardized, quantitative scorecard tool to objectively evaluate the operational characteristics of POC diagnostic devices. The tool scores devices on a scale of 1–5 across 30 weighted characteristics such as ease of use, quality control, electrical requirements, shelf life, portability, cost and service, and provides a cumulative score that ranks products against a set of ideal POC characteristics. The scorecard was tested on 19 devices for POC CD4 T-lymphocyte cell counting, clinical chemistry or hematology testing. Single and multi-parameter devices were assessed in each of test categories. The scores across all devices ranged from 2.78 to 4.40 out of 5. The tool effectively ranked devices within each category (p<0.01) except the CD4 and multi-parameter hematology products. The tool also enabled comparison of different characteristics between products. Agreement across the four scorers for each product was high (intra-class correlation >0.80; p<0.001). Use of this tool enables the systematic evaluation of diagnostic tests to facilitate product selection and investment in appropriate technology. It is particularly relevant for countries and testing programs considering the adoption of new POC diagnostic tests. PMID:23118871
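    The cumulative score is a weighted mean over the 30 characteristics, keeping the result on the same 1-5 scale as the individual scores. A minimal sketch; the characteristic names and weights below are hypothetical, not the published weighting.

```python
def scorecard(scores, weights):
    """Weighted cumulative score on a 1-5 scale: each characteristic
    score (1-5) is multiplied by its weight and the weighted mean is
    returned so devices stay comparable on the original scale."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same characteristics")
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w
```

    Ranking candidate devices then reduces to sorting them by this single number, while the per-characteristic scores remain available for side-by-side comparison.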

  19. A highly sensitive and simply operated protease sensor toward point-of-care testing.

    PubMed

    Park, Seonhwa; Shin, Yu Mi; Seo, Jeongwook; Song, Ji-Joon; Yang, Haesik

    2016-04-21

    Protease sensors for point-of-care testing (POCT) require simple operation, a detection period of less than 20 minutes, and a detection limit of less than 1 ng mL(-1). However, it is difficult to meet these requirements with protease sensors that are based on proteolytic cleavage. This paper reports a highly reproducible protease sensor that allows the sensitive and simple electrochemical detection of the botulinum neurotoxin type E light chain (BoNT/E-LC), which is obtained using (i) low nonspecific adsorption, (ii) high signal-to-background ratio, and (iii) one-step solution treatment. The BoNT/E-LC detection is based on two-step proteolytic cleavage using BoNT/E-LC (endopeptidase) and l-leucine-aminopeptidase (LAP, exopeptidase). Indium-tin oxide (ITO) electrodes are modified partially with reduced graphene oxide (rGO) to increase their electrocatalytic activities. Avidin is then adsorbed on the electrodes to minimize the nonspecific adsorption of proteases. Low nonspecific adsorption allows a highly reproducible sensor response. Electrochemical-chemical (EC) redox cycling involving p-aminophenol (AP) and dithiothreitol (DTT) is performed to obtain a high signal-to-background ratio. After adding a C-terminally AP-labeled oligopeptide, DTT, and LAP simultaneously to a sample solution, no further treatment of the solution is necessary during detection. The detection limits of BoNT/E-LC in phosphate-buffered saline are 0.1 ng mL(-1) for an incubation period of 15 min and 5 fg mL(-1) for an incubation period of 4 h. The detection limit in commercial bottled water is 1 ng mL(-1) for an incubation period of 15 min. The developed sensor is selective to BoNT/E-LC among the four types of BoNTs tested. These results indicate that the protease sensor meets the requirements for POCT. PMID:26980003

  20. Iterative most-likely point registration (IMLP): a robust algorithm for computing optimal shape alignment.

    PubMed

    Billings, Seth D; Boctor, Emad M; Taylor, Russell H

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes.
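    For orientation, a numpy sketch of the classic isotropic ICP baseline that IMLP generalizes: nearest-point correspondence alternated with a Kabsch least-squares rigid fit. IMLP replaces both steps with most-likely, noise-model-weighted variants, a PD-tree search and a GTLS registration, none of which are shown here.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch, no scale) mapping src onto
    dst for known point correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    diag = np.ones(src.shape[1])
    diag[-1] = np.sign(np.linalg.det(vt.T @ u.T))   # avoid reflections
    r = vt.T @ np.diag(diag) @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=20):
    """Classic isotropic ICP: alternate closest-point correspondence and
    rigid registration until convergence (fixed iteration count here)."""
    cur = src.copy()
    for _ in range(iters):
        # correspondence step: nearest dst point for each current point
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        r, t = rigid_fit(cur, dst[idx])
        cur = cur @ r.T + t
    return cur
```

    The anisotropic-noise case is exactly where this baseline degrades and where IMLP's Mahalanobis-weighted correspondence and GTLS registration pay off.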

  1. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700

  2. Optimal reducibility of all W states equivalent under stochastic local operations and classical communication

    SciTech Connect

    Rana, Swapan; Parashar, Preeti

    2011-11-15

    We show that all multipartite pure states that are stochastic local operation and classical communication (SLOCC) equivalent to the N-qubit W state can be uniquely determined (among arbitrary states) from their bipartite marginals. We also prove that only (N-1) of the bipartite marginals are sufficient and that this is also the optimal number. Thus, contrary to the Greenberger-Horne-Zeilinger (GHZ) class, W-type states preserve their reducibility under SLOCC. We also study the optimal reducibility of some larger classes of states. The generic Dicke states |GD_N^l⟩ are shown to be optimally determined by their (l+1)-partite marginals. The class of "G" states (superpositions of W and its obverse W̄) are shown to be optimally determined by just two (N-2)-partite marginals.

  3. Application of the dynamic ant colony algorithm on the optimal operation of cascade reservoirs

    NASA Astrophysics Data System (ADS)

    Tong, X. X.; Xu, W. S.; Wang, Y. F.; Zhang, Y. W.; Zhang, P. C.

    2016-08-01

    Due to the lack of dynamic adjustment between global search and local optimization, it is difficult for Ant Colony Algorithms (ACA) to maintain high diversity and escape local optima. Therefore, this paper proposes an improved ACA, the Dynamic Ant Colony Algorithm (DACA). DACA dynamically adjusts the heuristic factor, decreasing it along a cosine curve, to balance global search and local optimization in ACA. At the same time, by utilizing the randomness and ergodicity of chaotic search, DACA applies a chaos disturbance to the path found in each ACA iteration to improve the algorithm's ability to jump out of local optima and avoid premature convergence. We conducted a case study with DACA for the optimal joint operation of the Dadu River cascade reservoirs. The simulation results were compared with the results of the gradual optimization method and the standard ACA, which demonstrated the advantages of DACA in speed and precision.
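
    The chaos disturbance step can be illustrated with a logistic-map perturbation of a candidate path. This is a generic sketch of the idea, not the authors' exact scheme; the step size `scale` and chaos seed `x0` are illustrative choices:

```python
import numpy as np

def chaotic_disturbance(path, lower, upper, scale=0.1, r=4.0, x0=0.37):
    # Perturb each decision variable using a logistic-map sequence
    # (r = 4.0 is the fully chaotic regime). The chaotic iterate in (0, 1)
    # is mapped to a signed step, scaled by the variable's feasible range,
    # and the result is clipped back into bounds.
    x = x0
    out = np.array(path, dtype=float)
    for i in range(len(out)):
        x = r * x * (1.0 - x)                       # logistic map iterate
        step = (2.0 * x - 1.0) * scale * (upper[i] - lower[i])
        out[i] = np.clip(out[i] + step, lower[i], upper[i])
    return out
```

    Because the logistic map is ergodic over (0, 1), repeated disturbances explore the neighborhood of the current path without the periodicity a fixed perturbation pattern would introduce.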

  4. Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio

    2013-04-01

    In 2012, typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving a death toll in the hundreds and causing significant destruction across the country. Indeed, deadly events related to cyclones occur almost every year in the region. Such extremes are expected to increase both in frequency and magnitude around Southeast Asia during the course of global climate change. Our ability to confront such hazardous events is limited by the best available engineering infrastructure and the performance of weather prediction. One countermeasure strategy is, for instance, early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from an impending flood. However, over-release of reservoir water affects the regional economy adversely by losing water resources that still have value for power generation and agricultural and industrial water use. Furthermore, accurate precipitation forecasting is itself a difficult task, owing to the chaotic nature of the atmosphere, which yields uncertainty in model prediction over time. Under these circumstances, we present a novel approach to optimize the contradicting objectives of preventing flood damage via a priori dam release while sustaining sufficient water supply during predicted storm events. By evaluating the forecast performance of the Meso-Scale Model Grid Point Value (GPV) against observed rainfall, uncertainty in model prediction is probabilistically taken into account, and it is then applied to the next GPV issuance to generate ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed-hydrological model to derive the ensemble flood forecast. With dam status information also taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and

  5. Towards the geometric optimization of potential field models - A new spatial operator tool and applications

    NASA Astrophysics Data System (ADS)

    Haase, Claudia; Götze, Hans-Jürgen

    2014-05-01

    We present a new method for automated geometric modification of potential field models. Computational developments and the increasing amount of available potential field data, especially gradient data from the satellite missions, lead to increasingly complex models and integrated modelling tools, and editing of these models becomes more difficult. Our approach presents an optimization tool that is designed to modify vertex-based model geometries (e.g. polygons, polyhedrons, triangulated surfaces) by applying spatial operators to the model that use an adaptive, on-the-fly model discretization. These operators deform the existing model via vertex-dragging, aiming at a minimized misfit between measured and modelled potential field anomaly. The parameters that define the operators are subject to an optimization process. This kind of parametrization provides a means for reducing the number of unknowns (the dimensionality of the search space), allows a variety of possible modifications and ensures that geometries are not destroyed by crossing polygon lines or punctured planes. We implemented a particle swarm optimization as a global searcher with a restart option for the task of finding optimal operator parameters. This approach provides us with an ensemble of model solutions that allows the selection of geologically reasonable interpretations. The applicability of the tool is demonstrated in two 2D case studies that provide models of different extent and with different objectives. The first model is a synthetic salt structure in a horizontally layered background model. The expected geometry modifications are small and localized, and the initial models contain rather little information on the intended salt structure. A large-scale example is given in the second study. Here, the optimization is applied to a sedimentary basin model that is based on seismic interpretation. With the aim of evaluating the seismically derived model, large-scale operators are applied that mainly cause
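
    A minimal vertex-dragging operator of the kind described can be sketched as follows. The Gaussian falloff and the parameter triple (center, radius, shift) are illustrative assumptions, not the authors' exact parametrization; an optimizer such as particle swarm would tune these few parameters instead of every vertex coordinate:

```python
import numpy as np

def vertex_drag(vertices, center, radius, shift):
    # Drag model vertices by 'shift', with a Gaussian falloff in the
    # distance from 'center', so nearby vertices move almost fully and
    # distant vertices stay put. Smooth weights keep the deformation from
    # tearing the geometry apart.
    v = np.asarray(vertices, dtype=float)
    d2 = ((v - np.asarray(center, dtype=float)) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * radius ** 2))
    return v + w[:, None] * np.asarray(shift, dtype=float)
```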

  6. Maximum Principle of Optimal Control of the Primitive Equations of the Ocean with Two Point Boundary State Constraint

    SciTech Connect

    Tachim Medjo, Theodore

    2010-08-15

    We study in this article Pontryagin's maximum principle for a class of control problems associated with the primitive equations (PEs) of the ocean with a two-point boundary state constraint. These optimal control problems involve a two-point boundary state constraint similar to that considered in Wang, Nonlinear Anal. 51, 509-536, 2002 for the three-dimensional Navier-Stokes (NS) equations. The main difference between this work and Wang, Nonlinear Anal. 51, 509-536, 2002 is that the nonlinearity in the PEs is stronger than in the three-dimensional NS system.

  7. Energy and operation management of a microgrid using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Radosavljević, Jordan; Jevtić, Miroljub; Klimenta, Dardan

    2016-05-01

    This article presents an efficient algorithm based on particle swarm optimization (PSO) for energy and operation management (EOM) of a microgrid including different distributed generation units and energy storage devices. The proposed approach employs PSO to minimize the total energy and operating cost of the microgrid via optimal adjustment of the control variables of the EOM, while satisfying various operating constraints. Owing to the stochastic nature of energy produced from renewable sources, i.e. wind turbines and photovoltaic systems, as well as load uncertainties and market prices, a probabilistic approach in the EOM is introduced. The proposed method is examined and tested on a typical grid-connected microgrid including fuel cell, gas-fired microturbine, wind turbine, photovoltaic and energy storage devices. The obtained results prove the efficiency of the proposed approach to solve the EOM of the microgrids.

  8. Optimizing the Long-Term Operating Plan of Railway Marshalling Station for Capacity Utilization Analysis

    PubMed Central

    Zhou, Wenliang; Yang, Xia; Deng, Lianbo

    2014-01-01

    The operating plan is not only the basis for organizing a marshalling station's operations, but is also used to analyze in detail the capacity utilization of each facility in the marshalling station. In this paper, a long-term operating plan is optimized mainly for capacity utilization analysis. Firstly, a model is developed to minimize railcars' average staying time subject to constraints on minimum time intervals, marshalling track capacity, and so forth. Secondly, an algorithm is designed to solve this model based on a genetic algorithm (GA) and simulation. It divides the plan of the whole planning horizon into many subplans and optimizes them with the GA one by one in order to obtain a satisfactory plan with less computing time. Finally, some numerical examples are constructed to analyze (1) the convergence of the algorithm, (2) the effect of some algorithm parameters, and (3) the influence of arrival train flow on the algorithm. PMID:25525614
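
    The decomposition idea (optimize one subplan at a time, holding earlier subplans fixed) can be sketched generically. The mutation-only evolutionary loop below is a deliberate simplification of a full GA, and all parameters are illustrative:

```python
import numpy as np

def optimize_horizon(cost, horizon, block, ga_iters=100, pop=20, seed=0):
    # Split the planning horizon into blocks and optimize each block in
    # turn with a tiny elitist, mutation-only evolutionary loop; earlier
    # blocks stay fixed, so every sub-problem remains small.
    rng = np.random.default_rng(seed)
    plan = np.zeros(horizon)

    def full_plan(genes_row, start, end):
        return np.concatenate([plan[:start], genes_row, np.zeros(horizon - end)])

    for start in range(0, horizon, block):
        end = min(start + block, horizon)
        genes = rng.random((pop, end - start))
        for _ in range(ga_iters):
            fit = np.array([cost(full_plan(g, start, end)) for g in genes])
            elite = genes[fit.argsort()[:pop // 2]]               # selection
            children = np.clip(elite + rng.normal(0.0, 0.05, elite.shape), 0.0, 1.0)
            genes = np.vstack([elite, children])                  # mutation
        fit = np.array([cost(full_plan(g, start, end)) for g in genes])
        plan[start:end] = genes[fit.argmin()]
    return plan
```

    The per-block search space stays small regardless of horizon length, which is the source of the computing-time savings the abstract describes; the trade-off is that decisions in early blocks cannot anticipate later ones.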

  9. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K.

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  10. Mixed integer model for optimizing equipment scheduling and overburden transport in a surface coal mining operation

    SciTech Connect

    Goodman, G.V.R.

    1987-01-01

    The lack of available techniques prompted the development of a mixed integer model to optimize the scheduling of equipment and the distribution of overburden in a typical mountaintop removal operation. Using this format, a (0-1) integer model and a transportation model were constructed to determine the optimal equipment schedule and the optimal overburden distribution, respectively. To solve this mixed integer program, the model was partitioned into its binary and real-valued components. Each subproblem was solved in turn and their objective values were added to form estimates of the value of the mixed integer program. Optimal convergence was indicated when the difference between two successive estimates satisfied a pre-specified accuracy value. The performance of the mixed integer model was tested against actual field data to determine its practical applicability. To provide the necessary input information, production data were obtained from a single-seam, mountaintop removal operation located in the Appalachian coal field. As a means of analyzing the resultant equipment schedule, the total idle time was calculated for each machine type and each lift location. Also, the final overburden assignments were analyzed by determining the distribution of spoil material for various overburden removal productivities. Subsequent validation of the mixed integer model was conducted in two distinct areas. The first dealt with changes in algorithmic data and their effects on the optimality of the model. The second area concerned variations in problem structure, specifically changes in problem size and other user-input values such as equipment productivities or required reclamation.

  11. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
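
    The Sobol' screening step can be illustrated with a plain Monte Carlo estimator of first-order indices (Saltelli-style paired sampling). This generic sketch assumes inputs uniform on [0, 1] and is not tied to the reservoir models in the study:

```python
import numpy as np

def sobol_first_order(f, dim, n=4096, seed=0):
    # First-order Sobol' index S_i: the fraction of output variance
    # explained by input x_i alone. Uses the Saltelli-type estimator
    # S_i ~ mean(f(B) * (f(AB_i) - f(A))) / Var(f), where AB_i is
    # matrix A with column i taken from B.
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S
```

    Decision variables whose index falls below a chosen threshold would be screened out and frozen, shrinking the search space handed to the multi-objective optimizer.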

  12. Optimal use of buffer volumes for the measurement of atmospheric gas concentration in multi-point systems

    NASA Astrophysics Data System (ADS)

    Cescatti, Alessandro; Marcolla, Barbara; Goded, Ignacio; Gruening, Carsten

    2016-09-01

    Accurate multi-point monitoring systems are required to derive atmospheric measurements of greenhouse gas concentrations, both for the calculation of surface fluxes with inversion transport models and for the estimation of non-turbulent components of the mass balance equation (i.e. advection and storage fluxes) at eddy covariance sites. When a single analyser is used to monitor multiple sampling points, the deployment of buffer volumes (BVs) along sampling lines can reduce the uncertainty due to the discrete temporal sampling of the signal. In order to optimize the use of buffer volumes, we explored various set-ups by simulating their effect on time series of high-frequency CO2 concentration collected at three Fluxnet sites. In addition, we propose a novel scheme to calculate half-hourly weighted arithmetic means from discrete point samples, accounting for the probabilistic fraction of the signal generated in the averaging period. Results show that the use of BVs with the new averaging scheme reduces the mean absolute error (MAE) by up to 80 % compared to a set-up without BVs and by up to 60 % compared to the case with BVs and a standard, non-weighted averaging scheme. The MAE of CO2 concentration measurements was observed to depend on the variability of the concentration field and on the size of the BVs, which therefore have to be carefully dimensioned. The optimal volume size depends on two main features of the instrumental set-up: the number of measurement points and the time needed to sample at one point (i.e. line purging plus sampling time). A linear and consistent relationship was observed at all sites between the sampling frequency, which summarizes the two features mentioned above, and the renewal frequency associated with the volume. Ultimately, this empirical relationship can be applied to estimate the optimal volume size according to the technical specifications of the sampling system.
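
    One way to realize such a weighted averaging scheme, assuming the buffer volume behaves as a first-order mixing element with a known renewal frequency, is sketched below. The weighting formula (the fraction of a sample's buffered signal generated since the averaging window opened) is our illustrative interpretation of the approach, not the authors' exact scheme:

```python
import numpy as np

def weighted_window_mean(samples, sample_times, t0, t1, renewal_freq):
    # A sample drawn from a buffered line at time t is an exponentially
    # weighted average of past concentrations (first-order mixing).
    # Weight each sample by the fraction of its signal generated inside
    # the averaging window [t0, t1]: 1 - exp(-renewal_freq * (t - t0)).
    w, s = [], []
    for t, c in zip(sample_times, samples):
        if not (t0 < t <= t1):
            continue
        w.append(1.0 - np.exp(-renewal_freq * (t - t0)))
        s.append(c)
    w, s = np.asarray(w), np.asarray(s)
    return float(np.sum(w * s) / np.sum(w))
```

    With a fast renewal (small volume or high flow) the weights approach 1 and the scheme reduces to a plain arithmetic mean; with a slow renewal, samples taken shortly after the window opens, whose buffered air largely predates the window, are down-weighted.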

  13. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  14. OPTIMAL DESIGN AND OPERATION OF HELIUM REFRIGERATION SYSTEMS USING THE GANNI CYCLE

    SciTech Connect

    Venkatarao Ganni, Peter Knudsen

    2010-04-01

    The constant pressure ratio process, as implemented in the floating pressure - Ganni cycle, is a new variation on prior cryogenic refrigeration and liquefaction cycle designs that allows for optimal operation and design of helium refrigeration systems. This cycle is based upon the traditional equipment used for helium refrigeration system designs, i.e., constant volume displacement compression and critical flow expansion devices. It takes advantage of the fact that for a given load, the expander sets the compressor discharge pressure and the compressor sets its own suction pressure. This cycle not only provides an essentially constant system Carnot efficiency over a wide load range, but invalidates the traditional philosophy that the 'TS' design condition is the optimal operating condition for a given load using the as-built hardware. As such, the floating pressure - Ganni cycle is a solution to reduce the energy consumption while increasing the reliability, flexibility and stability of these systems over a wide operating range and different operating modes, and is applicable to most of the existing plants. This paper explains the basic theory behind this cycle operation and contrasts it to the traditional operational philosophies presently used.

  15. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail, with dialogue on appropriate rule-based valuations. These optimization tools suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study. This reflects work from the SPIE RS Conferences of 2013 and 2014, abridged for simplification. First, the heavily scrutinized NPOESS architecture inspired the research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, there has been a complete reversal: should agencies consider disaggregation as the answer? We'll discuss what some academic research suggests.
    Second, using the GCOS requirements for earth climate observations via ECVs (essential climate variables), many collected from space-based sensors; and accepting their

  16. Online Optimization Method for Operation of Generators in a Micro Grid

    NASA Astrophysics Data System (ADS)

    Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi

    Recently, many studies and development efforts concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and a storage battery, is expected to become a new operational framework for distributed generation. However, since precipitous load fluctuations occur in a micro grid because of its small capacity compared with a conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. Namely, it is necessary to improve the precision of operation in a micro grid by observing load fluctuations and correcting the start-stop schedule and output of generators online. But it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after picking up all unit commitment patterns of each generator that satisfy the minimum up time and minimum down time constraints by using the enumeration method, the optimal schedule and output of generators are determined under the other operational constraints by using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
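
    The enumeration step (listing all on/off patterns for a generator that respect its minimum up and minimum down times) can be sketched as follows; for brevity the sketch treats runs at the schedule boundary like interior runs, which a production implementation would relax:

```python
from itertools import product

def feasible(pattern, min_up, min_down):
    # Split the 0/1 pattern into maximal runs; every on-run must last at
    # least min_up periods and every off-run at least min_down periods.
    runs, cur, cnt = [], pattern[0], 0
    for s in pattern:
        if s == cur:
            cnt += 1
        else:
            runs.append((cur, cnt))
            cur, cnt = s, 1
    runs.append((cur, cnt))
    return all(n >= (min_up if on else min_down) for on, n in runs)

def enumerate_commitments(periods, min_up, min_down):
    # Brute-force enumeration over 2^periods patterns, keeping the
    # feasible ones; tractable only for the short horizons of a micro grid.
    return [p for p in product((0, 1), repeat=periods)
            if feasible(p, min_up, min_down)]
```

    With the feasible patterns in hand, the PSO stage only has to choose among them and set the continuous generator outputs, which is the split the abstract describes.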

  17. Optimizing Canal Structure Operation Using Meta-heuristic Algorithms in the Treasure Valley, Idaho

    NASA Astrophysics Data System (ADS)

    Hernandez, J.; Ha, W.; Campbell, A.

    2012-12-01

    The computer program that was proven to produce optimal operational solutions for open-channel irrigation conveyance and distribution networks with synthetic data in previous research was tested on real-world data. Data gathered from databases and the field by the Boise Project, Idaho, provided input to the hydraulic model for the physical characteristics of the conveyance system. We selected three reaches of the Deer Flat Low Line in the Treasure Valley for optimizing actual gate operations. The 59.1 km canal, with a maximum capacity of 34 m3/s, irrigates mainly corn, wheat, sugar-beet and potato crops. The computer model uses an accuracy-based learning classifier system (XCS) with an embedded genetic algorithm to produce optimal rules for gate structure operation in irrigation canals. Rules are generated through the exploration and exploitation of the genetic algorithm population, with the support of RootCanal, an unsteady-state hydraulic simulation model. The objective function was set to satisfy variable demand along the three reaches while minimizing water level deviations from target. All canal gate structures operate simultaneously while maintaining water depth near target values during variable-demand periods, with a hydraulically stabilized system. It is noteworthy that even this simple three-reach problem requires the computer to perform several thousand simulations over several days of continuous computation to find plausible solutions. The model is currently simulating the Deer Flat Low Line Canal in Caldwell, Idaho, with promising results. The population evolution is measured by a fitness parameter, which shows that the canal structure operations generated by the model are improving towards plausible solutions. This research is one step forward in optimizing the way we use and manage water resources. Relying on management practices of the past will no longer work in a world that is impacted by global climate variability.

  18. Is the 90th Percentile Adequate? The Optimal Waist Circumference Cutoff Points for Predicting Cardiovascular Risks in 124,643 15-Year-Old Taiwanese Adolescents

    PubMed Central

    Lee, Jason Jiunshiou; Ho, ChinYu; Chen, Hsin-Jen; Huang, Nicole; Yeh, Jade Chienyu; deFerranti, Sarah

    2016-01-01

    Adolescent obesity has increased to alarming proportions globally. However, few studies have investigated the optimal waist circumference (WC) of Asian adolescents. This study sought to establish the optimal WC cutoff points that identify a cluster of cardiovascular risk factors (CVRFs) among 15-year-old ethnically Chinese adolescents. This study was a regional population-based study on the CVRFs among adolescents who enrolled in all the senior high schools in Taipei City, Taiwan, between 2011 and 2014. Four cross-sectional health examinations of first-year senior high school (grade 10) students were conducted from September to December of each year. A total of 124,643 adolescents aged 15 (boys: 63,654; girls: 60,989) were recruited. Participants who had at least three of five CVRFs were classified as the high-risk group. We used receiver-operating characteristic curves and the area under the curve (AUC) to determine the optimal WC cutoff points and the accuracy of WC in predicting high cardiovascular risk. WC was a good predictor of high cardiovascular risk for both boys (AUC: 0.845, 95% confidence interval [CI]: 0.833–0.857) and girls (AUC: 0.763, 95% CI: 0.731–0.795). The optimal WC cutoff points were ≥78.9 cm for boys (77th percentile) and ≥70.7 cm for girls (77th percentile). Adolescents with normal weight and an abnormal WC were more likely to be in the high cardiovascular risk group (odds ratio: 3.70, 95% CI: 2.65–5.17) compared to their peers with normal weight and normal WC. The optimal WC cutoff point for identifying CVRFs in 15-year-old Taiwanese adolescents should be the 77th percentile; the 90th percentile of the WC might be inadequate. This WC criterion can help health professionals identify a higher proportion of adolescents with cardiovascular risks and refer them for further evaluation and intervention. Adolescents’ height, weight and WC should be measured as standard practice in routine health checkups. PMID:27389572
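
    A standard way to pick such a cutoff from ROC analysis is to maximize Youden's J statistic (sensitivity + specificity - 1); the abstract does not state the exact criterion used, so the sketch below shows that common approach on hypothetical data:

```python
import numpy as np

def optimal_cutoff(values, labels):
    # Scan every observed value as a candidate threshold and return the
    # one maximizing Youden's J = sensitivity + specificity - 1.
    # Assumes both classes (labels 0 and 1) are present.
    pos, neg = labels == 1, labels == 0
    best_t, best_j = None, -1.0
    for t in np.unique(values):
        pred = values >= t
        sens = float(np.mean(pred[pos]))
        spec = float(np.mean(~pred[neg]))
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = float(t), j
    return best_t, best_j
```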

  1. Ultrasensitive optical microfiber coupler based sensors operating near the turning point of effective group index difference

    NASA Astrophysics Data System (ADS)

    Li, Kaiwei; Zhang, Ting; Liu, Guigen; Zhang, Nan; Zhang, Mengying; Wei, Lei

    2016-09-01

    We propose and study an optical microfiber coupler (OMC) sensor working near the turning point of the effective group index difference between the even and odd supermodes to achieve high refractive index (RI) sensitivity. Theoretical calculations reveal that infinite sensitivity can be obtained when the measured RI is close to the turning-point value. This diameter-dependent turning point corresponds to the condition that the effective group index difference equals zero. To validate the proposed sensing mechanism, we experimentally demonstrate an ultrahigh sensitivity of 39541.7 nm/RIU at a low ambient RI of 1.3334 based on an OMC with a diameter of 1.4 μm. Even higher sensitivity can be achieved by carrying out the measurements at an RI closer to the turning point. The resulting ultrasensitive RI sensing platform offers substantial impact for a variety of applications, from high-performance trace analyte detection to small molecule sensing.
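
    The divergence at the turning point follows from the standard sensitivity expression for two-mode interference devices. In the notation below (our assumption, not taken from the paper), n_a is the ambient RI, Delta n_eff the effective index difference of the two supermodes, and Delta n_g the corresponding group index difference:

```latex
S \equiv \frac{\mathrm{d}\lambda_{\mathrm{dip}}}{\mathrm{d}n_a}
  = \frac{\lambda \, \partial(\Delta n_{\mathrm{eff}})/\partial n_a}{\Delta n_g},
\qquad
\Delta n_g \equiv \Delta n_{\mathrm{eff}} - \lambda \, \frac{\partial(\Delta n_{\mathrm{eff}})}{\partial \lambda}
```

    The sensitivity S grows without bound as Delta n_g approaches 0, which is precisely the "effective group index difference equals zero" condition the abstract identifies with the turning point.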

  2. Early Mission Maneuver Operations for the Deep Space Climate Observatory Sun-Earth L1 Libration Point Mission

    NASA Technical Reports Server (NTRS)

    Roberts, Craig; Case, Sara; Reagoso, John; Webster, Cassandra

    2015-01-01

    The Deep Space Climate Observatory mission launched on February 11, 2015, and inserted onto a transfer trajectory toward a Lissajous orbit around the Sun-Earth L1 libration point. This paper presents an overview of the baseline transfer orbit and early mission maneuver operations leading up to the start of nominal science orbit operations. In particular, the analysis and performance of the spacecraft insertion, mid-course correction maneuvers, and the deep-space Lissajous orbit insertion maneuvers are discussed, comparing the baseline orbit with actual mission results and highlighting mission and operations constraints.

  3. 78 FR 26662 - Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit No. 3 Extension of Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-07

    ... notice appearing in the Federal Register on April 3, 2013 (78 FR 20144), by extending the original public... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit No. 3 Extension of...

  4. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  5. The Optimal Operation of Multi-reservoir Floodwater Resources Control Based on GA-PSO

    NASA Astrophysics Data System (ADS)

    Huang, X.; Zhu, X.; Lian, Y.; Fang, G.; Zhu, L.

    2015-12-01

    Floodwater resources control operation plays an important role in reducing flood disasters, easing the contradiction between water supply and demand, and improving flood resource utilization. Based on basin safety and floodwater resources utilization with the maximum benefit for floodwater optimal scheduling, an optimal operation model for multi-reservoir floodwater resources control is established. There are two objectives of floodwater resources control operation in a multi-reservoir system: the first is flood-control safety, and the other is floodwater resource utilization with the maximum benefit. For the flood-control safety target, the maximal flood peak reduction criterion is selected as the objective function; that is, the solution that most reduces the peak flow is judged the optimal flood control operation. For floodwater resource utilization, maximum benefit refers to making full use of multi-reservoir capacity and accumulating as much transit floodwater as possible; in other words, for a given flood process, releasing as little water as possible. The model is solved by a coupled genetic algorithm and particle swarm optimization method (GA-PSO). GA-PSO takes PSO as a template and introduces the crossover and mutation operators of the GA into the search process of PSO in order to improve the search capabilities of particles. Crossover and mutation are applied to the updated particles so that the particles inherit characteristics of the current global best solution. Taking Shilianghe reservoir and Anfengshan reservoir in Jiangsu Province, China, as a case study, the results show that the optimal operation reduces the floodwater resources control pressure, while keeping nearly 81.11 million cubic meters of floodwater resources accumulated in Longlianghe river and Anfengshan
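
    A minimal sketch of the hybrid GA-PSO search on a toy one-reservoir peak-shaving problem (the hydrograph, release bounds, storage cap, and penalty weight are all invented for illustration; the paper's two-reservoir model is not reproduced). As the abstract describes, the PSO velocity update runs first, then GA-style crossover with the global best and Gaussian mutation act on the updated particles.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy peak-shaving problem: choose releases r[t] to minimize the peak
# release while keeping storage (cumsum of inflow - release) in [0, s_max].
inflow = np.array([10, 40, 90, 60, 30, 15, 10], dtype=float)
s_max = 120.0

def objective(r):
    s = np.cumsum(inflow - r)  # storage trajectory, starting empty
    penalty = np.sum(np.maximum(s - s_max, 0) ** 2 + np.maximum(-s, 0) ** 2)
    return r.max() + 10.0 * penalty

def ga_pso(n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5, pc=0.5, pm=0.1):
    dim = len(inflow)
    x = rng.uniform(0, 100, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        # Standard PSO velocity and position update.
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 100)
        # GA operators on the updated particles: arithmetic crossover
        # with the global best, then Gaussian mutation.
        cross = rng.random(n_particles) < pc
        alpha = rng.random((n_particles, dim))
        x[cross] = alpha[cross] * x[cross] + (1 - alpha[cross]) * g
        mut = rng.random((n_particles, dim)) < pm
        x[mut] += rng.normal(0, 5, mut.sum())
        x = np.clip(x, 0, 100)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better] = x[better]
        pbest_f[better] = f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best_r, best_f = ga_pso()
```

    Passing the inflow straight through would give a peak release of 90; a schedule that uses the storage should beat that.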

  6. Optimum structural properties for an anode current collector used in a polymer electrolyte membrane water electrolyzer operated at the boiling point of water

    NASA Astrophysics Data System (ADS)

    Li, Hua; Fujigaya, Tsuyohiko; Nakajima, Hironori; Inada, Akiko; Ito, Kohei

    2016-11-01

    This study attempts to optimize the properties of the anode current collector of a polymer electrolyte membrane water electrolyzer at high temperatures, particularly at the boiling point of water. Different titanium meshes (4 commercial ones and 4 modified ones) with various properties are experimentally examined by operating a cell with each mesh under different conditions. The average pore diameter, thickness, and contact angle of the anode current collector are controlled in the ranges of 10-35 μm, 0.2-0.3 mm, and 0-120°, respectively. These results showed that increasing the temperature from the conventional temperature of 80 °C to the boiling point could reduce both the open circuit voltage and the overvoltages to a large extent without notable dehydration of the membrane. These results also showed that decreasing the contact angle and the thickness suppresses the electrolysis overvoltage largely by decreasing the concentration overvoltage. The effect of the average pore diameter was not evident until the temperature reached the boiling point. Using operating conditions of 100 °C and 2 A/cm², the electrolysis voltage is minimized to 1.69 V with a hydrophilic titanium mesh with an average pore diameter of 21 μm and a thickness of 0.2 mm.

  7. A study on the influence of operating circuit on the position of emission point of fluorescent lamp

    NASA Astrophysics Data System (ADS)

    Uetsuki, Tadao; Genba, Yuki; Kanda, Takashi

    2009-10-01

    High-efficiency fluorescent lamp systems driven at high frequency are very popular for general lighting. It is therefore very beneficial to be able to predict a lamp's end of life in advance: users can buy a replacement just before the lamp dies and need not keep spares in stock. To judge the lifetime of a lamp, it is very useful to know where the emission point is on the electrode filament. With regard to a method for locating the emission point, it has been reported that the distance from the emission point to the end of the filament can be calculated by measuring the voltage across the filament and the currents flowing in both ends of the filament. The lamp's life can then be predicted by tracking the movement of the emission point over operating time. It is therefore very important to confirm whether the movement of the emission point changes when the operating circuit is changed. The authors investigated the difference in how the emission points moved for two very popular lamp systems. One system had an electronic ballast with an auxiliary power source for cathode heating. The other system had an electronic ballast with no such power source, but with a capacitor connected in parallel with the lamp. In this presentation these measurement results are reported.

  8. How does network design constrain optimal operation of intermittent water supply?

    NASA Astrophysics Data System (ADS)

    Lieb, Anna; Wilkening, Jon; Rycroft, Chris

    2015-11-01

    Urban water distribution systems do not always supply water continuously or reliably. As pipes fill and empty, pressure transients may contribute to degraded infrastructure and poor water quality. To help understand and manage this undesirable side effect of intermittent water supply--a phenomenon affecting hundreds of millions of people in cities around the world--we study the relative contributions of fixed versus dynamic properties of the network. Using a dynamical model of unsteady transition pipe flow, we study how different elements of network design, such as network geometry, pipe material, and pipe slope, contribute to undesirable pressure transients. Using an optimization framework, we then investigate to what extent network operation decisions such as supply timing and inflow rate may mitigate these effects. We characterize some aspects of network design that make them more or less amenable to operational optimization.

  9. Better Redd than Dead: Optimizing Reservoir Operations for Wild Fish Survival During Drought

    NASA Astrophysics Data System (ADS)

    Adams, L. E.; Lund, J. R.; Quiñones, R.

    2014-12-01

    Extreme droughts are difficult to predict and may incur large economic and ecological costs. Dam operations in drought usually consider minimizing economic costs. However, dam operations also offer an opportunity to increase wild fish survival under difficult conditions. Here, we develop a probabilistic optimization approach to developing reservoir release schedules to maximize fish survival in regulated rivers. A case study applies the approach to wild Fall-run Chinook Salmon below Folsom Dam on California's American River. Our results indicate that releasing more water early in the drought will, on average, save more wild fish over the long term.

  10. Improved nonparametric estimation of the optimal diagnostic cut-off point associated with the Youden index under different sampling schemes.

    PubMed

    Yin, Jingjing; Samawi, Hani; Linder, Daniel

    2016-07-01

    A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity - 1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than those based on simple random sampling and that both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method. PMID:26756282
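
    The kernel-smoothing idea can be sketched under simple random sampling: smooth each group's empirical CDF with a Gaussian kernel and maximize the resulting smooth Youden curve on a grid. This is an illustrative sketch on synthetic data (Silverman bandwidths, larger values taken to indicate disease); the paper's ranked-set-sampling estimator and asymptotics are not reproduced.

```python
import numpy as np
from math import erf, sqrt

def kernel_cdf(x, data, h):
    """Gaussian-kernel-smoothed empirical CDF evaluated at scalar x."""
    z = (x - data) / h
    return float(np.mean([0.5 * (1 + erf(t / sqrt(2))) for t in z]))

def silverman(d):
    # Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
    return 1.06 * np.std(d) * len(d) ** (-0.2)

def youden_cutoff_kde(healthy, diseased, n_grid=500):
    """Smooth estimate of the Youden-optimal cutoff via kernel CDFs.

    Assuming larger biomarker values indicate disease:
    J(c) = sensitivity(c) + specificity(c) - 1 = F_healthy(c) - F_diseased(c)
    """
    hh, hd = silverman(healthy), silverman(diseased)
    grid = np.linspace(min(healthy.min(), diseased.min()),
                       max(healthy.max(), diseased.max()), n_grid)
    j = np.array([kernel_cdf(c, healthy, hh) - kernel_cdf(c, diseased, hd)
                  for c in grid])
    k = int(j.argmax())
    return grid[k], j[k]

# Synthetic biomarker: healthy ~ N(0, 1), diseased ~ N(2, 1); the true
# Youden-optimal cutoff is 1.0 with J = 2*Phi(1) - 1, about 0.68.
rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 300)
diseased = rng.normal(2.0, 1.0, 150)
cutoff, j_max = youden_cutoff_kde(healthy, diseased)
```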

  12. Aortic aneurysm repair. Reduced operative mortality associated with maintenance of optimal cardiac performance.

    PubMed Central

    Whittemore, A D; Clowes, A W; Hechtman, H B; Mannick, J A

    1980-01-01

    Recent advances in the operative management of aortic aneurysms have resulted in a decreased rate of morbidity and mortality. In 1972, we hypothesized that a further reduction in operative mortality might be obtained with controlled perioperative fluid management based on data provided by the thermistor-tipped pulmonary artery balloon catheter. From 1972 to 1979 a flow directed pulmonary artery catheter was inserted in each of 110 consecutive patients prior to elective or urgent repair of nonruptured infrarenal aortic aneurysms. The slope of the left ventricular performance curve was determined preoperatively by incremental infusions of salt-poor albumin and Ringer's lactate solution. With each increase in the pulmonary arterial wedge pressure (PAWP), the cardiac index (CI) was measured. The PAWP was then maintained intra- and postoperatively at levels providing optimal left ventricular performance for the individual patient. There were no 30-day operative deaths among the patients in this series and only one in-hospital mortality (0.9%), four months following surgery. The five-year cumulative survival rate for patients in the present series was 84%, a rate which does not differ significantly from that expected for a normal age-corrected population. Since the patient population was unselected and there were no substantial alterations in operative technique during the present period, these improved results support the hypothesis that operative mortality attending the elective or urgent repair of abdominal aortic aneurysm can be minimized by maintenance of optimal cardiac performance with careful attention to fluid therapy during the perioperative period. PMID:7416834

  13. Information Points and Optimal Discharging Speed: Effects on the Saturation Flow at Signalized Intersections

    ERIC Educational Resources Information Center

    Gao, Lijun

    2015-01-01

    An information point was defined in this study as any object, structure, or activity located outside of a traveling vehicle that could potentially attract the visual attention of the driver. Saturation flow rates were studied for three pairs of signalized intersections in Toledo, Ohio. Each pair of intersections consisted of one intersection with…

  14. Optimal reservoir operation considering the water quality issues: A stochastic conflict resolution approach

    NASA Astrophysics Data System (ADS)

    Kerachian, Reza; Karamouz, Mohammad

    2006-12-01

    In this study, an algorithm combining a water quality simulation model and a deterministic/stochastic conflict resolution technique is developed for determining optimal reservoir operating rules. As different decision makers and stakeholders are involved in reservoir operation, the Nash bargaining theory is used to resolve the existing conflict of interests. The utility functions of the proposed models are developed on the basis of the reliability of the water supply to downstream demands, water storage, and the quality of the withdrawn water. The expected value on the Nash product is considered as the objective function of the stochastic model, which can incorporate the inherent uncertainty of reservoir inflow. A water quality simulation model is also developed to simulate the thermal stratification cycle and the reservoir discharge quality through a selective withdrawal structure. The optimization models are solved using a new version of genetic algorithms called varying chromosome length genetic algorithm (VLGA). In this algorithm the chromosome length is sequentially increased to provide a good initial solution for the final traditional GA-based optimization model. The proposed stochastic optimization model can also reduce the computational burden of the previously proposed models such as stochastic dynamic programming (SDP) by reducing the number of state transitions in each stage. The proposed models which are called VLGAQ and SVLGAQ are applied to the 15-Khordad Reservoir in the central part of Iran. The results show that the proposed models can reduce the salinity of allocated water to different water demands as well as the salinity buildup in the reservoir.
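
    The Nash bargaining step above, in which the product of the stakeholders' utility gains over their disagreement point is maximized, can be illustrated with a toy two-party example. The utility functions and disagreement values below are invented for illustration, not the paper's calibrated reliability and quality utilities.

```python
import numpy as np

# Hypothetical utilities over a release fraction x in [0, 1]: downstream
# supply prefers large releases, in-reservoir quality prefers retaining
# storage (invented forms, for illustration only).
def u_supply(x):
    return x ** 0.5

def u_quality(x):
    return (1 - x) ** 0.7

d1, d2 = 0.1, 0.1  # disagreement (no-cooperation) utilities

# Nash product: maximize the product of utility gains over disagreement.
x = np.linspace(0, 1, 1001)
nash = np.maximum(u_supply(x) - d1, 0) * np.maximum(u_quality(x) - d2, 0)
x_star = x[nash.argmax()]
```

    The interior maximizer balances both parties' gains; an evolutionary solver such as the paper's VLGA replaces this grid search when the decision space is high-dimensional.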

  15. PLIO: a generic tool for real-time operational predictive optimal control of water networks.

    PubMed

    Cembrano, G; Quevedo, J; Puig, V; Pérez, R; Figueras, J; Verdejo, J M; Escaler, I; Ramón, G; Barnet, G; Rodríguez, P; Casas, M

    2011-01-01

    This paper presents a generic tool, named PLIO, that supports real-time operational control of water networks. Control strategies are generated using predictive optimal control techniques. The tool manages flow in a large water supply and distribution system including reservoirs, open-flow channels for water transport, water treatment plants, pressurized water pipe networks, tanks, flow/pressure control elements and a telemetry/telecontrol system. Predictive optimal control is used to generate flow control strategies from the sources to the consumer areas to meet future demands with appropriate pressure levels, optimizing operational goals such as network safety volumes and flow control stability. PLIO allows the user to build the network model graphically and then automatically generates the model equations used by the predictive optimal controller. Additionally, PLIO can work off-line (in simulation) and on-line (in real-time mode). The case study of Santiago, Chile, is presented to exemplify the control results obtained using PLIO off-line (in simulation). PMID:22097020
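
    The predictive (receding-horizon) control idea can be sketched for a single tank. This is a minimal unconstrained sketch assuming simple integrator dynamics, an invented demand profile, and an invented safety volume; PLIO's network models, constraints, and solver are far richer.

```python
import numpy as np

def mpc_step(v0, demand_forecast, v_ref=50.0, lam=0.1):
    """One receding-horizon step for a single storage tank with dynamics
    v[t+1] = v[t] + u[t] - d[t]: minimize squared tracking error to the
    safety volume v_ref plus a pumping penalty lam*||u||^2 over the
    horizon, then apply only the first control move."""
    n = len(demand_forecast)
    T = np.tril(np.ones((n, n)))  # cumulative-sum prediction matrix
    target = v_ref - v0 + T @ np.asarray(demand_forecast, float)
    u = np.linalg.solve(T.T @ T + lam * np.eye(n), T.T @ target)
    return u[0]

# Closed-loop simulation: the controller refills toward the safety
# volume while demand varies over the day.
demand = 5 + 2 * np.sin(np.arange(30) / 3)
v = 20.0  # initial volume, well below the safety volume
trajectory = [v]
for t in range(24):
    u = mpc_step(v, demand[t:t + 6])  # 6-step demand forecast
    v = v + u - demand[t]
    trajectory.append(v)
```

    Re-solving at every step with a fresh forecast is what makes the scheme "predictive": disturbances are absorbed by feedback rather than by a fixed open-loop plan.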

  17. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    PubMed

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-11-06

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams with data-centric annotations derived from our existing data warehouse.

  18. Determining the optimal operator allocation in SME's food manufacturing company using computer simulation and data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Rahman, Asmahanim Ab

    2014-09-01

    In a labor-intensive manufacturing system, optimal operator allocation is one of the most important decisions in determining the efficiency of the system. In this paper, ten operator allocation alternatives are identified using the computer simulation package ARENA. Two inputs (average wait time and average cycle time) and two outputs (average operator utilization and total packet value) are generated for each alternative. Four Data Envelopment Analysis (DEA) models (CCR, BCC, MCDEA and AHP/DEA) are used to determine the optimal operator allocation at one of the SME food manufacturing companies in Selangor. All four DEA models showed that the optimal allocation is six operators at the peeling process, three operators at the washing and slicing process, three operators at the frying process and two operators at the packaging process.

  19. Optimization of magnetic refrigerators by tuning the heat transfer medium and operating conditions

    NASA Astrophysics Data System (ADS)

    Ghahremani, Mohammadreza; Aslani, Amir; Bennett, Lawrence; Della Torre, Edward

    A new reciprocating Active Magnetic Regenerator (AMR) experimental device has been designed, built and tested to evaluate the effect of system parameters on AMR performance near room temperature. Gadolinium turnings were used as the refrigerant, silicon oil as the heat transfer medium, and a magnetic field of 1.3 T was cycled. This study focuses on the methodology of single-stage AMR operating conditions to obtain a higher temperature span near room temperature. The main objective herein is not to report the absolute maximum attainable temperature span of an AMR system, but rather to find the system's optimal operating conditions for reaching that maximum span. The results of this work show that an AMR system has an optimal operating frequency, heat transfer fluid flow rate, flow duration, and displaced volume ratio. It is expected that such optimization and the results provided herein will permit the future design and development of more efficient room-temperature magnetic refrigeration systems.

  20. 78 FR 44881 - Drawbridge Operation Regulation; York River, Between Yorktown and Gloucester Point, VA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-25

    ... maintenance work on the moveable spans on the Coleman Memorial Bridge. This temporary deviation allows the... operating regulation set out in 33 CFR 117.1025, to facilitate maintenance of the moveable spans on...

  1. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    NASA Astrophysics Data System (ADS)

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design at the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis) and 85% cogenerative, with simple economic paybacks of 5-8 years, are feasible for 1-2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell

  2. An optimized structure on FPGA of key point description in SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Chenyu; Peng, Jinlong; Zhu, En; Zou, Yuxin

    2015-12-01

    The SIFT algorithm is one of the most significant and effective algorithms for describing image features in the field of image matching. Implementing the SIFT algorithm in a hardware environment is clearly valuable, and difficult. In this paper, we mainly discuss the realization of the Key Point Description stage of the SIFT algorithm, along with the matching process. In Key Point Description, we propose a new method of generating histograms that avoids the rotation of adjacent regions and ensures rotational invariance. In Matching, we replace the conventional Euclidean distance with the Hamming distance. The results of the experiments fully prove that the structure we propose is real-time, accurate, and efficient. Future work is still needed to improve its performance in harsher conditions.
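
    The appeal of Hamming-distance matching on hardware is that it reduces to XOR plus popcount. A small sketch on invented packed binary descriptors (a stand-in for the paper's binarized SIFT descriptors, whose exact format is not given in the abstract):

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=40):
    """Nearest-neighbour matching under the Hamming distance.

    desc_a, desc_b: uint8 arrays of shape (n, n_bytes) holding packed
    binary descriptors. Returns (index_a, index_b, distance) triples."""
    matches = []
    for i, d in enumerate(desc_a):
        # XOR then popcount gives the Hamming distance to every candidate.
        dists = np.unpackbits(desc_b ^ d, axis=1).sum(axis=1)
        j = int(dists.argmin())
        if dists[j] <= max_dist:
            matches.append((i, j, int(dists[j])))
    return matches

rng = np.random.default_rng(4)
base = rng.integers(0, 256, (5, 32), dtype=np.uint8)  # five 256-bit descriptors
noisy = base.copy()
noisy[:, 0] ^= 1                                      # flip one bit in each
matches = hamming_match(base, noisy)
```

    Each descriptor matches its one-bit-flipped counterpart at distance 1, while unrelated random descriptors sit near distance 128.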

  3. Optimization principle of operating parameters of heat exchanger by using CFD simulation

    NASA Astrophysics Data System (ADS)

    Mičieta, Jozef; Jiří, Vondál; Jandačka, Jozef; Lenhard, Richard

    2016-03-01

    Designing effective heat transfer devices while minimizing costs is a key goal in industry, important for both engineers and users because of the wide-scale use of heat exchangers. The traditional approach to design is based on an iterative process in which design parameters are gradually changed until a satisfactory solution is achieved. The design process of a heat exchanger is very dependent on the experience of the engineer, so the use of computational software is a major advantage in terms of time. Determination of the operating parameters of the heat exchanger and the subsequent estimation of operating costs have a major impact on the expected profitability of the device. On the one hand there are the material and production costs, which are immediately reflected in the cost of the device; on the other hand, there are somewhat hidden costs in the economic operation of the heat exchanger. The economic balance of operation significantly affects the technical solution and accompanies the design of the heat exchanger from its inception. It is therefore important not to underestimate the choice of operating parameters. The article describes an optimization procedure for choosing cost-effective operational parameters for a simple double-pipe heat exchanger using CFD software, and a subsequent proposal to modify its design for more economical operation.

  4. The use of experimental design to find the operating maximum power point of PEM fuel cells

    SciTech Connect

    Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria; Olteanu, Valentin; Pitorac, Cristina; Drugan, Elena

    2015-03-10

    Proton Exchange Membrane (PEM) fuel cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a PEM fuel cell mathematical model based on the Design of Experiments methodology is described. Design of Experiments provides a very efficient methodology for obtaining a mathematical model of the studied multivariable system with only a few experiments. The obtained results can be used for optimization and control of PEM fuel cell systems.
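
    The Design of Experiments workflow can be sketched with a 2² full factorial plus a centre point: run a handful of "experiments", fit a low-order polynomial by least squares, and read off the direction toward the maximum power point. The cell response function, factor ranges, and coefficients below are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Hypothetical cell response used only to generate "experimental" data:
# power (W) peaking near T = 70 degC, RH = 80% (invented numbers).
def cell_power(T, RH):
    return 40 - 0.02 * (T - 70) ** 2 - 0.01 * (RH - 80) ** 2

# 2^2 full factorial plus a centre point, in coded units (-1, 0, +1).
coded = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], float)
T0, dT, RH0, dRH = 60.0, 10.0, 60.0, 20.0  # centre and half-ranges
y = np.array([cell_power(T0 + dT * a, RH0 + dRH * b) for a, b in coded])

# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 by least squares; the design
# is orthogonal, so each coefficient is estimated independently.
X = np.column_stack([np.ones(len(coded)), coded[:, 0], coded[:, 1],
                     coded[:, 0] * coded[:, 1]])
b0, b1, b2, b12 = np.linalg.lstsq(X, y, rcond=None)[0]
# Positive b1 and b2 point the steepest-ascent search toward higher
# temperature and humidity, i.e. toward the maximum power point.
```

    Only five runs yield a usable local model, which is the efficiency argument the abstract makes.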

  5. 47 CFR 90.471 - Points of operation in internal transmitter control systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... transmitter control systems are not synonymous with dispatch points (See § 90.467) nor with telephone positions which are part of the public, switched telephone network; and the scheme of regulation is to be... the licensee. The fixed position may be part of a private telephone exchange or it may be any...

  6. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    PubMed Central

    Mora-Pascual, Jerónimo M.; García-García, Alberto; Martínez-González, Pablo

    2016-01-01

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, and it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some of its applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics, due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results prove that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results. PMID:27768714
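
    The matching phase in question, and the metric substitution, can be sketched as follows. On a synthetic cloud with small alignment noise, all three metrics recover essentially the same correspondences, which is why the cheaper L1 and L-infinity variants can preserve convergence (this toy check is illustrative, not the paper's benchmark).

```python
import numpy as np

def nearest_neighbours(src, dst, metric="euclidean"):
    """Matching phase of ICP: index of the closest dst point for each
    src point under the chosen point-to-point distance metric."""
    diff = src[:, None, :] - dst[None, :, :]  # (n_src, n_dst, dim)
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=2))
    elif metric == "manhattan":               # L1: no square root
        d = np.abs(diff).sum(axis=2)
    elif metric == "chebyshev":               # L-inf: max coordinate gap
        d = np.abs(diff).max(axis=2)
    else:
        raise ValueError(metric)
    return d.argmin(axis=1)

rng = np.random.default_rng(5)
cloud = rng.uniform(-1, 1, (200, 3))
jittered = cloud + rng.normal(0, 0.005, cloud.shape)  # small alignment noise

# Fraction of points whose nearest neighbour is their true counterpart.
agree = {m: float((nearest_neighbours(jittered, cloud, m)
                   == np.arange(200)).mean())
         for m in ("euclidean", "manhattan", "chebyshev")}
```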

  7. 78 FR 20144 - Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ... exemption and FONSI were published in the Federal Register (FR) on the same day the exemption was issued (72 FR 55254). The exemption was then implemented at Indian Point Unit 3. A draft EA for public comment.... See 75 FR 20248 (April 19, 2010). That 2010 rulemaking expanded the scope of an existing...

  8. Optimal robustness of supervised learning from a noniterative point of view

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    1995-08-01

    In most artificial neural network applications (e.g. pattern recognition), if the dimension of the input vectors is much larger than the number of patterns to be recognized, a one-layered, hard-limited perceptron is generally sufficient to do the recognition job. As long as the training input-output mapping set is numerically given, and as long as this set satisfies a special linear-independency relation, the connection matrix that meets the supervised learning requirements can be solved by a noniterative, one-step algebraic method. The learning of this noniterative scheme is very fast (close to real-time learning) because it is one-step and noniterative. The recognition of untrained patterns is very robust because a universal geometrical optimization process for selecting the solution can be applied to the learning process. This paper reports the theoretical foundation of this noniterative learning scheme and focuses on the optimal robustness analysis. A real-time character recognition scheme is then designed along these lines. This character recognition scheme will be used (in a movie presentation) to demonstrate the experimental results of some theoretical parts reported in this paper.
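
    The one-step algebraic solution described above can be sketched with a pseudo-inverse, assuming a bipolar hard limiter; the tiny training set is invented for illustration.

```python
import numpy as np

# Noniterative supervised learning for a one-layer hard-limited perceptron:
# when the training patterns are linearly independent, the connection
# matrix W follows in one step from the pseudo-inverse of the input matrix.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])        # 2 training patterns, 3-dimensional inputs
Y = np.array([[1.0], [-1.0]])          # desired bipolar outputs
W = np.linalg.pinv(X) @ Y              # one-step algebraic solution, no iteration
out = np.sign(X @ W)                   # hard-limited recognition of the training set
```

    Because there are fewer patterns than input dimensions, the pseudo-inverse reproduces the training targets exactly before the hard limiter is even applied.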

  9. Optimal cut-off points of fasting plasma glucose for two-step strategy in estimating prevalence and screening undiagnosed diabetes and pre-diabetes in Harbin, China.

    PubMed

    Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening abnormal glucose metabolism and estimating prevalence in the general Chinese population, a population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2-hour post-load glucose from the oral glucose tolerance test in all participants. The screening potential of FPG, the cost per case identified by the two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. A total of 12.0% or 9.0% of participants were diagnosed with pre-diabetes using the 2003 ADA criteria or the 1999 WHO criteria, respectively. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by the two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using the 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using the 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimation of prevalence was reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for the two-step strategy in the Chinese population may be more effective and less costly in reducing missed diagnoses of hyperglycemic conditions.
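
    The first-step screening logic behind such cut-offs can be sketched as follows; the FPG values and diagnoses below are invented for illustration, not the Harbin data.

```python
def screening_sensitivity(fpg, diseased, cutoff):
    """Sensitivity of an FPG cut-off: the fraction of true cases that the
    first step flags for the confirmatory second step (the OGTT)."""
    flagged = [value >= cutoff for value in fpg]
    true_pos = sum(1 for f, d in zip(flagged, diseased) if f and d)
    return true_pos / sum(diseased)

fpg = [4.8, 5.1, 5.7, 6.2, 5.4, 7.0]   # mmol/l, hypothetical participants
diseased = [0, 0, 1, 1, 0, 1]          # OGTT-confirmed diagnoses (1 = case)
sens = screening_sensitivity(fpg, diseased, cutoff=5.6)
```

    Sweeping the cutoff and plotting sensitivity against the false-positive rate yields the receiver-operating characteristic curve the study reports areas under.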

  11. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening abnormal glucose metabolism and estimating prevalence in the general Chinese population, a population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2-hour post-load glucose from the oral glucose tolerance test in all participants. The screening potential of FPG, the cost per case identified by the two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. A total of 12.0% or 9.0% of participants were diagnosed with pre-diabetes using the 2003 ADA criteria or the 1999 WHO criteria, respectively. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by the two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using the 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using the 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimation of prevalence was reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for the two-step strategy in the Chinese population may be more effective and less costly in reducing missed diagnoses of hyperglycemic conditions. PMID:25785585

  12. Decision support system for optimal reservoir operation modeling within sediment deposition control.

    PubMed

    Hadihardaja, Iwan K

    2009-01-01

    Suspended sediment carried by surface runoff into a reservoir affects reservoir sustainability due to the reduction of storage capacity. The purpose of this study is to introduce a reservoir operation model aimed at minimizing sediment deposition and maximizing energy production, in order to obtain an optimal decision policy for both objectives. The reservoir sediment-control operation model is formulated using Non-Linear Programming with an iterative procedure based on a multi-objective measurement; the optimal decision policy is established in association with a relationship between stream inflow and sediment rate developed using an Artificial Neural Network. A trade-off evaluation is introduced to generate a strategy for controlling sediment deposition at the same level of target ratio while producing hydroelectric energy. The case study is carried out at the Sanmenxia Reservoir in China, where redesign and reconstruction have been accomplished; this model, however, deals only with the original design and focuses on a wet-year operation. The study also examines a five-year operation period to show the accumulation of sediment and its impact on reservoir storage capacity.

  13. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. The approach addresses the problem of better fitting riverine ecosystem requirements to existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN; the human requirement was to provide a higher degree of satisfaction of water supply. The results demonstrate that the proposed methodology can offer a number of diversified alternative strategies for reservoir operation and improve operational strategies, producing downstream flows that meet both human and ecosystem needs. The wide spread of Pareto-front (optimal) solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise among reservoir operational strategies for human and ecosystem needs.
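
    The trade-off exploited here rests on Pareto dominance. A minimal sketch of extracting a Pareto front from candidate operating strategies (not NSGA-II itself, whose non-dominated sorting and crowding operators are more involved), with invented objective pairs:

```python
def pareto_front(solutions):
    """Keep the non-dominated solutions; each solution is a tuple of
    objectives to maximize, e.g. (fish diversity, water-supply satisfaction)."""
    def dominates(a, b):
        # a dominates b if it is at least as good everywhere and not identical
        return all(x >= y for x, y in zip(a, b)) and a != b
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

candidates = [(0.9, 0.4), (0.6, 0.7), (0.3, 0.9), (0.5, 0.5)]
front = pareto_front(candidates)       # (0.5, 0.5) is dominated by (0.6, 0.7)
```

    Decision makers then pick from the front according to how they weigh the two objectives, which is the "best compromise" step the abstract describes.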

  14. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high-contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
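
    Inside each subsection, finding the optimal linear combination of references reduces to an ordinary least-squares problem; a minimal sketch with synthetic pixel vectors (the real algorithm adds per-subsection geometry and selection rules not shown here):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
target = rng.normal(size=400)          # flattened target subsection (synthetic noise)
refs = rng.normal(size=(400, 5))       # 5 reference PSF images, same subsection

# Coefficients c minimizing ||target - refs @ c||^2: the linear combination
# of references leaving the smallest residual noise in this subsection.
c, *_ = np.linalg.lstsq(refs, target, rcond=None)
residual = target - refs @ c
```

    Because c = 0 (no subtraction) is always a feasible choice, the optimized residual can never carry more power than the unsubtracted subsection.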

  15. Parameters Optimization for Operational Storm Surge/Tide Forecast Model using a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, W.; You, S.; Ryoo, S.; Global Environment System Research Laboratory

    2010-12-01

    Typhoons generated in the northwestern Pacific Ocean affect the Korean Peninsula every year, and storm surges generated by strong low pressure and sea winds often cause serious damage to property in coastal regions. To predict storm surges, many studies have been conducted using numerical models over the years. Numerical models based on the laws of physics use various parameters in their calculations of physical processes, but the values of these parameters are not known accurately. Because these parameters affect model performance, such uncertain values can strongly influence the results of the model. Therefore, optimization of the parameters used in a numerical model is essential for accurate storm surge predictions. A genetic algorithm (GA) is used here to estimate optimized values of these parameters. The GA is a stochastic search method modeling the natural phenomena of genetic inheritance and competition for survival: to mimic breeding and natural selection, fit candidates are retained and varied using genetic operators such as inheritance, mutation, selection and crossover. In this study, we have improved the operational storm surge/tide forecast model (STORM) of NIMR/KMA (National Institute of Meteorological Research/Korea Meteorological Administration), which covers 115E - 150E, 20N - 52N and is based on POM (Princeton Ocean Model) with 8 km horizontal resolution, using the GA. Optimized values were estimated for 4 main parameters within STORM: the bottom drag coefficient, background horizontal diffusivity coefficient, Smagorinsky horizontal viscosity coefficient and sea level pressure scaling coefficient. These parameters were optimized on typhoon MAEMI in 2003 and on 9 typhoons that affected the Korean Peninsula from 2005 to 2007. The 4 estimated parameters were also used to compare one-month predictions in February and August 2008. Over the 48-hour forecast time, the mean and median model accuracies improved by 25 and 51%, respectively.
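
    A toy version of the GA loop (selection, crossover, mutation) tuning a single scalar parameter against a stand-in error function; everything here, including the "optimum" at 0.7, is illustrative rather than a STORM parameter.

```python
import random

random.seed(1)

def fitness(p):
    """Stand-in for negated model error as a function of one parameter."""
    return -(p - 0.7) ** 2              # hypothetical optimum at p = 0.7

pop = [random.uniform(0.0, 1.0) for _ in range(20)]
for _ in range(40):                     # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # selection: fittest half survives
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)           # crossover: blend two parents
        child += random.gauss(0.0, 0.05)  # mutation: small random perturbation
        children.append(child)
    pop = parents + children            # elitism keeps the best found so far
best = max(pop, key=fitness)
```

    In the real setting, evaluating `fitness` means running the surge model against observed typhoon surges, which is why GA tuning is expensive but worthwhile for an operational model.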

  16. A survey of ground operations tools developed to simulate the pointing of space telescopes and the design for WISE

    NASA Technical Reports Server (NTRS)

    Fabinsky, Beth

    2006-01-01

    WISE, the Wide Field Infrared Survey Explorer, is scheduled for launch in June 2010. The mission operations system for WISE requires a software modeling tool to help plan, integrate and simulate all spacecraft pointing and verify that no attitude constraints are violated. In the course of developing the requirements for this tool, an investigation was conducted into the design of similar tools for other space-based telescopes. This paper summarizes the ground software and processes used to plan and validate pointing for a selection of space telescopes; with this information as background, the design for WISE is presented.

  17. Optimization of non-aqueous electrolytes for primary lithium/air batteries operated in ambient environment

    SciTech Connect

    Xu, Wu; Xiao, Jie; Zhang, Jian; Wang, Deyu; Zhang, Jiguang

    2009-07-07

    The selection and optimization of non-aqueous electrolytes for ambient operation of lithium/air batteries has been studied. Organic solvents with low volatility and low moisture absorption are necessary to minimize changes in electrolyte composition and the reaction between the lithium anode and water during the discharge process. Electrolytes with high polarity are critical to reduce wetting and flooding of the carbon-based air electrode and thus improve battery performance. For ambient operation, the viscosity, ionic conductivity, and oxygen solubility of the electrolyte are less important than the polarity of the organic solvents, once the electrolyte has reasonable viscosity, conductivity, and oxygen solubility. It was found that the PC/EC mixture is the best solvent system and LiTFSI the most feasible salt for ambient operation of Li/air batteries. Battery performance is not very sensitive to the PC/EC ratio or the salt concentration.

  18. Optimizing the CEBAF Injector for Beam Operation with a Higher Voltage Electron Gun

    SciTech Connect

    F.E. Hannon, A.S. Hofler, R. Kazimi

    2011-03-01

    Recent developments in the DC gun technology used at CEBAF have allowed an increase in operational voltage from 100kV to 130kV. In the near future this will be extended further to 200kV with the purchase of a new power supply. The injector components and layout at this time have been designed specifically for 100kV operation. It is anticipated that, with an increase in gun voltage and optimization of the layout and components for 200kV operation, the electron bunch length and beam brightness can be improved. This paper explores some upgrade possibilities for a 200kV-gun CEBAF injector through beam dynamics simulations.

  19. Long-term energy capture and the effects of optimizing wind turbine operating strategies

    NASA Technical Reports Server (NTRS)

    Miller, A. H.; Formica, W. J.

    1982-01-01

    Methods of increasing energy capture without affecting the turbine design were investigated. The emphasis was on optimizing the wind turbine operating strategy. The operating strategy embodies the startup and shutdown algorithm as well as the algorithm for determining when to yaw (rotate) the axis of the turbine more directly into the wind. Using data collected at a number of sites, the time-dependent simulation of a MOD-2 wind turbine using various, site-dependent operating strategies provided evidence that site-specific fine tuning can produce significant increases in long-term energy capture as well as reduce the number of start-stop cycles and yawing maneuvers, which may result in reduced fatigue and subsequent maintenance.
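
    A start/stop rule with a dead band is one way such an operating strategy limits start-stop cycles; the thresholds below are invented for illustration, not MOD-2 values.

```python
def keep_running(running, wind, cut_in=5.0, cut_out=25.0, margin=0.5):
    """Hysteresis start/stop logic: a stopped turbine needs wind above
    cut_in + margin to start; a running one stops only below cut_in - margin
    or above cut_out. The dead band suppresses start-stop cycling."""
    if not running:
        return wind >= cut_in + margin
    return cut_in - margin <= wind <= cut_out
```

    Widening the margin trades a little captured energy near cut-in for fewer start-stop cycles, which is exactly the fatigue-versus-capture tuning the abstract describes.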

  20. Operation of a low temperature absorption chiller at rating point and at reduced evaporator temperature

    NASA Astrophysics Data System (ADS)

    Best, R.; Biermann, W.; Reimann, R. C.

    1985-01-01

    The returned fifteen-ton Solar Absorption Machine (SAM) 015 chiller was given a cursory visual inspection, some obvious problems were remedied, and it was then placed on a test stand to measure its as-received ("dirty") performance. It was then given a standard acid clean, the water side of the tubes was brushed clean, and the machine was retested. The before- and after-cleaning data were compared with equivalent data taken before the machine was shipped. The second part of the work statement was to experimentally demonstrate the technical feasibility of operating the chiller at evaporator temperatures below 0 °C (32 °F) and to identify any operational problems.

  1. Partial difference operators on weighted graphs for image processing on surfaces and point clouds.

    PubMed

    Lozes, Francois; Elmoataz, Abderrahim; Lezoray, Olivier

    2014-09-01

    Partial difference equations (PDEs) and variational methods for image processing on Euclidean domains are very well established because they make it possible to solve a large range of real computer vision problems. With the recent advent of many 3D sensors, there is growing interest in transposing and solving PDEs on surfaces and point clouds. In this paper, we propose a simple method to solve such PDEs using the framework of PDEs on graphs. This approach enables us to transcribe, for surfaces and point clouds, many models and algorithms designed for image processing. To illustrate our proposal, three problems are considered: (1) p-Laplacian restoration and inpainting; (2) PDE-based mathematical morphology; and (3) active-contour segmentation.
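
    On a weighted graph, the simplest such transcription is a p = 2 graph-Laplacian diffusion step, i.e. a weighted neighbour average; the toy path graph below stands in for a sampled surface or point cloud.

```python
def diffusion_step(f, weights):
    """One graph-Laplacian (p = 2) smoothing step on a weighted graph.
    f maps node -> value; weights maps node -> {neighbour: weight}."""
    return {u: sum(w * f[v] for v, w in nbrs.items()) / sum(nbrs.values())
            for u, nbrs in weights.items()}

# 3-node path graph: 0 -- 1 -- 2, unit edge weights
weights = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
f = {0: 0.0, 1: 0.0, 2: 3.0}
smoothed = diffusion_step(f, weights)
```

    Iterating this step diffuses values along graph edges exactly as heat diffusion does on a continuous domain; restoration and inpainting variants change the weights and the exponent p.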

  2. Requests to an optimal process and plant management from a production point of view

    NASA Astrophysics Data System (ADS)

    Heller, Matthias; Hsu, Jack; Terhuerne, Joerg

    2000-10-01

    Well-done designs and well-equipped machines are only half the way to reproducible, stable quality in coating production. To achieve this aim it is also necessary to have a comprehensive production management system at one's disposal, including recipe management, machinery management and quality assurance. Most production errors and rejects are caused by wrong handling, which can be traced back to a lack of up-to-date information, or by errors of measurement systems and equipment failures during a coating process. So the very simple rules one has to observe are: 1) Transfer all necessary information about the process to the operator and to the machine, and require the operator to read this information by using an online checklist while charging a batch. 2) Never trust a single measurement result from your plant equipment without a cross-check against independently generated data; redundancy is the magic word for process assurance. 3) Check the status of your equipment as often as possible; integrate a maintenance plan into your plant control and let the machine record all parameters relevant for wearing parts or media. This paper shows how to organize recipe parameters, transfer information to plant and operator, apply methods for redundancy and cross-checks of parameters, and gives an example of a complex coating system based on an LH-A700QE.
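
    Rule 2 (redundancy) amounts to a cross-check like the following; the function name, the units and the tolerance are illustrative, not from the paper.

```python
def cross_checked(primary, redundant, tolerance):
    """Accept a measurement only when an independently generated value
    agrees within tolerance (rule 2: never trust a single measurement)."""
    if abs(primary - redundant) > tolerance:
        raise ValueError("measurement cross-check failed; inspect sensors")
    return 0.5 * (primary + redundant)   # use the mean of the agreeing readings

thickness = cross_checked(101.3, 100.9, tolerance=1.0)   # nm, hypothetical readings
```

    Raising on disagreement rather than silently picking one reading is the point: the process stops before a faulty sensor produces rejects.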

  3. Efficiency and Optimality of 2-period Gait from Kinetic Energy Point of View

    NASA Astrophysics Data System (ADS)

    Asano, Fumihiko

    This paper investigates the efficiency of a 2-period limit-cycle gait from the kinetic energy viewpoint. First, we formulate a steady 2-period gait by using simple recurrence formulas for the kinetic energy of an asymmetric rimless wheel. Second, we theoretically show that, in the case that the mean value of the hip angle is constant, the generated 2-period steady gait is less efficient than a 1-period symmetric one in terms of kinetic energy. Furthermore, we show that the symmetric gait is not always optimal from another viewpoint. Finally, we investigate the validity of the derived theory through numerical simulations of virtual passive dynamic walking using a compass-like biped robot.

  4. Optical configuration optimization and calibration for the POINT system on EAST

    NASA Astrophysics Data System (ADS)

    Zou, Z. Y.; Liu, H. Q.; Li, W. M.; Lian, H.; Wang, S. X.; Yao, Y.; Lan, T.; Zeng, L.; Jie, Y. X.

    2016-11-01

    Calibration of the polarimeter system is one of the key elements determining the overall measurement accuracy. The anisotropic reflection and transmission properties of the mesh beam splitters can easily distort the polarization state of the circularly polarized beams. Using a rotating crystalline quartz λ/2 waveplate in place of the plasma allows the ratio of the measured Faraday rotation angle to the known rotation angle of the waveplate to be obtained. This ratio allows the calibration factor for each chord to be accurately determined and helps minimize the distortions introduced by the wire-mesh beam splitters. With this configuration optimization, the distortion of the polarization state is effectively eliminated.
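
    Per chord, the calibration reduces to estimating the ratio of measured to known rotation; the angles below are invented, not EAST data.

```python
known = [5.0, 10.0, 15.0, 20.0]        # waveplate rotation angles (degrees)
measured = [5.4, 10.8, 16.2, 21.6]     # polarimeter response for one chord

# Least-squares slope through the origin: the per-chord calibration factor
factor = sum(m * k for m, k in zip(measured, known)) / sum(k * k for k in known)
corrected = [m / factor for m in measured]   # calibrated Faraday angles
```

    Dividing subsequent plasma measurements by this per-chord factor removes the systematic scaling that the mesh beam splitters introduce.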

  5. Comparison of a single end point to determine optimal initial warfarin dosing (5 mg versus 10 mg) for venous thromboembolism.

    PubMed

    Quiroz, Rene; Gerhard-Herman, Marie; Kosowsky, Joshua M; DeSantis, Stacia M; Kucher, Nils; McKean, Sylvia C; Goldhaber, Samuel Z

    2006-08-15

    There remains considerable controversy regarding optimal initial warfarin dosing in patients with acute venous thromboembolism. Therefore, an open-label, randomized trial comparing 2 warfarin initiation nomograms (5 vs 10 mg) was conducted in patients with acute venous thromboembolism. All participants received fondaparinux for ≥5 days as a "bridge" to warfarin. The primary end point was defined as the number of days necessary to achieve 2 consecutive international normalized ratio laboratory test values > 1.9. A total of 50 patients were enrolled and randomly assigned to each of the treatment arms. The median time to 2 consecutive international normalized ratios was 5 days in the 2 groups. There was no statistical difference in achieving the primary end point using either the 5- or the 10-mg nomogram (p = 0.69). These results should provide clinicians with increased warfarin dosing options in patients presenting with acute venous thromboembolism.
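
    The primary end point has a direct computational reading; the day numbering convention and the INR series below are invented for illustration.

```python
def days_to_endpoint(inr_by_day, threshold=1.9):
    """First day on which two consecutive daily INR values exceed the
    threshold (the trial's primary end point); None if never reached."""
    for i in range(1, len(inr_by_day)):
        if inr_by_day[i - 1] > threshold and inr_by_day[i] > threshold:
            return i + 1                  # convert 0-based index to day number
    return None

course = [1.1, 1.4, 1.8, 2.0, 2.1]        # hypothetical daily INR values
endpoint_day = days_to_endpoint(course)
```

    Requiring two consecutive supratherapeutic values, rather than one, guards against declaring the end point on a single spurious laboratory reading.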

  6. Temperature Effects of Point Sources, Riparian Shading, and Dam Operations on the Willamette River, Oregon

    USGS Publications Warehouse

    Rounds, Stewart A.

    2007-01-01

    Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate.
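
    Because the TMDL limits are cumulative, a candidate trade is evaluated by linearly scaling and summing the per-source heating signatures along the river; the source names and numbers below are invented for illustration.

```python
# Per-source heating signatures: temperature effect (deg C) at successive
# downstream locations for each point source at its allocated heat load.
signatures = {"source_A": [0.00, 0.12, 0.08],
              "source_B": [0.00, 0.00, 0.10]}
scale = {"source_A": 0.5, "source_B": 1.5}   # candidate trade: A halves, B adds 50%

n_locations = 3
combined = [sum(scale[s] * sig[i] for s, sig in signatures.items())
            for i in range(n_locations)]     # cumulative effect after the trade
```

    A trade is acceptable only if `combined` stays below the cumulative limit at every location, which is why one-to-one swaps of heat between sources are not sufficient.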

  7. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    The artificial immune system is one of the most recently introduced intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems characterized by multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is first proposed. The recombination operator generates promising candidate solutions to enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. Comparisons with the classic CSA, the CSA with the recombination operator (RCSA), and the CSA with recombination and the modified hypermutation operator (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
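
    The clone-and-hypermutate core of a classic CSA can be sketched in a few lines (the paper's combinatorial recombination and modified hypermutation operators are more elaborate); the one-dimensional objective is a toy.

```python
import random

random.seed(0)

def affinity(x):
    return -abs(x)                      # toy objective: maximize, peak at x = 0

pop = [random.uniform(-5.0, 5.0) for _ in range(10)]
for _ in range(30):                     # generations
    pop.sort(key=affinity, reverse=True)
    clones = []
    for rank, antibody in enumerate(pop[:5]):
        for _ in range(5 - rank):       # higher affinity -> more clones
            sigma = 0.3 * (rank + 1)    # higher affinity -> gentler hypermutation
            clones.append(antibody + random.gauss(0.0, sigma))
    pop = sorted(pop + clones, key=affinity, reverse=True)[:10]   # selection
best = pop[0]
```

    The affinity-proportional clone count and inverse-affinity mutation strength are the two knobs that distinguish clonal selection from a plain evolutionary loop.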

  9. 77 FR 36015 - Atomic Safety and Licensing Board; Entergy Nuclear Operations, Inc. (Indian Point Nuclear...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-15

    ... FR 55,834 (Oct. 1, 2007). \\2\\ Establishment of Atomic Safety and Licensing Board, 72 FR 60,394 (Oct... and 3); Notice of Atomic Safety and Licensing Board Reconstitution, 77 FR 22,361 (Apr. 13, 2012). On... Renewal of Facility Operating License Nos. DPR-26 and DPR-64 for an Additional 20-Year Period, 72 FR...

  10. Space tug point design study. Volume 2: Operations, performance and requirements

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design study to determine the configuration and characteristics of a space tug was conducted. Among the subjects analyzed in the study are: (1) flight and ground operations, (2) vehicle flight performance and performance enhancement techniques, (3) flight requirements, (4) basic design criteria, and (5) functional and procedural interface requirements between the tug and other systems.

  11. 78 FR 33223 - Drawbridge Operation Regulation; York River, Between Yorktown and Gloucester Point, VA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ... draw of the US 17/George P. Coleman Memorial Swing Bridge across the York River, mile 7.0, between... the George P. Coleman Memorial Swing Bridge. This temporary deviation allows the drawbridge to remain.... Under the regular operating schedule, the Coleman Memorial Bridge, mile 7.0, between Gloucester...

  12. Energy operator demodulating of optimal resonance components for the compound faults diagnosis of gearboxes

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Yu, Dejie; Zhang, Wenyi

    2015-11-01

    Compound fault diagnosis is a challenge in rotating machinery fault diagnosis. The vibration signals measured from gearboxes are usually complex, non-stationary, and nonlinear. When compound faults occur in a gearbox, weak fault characteristic signals are often submerged by strong ones. Therefore, it is difficult to detect a weak fault by directly demodulating the vibration signals of gearboxes. The key to compound fault diagnosis of gearboxes is to separate the different fault characteristic signals from the collected vibration signals. To address this problem, a new method for the compound fault diagnosis of gearboxes is proposed based on energy operator demodulation of optimal resonance components. In this method, a genetic algorithm is first used to obtain the optimal decomposition parameters. Then the compound-fault vibration signals of a gearbox are subjected to resonance-based signal sparse decomposition (RSSD) to separate the fault characteristic signals of the gear and the bearing using the optimal decomposition parameters. Finally, the separated fault characteristic signals are analyzed by energy operator demodulation, and each one's instantaneous amplitude is calculated. From the spectra of the instantaneous amplitudes of the fault characteristic signals, the faults of the gear and the bearing can be diagnosed, respectively. The performance of the proposed method is validated using simulation data and experimental vibration signals from a gearbox with compound faults.
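
    The demodulation step can be illustrated with the discrete Teager-Kaiser energy operator, for which a pure tone's amplitude is recovered exactly; this is a generic sketch of energy operator demodulation, not the paper's full RSSD pipeline.

```python
import math

def teager(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# For a tone A*cos(w*n), psi[n] = A^2 * sin(w)^2 for every n, so the
# instantaneous amplitude A follows directly from the operator output.
A, w = 2.0, 0.3
x = [A * math.cos(w * n) for n in range(64)]
psi = teager(x)
amplitude = math.sqrt(psi[0]) / math.sin(w)
```

    Applied to a separated fault component, the spectrum of this instantaneous-amplitude series reveals the fault characteristic frequency of the gear or bearing.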

  13. A New Tool for Environmental and Economic Optimization of Hydropower Operations

    NASA Astrophysics Data System (ADS)

    Saha, S.; Hayse, J. W.

    2012-12-01

    As part of a project funded by the U.S. Department of Energy, researchers from Argonne, Oak Ridge, Pacific Northwest, and Sandia National Laboratories collaborated on the development of an integrated toolset to enhance hydropower operational decisions related to economic value and environmental performance. As part of this effort, we developed an analytical approach (Index of River Functionality, IRF) and an associated software tool to evaluate how well discharge regimes achieve ecosystem management goals for hydropower facilities. This approach defines site-specific environmental objectives using relationships between environmental metrics and hydropower-influenced flow characteristics (e.g., discharge or temperature), with consideration given to seasonal timing, duration, and return frequency requirements for the environmental objectives. The IRF approach evaluates the degree to which an operational regime meets each objective and produces a score representing how well that regime meets the overall set of defined objectives. When integrated with other components in the toolset that are used to plan hydropower operations based upon hydrologic forecasts and various constraints on operations, the IRF approach allows an optimal release pattern to be developed based upon tradeoffs between environmental performance and economic value. We tested the toolset prototype to generate a virtual planning operation for a hydropower facility located in the Upper Colorado River basin as a demonstration exercise. We conducted planning as if looking five months into the future using data for the recently concluded 2012 water year. The environmental objectives for this demonstration were related to spawning and nursery habitat for endangered fishes using metrics associated with maintenance of instream habitat and reconnection of the main channel with floodplain wetlands in a representative reach of the river. We also applied existing mandatory operational constraints for the

  14. Optimization of PHEV Power Split Gear Ratio to Minimize Fuel Consumption and Operation Cost

    NASA Astrophysics Data System (ADS)

    Li, Yanhe

    A Plug-in Hybrid Electric Vehicle (PHEV) is a vehicle powered by a combination of an internal combustion engine and an electric motor with a battery pack. The battery pack can be charged by plugging the vehicle into the electric grid and by using excess engine power. The research activity performed in this thesis focused on the development of an innovative optimization approach for the PHEV Power Split Device (PSD) gear ratio, with the aim of minimizing vehicle operation costs. Three research activity lines were followed: • Activity 1: PHEV control strategy optimization using Dynamic Programming (DP), and development of a PHEV rule-based control strategy based on the DP results. • Activity 2: PHEV rule-based control strategy parameter optimization using the Non-dominated Sorting Genetic Algorithm (NSGA-II). • Activity 3: Comprehensive analysis of the single-mode PHEV architecture to offer an innovative approach to optimizing the PHEV PSD gear ratio.
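
    The DP step in Activity 1 can be sketched as a backward recursion over a battery state-of-charge (SOC) grid, choosing at each time step how to split the demanded power between the engine (which burns fuel) and the battery. All numbers below are invented toy values, not data from the thesis:

```python
import numpy as np

demand = [20.0, 35.0, 15.0, 40.0]      # kW demanded at each time step (toy drive cycle)
soc_grid = np.linspace(0.2, 0.8, 61)   # allowed battery SOC window
kw_per_soc = 100.0                     # kW of battery power per unit SOC change per step (toy)
fuel_per_kwh = 0.08                    # litres of fuel per kWh of engine output (toy)
dt = 1.0 / 60.0                        # step length in hours

cost = np.zeros(len(soc_grid))         # terminal cost: any final SOC is acceptable
for p in reversed(demand):             # Bellman backward recursion
    new_cost = np.full(len(soc_grid), np.inf)
    for i, soc in enumerate(soc_grid):
        for j, soc_next in enumerate(soc_grid):
            batt_kw = (soc - soc_next) * kw_per_soc   # >0 discharging, <0 charging
            engine_kw = p - batt_kw
            if engine_kw < -1e-9:      # the engine cannot absorb power in this toy model
                continue
            c = max(engine_kw, 0.0) * dt * fuel_per_kwh + cost[j]
            new_cost[i] = min(new_cost[i], c)
    cost = new_cost

# cost[i] is now the minimum fuel (litres) over the cycle starting from soc_grid[i];
# a rule-based strategy can then be tuned to reproduce the DP decisions, as in Activity 1.
```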

  15. Analysis of turbine stator adjustment required for compressor design-point operation in high Mach number supersonic turbojet engines

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1953-01-01

    For turbojet engines designed for flight Mach numbers of 2.5 and 3.0, use of turbine stator adjustment to maintain compressor design-point operation was evaluated analytically to determine the effect on the aerodynamics of the turbine. Since the effect of turbine stator adjustment is to make the turbine design sensitive to the particular engine design conditions selected, in some cases the turbine must be conservatively designed for the high-speed flight condition to assure satisfactory turbine performance at take-off. A new concept, the break-even point, is introduced to provide quick evaluation of the proximity of turbines to the blade-loading limit at any off-design operation.

  16. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    DOE PAGES

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    2016-05-06

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.

  17. Hazard Analysis and Critical Control Points among Chinese Food Business Operators

    PubMed Central

    Amadei, Paolo; Masotti, Gianfranco; Condoleo, Roberto; Guidi, Alessandra

    2014-01-01

    The purpose of the present paper is to highlight some critical situations that emerged during the implementation of long-term projects, locally managed by Prevention Services, to control manufacturing companies in Rome and Prato, Central Italy. In particular, critical issues in the application of self-control in marketing and catering businesses run by Chinese operators are underlined. The study revealed serious flaws in the preparation and control of manuals for good hygiene practice, and in the involvement of consultants in the control procedures of food business operators (FBOs). Only after regular actions by the Prevention Services were satisfying results obtained. This confirms the need for qualified and expert partners able to act promptly among FBOs and to give adequate support to the authorities in charge in order to guarantee food safety. PMID:27800356

  18. Optimal operational conditions for supercontinuum-based ultrahigh-resolution endoscopic OCT imaging.

    PubMed

    Yuan, Wu; Mavadia-Shukla, Jessica; Xi, Jiefeng; Liang, Wenxuan; Yu, Xiaoyun; Yu, Shaoyong; Li, Xingde

    2016-01-15

    We investigated the optimal operational conditions for utilizing a broadband supercontinuum (SC) source in a portable 800 nm spectral-domain (SD) endoscopic OCT system to enable high resolution, high-sensitivity, and high-speed imaging in vivo. A SC source with a 3-dB bandwidth of ∼246  nm was employed to obtain an axial resolution of ∼2.7  μm (in air) and an optimal detection sensitivity of ∼-107  dB with an imaging speed up to 35 frames/s (at 70 k A-scans/s). The performance of the SC-based SD-OCT endoscopy system was demonstrated by imaging guinea pig esophagus in vivo, achieving image quality comparable to that acquired with a broadband home-built Ti:sapphire laser. PMID:26766686

  19. Many-body decoherence dynamics and optimized operation of a single-photon switch

    NASA Astrophysics Data System (ADS)

    Murray, C. R.; Gorshkov, A. V.; Pohl, T.

    2016-09-01

    We develop a theoretical framework to characterize the decoherence dynamics due to multi-photon scattering in an all-optical switch based on Rydberg-atom-induced nonlinearities. By incorporating the knowledge of this decoherence process into optimal photon storage and retrieval strategies, we establish optimized switching protocols for experimentally relevant conditions, and evaluate the corresponding limits in the achievable fidelities. Based on these results we work out a simplified description that reproduces recent experiments (Nat. Commun. 7, 12480) and provides a new interpretation in terms of many-body decoherence involving the multiple incident photons and multiple gate excitations forming the switch. Aside from offering insights into the operational capacity of realistic photon switches, our work provides a complete description of spin wave decoherence in a Rydberg quantum optics setting, and has immediate relevance to a number of further applications employing photon storage in Rydberg media.

  20. Determination of the Optimal Operating Parameters for the Jefferson Lab's Cryogenic Cold Compressor System

    SciTech Connect

    Joe Wilson; Venkatarao Ganni; Dana Arenius; Jonathan Creel

    2004-06-01

    Jefferson Lab's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) and Free Electron Laser (FEL) are supported by a 2 K helium refrigerator known as the Central Helium Liquefier (CHL), which maintains a constant low vapor pressure over the accelerators' large liquid helium inventory with a five-stage centrifugal compressor train. The cold compressor train operates with a constrained discharge pressure and can be varied over a range of suction pressures and mass flows to meet the operational requirements of the two accelerators. Using data from commissioning and routine operations of the cold compressor system, the presented procedure predicts an operating point for each cold compressor such that maximum efficiency is attained for the overall cold compressor system for a given combination of mass flow and vapor pressure. The procedure predicts the expected efficiency of the system and the relative compressor speeds for operating vapor pressures from 4 to 2.5 kPa (corresponding to overall pressure ratios of 29 to 56) and flow rates of 135 g/s to 250 g/s. The results of the predictions are verified by tests for a few operating conditions of mass flows and vapor pressures.

  1. Fast methodology to design the optimal collection point locations and number of waste bins: A case study.

    PubMed

    Boskovic, Goran; Jovicic, Nebojsa

    2015-12-01

    This paper concerns the development of a methodology aimed at determining the optimal number of waste bins as well as optimizing the location of collection points. The methodology was based on a geographic information system, which handled different sets of information, such as street directions, spatial location of objects and number of inhabitants, location of waste bins, and radius of their coverage. The study was conducted in a district in the central area of the city of Kragujevac. Due to a lack of information about the existing situation, all necessary data were collected by fieldwork and by using GPS equipment. The results indicated a reduction of 24% in the number of collection points and 33.5% in the number of waste bins, without reducing the quality of the provided services, leading to cost and time savings for waste collection as well as environmental benefits. All users of the services are covered within a 75-m radius, and the usage of bins is more efficient. Owing to the reduction in the number of waste bins, total savings of €26,000 may be achieved. In addition, the time for waste collection was reduced, resulting in a €1700 saving per year in fuel costs, as well as 4.5 fewer tons of CO2 emitted into the atmosphere. PMID:26467320
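
    Siting collection points so that every household lies within a fixed radius is a covering problem; a common baseline (not necessarily the GIS algorithm used in the paper) is a greedy set-cover heuristic. The coordinates in the example below are invented:

```python
import math

def greedy_collection_points(households, candidates, radius=75.0):
    """Pick candidate locations until every household lies within `radius`
    metres of some chosen collection point (greedy set-cover heuristic)."""
    uncovered = set(range(len(households)))
    chosen = []

    def covered_by(c):
        return {i for i in uncovered
                if math.hypot(households[i][0] - c[0], households[i][1] - c[1]) <= radius}

    while uncovered:
        best = max(candidates, key=lambda c: len(covered_by(c)))
        hit = covered_by(best)
        if not hit:               # some household is unreachable from every candidate
            break
        chosen.append(best)
        uncovered -= hit
    return chosen, uncovered

# Invented coordinates (metres): two nearby households plus one distant one.
households = [(0.0, 0.0), (10.0, 0.0), (200.0, 0.0)]
candidates = [(5.0, 0.0), (200.0, 0.0), (500.0, 0.0)]
chosen, uncovered = greedy_collection_points(households, candidates)
```

    The greedy heuristic is not guaranteed optimal, but it mirrors the paper's goal of covering all users within 75 m with the fewest collection points.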

  3. Heuristic optimization of a continuous flow point-of-use UV-LED disinfection reactor using computational fluid dynamics.

    PubMed

    Jenny, Richard M; Jasper, Micah N; Simmons, Otto D; Shatalov, Max; Ducoste, Joel J

    2015-10-15

    Alternative disinfection sources such as ultraviolet light (UV) are being pursued to inactivate pathogenic microorganisms such as Cryptosporidium and Giardia, while simultaneously reducing the risk of exposure to carcinogenic disinfection by-products (DBPs) in drinking water. UV-LEDs offer a UV disinfecting source that does not contain mercury, has the potential for long lifetimes, is robust, and has a high degree of design flexibility. However, the increased flexibility in design options adds a substantial level of complexity when developing a UV-LED reactor, particularly with regard to reactor shape, size, spatial orientation of light, and germicidal emission wavelength. Anticipating that LEDs are the future of UV disinfection, new methods are needed for designing such reactors. In this research study, the evaluation of a new design paradigm using a point-of-use UV-LED disinfection reactor has been performed. ModeFrontier, a numerical optimization platform, was coupled with COMSOL Multiphysics, a computational fluid dynamics (CFD) software package, to generate an optimized UV-LED continuous flow reactor. Three optimality conditions were considered: 1) a single-objective analysis minimizing input supply power while achieving at least 2.0 log10 inactivation of Escherichia coli ATCC 11229; and 2) two multi-objective analyses (one of which maximized the log10 inactivation of E. coli ATCC 11229 and minimized the supply power). All tests were completed at a flow rate of 109 mL/min and 92% UVT (measured at 254 nm). The numerical solution for the first objective was validated experimentally using biodosimetry. The optimal design predictions displayed good agreement with the experimental data and contained several non-intuitive features, particularly with the UV-LED spatial arrangement, where the lights were unevenly populated throughout the reactor. The optimal designs may not have been developed from experienced designers due to the increased degrees of
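
    The power-versus-inactivation trade-off above rests on a dose-response relationship; under a simple first-order model, log10 inactivation scales linearly with UV dose. Every number below except the study's 109 mL/min flow rate is an invented illustrative value, shown only to make the back-of-envelope sizing arithmetic concrete:

```python
# First-order UV dose-response: log10 inactivation = k * dose.
k = 0.5                  # assumed log10 inactivation per (mJ/cm^2) for the target organism
target_log = 2.0         # design target, matching the single-objective analysis
required_dose = target_log / k                 # mJ/cm^2

flow_ml_min = 109.0      # flow rate used in the study
volume_ml = 50.0         # assumed irradiated reactor volume
residence_s = volume_ml / flow_ml_min * 60.0   # mean hydraulic residence time, seconds

# Average fluence rate the LED layout must deliver in one pass (mW/cm^2),
# since dose (mJ/cm^2) = fluence rate (mW/cm^2) x exposure time (s):
required_fluence_rate = required_dose / residence_s
```

    The CFD-coupled optimization in the paper effectively replaces this uniform-dose assumption with a spatially resolved fluence field and a particle residence-time distribution.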

  5. Optimizing the efficiency and reliability of fluid system operations: An ongoing process

    SciTech Connect

    Casada, D.A. |

    1996-05-01

    At most industrial facilities, motor loads associated with pumps and fans are the dominant electric energy users. As plant loads and consequent system functions change, the optimal operating conditions for these components change. In response, modifications to system operations are often made with only one consideration in mind - keeping the system on line. At the Y-12 plant in Oak Ridge, a fluid system energy efficiency improvement methodology is being developed to facilitate the systematic review and modification of system design and operations to increase operational efficiency. Since the bulk of the changes are associated with reducing the numbers and/or loads of motor-driven pumps or fans, there are direct benefits in reduced electrical generation and consequent waste heat production and air emissions. This paper will discuss the types of inefficiencies that tend to evolve as system functional requirements change and equipment ages, describe some of the fundamental parameters that are useful in identifying these inefficiencies, provide examples of design and operating changes being made, and detail the resultant savings in energy.

  6. Effects on pulmonary health of neighboring residents of concentrated animal feeding operations: exposure assessed using optimized estimation technique.

    PubMed

    Schulze, Anja; Römmelt, Horst; Ehrenstein, Vera; van Strien, Rob; Praml, Georg; Küchenhoff, Helmut; Nowak, Dennis; Radon, Katja

    2011-01-01

    Potential adverse health effects of concentrated animal feeding operations (CAFOs), which were also shown in the authors' Lower Saxony Lung Study, are of public concern. The authors aimed to investigate pulmonary health effects among neighboring residents, with exposure assessed using an optimized estimation technique. Annual ammonia emission was measured to assess the emissions from CAFOs and from surrounding fields. The locations of sampling points were optimized using cluster analysis. Individual exposure of 457 nonfarm subjects was interpolated by a weighting method. Mean estimated annual ammonia levels varied between 16 and 24 μg/m³. More highly exposed participants were more likely to be sensitized against ubiquitous allergens than lower-exposed subjects (adjusted odds ratio [OR] 4.2; 95% confidence interval [CI] 1.2-13.2). In addition, they showed a significantly lower forced expiratory volume in 1 second (FEV₁) (adjusted mean difference in % of predicted -8%; 95% CI -13% to -3%). The authors' previous finding that CAFOs may contribute to the burden of respiratory diseases was confirmed by this study. PMID:21864103

  7. MagRad: A code to optimize the operation of superconducting magnets in a radiation environment

    SciTech Connect

    Yeaw, C.T.

    1995-12-31

    A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.

  8. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing is to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in the previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem was revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV) originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments of produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results have shown that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance. PMID:26093429
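
    The MVDR weights mentioned above have a closed form, w = R⁻¹d / (dᴴR⁻¹d), where d is the steering vector from the loudspeakers to the focus point and R is a covariance matrix. The sketch below is illustrative only: the array geometry, the frequency, the free-field Green's function, and the identity choice of R are all assumptions (with R = I, MVDR reduces to the matched / delay-and-sum solution), not the paper's measured steering matrix.

```python
import numpy as np

c, f = 343.0, 1000.0                 # speed of sound (m/s), frequency (Hz)
k = 2.0 * np.pi * f / c              # wavenumber

# 16-element loudspeaker line array along the x-axis.
sources = np.stack([np.linspace(-1.0, 1.0, 16), np.zeros(16)], axis=1)

def steering(point):
    """Free-field Green's function from every source to one field point."""
    r = np.linalg.norm(sources - point, axis=1)
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

focus = np.array([0.0, 2.0])         # field point to brighten
d = steering(focus)

# MVDR weights: w = R^{-1} d / (d^H R^{-1} d).
R = np.eye(len(sources), dtype=complex)
Rinv = np.linalg.inv(R)
w = Rinv @ d / (d.conj() @ Rinv @ d)

# The distortionless constraint guarantees unit response at the focus point.
p_focus = w.conj() @ d
```

    The phase-sensitivity noted in the abstract is visible here: the weights align the complex phases of d at the focus point rather than merely shaping energy.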

  10. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
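
    The quantize-with-dithering scheme can be sketched in a few lines. This is a simplification in the spirit of fpack's subtractive dithering (the real tool works tile by tile, estimates the noise itself, records the scaling and dither seed in FITS keywords, and then Rice-compresses the integers; here a known sigma and an explicit dither array stand in for all of that):

```python
import numpy as np

rng = np.random.default_rng(42)

def quantize(pixels, sigma, q=4.0):
    """Quantize with step sigma/q, dithering so the error averages to zero."""
    step = sigma / q
    dither = rng.random(pixels.shape)                   # uniform in [0, 1)
    ints = np.floor(pixels / step + dither).astype(np.int64)
    return ints, step, dither

def dequantize(ints, step, dither):
    """Subtract the same dither on restore (fpack regenerates it from a seed)."""
    return (ints - dither + 0.5) * step

image = rng.normal(loc=100.0, scale=1.0, size=100_000)  # synthetic sky, sigma = 1
ints, step, dither = quantize(image, sigma=1.0)
restored = dequantize(ints, step, dither)
err = restored - image   # bounded by step/2 and unbiased on average
```

    The unbiasedness of the dithered error is what preserves the photometric precision discussed in the abstract: coarser quantization (smaller q) raises the error variance but not its mean.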

  11. Performance evaluation and optimization of a fast tool servo for single point diamond turning machines

    SciTech Connect

    Miller, A.C. Jr.; Cuttino, J.F.

    1997-08-01

    This paper describes a new, fast tool servo system for fabricating non-rotationally symmetric components using single point diamond turning machines. A prototype, designed for flexible interfacing to typical machine tool controllers, is described along with performance testing data for tilted flats and off-axis conic sections. Evaluation data show that servo-produced surfaces have an rms roughness less than 175 angstroms (2-200 µm spatial filter). Techniques for linearizing the hysteretic effects in the piezoelectric actuator are also discussed. The nonlinear effects due to hysteresis are reduced using a dynamic compensator module in conjunction with a linear controller. The compensator samples the hysteretic voltage/displacement relation in real time and modifies the effective gain accordingly. Simulation results indicate that errors in the performance of the system caused by hysteresis can be compensated and reduced by 90%. Experimental implementation results in an 80% reduction in motion error caused by hysteresis, but peak-to-valley errors are limited by side effects from the compensation. The uncompensated servo system demonstrated a peak-to-valley error of less than 0.80 micrometer for an off-axis conic section turned on-axis.

  12. Transcranial Doppler Sonography for Optimization of Cerebral Perfusion in Aortic Arch Operation.

    PubMed

    Ghazy, Tamer; Darwisch, Ayham; Schmidt, Torsten; Fajfrova, Zuzana; Zickmüller, Claudia; Masshour, Ahmed; Matschke, Klaus; Kappert, Utz

    2016-01-01

    An open operation on the aortic arch is a complex procedure that requires not only surgical expertise but also meticulous management to ensure excellent outcomes. In recent years, the procedure has often been performed with the patient under circulatory arrest, with antegrade cerebral perfusion. With such a strategy, efficient monitoring to ensure adequate cerebral perfusion is essential. Here we describe a case of Stanford type A aortic dissection repair in which transcranial Doppler sonography was used as an excellent monitoring tool to allow visualization of cerebral flow and the online status of perfusion, providing instant feedback to allow changes in strategy to optimize inadequate cerebral perfusion. PMID:26694304

  13. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.

  14. Optimal operational strategies for a day-ahead electricity market in the presence of market power using multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Rodrigo, Deepal

    2007-12-01

    reinforced the selection of these algorithms. The results obtained from each of the three algorithms used in the evaluations are very comparable. Thus one could safely conclude that the results obtained are valid. Three distinct test power systems operating under different conditions were studied for evaluating the suitability of each of these algorithms. The test cases included scenarios in which the power system was unconstrained as well as constrained. Repeated simulations carried out for the same test case with varying starting points provided evidence that the algorithms and the solutions were robust. Influences of different market concentrations on the optimal economic dispatch are evidenced by the pareto-optimal-fronts obtained for each test case studied. Results obtained from a traditional linear programming (LP) based solution algorithm that is used at present by many market operators are also presented for comparison. Very high market-concentration-indices were found for each solution from the LP algorithm. This suggests the need to use a formal method for mitigating market concentration. Operating the market at industry-recommended threshold levels of market concentration for selecting an optimal operational point is presented for all test cases studied. Given that a solution-set instead of a single operating point is found from the multi-objective optimization methods, additional flexibility to select any operational point based on the preference of those operating the market clearly is an added benefit of using multi-objective optimization methods. However, in order to help the market operator, a more logical fuzzy decision criterion was tested for selecting a suitable operating point. The results show that the optimal operating point chosen using the fuzzy decision criterion provides a higher economic benefit to the market, although at a slightly increased market concentration. 
Since the main objective of this research was to simultaneously optimize the

  15. Optimizing the Operating Temperature for an array of MOX Sensors on an Open Sampling System

    NASA Astrophysics Data System (ADS)

    Trincavelli, M.; Vergara, A.; Rulkov, N.; Murguia, J. S.; Lilienthal, A.; Huerta, R.

    2011-09-01

    Chemo-resistive transduction is essential for capturing the spatio-temporal structure of chemical compounds dispersed in different environments. Due to gas dispersion mechanisms, namely diffusion, turbulence and advection, the sensors in an open sampling system, i.e. directly exposed to the environment to be monitored, encounter low gas concentrations with many fluctuations, making the identification and monitoring of the gases even more complicated and challenging than in a controlled laboratory setting. Therefore, tuning the operating temperature becomes crucial for successfully identifying and monitoring pollutant gases, particularly in applications such as exploration of hazardous areas, air pollution monitoring, and search and rescue. In this study we demonstrate the benefit of optimizing the sensors' operating temperature when they are deployed in an open sampling system.

  16. Optimization of the terrain following radar flight cues in special operations aircraft

    NASA Astrophysics Data System (ADS)

    Garman, Patrick J.; Trang, Jeff A.

    1995-05-01

    Over the past 18 months the Army has been developing a terrain following capability in its next-generation special operations aircraft (SOA), the MH-60K and the MH-47E. As two experimental test pilots assigned to the Airworthiness Qualification Test Directorate of the US Army Aviation Technical Test Center, we would like to convey the role that human factors has played in the development of the MMR for terrain following operations in the SOA. In the MH-60K, the pilot remains the interface between the aircraft (via the flight controls) and the processed radar data and flight director cues. The presentation of the processed radar data to the pilot significantly affects overall system performance, and is directly driven by the way humans see, process, and react to stimuli. Our development has centered on the optimization of this man-machine interface.

  17. Characterizing and Optimizing Photocathode Laser Distributions for Ultra-low Emittance Electron Beam Operations

    SciTech Connect

    Zhou, F.; Bohler, D.; Ding, Y.; Gilevich, S.; Huang, Z.; Loos, H.; Ratner, D.; Vetter, S.

    2015-12-07

    The photocathode RF gun is widely used to generate high-brightness electron beams for many different applications. We found that the drive laser distributions in such RF guns play an important role in minimizing the electron beam emittance. Characterizing the laser distributions with measurable parameters, and optimizing beam emittance versus those parameters in both the spatial and temporal directions, is highly desirable for high-brightness electron beam operation. In this paper, we report systematic measurements and simulations of the dependence of emittance on the measurable parameters representing the spatial and temporal laser distributions at the photocathode RF gun systems of the Linac Coherent Light Source. The tolerable parameter ranges for photocathode drive laser distributions in both directions are presented for ultra-low emittance beam operations.

  18. An efficient approach to cathode operational parameters optimization for microbial fuel cell using response surface methodology

    PubMed Central

    2014-01-01

    Background: In this study, the optimum operational conditions of the cathode compartment of a microbial fuel cell were determined using Response Surface Methodology (RSM) with a central composite design to maximize power density and COD removal. Methods: The interactive effects of parameters such as pH, buffer concentration and ionic strength on power density and COD removal were evaluated in a two-chamber batch-mode microbial fuel cell. Results: Power density and COD removal under optimal conditions (pH of 6.75, buffer concentration of 0.177 M and cathode-chamber ionic strength of 4.69 mM) improved by 17% and 5%, respectively, in comparison with normal conditions (pH of 7, buffer concentration of 0.1 M and ionic strength of 2.5 mM). Conclusions: These results verify that response surface methodology can successfully determine the optimum operational conditions of the cathode chamber. PMID:24423039

  19. Methodology for optimizing the development and operation of gas storage fields

    SciTech Connect

    Mercer, J.C.; Ammer, J.R.; Mroz, T.H.

    1995-04-01

    The Morgantown Energy Technology Center is pursuing the development of a methodology that uses geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. Several Cooperative Research and Development Agreements (CRADAs) will serve as the vehicle to implement this product. CRADAs have been signed with National Fuel Gas and Equitrans, Inc. A geologic model is currently being developed for the Equitrans CRADA. Results from the CRADA with National Fuel Gas are discussed here. The first phase of the CRADA, based on original well data, was completed last year and reported at the 1993 Natural Gas RD&D Contractors Review Meeting. Phase 2 analysis was completed based on additional core and geophysical well log data obtained during a deepening/relogging program conducted by the storage operator. Good matches of wellhead pressure, within 10 percent, were obtained using a numerical simulator to history match 2.5 injection/withdrawal cycles.

  20. Towards optimizing two-qubit operations in three-electron double quantum dots

    NASA Astrophysics Data System (ADS)

    Frees, Adam; Gamble, John King; Mehl, Sebastian; Friesen, Mark; Coppersmith, S. N.

    The successful implementation of single-qubit gates in the quantum dot hybrid qubit motivates our interest in developing a high fidelity two-qubit gate protocol. Recently, extensive work has been done to characterize the theoretical limitations and advantages in performing two-qubit operations at an operation point located in the charge transition region. Additionally, there is evidence to support that single-qubit gate fidelities improve while operating in the so-called ``far-detuned'' region, away from the charge transition. Here we explore the possibility of performing two-qubit gates in this region, considering the challenges and the benefits that may present themselves while implementing such an operational paradigm. This work was supported in part by ARO (W911NF-12-0607) (W911NF-12-R-0012), NSF (PHY-1104660), ONR (N00014-15-1-0029). The authors gratefully acknowledge support from the Sandia National Laboratories Truman Fellowship Program, which is funded by the Laboratory Directed Research and Development (LDRD) Program. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.

  1. Intracellular calcium affects prestin's voltage operating point indirectly via turgor-induced membrane tension

    NASA Astrophysics Data System (ADS)

    Song, Lei; Santos-Sacchi, Joseph

    2015-12-01

    Recent identification of a calmodulin binding site within prestin's C-terminus indicates that calcium can significantly alter prestin's operating voltage range as gauged by the Boltzmann parameter Vh (Keller et al., J. Neuroscience, 2014). We reasoned that those experiments may have identified the molecular substrate for the protein's tension sensitivity. In an effort to understand how this may happen, we evaluated the effects of turgor pressure on such shifts produced by calcium. We find that the shifts are induced by calcium's ability to reduce turgor pressure during whole-cell voltage clamp recording. Clamping turgor pressure to 1 kPa, the cell's normal intracellular pressure, completely counters the calcium effect. Furthermore, following unrestrained shifts, collapsing the cells abolishes the induced shifts. We conclude that calcium does not work by direct action on prestin's conformational state. The possibility remains that calcium interaction with prestin alters water movements within the cell, possibly via its anion transport function.

  2. Particulate emissions calculations from fall tillage operations using point and remote sensors.

    PubMed

    Moore, Kori D; Wojcik, Michael D; Martin, Randal S; Marchant, Christian C; Bingham, Gail E; Pfeiffer, Richard L; Prueger, John H; Hatfield, Jerry L

    2013-07-01

    Soil preparation for agricultural crops produces aerosols that may significantly contribute to seasonal atmospheric particulate matter (PM). Efforts have been made to reduce PM emissions from tillage through a variety of conservation management practices (CMPs), but the reductions from many of these practices have not been measured in the field. A study was conducted in California's San Joaquin Valley to quantify emissions reductions from a fall tillage CMP. Emissions were measured from conventional tillage methods and from a "combined operations" CMP, which combines several implements to reduce tractor passes. Measurements were made of soil moisture, bulk density, meteorological profiles, filter-based total suspended PM (TSP), concentrations of PM with an equivalent aerodynamic diameter ≤10 μm (PM10) and ≤2.5 μm (PM2.5), and aerosol size distribution. A mass-calibrated, scanning, three-wavelength light detection and ranging (LIDAR) procedure estimated PM concentrations through a series of algorithms. Emissions were calculated via inverse modeling with the mass concentration measurements and by applying a mass balance to the LIDAR data. The inverse modeling emission estimates were higher, often with statistically significant differences. Derived PM emissions for conventional operations generally agree with literature values. Sampling irregularities with a few filter-based samples prevented calculation of a complete set of emissions through inverse modeling; however, the LIDAR-based emissions dataset was complete. The CMP control effectiveness, calculated from the LIDAR-derived emissions, was 29 ± 2%, 60 ± 1%, and 25 ± 1% for the PM10, PM2.5, and TSP size fractions, respectively. Implementation of this CMP provides an effective method for the reduction of PM emissions. PMID:24216354

  3. Evolutionary operation (EVOP) to optimize whey independent serratiopeptidase production from Serratia marcescens NRRL B-23112.

    PubMed

    Pansuriya, Ruchir C; Singhal, Rekha S

    2010-05-01

    Serratiopeptidase (SRP), a 50 kDa metalloprotease produced by Serratia marcescens species, is a drug with potent anti-inflammatory properties. In this study, a powerful statistical design, evolutionary operation (EVOP), was applied to optimize the media composition for SRP production in shake-flask culture of Serratia marcescens NRRL B-23112. Initially, factors such as inoculum size, initial pH, carbon source and organic nitrogen source were optimized one factor at a time. The most significant medium components affecting the production of SRP were identified as maltose, soybean meal and KHPO. The SRP so produced was found not to depend on whey protein; rather, it was notably induced by most of the organic nitrogen sources used in the study, and a protease inhibition study revealed it to be free from other concomitant protease contaminants. Further experiments were performed using different sets of EVOP designs with each factor varied at three levels. The experimental data were analyzed with a standard set of statistical formulae. The EVOP-optimized medium (maltose 4.5%, soybean meal 6.5%, KHPO 0.8% and NaCl 0.5% w/v) gave an SRP production of 7,333 EU/ml, 17-fold higher than the unoptimized medium. The application of EVOP resulted in a significant enhancement of SRP production. PMID:20519921

  4. Robust optimal sensor placement for operational modal analysis based on maximum expected utility

    NASA Astrophysics Data System (ADS)

    Li, Binbin; Der Kiureghian, Armen

    2016-06-01

    Optimal sensor placement is essentially a decision problem under uncertainty. Maximum expected utility theory and a Bayesian linear model are used in this paper for robust sensor placement aimed at operational modal identification. To avoid nonlinear relations between modal parameters and measured responses, we choose to optimize the sensor locations for identifying modal responses. Since the modal responses contain all the information necessary to identify the modal parameters, the optimal sensor locations for modal response estimation provide at least a suboptimal solution for identification of modal parameters. First, a probabilistic model for sensor placement considering model uncertainty, load uncertainty and measurement error is proposed. Maximum expected utility theory is then applied with this model, considering utility functions based on three principles: quadratic loss, Shannon information, and K-L divergence. In addition, the prior covariance of modal responses under band-limited white-noise excitation is derived, and the nearest Kronecker product approximation is employed to accelerate evaluation of the utility function. As demonstration and validation examples, sensor placements in a 16-degree-of-freedom shear-type building and in the Guangzhou TV Tower under ground motion and wind load are considered. Placements of individual displacement meters, velocimeters and accelerometers, and placement of mixed sensors, are illustrated.

  6. Is there an optimal resting velopharyngeal gap in operated cleft palate patients?

    PubMed Central

    Yellinedi, Rajesh; Damalacheruvu, Mukunda Reddy

    2013-01-01

    Context: Videofluoroscopy in operated cleft palate patients. Aims: To determine the existence of an optimal resting velopharyngeal (VP) gap in operated cleft palate patients. Settings and Design: A retrospective analysis of lateral-view videofluoroscopy of operated cleft palate patients. Materials and Methods: A total of 117 operated cleft palate cases underwent videofluoroscopy between 2006 and 2011. The lateral view of videofluoroscopy was utilised in the study. A retrospective analysis of the lateral-view videofluoroscopy of these 117 patients was performed to analyse the resting VP gap and its relationship to VP closure. Statistical analysis used: None. Results: Of the 117 cases, 35 had a resting gap of less than 6 mm, 34 had a resting gap between 6 and 10 mm, and 48 patients had a resting gap of more than 10 mm. Conclusions: The conclusive finding was that almost all the patients with a resting gap of <6 mm (group C) achieved radiological closure of the velopharynx with speech; thus, they had the least chance of VP insufficiency (VPI). Those patients with a resting gap of >10 mm (group A) did not achieve VP closure on phonation, thus having full-blown VPI. Therefore, it can be concluded that the ideal resting VP gap is approximately 6 mm, so as to give the maximal chance of VP closure and thus prevent VPI. PMID:23960311

  7. COS4 compensation and optimal TDI operation in multielement linear TDI IR detectors

    NASA Astrophysics Data System (ADS)

    Berger, Michael J.; Lauber, Yair Z.; Citroen, Meira; Topaz, Jeremy M.

    2003-01-01

    High-resolution IR scanning systems able to scan large areas quickly require linear detector arrays with more than 1000 elements and high sensitivity, achieved by TDI. ELOP initiated the development of such a long detector array in the 3-5 μm spectral region. The architecture of the detector is based on several sub-segments butted together in a staggered configuration to achieve the desired detector length. One problem is the large non-uniformity of the detector, which is exacerbated by the cos⁴α optical effect. With the entrance pupil imaged on the cold shield aperture to enhance efficiency, the angle α becomes large. This imposes significant additional non-uniformity that has to be compensated and affects the dynamic range of the electronics. A way to overcome this problem is suggested, based on de-selecting specific pixels in any TDI channel. Another problem is that while higher TDI levels increase the SNR, they also increase the smear (blur) due to vibrations, drift, etc. The optimal TDI level depends on the specific conditions of the system, namely signal level and vibrations. Using superfluous pixels in the overlap between segments, several TDI levels can be operated simultaneously, allowing an automatic decision as to the optimal TDI level for operation.

  8. A Concept and Implementation of Optimized Operations of Airport Surface Traffic

    NASA Technical Reports Server (NTRS)

    Jung, Yoon C.; Hoang, Ty; Montoya, Justin; Gupta, Gautam; Malik, Waqar; Tobias, Leonard

    2010-01-01

    This paper presents a new concept of optimized surface operations at busy airports to improve the efficiency of taxi operations, as well as reduce environmental impacts. The suggested system architecture consists of the integration of two decoupled optimization algorithms. The Spot Release Planner provides sequence and timing advisories to tower controllers for releasing departure aircraft into the movement area to reduce taxi delay while achieving maximum throughput. The Runway Scheduler provides take-off sequence and arrival runway crossing sequence to the controllers to maximize the runway usage. The description of a prototype implementation of this integrated decision support tool for the airport control tower controllers is also provided. The prototype decision support tool was evaluated through a human-in-the-loop experiment, where both the Spot Release Planner and Runway Scheduler provided advisories to the Ground and Local Controllers. Initial results indicate the average number of stops made by each departure aircraft in the departure runway queue was reduced by more than half when the controllers were using the advisories, which resulted in reduced taxi times in the departure queue.

  9. A Data Filter for Identifying Steady-State Operating Points in Engine Flight Data for Condition Monitoring Applications

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Litt, Jonathan S.

    2010-01-01

    This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
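
The mean/standard-deviation screening described above is generic enough to sketch in a few lines. The window length, threshold, and signal below are illustrative assumptions, not values from the paper, and the domain-specific outlier logic is omitted:

```python
from statistics import mean, stdev

def steady_state_filter(stream, window=20, max_std=0.25):
    """Scan a stream of scalar measurements; whenever the sample standard
    deviation over the trailing window falls below max_std, treat the
    system as steady-state and archive the window mean."""
    buf, points = [], []
    for x in stream:
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)                      # sliding window
        if len(buf) == window and stdev(buf) < max_std:
            points.append(mean(buf))        # archive the operating point
            buf.clear()                     # restart collection afterwards
    return points

# A transient ramp followed by a low-variance plateau near 100.05
data = [50 + 2.5 * i for i in range(20)] + [100.0 + 0.1 * (i % 2) for i in range(40)]
print(steady_state_filter(data))  # two steady-state points near 100.05
```

A real condition-monitoring filter would apply the same test jointly to several engine parameters and add the constraint logic the paper mentions for rejecting outliers.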

  10. Chandra X-Ray Observatory Pointing Control System Performance During Transfer Orbit and Initial On-Orbit Operations

    NASA Technical Reports Server (NTRS)

    Quast, Peter; Tung, Frank; West, Mark; Wider, John

    2000-01-01

    The Chandra X-ray Observatory (CXO, formerly AXAF) is the third of the four NASA great observatories. It was launched from Kennedy Space Center on 23 July 1999 aboard the Space Shuttle Columbia and was successfully inserted into a 330 x 72,000 km orbit by the Inertial Upper Stage (IUS). Through a series of five Integral Propulsion System burns, CXO was placed in a 10,000 x 139,000 km orbit. After initial on-orbit checkout, Chandra's first-light images were unveiled to the public on 26 August 1999. The CXO Pointing Control and Aspect Determination (PCAD) subsystem is designed to perform attitude control and determination functions in support of transfer orbit operations and the on-orbit science mission. After a brief description of the PCAD subsystem, the paper highlights the PCAD activities during transfer orbit and initial on-orbit operations. These activities include: CXO/IUS separation; attitude and gyro bias estimation with the earth sensor and sun sensor; attitude control and disturbance torque estimation for delta-v burns; momentum build-up due to gravity gradient and solar pressure; momentum unloading with thrusters; attitude initialization with star measurements; gyro alignment calibration; maneuvering and transition to normal pointing; and PCAD pointing and stability performance.

  11. Self adjoint extensions of differential operators in application to shape optimization

    NASA Astrophysics Data System (ADS)

    Nazarov, Serguei A.; Sokolowski, Jan

    2003-10-01

    Two approaches are proposed for the modelling of problems with small geometrical defects. The first approach is based on the theory of self-adjoint extensions of differential operators. In the second approach, function spaces with separated asymptotics and point asymptotic conditions are introduced, and the variational formulation is established. For both approaches, accuracy estimates are derived. Finally, the spectral problems are considered and error estimates for the eigenvalues are given. To cite this article: S.A. Nazarov, J. Sokolowski, C. R. Mecanique 331 (2003).

  12. [Garbage incineration plants -- planning, organisation and operation from health point of view].

    PubMed

    Thriene, B

    2004-12-01

    The Waste Disposal Regulation which became effective March 1, 2001 stipulates that from June 1, 2005 biodegradable residential household and commercial waste may only be deposited on landfills after thermal or mechanical-biological pre-treatment. The Regulation aims at preventing the generation of landfill gases that are detrimental to health and climate, and the discharge of pollutants from landfills into the groundwater. Waste calculations for the year 2005 predict a volume of 28 million tons. Existing incineration and mechanical-biological treatment plants cover volumes of 14 and 2.5 million tons, respectively. Consequently, their capacity does not meet the demand in Germany. Waste disposal plans have been prepared in the German Federal State of Saxony-Anhalt since 1996 and potential sites for garbage incineration plants have been identified. Energy and waste management companies have initiated application procedures for thermal waste treatment plants and utilization of energy. Health Departments and the Hygiene Institute contributed to the approval procedure by providing the required Health Impact Assessment. We recommended selecting sites in the vicinity of large cities and conurbations and, taking into account the main wind direction, preferably in the northeast. Long-distance transport should be avoided. Based on immission forecasts for territorial background pollution, additional noise and air pollution were examined for reasonableness. In addition, providing structural safety of plants and guaranteeing continuous monitoring of emission limit values of air pollutants was a prerequisite for strict observance of the 17th BImSchV (Federal Decree on the Prevention of Immissions). The paper informs about the planning, construction and conditions for operating the combined garbage heating and power station in Magdeburg-Rothensee (600,000 t/a). 
Saxony-Anhalt's waste legislation requires non-recyclable waste to be disposed of at the place of its generation, if possible

  14. Point of optimal kinematic error: improvement of the instantaneous helical pivot method for locating centers of rotation.

    PubMed

    De Rosario, Helios; Page, Alvaro; Mata, Vicente

    2014-05-01

    This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by simply adding a weighting factor equal to the square of the angular velocity to Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation.
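
The effect of the ω² weighting can be illustrated with a toy planar version of the pivot estimate. The paper's formulation is for 3-D instantaneous helical axes; the geometry, noise level, and simple weighted averaging below are a simplified sketch of the idea, not the authors' implementation:

```python
import math
import random

random.seed(1)
TRUE_C = (0.30, 0.40)   # true center of rotation (m)
R = 0.25                # marker distance from the center

# Simulate an oscillating planar rotation: angle(t) = 1.5*sin(pi*t)
frames = []
for k in range(200):
    t = k * 0.01
    ang = 1.5 * math.sin(math.pi * t)
    omega = 1.5 * math.pi * math.cos(math.pi * t)     # d(angle)/dt
    p = (TRUE_C[0] + R * math.cos(ang), TRUE_C[1] + R * math.sin(ang))
    # rigid-body velocity v = omega x (p - c), plus measurement noise
    v = (-omega * (p[1] - TRUE_C[1]) + random.gauss(0, 0.02),
         omega * (p[0] - TRUE_C[0]) + random.gauss(0, 0.02))
    frames.append((p, v, omega))

def center_estimate(frames, weighted):
    """Per-frame pivot c = (px - vy/omega, py + vx/omega), averaged with
    weights omega**2 (the POKE idea) or 1 (naive average)."""
    sx = sy = sw = 0.0
    for (px, py), (vx, vy), omega in frames:
        if abs(omega) < 1e-6:
            continue                  # pivot undefined at zero angular velocity
        cx, cy = px - vy / omega, py + vx / omega
        w = omega * omega if weighted else 1.0
        sx += w * cx
        sy += w * cy
        sw += w
    return sx / sw, sy / sw

naive = center_estimate(frames, weighted=False)
poke = center_estimate(frames, weighted=True)
print("naive:", naive)
print("omega^2-weighted:", poke)
```

Frames with near-zero angular velocity, where the 1/ω factor amplifies measurement noise in the per-frame pivot, are strongly down-weighted by ω², which is the intuition behind the POKE weighting.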

  15. [Making optimal operation for a BNR process: modeling prediction and experimental verification].

    PubMed

    Hao, Xiao-di; Hu, Yuan-sheng; Wan, Ke-wei

    2010-03-01

    Based on a process model of a BNR system (BCFS), the effects of operational parameters on effluent quality were predicted by modeling and verified simultaneously in a lab-scale experiment, and the modeling and experimental results were in close agreement. This means that modeling can feasibly be applied to devise optimal operation schemes without pilot-scale and/or full-scale experiments. Both the modeling and the experiment demonstrated that bio-P removal performance was not influenced by the biomass amount in the anaerobic tank once the return ratio (rA) reached 1.5, and that rA had no significant correlation with COD and N removals. Once the returned mixed liquor ratio (rB) increased beyond 2, the TN removal efficiency was not improved any further, and the COD and TP removals were not influenced by variations in rB. The returned mixed liquor ratio rC had almost no influence on the COD, TP and TN removals. Further, the COD and TP removals were not influenced when the dissolved oxygen in the aerobic tank (DO(R5)) was in the range of 1-2.5 mg/L, but the effluent NH4+-N increased beyond 1 mg/L when DO(R5) fell below 2 mg/L. The optimal operational parameters for the BCFS should therefore be set at rA = 2, rB = 2-2.5, rC = 0, and DO(R5) = 2-2.5 mg/L.

  16. Determinants of self-reported smoking and misclassification during pregnancy, and analysis of optimal cut-off points for urinary cotinine: a cross-sectional study

    PubMed Central

    Aurrekoetxea, Juan J; Murcia, Mario; Rebagliato, Marisa; López, María José; Castilla, Ane Miren; Santa-Marina, Loreto; Guxens, Mónica; Fernández-Somoano, Ana; Espada, Mercedes; Lertxundi, Aitana; Tardón, Adonina; Ballester, Ferran

    2013-01-01

    Objectives: To estimate the prevalence of, and factors associated with, smoking and misclassification in pregnant women from the INMA (INfancia y Medio Ambiente, Environment and Childhood) project, Spain, and to assess the optimal cut-offs for urinary cotinine (UC) that best distinguish daily and occasional smokers at varying levels of second-hand smoke (SHS) exposure. Design: We used logistic regression models to study the relationship between sociodemographic variables and self-reported smoking and misclassification (self-reported non-smokers with UC >50 ng/ml). Receiver operating characteristic (ROC) curves were used to calculate the optimal cut-off point for discriminating smokers. The cut-offs were also calculated after stratifying non-smokers by the number of sources of SHS exposure. The cut-off points used to classify smoking status were the UC level given by Youden's index, 50 and 100 ng/ml for daily smokers, and 25 and 50 ng/ml for occasional smokers. Participants: At the third trimester of pregnancy, 2263 pregnant women of the INMA project were interviewed between 2004 and 2008 and a urine sample was collected. Results: The prevalence of self-reported smokers at the third trimester of pregnancy was 18.5%, and another 3.9% misreported their smoking status. Variables associated with self-reported smoking and misreporting were similar, including being born in Europe, educational level and exposure to SHS. The optimal cut-off was 82 ng/ml (95% CI 42 to 133), with sensitivity 95.2% and specificity 96.6%. The area under the ROC curve was 0.986 (95% CI 0.982 to 0.990). The cut-offs varied according to the SHS exposure level, being 42 (95% CI 27 to 57), 82 (95% CI 46 to 136) and 106 ng/ml (95% CI 58 to 227) for those not exposed to SHS, exposed to one source, and exposed to two or more sources of SHS, respectively. The optimal cut-off for discriminating occasional smokers from non-smokers was 27 ng/ml (95% CI 11 to 43). Conclusions: Prevalence of smoking during pregnancy in
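
A cut-off chosen by Youden's index, as used above, maximizes J = sensitivity + specificity − 1 over candidate thresholds along the ROC curve. A minimal sketch on invented cotinine values (not the study's data):

```python
def youden_cutoff(smokers, nonsmokers):
    """Scan midpoints between consecutive observed values and return the
    cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    values = sorted(set(smokers) | set(nonsmokers))
    best_cut, best_j = None, -1.0
    for lo, hi in zip(values, values[1:]):
        cut = (lo + hi) / 2
        sens = sum(s > cut for s in smokers) / len(smokers)         # smokers above cut
        spec = sum(n <= cut for n in nonsmokers) / len(nonsmokers)  # non-smokers at or below
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Invented urinary cotinine values in ng/ml, purely for illustration
nonsmokers = [2, 5, 8, 12, 20, 35, 49, 60, 75]
smokers = [90, 120, 180, 250, 400, 650, 900]
cut, j = youden_cutoff(smokers, nonsmokers)
print(cut, j)  # here the two groups separate perfectly, so J = 1.0
```

With overlapping real-world distributions, as in the study, J is below 1 and the chosen cut-off trades sensitivity against specificity.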

  17. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    PubMed Central

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques often contain large picking errors (LPEs), which can make the location solution unreliable and the integrated system unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, which minimize the residual between model-calculated and measured arrivals, the VFOM optimizes a continuous, virtually constructed objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs. The results of numerical examples and in-situ blasts show that the VFOM obtains more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission. PMID:26754955
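
The abstract's idea of scoring candidate locations by a bounded "virtual field" over pairwise hyperboloid residuals, rather than a least-squares arrival residual, can be illustrated with a toy 2-D version. The Gaussian kernel, the grid search, the unit wave speed and all numbers here are our own assumptions, not the authors' formulation.

```python
import math

# Each sensor pair's arrival-time difference defines a hyperbola. A trial point is
# scored by summing bounded (Gaussian) contributions of the pairwise residuals, so
# one badly picked arrival cannot dominate the objective the way it does in
# least-squares fitting.
def virtual_field(point, sensors, arrivals, v=1.0, width=0.5):
    score = 0.0
    n = len(sensors)
    for i in range(n):
        for j in range(i + 1, n):
            resid = ((math.dist(point, sensors[i]) - math.dist(point, sensors[j])) / v
                     - (arrivals[i] - arrivals[j]))
            score += math.exp(-(resid / width) ** 2)  # bounded contribution
    return score  # high where many hyperbolas intersect

def locate(sensors, arrivals, grid=50, lo=-5.0, hi=5.0):
    """Maximise the virtual field over a coarse grid (a stand-in for a real optimiser)."""
    pts = ((lo + (hi - lo) * ix / grid, lo + (hi - lo) * iy / grid)
           for ix in range(grid + 1) for iy in range(grid + 1))
    return max(pts, key=lambda p: virtual_field(p, sensors, arrivals))
```

Because each pair's contribution is capped at 1, a corrupted pick simply drops out of the sum near the true source instead of dragging the solution away.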

  1. Determination of optimal operation time for the management of acute cholecystitis: a clinical trial

    PubMed Central

    Ucar, Ahmet Deniz; Yakan, Savas; Carti, Erdem Baris; Coskun, Ali; Erkan, Nazif; Yildirim, Mehmet

    2014-01-01

    Introduction Although studies have consistently reported that laparoscopic cholecystectomy (LC) is a safe and effective treatment for acute cholecystitis, the optimal timing of the procedure remains the subject of some debate. Aim This retrospective analysis of a prospective database compared early with delayed LC for acute cholecystitis. Material and methods Between January 2012 and August 2013, LC was performed in 165 patients with acute cholecystitis, of whom 83 were operated on within 72 h of admission (group 1) and 82 after 72 h (group 2). All data were collected prospectively, and the groups were compared in terms of age, sex, fever, white blood cell count, ultrasound findings, operation time, conversion to open surgery, complications and mean hospital stay. Results The study included 165 patients, 53 men and 112 women, with a median age of 54 (range: 20–85) years. The overall conversion rate was 27.9%. There was no significant difference in conversion rates between groups (21% vs. 34%, p = 0.08). The operation time was significantly longer in group 1 (116 min vs. 102 min, p = 0.02). The complication rate (9% vs. 18%, p = 0.03) and total hospital stay (3.8 days vs. 7.9 days, p = 0.001) were significantly reduced in group 1. Conclusions Early LC within 72 h of admission reduces complications and hospital stay and is the preferred approach for acute cholecystitis. PMID:25097711

  2. Multi-objective optimization to support rapid air operations mission planning

    NASA Astrophysics Data System (ADS)

    Gonsalves, Paul G.; Burge, Janet E.

    2005-05-01

    Within the context of military air operations, time-sensitive targets (TSTs) are targets to which modifiers such as "emerging, perishable, high-payoff, short dwell, or highly mobile" apply. Time-critical targets (TCTs) are TSTs of even greater criticality with respect to the achievement of mission objectives, with a limited window of opportunity for attack. The importance of TSTs/TCTs within military air operations has been met with significant investment in advanced technologies and platforms. Developments in ISR systems, manned and unmanned air platforms, precision-guided munitions, and network-centric warfare have made significant strides toward ensuring the timely prosecution of TSTs/TCTs. However, additional investment is needed to further shorten the targeting decision cycle. Given the operational need for decision support systems that enable time-sensitive/time-critical targeting, we present a tool for the rapid generation and analysis of mission plan solutions to address TSTs/TCTs. Our system employs a genetic algorithm-based multi-objective optimization scheme that is well suited to the rapid generation of approximate solutions in a dynamic environment. Genetic algorithms (GAs) allow for the effective exploration of the search space for potentially novel solutions, while addressing the multiple conflicting objectives that characterize the prosecution of TSTs/TCTs (e.g. probability of target destruction, time to accomplish the task, level of disruption to other mission priorities, level of risk to friendly assets, etc.).
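
The core of any GA-based multi-objective planner of this kind is Pareto dominance over the conflicting objectives (e.g. time to accomplish the task and risk to friendly assets, both to be minimized). A minimal sketch with illustrative objective vectors:

```python
# All objectives are to be minimised (a maximised objective such as probability
# of target destruction would be negated first).
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions (the trade-off set shown to a planner)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```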

  3. A numerical investigation for the optimal positions and weighting coefficients of point dose measurements in the weighted CTDI

    NASA Astrophysics Data System (ADS)

    Choi, Jang-Hwan; Constantin, Dragos; Fahrig, Rebecca

    2015-03-01

    The mean dose over the central phantom plane (i.e., the z = 0 plane, where dose is maximal) is useful in that it allows us to compare radiation dose levels across different CT scanners and acquisition protocols. The mean dose from a conventional CT scan with table translation is typically estimated by the weighted CTDI (CTDIW). However, conventional CTDIW performs inconsistently, depending on its weighting coefficients ("1/2 and 1/2" or "1/3 and 2/3") and the acquisition protocol. We used a Monte Carlo (MC) model based on Geant4 (GEometry ANd Tracking) to generate dose profiles in the central plane of the CTDI phantom. MC simulations were carried out for three different z-collimator sizes and different tube voltages (80, 100, or 120 kVp), with a tube current of 80 mA and an exposure time of 25 ms. We derived optimal weighting coefficients by taking the integral of the radial dose profiles. A first-order linear equation and a quadratic equation were used to fit the dose profiles along the radial direction in the central plane, and the fitted profiles were revolved about the z-axis to compute the mean dose (i.e., total volume under the fitted profiles / the central plane area). The integral computed using the linear equation reproduced conventional CTDIW, while the integral computed using the quadratic equation yielded a new weighted CTDI (CTDIMW) that uses different weightings ("2/3 and 1/3") and the middle dose point instead of the central dose point. Compared to the results of MC simulations, our new CTDIMW showed less error than the previous CTDIW methods by successfully incorporating the curvature of the dose profiles regardless of acquisition protocol. Our new CTDIMW will also be applicable to the AAPM-ICRU phantom, which has a middle dose point.
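
The quoted weightings follow from averaging a fitted radial profile over the circular phantom plane: for a quadratic fit D(r) = α + βr + γr², the disc mean (2/R²)·∫₀ᴿ D(r)·r dr works out to (2/3)·D(R/2) + (1/3)·D(R), i.e. the "2/3 and 1/3" combination of the middle-radius and periphery points. The sketch below shows both combinations under that reading of the abstract; the dose values are illustrative.

```python
def ctdi_w(center, periphery):
    """Conventional weighted CTDI: 1/3 centre dose + 2/3 periphery dose."""
    return center / 3.0 + 2.0 * periphery / 3.0

def ctdi_mw(middle, periphery):
    """Quadratic-fit variant: 2/3 middle-radius dose + 1/3 periphery dose."""
    return 2.0 * middle / 3.0 + periphery / 3.0
```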

  4. Optimizing operational water management with soil moisture data from Sentinel-1 satellites

    NASA Astrophysics Data System (ADS)

    Pezij, Michiel; Augustijn, Denie; Hendriks, Dimmie; Hulscher, Suzanne

    2016-04-01

    In the Netherlands, regional water authorities are responsible for the management and maintenance of regional water bodies. Due to socio-economic developments (e.g. agricultural intensification and ongoing urbanisation) and an increase in climate variability, the pressure on these water bodies is growing. Optimizing water availability while taking into account the needs of different users, in both wet and dry periods, is crucial for sustainable development. To support timely and well-directed operational water management, accurate information on the current state of the system as well as reliable models to evaluate water management optimization measures are essential. Previous studies showed that the use of remote sensing data (for example, soil moisture data) in water management offers many opportunities (e.g. Wanders et al. (2014)). However, these data are not yet used in operational applications at a large scale. The Sentinel-1 satellite programme offers freely available soil moisture data at high spatiotemporal resolution (one image per 6 days with a spatial resolution of 10 by 10 m). In this study, these data will be used to improve the Netherlands Hydrological Instrument (NHI). The NHI consists of coupled models for the unsaturated zone (MetaSWAP), groundwater (iMODFLOW) and surface water (Mozart and DM), and is used for scenario analyses and operational water management in the Netherlands (De Lange et al., 2014). Due to the lack of soil moisture data, the unsaturated zone model has not yet been thoroughly validated and its output is not used by regional water authorities for decision-making. Therefore, the newly acquired remotely sensed soil moisture data will be used to improve the skill of the MetaSWAP model and the NHI as a whole. The research will focus, among other things, on the calibration of soil parameters by comparing model output (MetaSWAP) with the remotely sensed soil moisture data. Eventually, we want to apply data assimilation to improve

  5. Estimates of Optimal Operating Conditions for Hydrogen-Oxygen Cesium-Seeded Magnetohydrodynamic Power Generator

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Nichols, L. D.

    1977-01-01

    The values of percent seed, oxygen-to-fuel ratio, combustion pressure, Mach number, and magnetic field strength that maximize either the electrical conductivity or the power density at the entrance of an MHD power generator were obtained. The working fluid is the combustion product of H2 and O2 seeded with CsOH. The ideal theoretical segmented Faraday generator is investigated, along with an empirical form found by correlating the data of many experimenters working with generators of different sizes, electrode configurations, and working fluids. The conductivity and power density are optimized at a seed fraction of 3.5 mole percent and an oxygen-to-hydrogen weight ratio of 7.5. The optimum values of combustion pressure and Mach number depend on the operating magnetic field strength.

  6. Optimal Technology Selection and Operation of Microgrids in Commercial Buildings

    SciTech Connect

    Marnay, Chris; Venkataramanan, Giri; Stadler, Michael; Siddiqui, Afzal; Firestone, Ryan; Chandran, Bala

    2007-01-15

    The deployment of small (<1-2 MW) clusters of generators, heat and electrical storage, efficiency investments, and combined heat and power (CHP) applications (particularly involving heat-activated cooling) in commercial buildings promises significant benefits but poses many technical and financial challenges, both in system choice and its operation; if successful, such systems may be precursors to widespread microgrid deployment. The presented optimization approach to choosing such systems and their operating schedules uses Berkeley Lab's Distributed Energy Resources Customer Adoption Model [DER-CAM], extended to incorporate electrical storage options. DER-CAM chooses annual energy bill minimizing systems in a fully technology-neutral manner. An illustrative example for a San Francisco hotel is reported. The chosen system includes two engines and an absorption chiller, providing an estimated 11 percent cost savings and 10 percent carbon emission reductions, under idealized circumstances.

  7. Optimizing operational efficiencies in early phase trials: The Pediatric Trials Network experience.

    PubMed

    England, Amanda; Wade, Kelly; Smith, P Brian; Berezny, Katherine; Laughon, Matthew

    2016-03-01

    Performing drug trials in pediatrics is challenging. In support of the Best Pharmaceuticals for Children Act, the Eunice Kennedy Shriver National Institute of Child Health and Human Development funded the formation of the Pediatric Trials Network (PTN) in 2010. Since its inception, the PTN has developed strategies to increase both efficiency and safety of pediatric drug trials. Through use of innovative techniques such as sparse and scavenged blood sampling as well as opportunistic study design, participation in trials has grown. The PTN has also strived to improve consistency of adverse event reporting in neonatal drug trials through the development of a standardized adverse event table. We review how the PTN is optimizing operational efficiencies in pediatric drug trials to increase the safety of drugs in children. PMID:26968616

  8. Long Series Multi-objectives Optimal Operation of Water And Sediment Regulation

    NASA Astrophysics Data System (ADS)

    Bai, T.; Jin, W.

    2015-12-01

    A secondary suspended river has formed in the Inner Mongolia reaches, threatening the safety of the reach and the ecological health of the river. Research on water-sediment regulation by cascade reservoirs is therefore urgent and necessary. Against this background, multi-objective water and sediment regulation is studied in this paper. Firstly, multi-objective optimal operation models of the Longyangxia and Liujiaxia cascade reservoirs are established. Secondly, based on constraint-handling and feasible-search-space techniques, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is substantially improved to solve the model. Thirdly, four different scenarios are set. It is demonstrated that: (1) scatter diagrams of the Pareto front show the optimal solutions for power generation maximization and sediment transport maximization, as well as the global equilibrium solutions between the two; (2) the potential for water-sediment regulation by the Longyangxia and Liujiaxia cascade reservoirs is analyzed; (3) as water supply increases in the future, conflict between water supply and water-sediment regulation arises, and the sustainability of water-sediment regulation will suffer as the transferable water in the cascade reservoirs decreases; (4) the transfer project has little benefit for water-sediment regulation. The results have important practical significance for water-sediment regulation by cascade reservoirs in the Upper Yellow River and for constructing a water and sediment control system for the whole Yellow River basin.
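
Besides non-dominated sorting, NSGA-II ranks solutions within a front by crowding distance, which is what spreads the reported Pareto scatter between the power-generation and sediment objectives. A generic sketch of that measure (objective values are illustrative):

```python
# Crowding distance: boundary solutions get infinity; interior solutions get the
# normalised size of the cuboid formed by their nearest neighbours per objective.
def crowding_distance(front):
    n = len(front)
    dist = [0.0] * n
    for k in range(len(front[0])):            # loop over objectives
        order = sorted(range(n), key=lambda i: front[i][k])
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")
        for idx in range(1, n - 1):
            dist[order[idx]] += (front[order[idx + 1]][k]
                                 - front[order[idx - 1]][k]) / span
    return dist
```

In selection, larger crowding distance wins among solutions of equal rank, preserving diversity along the front.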

  9. Nonlinear bioheat transfer models and multi-objective numerical optimization of the cryosurgery operations

    NASA Astrophysics Data System (ADS)

    Kudryashov, Nikolay A.; Shilnikov, Kirill E.

    2016-06-01

    Numerical computation of the three-dimensional problem of freezing-interface propagation during cryosurgery, coupled with multi-objective optimization methods, is used to improve the efficiency and safety of cryosurgery operations. Prostate cancer treatment and cutaneous cryosurgery are considered. The heat transfer in soft tissue during thermal exposure to low temperature is described by the Pennes bioheat model, coupled with an enthalpy method for computing the blurred phase change. The finite volume method, combined with a control-volume approximation of the heat fluxes, is applied to the numerical modeling of cryosurgery on tumor tissue of fairly arbitrary shape. The flux relaxation approach is used to improve the stability of the explicit finite difference schemes. Mounting additional heating elements is studied as an approach to controlling the propagation of the cellular necrosis front. The volumes of undestroyed tumor tissue and of destroyed healthy tissue are taken as objective functions, while the locations of additional heating elements (in cutaneous cryosurgery) and of cryotips (in prostate cancer cryotreatment) are the decision variables in the multi-objective problem. A quasi-gradient method is proposed for finding segments of the Pareto front as solutions of the multi-objective optimization problem.
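
The Pennes model referred to above balances conduction, perfusion and metabolic heating: ρc ∂T/∂t = k ∇²T + w_b·c_b·(T_a − T) + q_m. A 1-D explicit finite-difference step illustrates it; the parameter values are illustrative rather than tissue data, and the enthalpy phase-change treatment the authors couple to it is omitted.

```python
# One explicit Euler step of the 1-D Pennes bioheat equation with fixed-temperature
# boundaries (e.g. a cryoprobe at one end). Stable for dt <= rho_c * dx**2 / (2 * k).
def pennes_step(T, dx, dt, k=0.5, rho_c=3.6e6, w_cb=2000.0, Ta=37.0, qm=400.0):
    new = T[:]
    for i in range(1, len(T) - 1):
        diff = k * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx**2   # conduction
        new[i] = T[i] + dt * (diff + w_cb * (Ta - T[i]) + qm) / rho_c
    return new
```

Here w_cb lumps the perfusion rate and blood heat capacity (w_b·c_b); freezing would enter through a temperature-dependent effective heat capacity in the enthalpy method.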

  10. Energetic optimization of a piezo-based touch-operated button for man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Sun, Hao; de Vries, Theo J. A.; de Vries, Rene; van Dalen, Harry

    2012-03-01

    This paper discusses the optimization of a touch-operated button for man-machine interfaces (MMIs) based on piezoelectric energy harvesting techniques. In the mechanical button, a common piezoelectric diaphragm is assembled to harvest ambient energy from the source, i.e. the operator's touch. Under a touch force load, the integrated diaphragm undergoes a bending deformation, and its mechanical strain is converted into the required electrical energy by means of the piezoelectric effect. The structural design (i) makes the piezoceramic work under static compressive stress instead of static or dynamic tensile stress, (ii) achieves a satisfactory stress level and (iii) gives the diaphragm and the button a fatigue lifetime in excess of millions of touch operations. To improve the button's function, the effect of key properties (dimensions, boundary conditions and load conditions) on the electrical behavior of the piezoelectric diaphragm is evaluated by electromechanical coupling analysis in ANSYS. The finite element analysis (FEA) results indicate that modifying these properties can enhance the diaphragm's output significantly. Based on their different contributions to improving the diaphragm's electrical energy output, the key properties are incorporated into the redesign of the piezoelectric diaphragm and the structural design of the piezo-based button. A comparison of the original structure and the optimized result shows that the electrical energy stored in the diaphragm and the voltage output are increased by 1576% and 120%, respectively, while the volume of the piezoceramic is reduced to 33.6%. These results will be adopted to update the design of the self-powered button, enabling a large decrease in the energy consumption and lifetime cost of the MMI.

  11. Optimization of Preprocessing and Densification of Sorghum Stover at Full-scale Operation

    SciTech Connect

    Neal A. Yancey; Jaya Shankar Tumuluru; Craig C. Conner; Christopher T. Wright

    2011-08-01

    Transportation costs can be a prohibitive step in bringing biomass to a preprocessing location or biofuel refinery. One alternative to transporting biomass in baled or loose form is to use a mobile preprocessing system that can be relocated to the various locations where biomass is stored, preprocess and densify the biomass there, and then ship it to the refinery as needed. The Idaho National Laboratory (INL) has a full-scale Process Demonstration Unit (PDU), which includes a stage-1 grinder, hammer mill, drier, pellet mill, and cooler with the associated conveyance system components. Testing at bench and pilot scale has been conducted to determine the effects of moisture and crop variety on preprocessing efficiency and product quality. The INL's PDU provides an opportunity to test the conclusions reached at bench and pilot scale on full industrial-scale systems. Each component of the PDU is operated from a central operating station where data are collected to determine the power consumption rate of each step in the process. The power drawn by each electrical motor in the system is monitored from the control station to watch for problems and determine optimal conditions for system performance. The data can then be reviewed to observe how changes in biomass input parameters (e.g. moisture and crop type), mechanical changes (screen size, biomass drying, pellet size, grinding speed, etc.), or other variations affect the power consumption of the system. Sorghum in four-foot round bales was tested in the system using a series of six different screen sizes: 3/16 in., 1 in., 2 in., 3 in., 4 in., and 6 in. The effects on power consumption, product quality, and production rate were measured to determine optimal conditions.

  12. Design optimization of MR-compatible rotating anode x-ray tubes for stable operation

    SciTech Connect

    Shin, Mihye; Lillaney, Prasheel; Hinshaw, Waldo; Fahrig, Rebecca

    2013-11-15

    Purpose: Hybrid x-ray/MR systems can enhance the diagnosis and treatment of endovascular, cardiac, and neurologic disorders by using the complementary advantages of both modalities for image guidance during interventional procedures. Conventional rotating anode x-ray tubes fail near an MR imaging system, since MR fringe fields create eddy currents in the metal rotor which cause a reduction in the rotation speed of the x-ray tube motor. A new x-ray tube motor prototype has been designed and built to be operated close to a magnet. To ensure the stability and safety of the motor operation, dynamic characteristics must be analyzed to identify possible modes of mechanical failure. In this study a 3D finite element method (FEM) model was developed in order to explore possible modifications, and to optimize the motor design. The FEM provides a valuable tool that permits testing and evaluation using numerical simulation instead of building multiple prototypes. Methods: Two experimental approaches were used to measure resonance characteristics: the first obtained the angular speed curves of the x-ray tube motor employing an angle encoder; the second measured the power spectrum using a spectrum analyzer, in which the large amplitude of peaks indicates large vibrations. An estimate of the bearing stiffness is required to generate an accurate FEM model of motor operation. This stiffness depends on both the bearing geometry and adjacent structures (e.g., the number of balls, clearances, preload, etc.) in an assembly, and is therefore unknown. This parameter was set by matching the FEM results to measurements carried out with the anode attached to the motor, and verified by comparing FEM predictions and measurements with the anode removed. The validated FEM model was then used to sweep through design parameters [bearing stiffness (1×10^5–5×10^7 N/m), shaft diameter (0.372–0.625 in.), rotor diameter (2.4–2.9 in.), and total length of motor (5.66–7.36 in.)] to

  14. Display analysis with the optimal control model of the human operator. [pilot-vehicle display interface and information processing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Levison, W. H.

    1977-01-01

    Application of the optimal control model of the human operator to problems in display analysis is discussed. Those aspects of the model pertaining to the operator-display interface and to operator information processing are reviewed and discussed. The techniques are then applied to the analysis of advanced display/control systems for a Terminal Configured Vehicle. Model results are compared with those obtained in a large, fixed-base simulation.

  15. GIS based location optimization for mobile produced water treatment facilities in shale gas operations

    NASA Astrophysics Data System (ADS)

    Kitwadkar, Amol Hanmant

    Over 60% of the nation's total energy is supplied by oil and natural gas together, and this demand for energy will continue to grow in the future (Radler et al. 2012). The growing demand is pushing the exploration and exploitation of onshore oil and natural gas reservoirs. Hydraulic fracturing has proven not only to create jobs and economic growth, but also to exert considerable stress on natural resources, such as water. As water is one of the most important factors in the world of hydraulic fracturing, proper fluids management during the development of a field of operation is perhaps the key element in addressing these issues. Almost 30% of the water used during hydraulic fracturing comes out of the well as flowback water during the first month after the well is fractured (Bai et al. 2012). Handling the large volume of water coming out of newly fractured wells is a major issue; after this period the volume drops off and remains constant for a long time (Bai et al. 2012), so permanent facilities can be constructed to handle the water over the longer term. This paper illustrates the development of a GIS-based tool for optimizing the location of a mobile produced-water treatment facility while development is still occurring. A methodology based on multi-criteria decision analysis (MCDA) was developed to optimize the location of the mobile treatment facilities. The criteria for the MCDA include well density, ease of access (from roads, considering truck hauls), piping minimization if piping is used, and water volume produced. The area of study is 72 square miles east of Greeley, CO in the Wattenberg Field in northeastern Colorado, which will be developed for oil and gas production starting in 2014. A quarterly analysis is done so that we can observe the effect of future development plans and current circumstances on the location as we move from quarter to quarter. This will help the operators to
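
A weighted-sum MCDA score of the kind described can be sketched as follows; the criterion names, normalised scores and weights here are hypothetical, not the study's calibration.

```python
# Each candidate site carries criterion scores pre-normalised to [0, 1]
# (higher = more suitable); the site score is the weighted sum.
def mcda_score(site, weights):
    return sum(weights[c] * site[c] for c in weights)

candidates = {
    "A": {"well_density": 0.8, "road_access": 0.6, "water_volume": 0.9},
    "B": {"well_density": 0.5, "road_access": 0.9, "water_volume": 0.4},
}
weights = {"well_density": 0.4, "road_access": 0.3, "water_volume": 0.3}
best = max(candidates, key=lambda s: mcda_score(candidates[s], weights))
```

The quarterly analysis then amounts to recomputing the criterion layers from the updated development plan and re-ranking the candidate locations.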

  16. Operational optimization of irrigation scheduling for citrus trees using an ensemble based data assimilation approach

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H.; Han, X.; Martinez, F.; Jimenez, M.; Manzano, J.; Chanzy, A.; Vereecken, H.

    2013-12-01

    Data assimilation (DA) techniques, like the local ensemble transform Kalman filter (LETKF), not only offer the opportunity to update model predictions by assimilating new measurement data in real time, but also provide an improved basis for real-time (DA-based) control. This study focuses on the optimization of real-time irrigation scheduling for fields of citrus trees near Picassent (Spain). For three selected fields the irrigation was optimized with DA-based control; for other fields irrigation was optimized with a more traditional approach in which reference evapotranspiration for citrus trees was estimated using the FAO method. The performance of the two methods is compared for the year 2013. The DA-based real-time control approach is based on ensemble predictions of soil moisture profiles using the Community Land Model (CLM). Uncertainty in the model predictions is introduced by feeding the model with weather predictions from an ensemble prediction system (EPS) and with uncertain soil hydraulic parameters. The model predictions are updated daily by assimilating soil moisture data measured by capacitance probes, with the help of the LETKF. The irrigation need was calculated for each ensemble member and averaged, and logistic constraints (hydraulics, energy costs) were taken into account in the final assignment of irrigation in space and time. For the operational scheduling based on this approach, only model states, and no model parameters, were updated. Other, non-operational simulation experiments for the same period were carried out in which (1) neither ensemble weather forecast nor DA was used (open loop), (2) only the ensemble weather forecast was used, (3) only DA was used, (4) soil hydraulic parameters were also updated in data assimilation and (5) both soil hydraulic and plant-specific parameters were updated. The FAO-based and DA-based real-time irrigation control are compared in terms of soil moisture
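The ensemble update at the heart of such a scheme can be sketched for a single scalar soil-moisture state. This is a minimal stochastic (perturbed-observation) ensemble Kalman update, not the deterministic, localized LETKF used in the study, and all numbers are illustrative:

```python
# Minimal perturbed-observation ensemble Kalman update for a scalar state.
import random

def enkf_update(ensemble, obs, obs_var, rng):
    """Update an ensemble of scalar states with one observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast spread
    gain = var / (var + obs_var)                            # Kalman gain
    # Each member assimilates its own noisy copy of the observation.
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
# Forecast ensemble of volumetric soil moisture (m^3/m^3), spread coming from
# uncertain weather forcing and soil hydraulic parameters.
forecast = [0.18, 0.22, 0.25, 0.20, 0.28, 0.24, 0.19, 0.26]
# Assimilate a capacitance-probe reading of 0.21 with small error variance.
analysis = enkf_update(forecast, obs=0.21, obs_var=0.0004, rng=rng)
```

The analysis ensemble is pulled toward the observation and its spread shrinks; the irrigation need would then be computed per member and averaged, as the abstract describes.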

  17. Dirac point and transconductance of top-gated graphene field-effect transistors operating at elevated temperature

    SciTech Connect

    Hopf, T.; Vassilevski, K. V. Escobedo-Cousin, E.; King, P. J.; Wright, N. G.; O'Neill, A. G.; Horsfall, A. B.; Goss, J. P.; Wells, G. H.; Hunt, M. R. C.

    2014-10-21

    Top-gated graphene field-effect transistors (GFETs) have been fabricated using bilayer epitaxial graphene grown on the Si-face of 4H-SiC substrates by thermal decomposition of silicon carbide in high vacuum. Graphene films were characterized by Raman spectroscopy, Atomic Force Microscopy, Scanning Tunnelling Microscopy, and Hall measurements to estimate graphene thickness, morphology, and charge transport properties. A 27 nm thick Al₂O₃ gate dielectric was grown by atomic layer deposition with an e-beam evaporated Al seed layer. Electrical characterization of the GFETs has been performed at operating temperatures up to 100 °C limited by deterioration of the gate dielectric performance at higher temperatures. Devices displayed stable operation with the gate oxide dielectric strength exceeding 4.5 MV/cm at 100 °C. Significant shifting of the charge neutrality point and an increase of the peak transconductance were observed in the GFETs as the operating temperature was elevated from room temperature to 100 °C.

  18. On the choice of the optimal periodic operation for a continuous fermentation process.

    PubMed

    D'Avino, G; Crescitelli, S; Maffettone, P L; Grosso, M

    2010-01-01

    In this contribution we investigate the impact of the forcing waveform on the productivity of a continuous bioreactor governed by an unstructured, nonlinear kinetic model. The (periodic) forcing is applied to the substrate concentration in the feed. To this end, several alternative waveforms commonly encountered in practice are evaluated and their performance is compared. An analytical/numerical approach is used. The preliminary analytical step is based on the π-criterion, which gives useful information for small amplitudes. The extension to larger amplitudes, where significant improvements are expected, is then performed through a continuation-optimization procedure. It is found that the choice of the specific waveform has an impact on the performance of the process; no single waveform is best for all process conditions, and the optimal choice depends on the operating parameters and on the forcing amplitude and frequency. Further, the influence of the waveform functions on the wash-out conditions is examined extensively. The analysis shows that all the waveforms examined in this work may lead to significant enlargement of the nontrivial regime with respect to steady-state operation. In particular, square-wave forcing leads in practice to the extinction of the wash-out conditions for any feed substrate concentration, for a well-defined choice of the forcing parameters.
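The kind of comparison described, productivity of a forced bioreactor under different feed waveforms, can be sketched with a toy Monod chemostat. The paper's actual kinetic model and parameters are not reproduced here; everything below is an illustrative assumption:

```python
# Toy chemostat with Monod kinetics under a periodically forced feed substrate
# concentration. All parameter values are assumed, illustrative numbers.
import math

MU_MAX, KS, Y, D = 0.4, 0.5, 0.5, 0.2   # 1/h, g/L, g/g, 1/h

def feed(t, mean=2.0, amp=1.0, period=10.0, waveform="square"):
    """Periodically forced feed substrate concentration S_f(t)."""
    phase = (t % period) / period
    if waveform == "square":
        return mean + (amp if phase < 0.5 else -amp)
    return mean + amp * math.sin(2.0 * math.pi * phase)   # "sine"

def simulate(waveform, t_end=200.0, dt=0.01):
    """Euler-integrate the chemostat; return time-averaged productivity D*X
    over the second half of the run (after the initial transient)."""
    S, X = 1.0, 0.5                       # substrate, biomass (g/L)
    total, count = 0.0, 0
    for i in range(int(t_end / dt)):
        t = i * dt
        mu = MU_MAX * S / (KS + S)        # Monod growth rate
        dS = D * (feed(t, waveform=waveform) - S) - mu * X / Y
        dX = (mu - D) * X
        S, X = max(S + dt * dS, 0.0), max(X + dt * dX, 0.0)
        if t > t_end / 2.0:
            total += D * X
            count += 1
    return total / count

prod_square = simulate("square")
prod_sine = simulate("sine")
```

A continuation-optimization study as in the paper would sweep the forcing amplitude and period for each waveform and compare the resulting averages against the steady-state productivity.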

  19. Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles

    NASA Astrophysics Data System (ADS)

    Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.

    2016-03-01

    A thermoelectric generator (TEG) system demonstrator for diesel-electric locomotives, designed with the objective of reducing the mechanical load on the thermoelectric modules (TEM), is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and serves as the basis for optimizing the TEG's geometry with a multi-objective genetic algorithm. The best solution has a maximum power output of approx. 2.7 kW and exceeds neither the maximum back pressure of the diesel engine nor the maximum TEM hot-side temperature. To maximize the reduction in fuel consumption, an operating strategy governing the system power output of the TEG is developed. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power train and lateral dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.

  20. Optimization of European call options considering physical delivery network and reservoir operation rules

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.

    2011-10-01

    This paper develops alternative strategies for European call options for water purchase under hydrological uncertainty that can be used by water resources managers for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology characterized by an exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as the delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow-path-based water distribution model to include reservoir operation rules. The model simultaneously considers the physical distribution network and the relationships between water sellers and buyers. We first test the model extension. Then we apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model. We use the weighting method to formulate a composite function for the multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequences of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.
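The weighting method mentioned above scalarizes a multiobjective problem into one composite objective; sweeping the weights traces out alternative trade-off strategies. The objectives and candidate strategies below are invented for illustration, not the Tainan system model:

```python
# Weighting method for a multiobjective choice: maximize a weighted sum of
# objective values. Strategies and objective values are hypothetical.

def composite(objectives, weights):
    """Weighted-sum scalarization of an objective vector."""
    return sum(w * f for w, f in zip(weights, objectives))

# Candidate option strategies scored on (seller profit, buyer reliability):
strategies = {
    "S1": (10.0, 0.60),   # profit-heavy
    "S2": (7.0, 0.85),    # balanced
    "S3": (4.0, 0.95),    # reliability-heavy
}

def best_strategy(weights):
    """Strategy maximizing the composite objective for the given weights."""
    return max(strategies, key=lambda s: composite(strategies[s], weights))
```

Varying the weight vector reproduces the "overall picture" of strategies described in the abstract: each weight choice selects a different point on the trade-off surface.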

  1. Simultaneous spectrophotometric determination of synthetic dyes in food samples after cloud point extraction using multiple response optimizations.

    PubMed

    Heidarizadi, Elham; Tabaraki, Reza

    2016-01-01

    A sensitive cloud point extraction method for simultaneous determination of trace amounts of sunset yellow (SY), allura red (AR) and brilliant blue (BB) by spectrophotometry was developed. The effects of experimental parameters such as Triton X-100 concentration, KCl concentration and initial pH on the extraction efficiency of the dyes were optimized using response surface methodology (RSM) with a Doehlert design. Experimental data were evaluated by applying RSM integrated with a desirability function approach. The optimum conditions for simultaneous extraction of SY, AR and BB were: Triton X-100 concentration 0.0635 mol L(-1), KCl concentration 0.11 mol L(-1) and pH 4, with a maximum overall desirability D of 0.95. Correspondingly, the maximum predicted extraction efficiencies of SY, AR and BB were 100%, 92.23% and 95.69%, respectively. Under the optimal conditions, the measured extraction efficiencies were 99.8%, 92.48% and 95.96% for SY, AR and BB, respectively. These values differed from the predicted values by only 0.2%, 0.25% and 0.27%, suggesting that the desirability function approach with RSM is a useful technique for simultaneous dye extraction. Linear calibration curves were obtained in the ranges 0.02-4 μg mL(-1) for SY, 0.025-2.5 μg mL(-1) for AR and 0.02-4 μg mL(-1) for BB under the optimum conditions. Detection limits, based on three times the standard deviation of the blank (3Sb), were 0.009, 0.01 and 0.007 μg mL(-1) (n=10) for SY, AR and BB, respectively. The method was successfully used for the simultaneous determination of the dyes in different food samples. PMID:26653445
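The overall desirability D combines individual response desirabilities by a geometric mean (the Derringer-Suich approach commonly paired with RSM). A sketch using the reported efficiencies, with an assumed 80-100% acceptability range for each dye:

```python
# Overall desirability as the geometric mean of per-response desirabilities.
# The 80-100% acceptability range is an illustrative assumption.
import math

def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """Map a response y onto [0, 1]: 0 at/below y_min, 1 at/above y_max."""
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return ((y - y_min) / (y_max - y_min)) ** weight

def overall_desirability(ds):
    """Geometric mean of individual desirabilities."""
    return math.prod(ds) ** (1.0 / len(ds))

# Reported optimum extraction efficiencies (%) for SY, AR, BB.
efficiencies = [100.0, 92.23, 95.69]
ds = [desirability_larger_is_better(y, 80.0, 100.0) for y in efficiencies]
D = overall_desirability(ds)
```

Because D is a geometric mean, any single unacceptable response (d = 0) drives the overall desirability to zero, which is what makes it suitable for simultaneous multi-response optimization.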

  3. Optimization of an Optical Inspection System Based on the Taguchi Method for Quantitative Analysis of Point-of-Care Testing

    PubMed Central

    Yeh, Chia-Hsien; Zhao, Zi-Qi; Shen, Pi-Lan; Lin, Yu-Cheng

    2014-01-01

    This study presents an optical inspection system for detecting a commercial point-of-care testing product, together with a new detection model extending from qualitative to quantitative analysis. Human chorionic gonadotropin (hCG) strips (the cut-off value of the commercial hCG product is 25 mIU/mL) were the detection target in our study. We used a complementary metal-oxide-semiconductor (CMOS) sensor to detect the colors of the test line and control line in the strips and to reduce the observation errors of the naked eye. To achieve better linearity between grayscale and concentration, and to decrease the standard deviation (i.e., increase the signal-to-noise ratio, S/N), the Taguchi method was used to find the optimal parameters for the optical inspection system. The pregnancy test is based on the lateral flow immunoassay, and the colors of the test and control lines are produced by gold nanoparticles. Because of the sandwich immunoassay format, the color of the gold nanoparticles in the test line darkens with increasing hCG concentration. As the results reveal, the S/N increased from 43.48 dB to 53.38 dB, and hCG concentrations from 6.25 to 50 mIU/mL were detected with a standard deviation of less than 10%. With the optimal parameters determined by the Taguchi method decreasing the detection limit and increasing the linearity, the optical inspection system can be applied to various commercial rapid tests, such as those for ketamine, troponin I, and fatty acid binding protein (FABP). PMID:25256108
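The S/N figures quoted above are larger-the-better Taguchi signal-to-noise ratios. A sketch of the calculation, with invented grayscale replicates rather than the study's measured data:

```python
# Larger-the-better Taguchi S/N ratio: S/N = -10*log10(mean(1/y^2)).
# The replicate readings below are made-up illustration, not measured data.
import math

def sn_larger_is_better(ys):
    """Higher S/N means a larger, less noisy response (in dB)."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Two hypothetical parameter settings, three replicate grayscale responses each:
before = [140.0, 150.0, 130.0]   # noisier, weaker response
after = [460.0, 470.0, 465.0]    # optimized setting
gain_db = sn_larger_is_better(after) - sn_larger_is_better(before)
```

In a Taguchi design, this ratio is computed for each row of the orthogonal array and the factor levels maximizing it are selected.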

  4. Design optimization for plasma performance and assessment of operation regimes in JT-60SA

    NASA Astrophysics Data System (ADS)

    Fujita, T.; Tamai, H.; Matsukawa, M.; Kurita, G.; Bialek, J.; Aiba, N.; Tsuchiya, K.; Sakurai, S.; Suzuki, Y.; Hamamatsu, K.; Hayashi, N.; Oyama, N.; Suzuki, T.; Navratil, G. A.; Kamada, Y.; Miura, Y.; Takase, Y.; Campbell, D.; Pamela, J.; Romanelli, F.; Kikuchi, M.

    2007-11-01

    The design of the modification of JT-60U, JT-60SA has been optimized from the viewpoint of plasma performance, and operation regimes have been evaluated with the latest design. Upper and lower divertors with different geometries will be prepared for flexibility of the plasma shape, which will enable both low aspect ratio (A ~ 2.65) and ITER shape (A = 3.1) configurations. The beam lines of negative-ion neutral beam injection will be shifted downwards by ~0.6 m for the off-axis current drive (CD), in order to obtain a weak/reversed shear plasma, as well as having the capability of heating the central region. The feedback control coils along the openings in the stabilizing plate are found effective in suppressing the resistive wall mode and sustaining high βN close to the ideal wall limit. Sustainment of plasma current of 3-3.5 MA for 100 s will be possible in ELMy H-mode plasmas with moderate heating power, βN, and density within an available flux swing. It is also expected that higher βN, high-density ELMy H-mode plasmas will be maintained for 100 s with higher heating power. The expected regime of full CD operation has been extended with upgraded heating and CD power. Full CD operation for 100 s with reactor-relevant high values of normalized beta and bootstrap current fraction (Ip = 2.4 MA, βN = 4.3, fBS = 0.69, n̄e/nGW = 0.86, HH98y2 = 1.3) is expected in a highly shaped low-aspect-ratio configuration (A = 2.65).

  5. SU-E-T-539: Fixed Versus Variable Optimization Points in Combined-Mode Modulated Arc Therapy Planning

    SciTech Connect

    Kainz, K; Prah, D; Ahunbay, E; Li, X

    2014-06-01

    Purpose: A novel modulated arc therapy technique, mARC, enables superposition of step-and-shoot IMRT segments upon a subset of the optimization points (OPs) of a continuous-arc delivery. We compare two approaches to mARC planning: one with the number of OPs fixed throughout optimization, and another where the planning system determines the number of OPs in the final plan, subject to an upper limit defined at the outset. Methods: Fixed-OP mARC planning was performed for representative cases using Panther v. 5.01 (Prowess, Inc.), while variable-OP mARC planning used Monaco v. 5.00 (Elekta, Inc.). All Monaco planning used an upper limit of 91 OPs; those OPs with minimal MU were removed during optimization. Plans were delivered, and delivery times recorded, on a Siemens Artiste accelerator using a flat 6MV beam with 300 MU/min rate. Dose distributions measured using ArcCheck (Sun Nuclear Corporation, Inc.) were compared with the plan calculation; the two were deemed consistent if they agreed to within 3.5% in absolute dose and 3.5 mm in distance-to-agreement among > 95% of the diodes within the direct beam. Results: Example cases included a prostate and a head-and-neck planned with a single arc and fraction doses of 1.8 and 2.0 Gy, respectively. Aside from slightly more uniform target dose for the variable-OP plans, the DVHs for the two techniques were similar. For the fixed-OP technique, the number of OPs was 38 and 39, and the delivery time was 228 and 259 seconds, respectively, for the prostate and head-and-neck cases. For the final variable-OP plans, there were 91 and 85 OPs, and the delivery time was 296 and 440 seconds, correspondingly longer than for fixed-OP. Conclusion: For mARC, both the fixed-OP and variable-OP approaches produced comparable-quality plans whose delivery was successfully verified. To keep delivery time per fraction short, a fixed-OP planning approach is preferred.

  6. Canine sense and sensibility: tipping points and response latency variability as an optimism index in a canine judgement bias assessment.

    PubMed

    Starling, Melissa J; Branson, Nicholas; Cody, Denis; Starling, Timothy R; McGreevy, Paul D

    2014-01-01

    Recent advances in animal welfare science have used judgement bias, a type of cognitive bias, as a means to objectively measure an animal's affective state. It is postulated that animals showing heightened expectation of positive outcomes may be categorised as optimistic, while those showing heightened expectations of negative outcomes may be considered pessimistic. This study pioneers the use of a portable, automated apparatus to train and test the judgement bias of dogs. Dogs were trained in a discrimination task in which they learned to touch a target after a tone associated with a lactose-free milk reward and to abstain from touching the target after a tone associated with water. Their judgement bias was then probed by presenting tones between those learned in the discrimination task and measuring their latency to respond by touching the target. A Cox proportional hazards model was used to analyse the censored response latency data. Dog and Cue both had a highly significant effect on latency and on the risk of touching a target, indicating that judgement bias both exists in dogs and differs between dogs. Test number also had a significant effect, indicating that dogs were less likely to touch the target over successive tests. Detailed examination of the response latencies revealed tipping points where average latency increased by 100% or more, giving an indication of where dogs began to treat ambiguous cues as predicting more negative outcomes than positive ones. Variability scores were calculated to provide an index of optimism using the average latency and standard deviation at cues after the tipping point. The use of a mathematical approach to assessing judgement bias data in animal studies offers a more detailed interpretation than traditional statistical analyses. This study provides proof of concept for the use of an automated apparatus for measuring cognitive bias in dogs.
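The tipping-point criterion, the first cue at which average latency increases by 100% or more relative to the previous cue, can be sketched directly (latencies invented for illustration):

```python
# Locate the "tipping point" in mean response latencies across ambiguous cues:
# the first cue whose mean latency is at least double the previous cue's.
# The latency values below are invented for illustration.

def tipping_point(mean_latencies):
    """Return the index of the first >=100% jump in mean latency,
    or None if no such jump occurs."""
    for i in range(1, len(mean_latencies)):
        if mean_latencies[i] >= 2.0 * mean_latencies[i - 1]:
            return i
    return None

# Mean latency (s) to touch the target, cues ordered from most reward-like
# to most water-like:
latencies = [1.2, 1.4, 1.6, 4.1, 9.8]
tip = tipping_point(latencies)
```

Cues at or beyond the returned index would be the ones the dog treats as predicting a negative outcome; per the abstract, the optimism index is then built from latency mean and standard deviation at those cues.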

  9. High-Precision Lunar Ranging and Gravitational Parameter Estimation With the Apache Point Observatory Lunar Laser-ranging Operation

    NASA Astrophysics Data System (ADS)

    Johnson, Nathan H.

    This dissertation is concerned with several problems of instrumentation and data analysis encountered by the Apache Point Observatory Lunar Laser-ranging Operation. Chapter 2 considers crosstalk between elements of a single-photon avalanche photodiode detector. Experimental and analytic methods were developed to determine crosstalk rates, and empirical findings are presented. Chapter 3 details electronics developments that have improved the quality of data collected by detectors of the same type. Chapter 4 explores the challenges of estimating gravitational parameters on the basis of ranging data collected by this and other experiments and presents resampling techniques for the derivation of standard errors for estimates of such parameters determined by the Planetary Ephemeris Program (PEP), a solar-system model and data-fitting code. Possible directions for future work are discussed in Chapter 5. A manual of instructions for working with PEP is presented as an appendix.

  10. Sensitivity and alternative operating point studies on a high charge CW FEL injector test stand at CEBAF

    SciTech Connect

    Liu, H.; Kehne, D.; Benson, S.

    1995-12-31

    A high charge CW FEL injector test stand is being built at CEBAF based on a 500 kV DC laser gun, a 1500 MHz room-temperature buncher, and a high-gradient (~10 MV/m) CEBAF cryounit containing two 1500 MHz CEBAF SRF cavities. Space-charge-dominated beam dynamics simulations show that this injector should be an excellent high-brightness electron beam source for CW UV FELs if the nominal parameters assigned to each component of the system are experimentally achieved. Extensive sensitivity and alternative operating point studies have been conducted numerically to establish tolerances on the parameters of various injector system components. The consequences of degraded injector performance, due to failure to establish and/or maintain the nominal system design parameters, on the performance of the main accelerator and the FEL itself are discussed.

  11. Modeling of delamination in carbon/epoxy composite laminates under four point bending for damage detection and sensor placement optimization

    NASA Astrophysics Data System (ADS)

    Adu, Stephen Aboagye

    Laminated carbon fiber-reinforced polymer composites (CFRPs) possess very high specific strength and stiffness, which accounts for their wide use in structural applications, especially in the aerospace industry, where the trade-off between weight and strength is critical. Even though they possess a much higher strength-to-weight ratio than metals such as aluminum and lithium, damage in those metals is rather localized, whereas CFRPs develop complex damage zones at stress concentrations, with damage progressing in the form of matrix cracking, delamination, fiber fracture and fiber/matrix de-bonding. This thesis is aimed at performing stiffness degradation analysis on composite coupons containing embedded delamination, using the four-point bend test. A Lamb wave-based structural health monitoring (SHM) technique is used for damage detection in the composite coupons. Tests were carried out on unidirectional composite coupons obtained from panels manufactured with a pre-existing defect, in the form of an embedded delamination, in a laminate of stacking sequence [0₆/90₄/0₆]T. The coupons were cut from panels fabricated using vacuum-assisted resin transfer molding (VARTM), a liquid composite molding (LCM) process. The discontinuity in the laminate caused by de-bonding of the middle plies, produced by inserting a 0.3 mm thick wax film between the middle four 90° plies, is detected using Lamb waves generated by surface-mounted piezoelectric (PZT) actuators. From the surface-mounted piezoelectric sensors, responses for both the undamaged coupon (no defect) and the damaged (delaminated) coupon are obtained. A numerical study of embedded crack propagation in the composite coupon under four-point and three-point bending was carried out using FEM, and the model was validated by comparing the numerical results with the experimental ones. Here, a surface-to-surface contact property was used to model the

  12. Point of optimal kinematic error: improvement of the instantaneous helical pivot method for locating centers of rotation.

    PubMed

    De Rosario, Helios; Page, Alvaro; Mata, Vicente

    2014-05-01

    This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation. PMID:24650972
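The underlying idea, weighting by the square of angular velocity so that the estimated centre minimizes the weighted mean-squared velocity, can be illustrated in 2D, where the weighted least-squares problem has a closed form. This is an illustrative planar reduction, not the paper's 3D helical-axis formulation:

```python
# 2D centre-of-rotation estimate minimizing sum(w^2 * |velocity at c|^2),
# a planar sketch of the omega^2-weighted idea (not the 3D PIHA/POKE method).
import math

def poke_center_2d(samples):
    """samples: (p, v, w) triples -- marker position p=(x, y), its velocity
    v=(vx, vy), and the angular velocity w at that instant. Minimizing
    sum(w^2 * |v + w*J*(c - p)|^2) over c gives the closed form
    c = -sum(w^3 * J^T u) / sum(w^4), with u = v - w*J*p."""
    num_x = num_y = den = 0.0
    for (px, py), (vx, vy), w in samples:
        ux = vx + w * py          # u = v - w*J*p, J = [[0, -1], [1, 0]]
        uy = vy - w * px
        num_x += w ** 3 * uy      # J^T u = (uy, -ux)
        num_y += w ** 3 * (-ux)
        den += w ** 4
    return (-num_x / den, -num_y / den)

# Synthetic check: a marker rotating about a known centre c0.
c0 = (0.3, -0.2)
samples = []
for k in range(20):
    t = 0.05 * k
    w = 2.0 + math.sin(t)                  # time-varying angular velocity
    ang = 2.0 * t                          # arbitrary phase history
    px = c0[0] + 0.5 * math.cos(ang)
    py = c0[1] + 0.5 * math.sin(ang)
    vx = -w * (py - c0[1])                 # v = w * J * (p - c0)
    vy = w * (px - c0[0])
    samples.append(((px, py), (vx, vy), w))
center = poke_center_2d(samples)
```

With noise-free synthetic data the estimate recovers the true centre exactly; the ω-dependent weights downweight slow-rotation frames, which is where noise dominates the pivot estimate.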

  13. Live Operation Data Collection Optimization and Communication for the Domestic Nuclear Detection Office’s Rail Test Center

    SciTech Connect

    Gelston, Gariann M.

    2010-04-06

    For the Domestic Nuclear Detection Office’s Rail Test Center (DNDO’s RTC), knowledge of port operations, together with flexible collection tools and techniques, is essential to both technology test design and implementation in live operational settings. Increased contextual data, flexibility in procedures, and rapid availability of information are key to addressing the challenges of optimization, validation, and analysis in live operational data collection. These concepts need to be integrated into technology test designs and into the data collection, validation, and analysis processes. A modified data collection technique with a two-phase live operation test method is proposed.

  14. Operator Influence on Blinded Diagnostic Accuracy of Point-of-Care Antigen Testing for Group A Streptococcal Pharyngitis.

    PubMed

    Penney, Carla; Porter, Robert; O'Brien, Mary; Daley, Peter

    2016-01-01

    Background. Acute pharyngitis caused by Group A Streptococcus (GAS) is a common presentation to pediatric emergency departments (ED). Diagnosis with conventional throat culture requires 18-24 hours, which prevents point-of-care treatment decisions. Rapid antigen detection tests (RADT) are faster, but previous reports demonstrate significant operator influence on performance. Objective. To measure operator influence on the diagnostic accuracy of a RADT when performed by pediatric ED nurses and clinical microbiology laboratory technologists, using conventional culture as the reference standard. Methods. Children presenting to a pediatric ED with suspected acute pharyngitis were recruited. Three pharyngeal swabs were collected at once. One swab was used to perform the RADT in the ED, and two were sent to the clinical microbiology laboratory for RADT and conventional culture testing. Results. The RADT when performed by technologists compared to nurses had a 5.1% increased sensitivity (81.4% versus 76.3%) (p = 0.791) (95% CI for difference between technologists and nurses = -11% to +21%) but similar specificity (97.7% versus 96.6%). Conclusion. The performance of the RADT was similar between technologists and ED nurses, although adequate power was not achieved. RADT may be employed in the ED without clinically significant loss of sensitivity. PMID:27579047
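The accuracy measures compared above come from a 2x2 confusion table against the culture reference standard. A minimal sketch of the calculation; the counts are invented, chosen only so that the resulting rates match the technologists' reported 81.4% sensitivity and 97.7% specificity:

```python
# Sensitivity and specificity from a 2x2 confusion table.
# Counts are hypothetical, not the study's raw data.

def sens_spec(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as fractions.
    tp/fn: RADT result among culture-positive; tn/fp: among culture-negative."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical RADT-vs-culture counts for one operator group:
sensitivity, specificity = sens_spec(tp=48, fn=11, tn=170, fp=4)
```

The 95% confidence interval for the between-operator sensitivity difference quoted in the abstract (-11% to +21%) would then come from comparing two such tables.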

  15. Operator Influence on Blinded Diagnostic Accuracy of Point-of-Care Antigen Testing for Group A Streptococcal Pharyngitis

    PubMed Central

    O'Brien, Mary

    2016-01-01

    Background. Acute pharyngitis caused by Group A Streptococcus (GAS) is a common presentation to pediatric emergency departments (ED). Diagnosis with conventional throat culture requires 18–24 hours, which prevents point-of-care treatment decisions. Rapid antigen detection tests (RADT) are faster, but previous reports demonstrate significant operator influence on performance. Objective. To measure operator influence on the diagnostic accuracy of a RADT when performed by pediatric ED nurses and clinical microbiology laboratory technologists, using conventional culture as the reference standard. Methods. Children presenting to a pediatric ED with suspected acute pharyngitis were recruited. Three pharyngeal swabs were collected at once. One swab was used to perform the RADT in the ED, and two were sent to the clinical microbiology laboratory for RADT and conventional culture testing. Results. The RADT when performed by technologists compared to nurses had a 5.1% increased sensitivity (81.4% versus 76.3%) (p = 0.791) (95% CI for difference between technologists and nurses = −11% to +21%) but similar specificity (97.7% versus 96.6%). Conclusion. The performance of the RADT was similar between technologists and ED nurses, although adequate power was not achieved. RADT may be employed in the ED without clinically significant loss of sensitivity. PMID:27579047

  16. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
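
    The probabilistic core of such a classifier is compact enough to sketch. The toy below trains a Gaussian naive-Bayes model on two made-up radar features and returns normalized per-phase probabilities, which is the kind of uncertainty information the abstract emphasizes; the feature names, training values, and two-class setup are illustrative assumptions, not ARM data or the authors' actual algorithm.

```python
import math

# Toy Gaussian naive-Bayes phase classifier returning per-class probabilities,
# i.e. the uncertainty information the abstract emphasizes. The two features
# and all training values are invented; they are not ARM observations.

TRAIN = {  # phase -> list of (reflectivity_dBZ, doppler_skewness) samples
    "ice":    [(-5.0, 0.1), (-8.0, 0.0), (-6.0, 0.2)],
    "liquid": [(-25.0, -0.4), (-22.0, -0.5), (-28.0, -0.3)],
}

def _mean_var(vals):
    m = sum(vals) / len(vals)
    v = sum((x - m) ** 2 for x in vals) / len(vals) + 1e-6  # floor the variance
    return m, v

MODEL = {
    phase: [_mean_var([p[i] for p in pts]) for i in range(2)]
    for phase, pts in TRAIN.items()
}

def phase_probabilities(x):
    """Return P(phase | x) under per-feature Gaussians and flat priors."""
    like = {}
    for phase, dims in MODEL.items():
        logp = 0.0
        for xi, (m, v) in zip(x, dims):
            logp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        like[phase] = math.exp(logp)
    z = sum(like.values())
    return {phase: p / z for phase, p in like.items()}

probs = phase_probabilities((-24.0, -0.45))
```

    A Bayesian classifier of this kind reports "85% liquid" rather than a hard label, which is what allows uncertainty to be propagated downstream; extending to mixed phase and snow is a matter of adding classes and training data.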

  17. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGESBeta

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  18. Dynamic emulation modelling for the optimal operation of water systems: an overview

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Giuliani, M.

    2014-12-01

    Despite sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve high resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs, through erosion/sedimentation rebalancing in the operation of run-of-river power plants, to salinity control in lakes and reservoirs.
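
    As a concrete (and deliberately tiny) illustration of dynamic emulation, the sketch below fits a first-order linear emulator y[t+1] ≈ a·y[t] + b·u[t] to a trajectory generated by a stand-in "process-based" simulator; the simulator, inputs, and model order are invented for illustration and are unrelated to the water-system models discussed in the talk.

```python
# First-order dynamic emulator y[t+1] ~ a*y[t] + b*u[t], fitted by least
# squares to a trajectory from a stand-in "process-based" simulator. The
# simulator, inputs, and model order are invented for illustration.

def simulator(y, u):
    # placeholder for an expensive process-based model step
    return 0.8 * y + 0.3 * u

ys, us = [1.0], [0.5, 1.0, 0.0, 0.7, 0.2, 0.9]
for u in us:                      # generate a training trajectory
    ys.append(simulator(ys[-1], u))

# solve the 2x2 normal equations for (a, b) minimizing sum (y' - a*y - b*u)^2
Syy = sum(y * y for y in ys[:-1])
Suu = sum(u * u for u in us)
Syu = sum(y * u for y, u in zip(ys[:-1], us))
Sny = sum(yn * y for yn, y in zip(ys[1:], ys[:-1]))
Snu = sum(yn * u for yn, u in zip(ys[1:], us))
det = Syy * Suu - Syu * Syu
a = (Sny * Suu - Snu * Syu) / det
b = (Snu * Syy - Sny * Syu) / det
# since the toy simulator is itself linear, the emulator recovers it exactly
```

    The key DEMo property is that the fitted model is itself a dynamical system (state fed back step by step), so it can replace the original simulator inside a feedback-control or optimization loop, unlike a static response-surface emulator.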

  19. Advanced treatment of municipal wastewater by nanofiltration: Operational optimization and membrane fouling analysis.

    PubMed

    Li, Kun; Wang, Jianxing; Liu, Jibao; Wei, Yuansong; Chen, Meixue

    2016-05-01

    Municipal sewage from an oxidation ditch was treated for reuse by nanofiltration (NF) in this study. The NF performance was optimized, and its fouling characteristics after different operational durations (i.e., 48 and 169 hr) were analyzed to investigate the applicability of nanofiltration for water reuse. The optimum performance was achieved at a transmembrane pressure of 12 bar, pH 4 and a flow rate of 8 L/min using a GE membrane. The permeate water quality could satisfy the requirements of water reclamation for different uses and local standards for water reuse in Beijing. Flux decline in the fouling experiments could be divided into a rapid flux decline and a quasi-steady state. The boundary flux theory was used to predict the evolution of permeate flux. The expected operational duration based on the 169-hr experiment was 392.6 hr, which is 175% longer than that of the 48-hr one. High molecular weight (MW) protein-like substances were suggested to be the dominant foulants after an extended period, based on the MW distribution and the fluorescence characteristics. The analyses of infrared spectra and extracellular polymeric substances revealed that the roles of both humic- and polysaccharide-like substances diminished, while that of protein-like substances strengthened, in the contribution to membrane fouling over time. Inorganic salts were found to have a marginal influence on membrane fouling. Additionally, alkali washing was more efficient at removing organic foulants in the long term, and a combination of water flushing and alkali washing was appropriate for NF fouling control in municipal sewage treatment. PMID:27155415

  20. High-fidelity two-qubit gates via dynamical decoupling of local 1 /f noise at the optimal point

    NASA Astrophysics Data System (ADS)

    D'Arrigo, A.; Falci, G.; Paladino, E.

    2016-08-01

    We investigate the possibility of achieving high-fidelity universal two-qubit gates by supplementing optimal tuning of individual qubits with dynamical decoupling (DD) of local 1/f noise. We consider simultaneous local pulse sequences applied during the gate operation and compare the efficiencies of periodic, Carr-Purcell, and Uhrig DD with hard π pulses along two directions (πz/y pulses). We present analytical perturbative results (Magnus expansion) in the quasistatic noise approximation, combined with numerical simulations for realistic 1/f noise spectra. The gate efficiency is studied as a function of the gate duration, of the number n of pulses, and of the high-frequency roll-off. We find that the gate error is nonmonotonic in n, decreasing as n^−α in the asymptotic limit, with α ≥ 2 depending on the DD sequence. In this limit πz-Uhrig is the most efficient scheme for quasistatic 1/f noise, but it is highly sensitive to the soft UV cutoff. For a small number of pulses, πz control yields anti-Zeno behavior, whereas πy pulses minimize the error for a finite n. For the current noise figures in superconducting qubits, two-qubit gate errors of ~10^−6, meeting the requirements for fault-tolerant quantum computation, can be achieved. The Carr-Purcell-Meiboom-Gill sequence is the most efficient procedure, stable for 1/f noise with UV cutoff up to gigahertz frequencies.
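
    The three pulse sequences compared in the abstract differ only in where the n pulses are placed within the gate time T. A sketch using the standard textbook timing formulas (periodic: t_j = jT/(n+1); Carr-Purcell/CPMG: t_j = (j − 1/2)T/n; Uhrig: t_j = T sin²(jπ/(2n+2))):

```python
import math

# Pulse times for n hard pulses over a gate of duration T, using the standard
# definitions of the three DD sequences named in the abstract.

def periodic(n, T):
    return [j * T / (n + 1) for j in range(1, n + 1)]

def cpmg(n, T):  # Carr-Purcell / CPMG spacing
    return [(j - 0.5) * T / n for j in range(1, n + 1)]

def uhrig(n, T):
    return [T * math.sin(math.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]
```

    For n = 1 all three reduce to a single echo pulse at T/2; for larger n the Uhrig times cluster toward the ends of the interval, a placement tailored to sharp spectral cutoffs, which is consistent with the abstract's finding that the Uhrig scheme is the most sensitive to the soft UV cutoff.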

  1. Microfabricated torsion levers optimized for low force and high-frequency operation in fluids.

    PubMed

    Beyder, Arthur; Sachs, Frederick

    2006-01-01

    We developed a mass production fabrication process for making symmetrically supported torsion cantilevers/oscillators with highly compliant springs. These torsion probes offer advantages in atomic force microscopy (AFM) because they are small, have high optical gain, do not warp and can be made with two independent axes. Compared to traditional AFM cantilevers, these probes have higher frequency response, higher Q, lower noise, better optics (since the mirror does not bend) and two data channels. Soft small levers with sub-pN force resolution can resonate cleanly above 10 kHz in water. When fabricated with a ferromagnetic coating on the rigid reflecting pad, they can be driven magnetically or serve as high-resolution magnetometers. Asymmetric levers can be tapping mode probes or high-resolution accelerometers. The dual axis gimbaled probes with two orthogonal axes can operate on a standard AFM with single beam illumination. These probes can be used as self-referencing, drift free, cantilevers where one axis senses the substrate position and the other the sample position. These levers can be optimized for differential contrast or high-resolution friction imaging.

  2. The application of the gradient-based adjoint multi-point optimization of single and double shock control bumps for transonic airfoils

    NASA Astrophysics Data System (ADS)

    Mazaheri, K.; Nejati, A.; Chaharlang Kiani, K.; Taheri, R.

    2016-07-01

    A shock control bump (SCB) is a flow control method that uses local small deformations in a flexible wing surface to considerably reduce the strength of shock waves and the resulting wave drag in transonic flows. Most of the reported research is devoted to optimization in a single flow condition. Here, we have used a multi-point adjoint optimization scheme to optimize the shape and location of the SCB. Practically, this introduces transonic airfoils equipped with SCBs that are simultaneously optimized for different off-design transonic flight conditions. Here, we use this optimization algorithm to enhance and optimize the performance of SCBs in two benchmark airfoils, i.e., RAE-2822 and NACA-64-A010, over a wide range of off-design Mach numbers. All results are compared with the usual single-point optimization. We use numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm to find the optimum location and shape of the SCB. We show that the application of SCBs may increase the aerodynamic performance by 21.9 % for the RAE-2822 airfoil and by 22.8 % for the NACA-64-A010 airfoil, compared to the no-bump design in a particular flight condition. We have also investigated the simultaneous usage of two bumps for the upper and the lower surfaces of the airfoil. This has resulted in a 26.1 % improvement for the RAE-2822 compared to the clean airfoil in one flight condition.

  3. Optimal Operation of Variable Speed Pumping System in China's Eastern Route Project of S-to-N Water Diversion Project

    NASA Astrophysics Data System (ADS)

    Cheng, Jilin; Zhang, Lihua; Zhang, Rentian; Gong, Yi; Zhu, Honggeng; Deng, Dongsheng; Feng, Xuesong; Qiu, Jinxian

    2010-06-01

    A dynamic planning model for optimizing the operation of a variable speed pumping system, aiming at minimum power consumption, was proposed to achieve economic operation. The No. 4 Jiangdu Pumping Station, a source pumping station in China's Eastern Route of the South-to-North Water Diversion Project, is taken as a study case. Since the sump water level of the Jiangdu Pumping Station is affected by the tide of the Yangtze River, the daily-average head of the pumping system varies over the year from 3.8 m to 7.8 m, and the tide level difference within one day can reach 1.2 m. Comparisons of operation electricity cost between optimized variable speed and fixed speed operation of the pumping system were made. When the full load operation mode is adopted, whether or not peak-valley electricity prices are considered, the benefits of variable speed operation cannot compensate for the energy consumption of the VFD. When the pumping system operates at part load and peak-valley electricity prices are considered, the pumping system should cease operation or lower its rotational speed in peak load hours, since electricity prices are much higher, and conversely should raise its rotational speed in valley load hours to pump more water. The computed results show that if the pumping system operates at 80% or 60% load, the energy consumption cost of a specified volume of water will be reduced by 14.01% and 26.69% on average by means of optimal variable speed operation, and the investment in the VFD will be paid back in 2 or 3 years. However, if the pumping system operates at 80% or 60% load and the energy cost is calculated at a non-peak-valley electricity price, the payback period lengthens to 18 years. In China's S-to-N Water Diversion Project, when market operation and peak-valley electricity prices are taken into effect to supply water and regulate water levels in regulation reservoirs such as Hongzehu Lake, Luomahu Lake, etc., the economic operation of water-diversion pumping stations
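
    The valley-first logic described above can be sketched as a simple scheduling heuristic: fill the cheapest price periods up to capacity until the daily volume target is met. All prices, capacities, and the energy figure below are invented for illustration and are not Jiangdu Pumping Station data.

```python
# Valley-first scheduling sketch: pump in the cheapest electricity-price
# periods first, subject to per-period capacity and a daily volume target.
# All numbers are illustrative, not Jiangdu Pumping Station data.

PRICE = [1.2, 1.2, 0.4, 0.4, 0.4, 0.8]   # price per kWh in each period
CAP = [100] * 6                          # max volume per period (10^3 m^3)
TARGET = 350                             # daily volume target (10^3 m^3)
ENERGY_PER_UNIT = 2.0                    # kWh per 10^3 m^3 pumped (assumed)

def schedule(price, cap, target):
    """Fill cheapest periods first; returns the volume pumped per period."""
    vol = [0] * len(price)
    for i in sorted(range(len(price)), key=price.__getitem__):
        vol[i] = min(cap[i], target)
        target -= vol[i]
        if target == 0:
            break
    return vol

vol = schedule(PRICE, CAP, TARGET)
cost = sum(v * ENERGY_PER_UNIT * p for v, p in zip(vol, PRICE))
# peak periods stay idle; valley periods run at full capacity
```

    A real variable-speed schedule would treat speed (and hence flow) as a continuous decision in each period and add head-dependent pump efficiency, which is what the dynamic optimization model in the abstract handles.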

  4. A Time Scheduling Model of Logistics Service Supply Chain Based on the Customer Order Decoupling Point: A Perspective from the Constant Service Operation Time

    PubMed Central

    Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, helps increase its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on the time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual time of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or be ahead of schedule but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by the increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative concern degree of the LSI on cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC. PMID:24715818

  5. A time scheduling model of logistics service supply chain based on the customer order decoupling point: a perspective from the constant service operation time.

    PubMed

    Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, helps increase its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on the time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual time of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or be ahead of schedule but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by the increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative concern degree of the LSI on cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC.

  7. Optimal operating rules definition in complex water resource systems combining fuzzy logic, expert criteria and stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2016-04-01

    This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows previewed during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to
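
    A minimal example of a fuzzy rule-based operating rule of the kind described, with one input (storage fraction), triangular memberships, and weighted-average defuzzification; the breakpoints and release volumes are illustrative, not Jucar Basin values.

```python
# One-input fuzzy rule-based release rule with triangular memberships and
# weighted-average defuzzification. Breakpoints and release volumes are
# illustrative, not Jucar Basin values.

def tri(x, a, b, c):
    """Triangular membership: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def release(storage_frac):
    # IF storage is LOW / MEDIUM / HIGH THEN release 20 / 60 / 100 hm^3
    rules = [
        (tri(storage_frac, -0.01, 0.0, 0.5), 20.0),   # LOW
        (tri(storage_frac, 0.0, 0.5, 1.0), 60.0),     # MEDIUM
        (tri(storage_frac, 0.5, 1.0, 1.01), 100.0),   # HIGH
    ]
    w = sum(mu for mu, _ in rules)
    return sum(mu * r for mu, r in rules) / w

half_full = release(0.5)     # fires only MEDIUM -> 60
mostly_full = release(0.75)  # blends MEDIUM and HIGH -> 80
```

    The stochastic optimization step in the abstract would then tune the rule consequents (the release volumes) rather than replace the interpretable rule structure elicited from the managers.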

  8. A game theory-reinforcement learning (GT-RL) method to develop optimal operation policies for multi-operator reservoir systems

    NASA Astrophysics Data System (ADS)

    Madani, Kaveh; Hooshyar, Milad

    2014-11-01

    Reservoir systems with multiple operators can benefit from coordination of operation policies. To maximize the total benefit of these systems the literature has normally used the social planner's approach. Based on this approach operation decisions are optimized using a multi-objective optimization model with a compound system's objective. While the utility of the system can be increased this way, fair allocation of benefits among the operators remains challenging for the social planner who has to assign controversial weights to the system's beneficiaries and their objectives. Cooperative game theory provides an alternative framework for fair and efficient allocation of the incremental benefits of cooperation. To determine the fair and efficient utility shares of the beneficiaries, cooperative game theory solution methods consider the gains of each party in the status quo (non-cooperation) as well as what can be gained through the grand coalition (social planner's solution or full cooperation) and partial coalitions. Nevertheless, estimation of the benefits of different coalitions can be challenging in complex multi-beneficiary systems. Reinforcement learning can be used to address this challenge and determine the gains of the beneficiaries for different levels of cooperation, i.e., non-cooperation, partial cooperation, and full cooperation, providing the essential input for allocation based on cooperative game theory. This paper develops a game theory-reinforcement learning (GT-RL) method for determining the optimal operation policies in multi-operator multi-reservoir systems with respect to fairness and efficiency criteria. As the first step to underline the utility of the GT-RL method in solving complex multi-agent multi-reservoir problems without a need for developing compound objectives and weight assignment, the proposed method is applied to a hypothetical three-agent three-reservoir system.
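
    The allocation step that cooperative game theory contributes can be illustrated with a Shapley-value computation over a toy characteristic function; in the GT-RL method the coalition benefits would be estimated by reinforcement learning, whereas the numbers below are made up.

```python
from itertools import permutations

# Shapley-value allocation of cooperative gains. In the GT-RL method the
# coalition values would come from reinforcement learning; here the
# characteristic function v is a made-up three-reservoir example.

v = {
    frozenset(): 0,
    frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,   # grand coalition (social planner's solution)
}

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        seen = set()
        for p in order:
            phi[p] += v[frozenset(seen | {p})] - v[frozenset(seen)]
            seen.add(p)
    return {p: x / len(orders) for p, x in phi.items()}

phi = shapley("ABC", v)   # fair, efficient shares of the 90-unit total
```

    The shares are efficient (they sum to the grand-coalition value) without any controversial objective weights, which is the advantage over the social planner's approach noted in the abstract.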

  9. Optimizing the order of operations for movement scrubbing: Comment on Power et al.

    PubMed

    Carp, Joshua

    2013-08-01

    A recent study by Power and colleagues shows that BOLD artifacts induced by head movement can substantially alter patterns of resting-state functional connectivity and proposes a novel procedure for reducing these artifacts by deleting (or "scrubbing") movement-contaminated volumes. The authors acknowledge that this work is descriptive and not prescriptive, and note that future studies may refine the proposed scrubbing method. Nevertheless, it is worth pointing out that this method can be improved substantially by a single transposition in the order of operations. Temporal filtering is known to introduce ringing artifacts that emanate from sharp transitions in signal intensity. The method proposed in the target article applies temporal filtering before deleting contaminated volumes, in effect spreading movement-related artifacts backwards and forwards in time while deleting only the originally contaminated data. Using simulated data, we show that deleting and replacing contaminated volumes before temporal filtering removes a greater proportion of artifactual signal while retaining a greater proportion of the original data.
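
    The order-of-operations point can be demonstrated with a toy signal: a smoothing filter (standing in here for temporal bandpass filtering) spreads a motion spike into neighboring volumes, so repairing the contaminated volume before filtering leaves less residual artifact than filtering first and deleting afterwards.

```python
# Toy demonstration: filtering before scrubbing spreads a motion spike into
# neighboring volumes, which survive deletion; repairing the volume first
# (here by neighbor interpolation) before filtering leaves no residual.

def moving_average(x, k=1):
    """Symmetric smoother, a stand-in for temporal (bandpass) filtering."""
    return [sum(x[max(0, i - k):i + k + 1]) / len(x[max(0, i - k):i + k + 1])
            for i in range(len(x))]

spiked = [0.0] * 9
bad = 4                    # index of the movement-contaminated volume
spiked[bad] = 10.0         # motion artifact on an otherwise flat signal

# Order A (target article): filter first, then delete the contaminated volume.
order_a = [v for i, v in enumerate(moving_average(spiked)) if i != bad]

# Order B (proposed): replace the contaminated volume, then filter, then delete.
repaired = list(spiked)
repaired[bad] = (spiked[bad - 1] + spiked[bad + 1]) / 2
order_b = [v for i, v in enumerate(moving_average(repaired)) if i != bad]

residual_a = sum(abs(v) for v in order_a)   # artifact leaked into neighbors
residual_b = sum(abs(v) for v in order_b)   # zero: artifact fully removed
```

    Real fMRI pipelines use bandpass filters with much longer impulse responses than this three-point smoother, so the leakage in order A extends correspondingly further in time.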

  10. Synergistic gains from the multi-objective optimal operation of cascade reservoirs in the Upper Yellow River basin

    NASA Astrophysics Data System (ADS)

    Bai, Tao; Chang, Jian-xia; Chang, Fi-John; Huang, Qiang; Wang, Yi-min; Chen, Guang-sheng

    2015-04-01

    The Yellow River, known as China's "mother river", originates from the Qinghai-Tibet Plateau and flows through nine provinces with a basin area of 0.75 million km2 and an annual runoff of 53.5 billion m3. In the last decades, a series of reservoirs have been constructed and operated along the Upper Yellow River for hydropower generation, flood and ice control, and water resources management. However, these reservoirs are managed by different institutions, and the gains owing to the joint operation of reservoirs are neither clear nor recognized, which prohibits the applicability of reservoir joint operation. To inspire the incentive of joint operation, the contribution of reservoirs to joint operation needs to be quantified. This study investigates the synergistic gains from the optimal joint operation of two pivotal reservoirs (i.e., Longyangxia and Liujiaxia) along the Upper Yellow River. Synergistic gains of optimal joint operation are analyzed based on three scenarios: (1) neither reservoir participates in flow regulation; (2) one reservoir (i.e., Liujiaxia) participates in flow regulation; and (3) both reservoirs participate in flow regulation. We develop a multi-objective optimal operation model of cascade reservoirs by implementing the Progressive Optimality Algorithm-Dynamic Programming Successive Approximation (POA-DPSA) method for estimating the gains of reservoirs based on long series data (1987-2010). The results demonstrate that the optimal joint operation of both reservoirs can increase the amount of hydropower generation to 1.307 billion kW h/year (about 594 million USD) and increase the amount of water supply to 36.57 billion m3/year (about 15% improvement). Furthermore, both pivotal reservoirs play an essential role in ensuring the safety of downstream regions for ice and flood management, and in significantly increasing the minimum flow in the Upper Yellow River during dry periods. Therefore, the synergistic gains of both reservoirs can be

  11. Optimization and risk analyses for rule curves of reservoir operation: application to Tien-Hua-Hu Reservoir in Taiwan.

    PubMed

    Kuo, J T; Hsu, N S; Chiu, S K

    2006-01-01

    Tien-Hua-Hu Reservoir is currently under planning by the Water Resources Agency, Taiwan to meet the increasing water demands of central Taiwan arising from rapid growth of domestic water supply and high-tech industrial parks. This study develops a simulation model for the ten-day period reservoir operation to calculate the ten-day water shortage index under varying rule curves. A genetic algorithm is coupled to the simulation model to find the optimal rule curves, using the minimum ten-day water shortage index as an objective function. This study generates many sets of synthetic streamflows for risk, reliability, resiliency, and vulnerability analyses of reservoir operation. ARMA and disaggregation models are developed and applied to the synthetic streamflow generation. The optimal rule curves obtained from this study perform better in the ten-day shortage index when compared to the originally designed rule curves from a previous study. The optimal rule curves are also superior to the originally designed rule curves in terms of vulnerability. However, in terms of reliability and resiliency, the optimal rule curves are inferior to those originally designed. Results from this study have provided in general a set of improved rule curves for operation of the Tien-Hua-Hu Reservoir. Furthermore, results from reliability, resiliency and vulnerability analyses offer much useful information for decision making in reservoir operation.
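
    The abstract does not specify which variant of the shortage index it minimizes; one common definition is the U.S. Army Corps of Engineers form SI = (100/N)·Σ(shortage_i/demand_i)², sketched below with made-up ten-day supply and demand figures.

```python
# A common shortage-index definition (the U.S. Army Corps of Engineers form);
# the abstract does not specify the exact variant used, and the supply/demand
# series below are made-up ten-day-period figures.

def shortage_index(supply, demand):
    """SI = (100/N) * sum((shortage_i / demand_i)^2) over N periods."""
    n = len(demand)
    return (100.0 / n) * sum(
        (max(d - s, 0.0) / d) ** 2 for s, d in zip(supply, demand)
    )

si = shortage_index([90, 100, 60, 100], [100, 100, 100, 100])
# squaring penalizes one large shortage more than several small ones
```

    A GA searching over rule-curve parameters would evaluate this index via the simulation model for each candidate and keep the curves with the lowest value, which is exactly the coupling described in the abstract.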

  12. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    PubMed Central

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and clear visualization of the surgical anatomy. Slight image distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  13. Pressure-independent point in current-voltage characteristics of coplanar electrode microplasma devices operated in neon

    SciTech Connect

    Meng Lingguo; Lin Zhaojun; Xing Jianping; Liang Zhihu; Liu Chunliang

    2010-05-10

    We introduce the idea of a pressure-independent point (PIP) in a group of current-voltage curves for the coplanar electrode microplasma device (CEMPD) at neon pressures ranging from 15 to 95 kPa. We studied four samples of CEMPDs with different sizes of the microcavity and observed the PIP phenomenon for each sample. The PIP voltage depends on the area of the microcavity and is independent of the height of the microcavity. The PIP discharge current, I_PIP, is proportional to the volume (Vol) of the microcavity and can be expressed by the formula I_PIP = I_PIP0 + D × Vol. For our samples, I_PIP0 (the discharge current when Vol is zero) is about zero and D (discharge current density) is about 3.95 mA/mm³. The error in D is 0.411 mA/mm³ (less than 11% of D). When the CEMPD operates at V_PIP, the discharge current is quite stable under different neon pressures.
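The reported linear fit can be applied directly. I_PIP0, D, and the error in D are taken from the abstract; the cavity volume below is an illustrative value:

```python
# Worked example of the abstract's linear fit I_PIP = I_PIP0 + D * Vol
# (coefficients from the abstract; the microcavity volume is hypothetical).
I_PIP0 = 0.0          # mA, intercept reported as "about zero"
D = 3.95              # mA/mm^3, discharge current density
D_err = 0.411         # mA/mm^3, reported uncertainty in D

def pip_current(vol_mm3):
    """Predicted pressure-independent-point current for a cavity volume."""
    return I_PIP0 + D * vol_mm3

vol = 2.0             # mm^3 (illustrative)
print(pip_current(vol))            # 7.9 mA
print(D_err / D)                   # relative error, below the quoted 11%
```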

  14. Do Woody Plants Operate Near the Point of Catastrophic Xylem Dysfunction Caused by Dynamic Water Stress? 1

    PubMed Central

    Tyree, Melvin T.; Sperry, John S.

    1988-01-01

    We discuss the relationship between the dynamically changing tension gradients required to move water rapidly through the xylem conduits of plants and the proportion of conduits lost through embolism as a result of water tension. We consider the implications of this relationship for the water relations of trees. We have compiled quantitative data on the water relations, hydraulic architecture and vulnerability to embolism of four widely different species: Rhizophora mangle, Cassipourea elliptica, Acer saccharum, and Thuja occidentalis. Using these data, we modeled the dynamics of water flow and xylem blockage for these species. The model is specifically focused on the conditions required to generate `runaway embolism,' whereby the blockage of xylem conduits through embolism leads to reduced hydraulic conductance causing increased tension in the remaining vessels and generating more tension in a vicious circle. The model predicted that all species operate near the point of catastrophic xylem failure due to dynamic water stress. The model supports Zimmermann's plant segmentation hypothesis. Zimmermann suggested that plants are designed hydraulically to sacrifice highly vulnerable minor branches and thus improve the water balance of remaining parts. The model results are discussed in terms of the morphology, hydraulic architecture, eco-physiology, and evolution of woody plants. PMID:16666351
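The "vicious circle" can be sketched as a fixed-point iteration between tension and remaining conductance. The logistic vulnerability curve and every parameter value here are hypothetical illustrations, not the paper's data:

```python
import math

# Toy positive-feedback model of 'runaway embolism' (illustrative only;
# the vulnerability curve and numbers are invented, not the paper's data).
def conductance(tension, k_max=1.0, t50=2.0, slope=3.0):
    """Fraction of xylem conductance remaining at a given tension (MPa)."""
    return k_max / (1.0 + math.exp(slope * (tension - t50)))

def equilibrium_tension(flow, t0=0.1, iters=200):
    """Iterate T = flow / K(T); return None if tension runs away."""
    t = t0
    for _ in range(iters):
        t = flow / conductance(t)
        if t > 20.0:              # conductance collapsed: catastrophic failure
            return None
    return t

print(equilibrium_tension(0.5))   # converges: sustainable transpiration
print(equilibrium_tension(1.5))   # None: runaway embolism
```

Below a critical flow demand the iteration settles to a stable tension; above it, each embolism step raises tension in the surviving conduits and the loop diverges, which is the catastrophe the model probes.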

  15. Using military friendships to optimize postdeployment reintegration for male Operation Iraqi Freedom/Operation Enduring Freedom veterans.

    PubMed

    Hinojosa, Ramon; Hinojosa, Melanie Sberna

    2011-01-01

    Social relationships are important to health outcomes. The postdeployment family reintegration literature focuses on the role of the civilian family in facilitating the transition from Active Duty military deployment to civilian society. The focus on the civilian family relationship may miss other important personal connections in veterans' lives. One such connection is the relationship many veterans have with former military unit members who served with them when deployed. Drawing on interviews with male Operation Iraqi Freedom/Operation Enduring Freedom veterans conducted from 2008 to 2009, we argue that the members of a military unit, especially during armed conflict, should be considered a resource to help the "family" reintegration process rather than impede it. This research has implications for current reintegration policy and how best to assist veterans transitioning into civilian society.

  16. Optimizing the operation of a high resolution vertical Johann spectrometer using a high energy fluorescer x-ray source

    SciTech Connect

    Haugh, Michael; Stewart, Richard

    2010-10-15

    This paper describes the operation and testing for a vertical Johann spectrometer (VJS) operating in the 13 keV range. The spectrometer is designed to use thin curved mica crystals or thick germanium crystals. The VJS must have a resolution of E/ΔE = 3000 or better to measure the Doppler broadening of highly ionized krypton and operate at a small x-ray angle in order to be used as a diagnostic in a laser plasma target chamber. The VJS was aligned, tested, and optimized using a fluorescer type high energy x-ray (HEX) source located at National Security Technologies (NSTec), LLC, in Livermore, CA. The HEX uses a 160 kV x-ray tube to excite fluorescence from various targets. Both rubidium and bismuth fluorescers were used for this effort. This presentation describes the NSTec HEX system and the methods used to optimize and characterize the VJS performance.

  17. Optimizing the Operation of a Vertical Johann Spectrometer Using a High Energy Fluorescer X-ray Source

    SciTech Connect

    Haugh, Michael; Stewart, Richard

    2010-10-01

    This paper describes the operation and testing for a Vertical Johann Spectrometer (VJS) operating in the 13 keV range. The spectrometer is designed to use thin curved mica crystals or thick germanium crystals. The VJS must have a resolution E/ΔE=3000 or better to measure Doppler broadening of highly ionized krypton and operate at a small X-ray angle in order to be used as a diagnostic in a laser plasma target chamber. The VJS was aligned, tested, and optimized using a fluorescer type high energy X-ray (HEX) source located at National Security Technologies, LLC (NSTec), in Livermore, California. The HEX uses a 160 kV X-ray tube to excite fluorescence from various targets. Both rubidium and bismuth fluorescers were used for this effort. This presentation describes the NSTec HEX system and the methods used to optimize and characterize the VJS performance.

  18. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  19. 76 FR 32994 - Nine Mile Point 3 Nuclear Project, LLC and Unistar Nuclear Operating Services, LLC; Combined...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-07

    ... License Application for Nine Mile Point 3 Nuclear Power Plant; Exemption 1.0 Background Nine Mile Point 3... 52, ``Licenses, Certifications, and Approvals for Nuclear Power Plants.'' This reactor is to be identified as Nine Mile Point 3 Nuclear Power Plant (NMP3NPP), and located adjacent to the current Nine...

  20. Minimizing the health and climate impacts of emissions from heavy-duty public transportation bus fleets through operational optimization.

    PubMed

    Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J

    2013-04-16

    In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and the exposure potential of bus routes can be exploited through optimization (e.g., how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from worst to best case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone, risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
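The core idea, exploiting heterogeneity by optimizing the vehicle-to-route assignment, can be sketched with a toy brute-force search. The emission factors and exposure weights below are invented for illustration, not the study's data:

```python
from itertools import permutations

# Toy vehicle-assignment optimization (illustrative only). Each bus
# technology has an emission factor, each route an exposure potential;
# health impact of one pairing = emission factor * route exposure.
emissions = {"old_diesel": 10.0, "new_diesel": 4.0, "hybrid": 2.0}   # g/km
exposure = {"downtown": 5.0, "suburban": 2.0, "highway": 1.0}        # intake proxy

def total_impact(assignment):
    return sum(emissions[bus] * exposure[route] for bus, route in assignment)

buses, routes = list(emissions), list(exposure)
best = min((list(zip(buses, p)) for p in permutations(routes)), key=total_impact)
worst = max((list(zip(buses, p)) for p in permutations(routes)), key=total_impact)

print(best, total_impact(best))
print(total_impact(worst) / total_impact(best))  # worst/best impact ratio
```

Even this three-bus toy reproduces the abstract's qualitative finding: the worst assignment more than doubles the impact of the best, because the cleanest vehicles should serve the highest-exposure routes.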

  1. On the optimization of operating pressure for a nuclear pumped laser excited by 3He(n, p) 3H reaction products

    NASA Astrophysics Data System (ADS)

    Çetin, Füsun

    2006-09-01

    In the nuclear pumped laser, passage of the energetic nuclear fragments through gas causes a non-uniform energy deposition. This spatial non-uniformity induces gas motion, which results in density hence, refractive index gradients. Since the refractive index gradient of the gas determines the degree of beam refraction as it propagates through the cavity, refractive index gradient adversely affects the resonator stability and beam quality. Therefore, optimal gas parameters should improve optical homogeneity in addition to output power. Refractive index gradient are here considered to be a measure of optical inhomogeneity and its variations with tube parameter are examined to ensure the necessary optical quality of the supplied gas. Spatial and temporal variations of normalized refractive index gradients in the 3He gas excited by 3He(n, p) 3H reactions are calculated by using the density field obtained from the previously reported dynamic model for energy deposition for various operating pressures and tube radii. Additionally, variation of power deposition per pulse with the operating pressure and variation of average power deposition density with tube diameter are calculated and used in determining optimal parameters, as a measure for improving the output power. The optimal operating pressure and tube size, from the point of view of power deposition and optical homogeneity, are determined for the present conditions. Calculated results are obtained for a closed 3He-filled cylindrical laser tube, with a maximum thermal neutron flux of 8 × 10 16 n/cm 2 sn, by using characteristics of the TRIGA Mark II Reactor at Istanbul Technical University (ITU).

  2. Business-objective-directed, constraint-based multivariate optimization of high-performance liquid chromatography operational parameters.

    PubMed

    Chester, T L

    2003-10-24

    The goal of a separation can be defined in terms of business needs. One goal often used is to provide the required separation in minimum time, but many other goals are also possible. These include maximizing resolution within an analysis-time limit, or minimizing the overall cost. The remaining requirements of the separation can be applied as constraints in the optimization of the goal. We will present a flexible, business-objective-based approach for optimizing the operational parameters of high performance liquid chromatography (HPLC) methods. After selecting the stationary phase and the mobile-phase components, several isocratic experiments are required to build a retention model. Multivariate optimization is performed, within the model, to find the best combination of the parameters being varied so that the result satisfies the goal to the fullest extent possible within the constraints. Interdependencies of parameters can be revealed by plotting the loci of optimal variable values or the function being optimized against a constraint. We demonstrate the concepts with a model separation originally requiring a 54 min analysis time. Multivariate optimization reduces the predicted analysis time to as short as 8 min, depending on the goals and constraints specified. PMID:14601838
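A minimal constraint-based optimization of this kind can be sketched as a grid search over two operational parameters, minimizing analysis time subject to a resolution constraint. The retention and resolution models below are hypothetical toys, not the paper's retention model:

```python
import math

# Toy constraint-based optimization of HPLC conditions (illustrative; the
# retention and resolution models below are invented, not the paper's).
def analysis_time(phi, flow):
    """Retention of the last peak: falls with organic fraction and flow rate."""
    return 54.0 * math.exp(-4.0 * (phi - 0.3)) / flow

def resolution(phi, flow):
    """Critical-pair resolution: degrades at high phi and high flow."""
    return 3.5 - 2.0 * phi - 0.4 * flow

REQUIRED_RS = 1.5   # business constraint: baseline separation required

# Grid over organic fraction 0.30-0.60 and flow rate 1.0-2.0 mL/min.
candidates = [(p / 100, f / 10) for p in range(30, 61) for f in range(10, 21)]
feasible = [(analysis_time(p, f), p, f)
            for p, f in candidates
            if resolution(p, f) >= REQUIRED_RS - 1e-9]   # tolerance for rounding
t, phi, flow = min(feasible)
print(round(t, 1), phi, flow)
```

The optimizer pushes both parameters to the constraint boundary, cutting the toy 54 min separation to roughly 8 min while keeping Rs at the required 1.5, which mirrors the trade-off the abstract describes.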

  3. Extension of the hybrid linear programming method to optimize simultaneously the design and operation of groundwater utilization systems

    NASA Astrophysics Data System (ADS)

    Bostan, Mohamad; Hadi Afshar, Mohamad; Khadem, Majed

    2015-04-01

    This article proposes a hybrid linear programming (LP-LP) methodology for the simultaneous optimal design and operation of groundwater utilization systems. The proposed model is an extension of an earlier LP-LP model proposed by the authors for the optimal operation of a set of existing wells. The proposed model can be used to optimally determine the number, configuration and pumping rates of the operational wells out of potential wells with fixed locations to minimize the total cost of utilizing a two-dimensional confined aquifer under steady-state flow conditions. The model is able to take into account the well installation, piping and pump installation costs in addition to the operational costs, including the cost of energy and maintenance. The solution to the problem is defined by well locations and their pumping rates, minimizing the total cost while satisfying a downstream demand, lower/upper bound on the pumping rates, and lower/upper bound on the water level drawdown at the wells. A discretized version of the differential equation governing the flow is first embedded into the model formulation as a set of additional constraints. The resulting mixed-integer highly constrained nonlinear optimization problem is then decomposed into two subproblems with different sets of decision variables, one with a piezometric head and the other with the operational well locations and the corresponding pumping rates. The binary variables representing the well locations are approximated by a continuous variable leading to two LP subproblems. Having started with a random value for all decision variables, the two subproblems are solved iteratively until convergence is achieved. The performance and ability of the proposed method are tested against a hypothetical problem from the literature and the results are presented and compared with those obtained using a mixed-integer nonlinear programming method. 
The results show the efficiency and effectiveness of the proposed method for the simultaneous optimal design and operation of groundwater utilization systems.
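The iterate-until-convergence structure of the LP-LP decomposition can be illustrated with a deliberately small stand-in: a separable quadratic replaces the two LP subproblems, each solved in closed form with the other block of variables held fixed. Everything here is hypothetical, not the paper's formulation:

```python
# Toy sketch of alternating two-subproblem iteration (illustrative only:
# a quadratic stands in for the paper's LP subproblems, with x playing the
# role of the piezometric heads and y the well/pumping-rate variables).
def f(x, y):
    return (x - 1.0) ** 2 + (x - y) ** 2 + y * y

def solve_sub_x(y):
    """Minimize f over x with y fixed (closed form for the toy objective)."""
    return (1.0 + y) / 2.0

def solve_sub_y(x):
    """Minimize f over y with x fixed."""
    return x / 2.0

x, y = 5.0, -3.0          # random starting values, as in the paper
for _ in range(100):      # solve the two subproblems iteratively
    x = solve_sub_x(y)
    y = solve_sub_y(x)
print(x, y, f(x, y))      # converges to the joint minimizer (2/3, 1/3)
```

As in the paper, each subproblem is easy once the other variable block is frozen, and the alternation converges to a fixed point of the coupled system.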

  4. Optimal Trajectories and Control Strategies for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Zhao, Yi-Yuan; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    Engine failure represents a major safety concern to helicopter operations, especially in the critical flight phases of takeoff and landing from/to small, confined areas. As a result, the JAA and FAA both certificate a transport helicopter as either Category-A or Category-B according to the ability to continue its operations following engine failures. A Category-B helicopter must be able to land safely in the event of one or all engine failures. There is no requirement, however, for continued flight capability. In contrast, Category-A certification, which applies to multi-engine transport helicopters with independent engine systems, requires that they continue the flight with one engine inoperative (OEI). These stringent requirements, while permitting its operations from rooftops and oil rigs and flight to areas where no emergency landing sites are available, restrict the payload of a Category-A transport helicopter to a value safe for continued flight as well as for landing with one engine inoperative. The current certification process involves extensive flight tests, which are potentially dangerous, costly, and time consuming. These tests require the pilot to simulate engine failures at increasingly critical conditions. Flight manuals based on these tests tend to provide very conservative recommendations with regard to maximum takeoff weight or required runway length. There are very few theoretical studies on this subject to identify the fundamental parameters and tradeoff factors involved. Furthermore, a capability for real-time generation of OEI optimal trajectories is very desirable for providing timely cockpit display guidance to assist the pilot in reducing his workload and to increase safety in a consistent and reliable manner. 
A joint research program involving NASA Ames Research Center, the FAA, and the University of Minnesota is being conducted to determine OEI optimal control strategies and the associated optimal trajectories for continued takeoff (CTO).

  5. A new optimization algorithm based on a combination of particle swarm optimization, convergence and divergence operators for single-objective and multi-objective problems

    NASA Astrophysics Data System (ADS)

    Mahmoodabadi, M. J.; Bagheri, A.; Nariman-zadeh, N.; Jamali, A.

    2012-10-01

    Particle swarm optimization (PSO) is a randomized, population-based optimization method that was inspired by the flocking behaviour of birds and human social interactions. In this work, multi-objective PSO is modified in two stages. In the first stage, PSO is combined with convergence and divergence operators; this method is named CDPSO. In the second stage, to produce a set of Pareto optimal solutions with good convergence, diversity and distribution, two mechanisms are used. In the first mechanism, a new leader selection method is defined, which uses the periodic iteration and the concept of the particle's neighbour number; this method is named the periodic multi-objective algorithm. In the second mechanism, an adaptive elimination method is employed to limit the number of non-dominated solutions in the archive, which influences the computational time, convergence and diversity of the solutions. Single-objective results show that CDPSO performs very well on the complex test functions in terms of solution accuracy and convergence speed. Furthermore, some benchmark functions are used to evaluate the performance of periodic multi-objective CDPSO. This analysis demonstrates that the proposed algorithm performs better on three metrics in comparison with three well-known elitist multi-objective evolutionary algorithms. Finally, the algorithm is used for the Pareto optimal design of a two-degree-of-freedom vehicle vibration model. The conflicting objective functions are the sprung mass acceleration and the relative displacement between the sprung mass and the tyre. The feasibility and efficiency of periodic multi-objective CDPSO are assessed in comparison with the multi-objective modified NSGAII.
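A minimal single-objective PSO, the baseline that CDPSO modifies, can be sketched as follows. The convergence/divergence operators, leader selection, and Pareto archiving of the paper are not reproduced; parameter values are conventional defaults, not the paper's:

```python
import random

random.seed(0)

# Minimal single-objective PSO sketch (illustrative baseline only; the
# paper's CDPSO adds convergence/divergence operators and an archive).
def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal best positions
    gbest = min(pbest, key=f)[:]                # global best (the leader)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(best, sphere(best))
```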

  6. Simulation and Optimization Methods for Assessing the Impact of Aviation Operations on the Environment

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Chen, Neil; Ng, Hok K.

    2010-01-01

    There is increased awareness of anthropogenic factors affecting climate change and urgency to slow the negative impact. Greenhouse gases, oxides of nitrogen, and contrails resulting from aviation affect the climate in different and uncertain ways. This paper develops a flexible simulation and optimization software architecture to study the trade-offs involved in reducing emissions. The software environment is used to conduct analysis of two approaches for avoiding contrails using the concepts of contrail frequency index and optimal avoidance trajectories.

  7. Trajectory design to the L4 and L5 libration points in the Earth-Moon system using lunar gravity assistance and orbit optimization

    NASA Astrophysics Data System (ADS)

    Zhang, ZhengTao; Tang, Jingshi; Liu, Lin

    The stable libration points L4 and L5 of the Earth-Moon system have application prospects in deep space exploration, such as VLBI. The transfer strategy is from LEO to the L4 or L5 libration point with lunar gravity assistance, which saves energy compared to the traditional Hohmann transfer strategy. A high-order analytical solution for the periodic orbit around the L4 libration point is applied to express the target orbit. Then, by changing the velocity of a given point on the target orbit and integrating backward, the probe reaches the perilune, patched by a Hohmann transfer orbit from LEO with a different velocity. By utilizing the global optimization method PSO and the local SQP method, we optimize the transfer orbit. This powered lunar gravity assistance method is applied to the transfer from L2 to the L4 and L5 libration points with invariant manifolds, which solves the problem that the unstable manifold of L2 cannot reach L4 and L5.

  8. The OPALS Plan for Operations: Use of ISS Trajectory and Attitude Models in the OPALS Pointing Strategy

    NASA Technical Reports Server (NTRS)

    Abrahamson, Matthew J.; Oaida, Bogdan; Erkmen, Baris

    2013-01-01

    This paper will discuss the OPALS pointing strategy, focusing on incorporation of ISS trajectory and attitude models to build pointing predictions. Methods to extrapolate an ISS prediction based on past data will be discussed and will be compared to periodically published ISS predictions and Two-Line Element (TLE) predictions. The prediction performance will also be measured against GPS states available in telemetry. The performance of the pointing products will be compared to the allocated values in the OPALS pointing budget to assess compliance with requirements.

  9. Optimization of operation of a three-electrode gyrotron with the use of a flow-type calorimeter

    SciTech Connect

    Kharchev, Nikolay K.; Batanov, German M.; Kolik, Leonid V.; Malakhov, Dmitrii V.; Petrov, Aleksandr Ye.; Sarksyan, Karen A.; Skvortsova, Nina N.; Stepakhin, Vladimir D.; Belousov, Vladimir I.; Malygin, Sergei A.; Tai, Yevgenii M.

    2013-01-15

    Results are presented for measurements of microwave power of the Borets-75/0.8 gyrotron with recovery of residual electron energy, which were performed by a flow-type calorimeter. This gyrotron is a part of the ECR plasma heating complex put into operation in 2010 at the L-2M stellarator. The new calorimeter is capable of measuring microwave power up to 0.5 MW. Monitoring of the microwave power makes it possible to control the parameters of the gyrotron power supply unit (its voltage and current) and the magnetic field of the cryomagnet in order to optimize the gyrotron operation and arrive at maximum efficiency.

  10. Optimization of operation of a three-electrode gyrotron with the use of a flow-type calorimeter

    NASA Astrophysics Data System (ADS)

    Kharchev, Nikolay K.; Batanov, German M.; Kolik, Leonid V.; Malakhov, Dmitrii V.; Petrov, Aleksandr Ye.; Sarksyan, Karen A.; Skvortsova, Nina N.; Stepakhin, Vladimir D.; Belousov, Vladimir I.; Malygin, Sergei A.; Tai, Yevgenii M.

    2013-01-01

    Results are presented for measurements of microwave power of the Borets-75/0.8 gyrotron with recovery of residual electron energy, which were performed by a flow-type calorimeter. This gyrotron is a part of the ECR plasma heating complex put into operation in 2010 at the L-2M stellarator. The new calorimeter is capable of measuring microwave power up to 0.5 MW. Monitoring of the microwave power makes it possible to control the parameters of the gyrotron power supply unit (its voltage and current) and the magnetic field of the cryomagnet in order to optimize the gyrotron operation and arrive at maximum efficiency.

  11. Optimization of operation of a three-electrode gyrotron with the use of a flow-type calorimeter.

    PubMed

    Kharchev, Nikolay K; Batanov, German M; Kolik, Leonid V; Malakhov, Dmitrii V; Petrov, Aleksandr Ye; Sarksyan, Karen A; Skvortsova, Nina N; Stepakhin, Vladimir D; Belousov, Vladimir I; Malygin, Sergei A; Tai, Yevgenii M

    2013-01-01

    Results are presented for measurements of microwave power of the Borets-75/0.8 gyrotron with recovery of residual electron energy, which were performed by a flow-type calorimeter. This gyrotron is a part of the ECR plasma heating complex put into operation in 2010 at the L-2M stellarator. The new calorimeter is capable of measuring microwave power up to 0.5 MW. Monitoring of the microwave power makes it possible to control the parameters of the gyrotron power supply unit (its voltage and current) and the magnetic field of the cryomagnet in order to optimize the gyrotron operation and arrive at maximum efficiency.

  12. Optimizing stability, transport, and divertor operation through plasma shaping for steady-state scenario development in DIII-D

    SciTech Connect

    Holcomb, C T; Ferron, J R; Luce, T C; Petrie, T W; Politzer, P A; Rhodes, T L; Doyle, E J; Makowski, M A; Kessel, C; DeBoo, J C; Groebner, R J; Osborne, T H; Snyder, P B; Greenfield, C M; La Haye, R J; Murakami, M; Hyatt, A W; Challis, C; Prater, R; Jackson, G L; Park, J; Reimerdes, H; Turnbull, A D; McKee, G R; Shafer, M W; Groth, M; Porter, G D; West, W P

    2008-12-19

    Recent studies on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] have elucidated key aspects of the dependence of stability, confinement, and density control on the plasma magnetic configuration, leading to the demonstration of nearly noninductive operation for >1 s with pressure 30% above the ideal no-wall stability limit. Achieving fully noninductive tokamak operation requires high pressure, good confinement, and density control through divertor pumping. Plasma geometry affects all of these. Ideal magnetohydrodynamics modeling of external kink stability suggests that it may be optimized by adjusting the shape parameter known as squareness (ζ). Optimizing kink stability leads to an increase in the maximum stable pressure. Experiments confirm that stability varies strongly with ζ, in agreement with the modeling. Optimization of kink stability via ζ is concurrent with an increase in the H-mode edge pressure pedestal stability. Global energy confinement is optimized at the lowest ζ tested, with increased pedestal pressure and lower core transport. Adjusting the magnetic divertor balance about a double-null configuration optimizes density control for improved noninductive auxiliary current drive. The best density control is obtained with a slight imbalance toward the divertor opposite the ion grad(B) drift direction, consistent with modeling of these effects. These optimizations have been combined to achieve noninductive current fractions near unity for over 1 s with normalized pressure of 3.5 < β_N < 3.9, bootstrap current fraction of >65%, and a normalized confinement factor of H_98(y,2) ≈ 1.5.

  13. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called the repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to keep the size of the repository within the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromise solution among the non-dominated optimal solutions of the multiobjective optimization problem. In order to demonstrate the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
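The non-dominated (Pareto) repository at the heart of such multiobjective methods can be sketched in a few lines. The objective values below are invented for illustration; only the dominance logic is standard:

```python
# Sketch of the non-dominated (Pareto) repository used by multi-objective
# optimizers such as the MHBMO above (illustrative; data are made up).
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repo, candidate):
    if any(dominates(r, candidate) for r in repo):
        return repo                              # candidate is dominated: discard
    repo = [r for r in repo if not dominates(candidate, r)]
    return repo + [candidate]                    # keep candidate, drop dominated

# Objectives: (power losses, voltage deviation) -- both minimized.
solutions = [(3.0, 0.5), (2.0, 0.9), (2.5, 0.4), (4.0, 1.0), (1.8, 1.2)]
repo = []
for s in solutions:
    repo = update_repository(repo, s)
print(sorted(repo))
```

The repository retains only mutually non-dominated points; in the paper this archive is then pruned by fuzzy clustering and queried by the fuzzy decision maker.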

  14. Operation costs and pollutant emissions reduction by definition of new collection scheduling and optimization of MSW collection routes using GIS. The case study of Barreiro, Portugal.

    PubMed

    Zsigraiova, Zdena; Semiao, Viriato; Beijoco, Filipa

    2013-04-01

    This work proposes an innovative methodology for the reduction of the operation costs and pollutant emissions involved in waste collection and transportation. Its innovative feature lies in combining vehicle route optimization with that of waste collection scheduling. The latter uses historical data on the filling rate of each container individually to establish the daily circuits of collection points to be visited, which is more realistic than the usual assumption of a single average fill-up rate common to all the system containers. Moreover, this allows for planning the collection scheduling ahead, which permits better system management. The optimization process of the routes to be travelled makes use of Geographical Information Systems (GISs) and uses interchangeably two optimization criteria: total spent time and travelled distance. Furthermore, rather than using average values, the relevant parameters influencing fuel consumption and pollutant emissions, such as vehicle speed on different roads and loading weight, are taken into consideration. The established methodology is applied to the glass-waste collection and transportation system of Amarsul S.A., in Barreiro. Moreover, to isolate the influence of the dynamic load on fuel consumption and pollutant emissions, a sensitivity analysis of the vehicle loading process is performed. For that, two hypothetical scenarios are tested: one with the collected volume increasing exponentially along the collection path; the other assuming that the collected volume decreases exponentially along the same path. The results evidence unquestionable beneficial impacts of the optimization on both the operation costs (labor, vehicle maintenance, and fuel consumption) and pollutant emissions, regardless of the optimization criterion used. Nonetheless, such impact is particularly relevant when optimizing for time, yielding substantial improvements to the existing system: potential reductions of 62% for the total
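The two interchangeable optimization criteria (total spent time versus travelled distance) can be illustrated on a toy street graph with a standard shortest-path search. The network and edge weights below are hypothetical, not Amarsul's GIS data:

```python
import heapq

# Toy two-criteria route optimization (illustrative): the same street graph
# is solved once for minimum travel time and once for minimum distance.
# Edges carry (minutes, km); node names are hypothetical.
GRAPH = {
    "depot": [("A", 4, 2.0), ("B", 2, 3.0)],
    "A": [("glass_bank", 5, 2.5)],
    "B": [("glass_bank", 10, 1.0)],
    "glass_bank": [],
}

def shortest(graph, src, dst, weight):   # weight: 0 -> minutes, 1 -> km
    """Dijkstra's algorithm under the chosen edge-weight criterion."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, km in graph[node]:
            heapq.heappush(heap, (cost + (minutes, km)[weight], nxt, path + [nxt]))
    return None

print(shortest(GRAPH, "depot", "glass_bank", 0))   # fastest route (via A)
print(shortest(GRAPH, "depot", "glass_bank", 1))   # shortest route (via B)
```

The two criteria pick different routes on the same network, which is why the study reports different (though always beneficial) impacts depending on whether time or distance is optimized.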

  15. Optimal detector locations for HOV Lane Operations. Interim report, September 1993-August 1994

    SciTech Connect

    Woods, D.L.

    1994-12-01

    Operating a high occupancy vehicle (HOV) lane within a relatively narrow roadway has the potential for a total blockage of the roadway when an incident occurs. This fact places special requirements on the information system for operation of HOV facilities. The report combines the findings of other phases of this research with the special requirements of HOV facilities and recommends detector placement that will effectively meet HOV lane operational needs.

  16. Self-Organizing Hierarchical Particle Swarm Optimization with Time-Varying Acceleration Coefficients for Economic Dispatch with Valve Point Effects and Multifuel Options

    NASA Astrophysics Data System (ADS)

    Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc

    2011-06-01

    This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth functions including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimization approach that performs well on ED problems. It mitigates premature convergence by re-initializing particle velocities whenever particles stagnate in the search space. TVAC is incorporated to properly control both the local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions, and the obtained results are compared to those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC is effective in finding higher-quality solutions for non-smooth ED problems than many other methods.
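    The core of the approach described above can be sketched in a few lines. The following is a minimal, illustrative PSO with time-varying acceleration coefficients and velocity re-initialization on stagnation, minimizing a simple sphere function; the coefficient schedule (2.5 to 0.5 and 0.5 to 2.5) and the re-initialization rule are common choices from the TVAC literature, not the paper's exact parameters.

```python
import random

def spso_tvac(f, dim, bounds, n_particles=20, iters=200, seed=1):
    """Minimal PSO sketch: no inertia term (as in self-organizing
    hierarchical PSO), time-varying acceleration coefficients, and
    velocity re-initialization when a particle stagnates."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        c1 = 2.5 - 2.0 * t / iters   # cognitive coefficient decays
        c2 = 0.5 + 2.0 * t / iters   # social coefficient grows
        kick = 0.1 * (hi - lo) * (1.0 - t / iters)  # stagnation restart scale
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                if abs(vel[i][d]) < 1e-12:  # stagnated: re-initialize velocity
                    vel[i][d] = rng.uniform(-1.0, 1.0) * kick
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = spso_tvac(sphere, dim=3, bounds=(-5.0, 5.0))
```

    Dropping the inertia term and re-seeding stagnant velocities are the two "self-organizing" ingredients; the decaying kick keeps late iterations from being disrupted by large restarts.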

  17. Properties and Cycle Performance of Refrigerant Blends Operating Near and Above the Refrigerant Critical Point, Task 1: Refrigerant Properties

    SciTech Connect

    Mark O. McLinden; Arno Laesecke; Eric W. Lemmon; Joseph W. Magee; Richard A. Perkins

    2002-08-30

    The main goal of this project was to investigate and compare the performance of an R410A air conditioner to that of an R22 air conditioner, with specific interest in performance at high ambient temperatures at which the condenser of the R410A system may be operating above the refrigerant's critical point. Part 1 of this project consisted of measuring thermodynamic properties of R125, R410A, and R507A, measuring the viscosity and thermal conductivity of R410A and R507A, and comparing the data to mixture models in the NIST REFPROP database. For R125, isochoric (constant volume) heat capacity was measured over a temperature range of 305 to 397 K (32 to 124 C) at pressures up to 20 MPa. For R410A, isochoric heat capacity was measured along 8 isochores over a temperature range of 303 to 397 K (30 to 124 C) at pressures up to 18 MPa. Pressure-density-temperature was also measured along 14 isochores over a temperature range of 200 to 400 K (-73 to 127 C) at pressures up to 35 MPa, and thermal conductivity along 6 isotherms over a temperature range of 301 to 404 K (28 to 131 C) with pressures to 38 MPa. For R507A, viscosity was measured along 5 isotherms over a temperature range of 301 to 421 K (28 to 148 C) at pressures up to 83 MPa, and thermal conductivity along 6 isotherms over a temperature range of 301 to 404 K (28 to 131 C) with pressures to 38 MPa. Mixture models were developed to calculate the thermodynamic properties of HFC refrigerant mixtures containing R32, R125, R134a, and/or R143a. The form of the model is the same for all the blends considered, but blend-specific mixing functions are required for the blends R32/125 (R410 blends) and R32/134a (a constituent binary of R407 blends). The systems R125/134a, R125/143a, R134a/143a, and R134a/152a share a common, generalized mixing function. The new equation of state for R125 is believed to be the most accurate and comprehensive formulation of the properties for that fluid. Likewise, the mixture model developed in this work is the

  18. Efficiency of operation of wind turbine rotors optimized by the Glauert and Betz methods

    NASA Astrophysics Data System (ADS)

    Okulov, V. L.; Mikkelsen, R.; Litvinov, I. V.; Naumov, I. V.

    2015-11-01

    The models of two types of rotors with blades constructed using different optimization methods are compared experimentally. In the first case, the Glauert optimization based on the momentum method is used, which is applied independently to each individual blade cross-section. This method remains the main approach in designing rotors for various duties. The construction of the other rotor is based on the Betz idea of optimizing rotors by determining a special distribution of circulation over the blade, which ensures a helical structure of the wake behind the rotor. It is established for the first time, as a result of direct experimental comparison, that the rotor constructed using the Betz method makes it possible to extract more kinetic energy from a homogeneous incoming flow.

  19. Memory and energy optimization strategies for multithreaded operating system on the resource-constrained wireless sensor node.

    PubMed

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Different from a traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism, which can decrease both the thread scheduling overhead and the number of thread stacks, is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only the memory cost but also the energy cost is optimized in LiveOS, and this is achieved by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264
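    The claimed memory saving is easy to illustrate with back-of-the-envelope arithmetic. The sketch below (hypothetical byte counts, not LiveOS measurements) contrasts static per-thread stack pre-reservation with a pool sized for the worst observed concurrent stack demand, which is the resource the stack-shifting approach effectively targets.

```python
def stack_memory(static_sizes, peak_concurrent):
    """Compare static per-thread stack pre-reservation against a shared
    pool sized to the real peak demand (a rough analogue of the
    stack-shifting idea; all byte counts here are hypothetical)."""
    static_total = sum(static_sizes)   # every thread reserves its maximum
    dynamic_total = peak_concurrent    # pool sized to observed peak usage
    saving = 1.0 - dynamic_total / static_total
    return static_total, dynamic_total, saving

# 8 threads each pre-reserving 512 B vs. a measured 1536 B peak demand
static_total, dynamic_total, saving = stack_memory([512] * 8, 1536)
```

    With these illustrative numbers the shared pool uses 62.5% less stack memory, consistent in spirit with the >50% reduction reported for LiveOS.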

  1. The Development of Equipment for the Disposal of Solid Organic Waste and Optimization of Its Operation

    NASA Astrophysics Data System (ADS)

    Sadrtdinov, Almaz R.; Safin, Rushan G.; Timerbaev, Nail F.; Ziatdinova, Dilyara F.; Saprykina, Natalya A.

    2016-08-01

    The paper describes a developed system for the thermal utilization of solid organic waste, which can simultaneously process paper, wood, rubber, plastic, etc. A method for improving the efficiency of the equipment through optimization of the gas extraction system is proposed. The influence of the characteristics of the installed equipment, and of its operating modes, on energy savings and the efficiency of the gas extraction system is also determined. The optimization work, which includes introducing a frequency converter into the exhauster control system, can save up to 70% of electricity and increase the life of the equipment.

  2. Optimal Stabilization of Social Welfare under Small Variation of Operating Condition with Bifurcation Analysis

    NASA Astrophysics Data System (ADS)

    Chanda, Sandip; De, Abhinandan

    2015-07-01

    A social welfare optimization technique has been proposed in this paper, with a state-space-based model and bifurcation analysis, to offer a substantial stability margin even in the most inadvertent states of power system networks. The restoration of the power market's dynamic price equilibrium is addressed by forming the Jacobian of the sensitivity matrix to regulate the state variables, so as to standardize the quality of the solution in the worst possible contingencies of the network, even with the co-option of intermittent renewable energy sources. The model has been tested on the IEEE 30-bus system, and particle swarm optimization has assisted the fusion of the proposed model and methodology.

  3. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting operation-and-indicate instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  4. Optimization of cryogenic chilldown and loading operation using SINDA/FLUINT

    NASA Astrophysics Data System (ADS)

    Kashani, Ali; Luchinskiy, Dmitry G.; Ponizovskaya-Devine, Ekaterina; Khasin, Michael; Timucin, Dogan; Sass, Jared; Perotti, Jose; Brown, Barbara

    2015-12-01

    A cryogenic advanced propellant loading (APL) system is currently being developed at NASA. A wide range of applications and variety of loading regimes call for the development of computer-assisted design and optimization methods that will reduce time and cost and improve the reliability of APL performance. A key aspect of the development of such methods is the modeling and optimization of non-equilibrium two-phase cryogenic flow in the transfer line. Here we report on the development of such optimization methods using the commercial SINDA/FLUINT software. The model is based on the solution of two-phase flow conservation equations in one dimension and a full set of correlations for flow patterns, losses, and heat transfer in the pipes, valves, and other system components. We validate this model using experimental data obtained from chilldown and loading of a cryogenic testbed at NASA Kennedy Space Center. We analyze the sensitivity of this model with respect to the variation of the key control parameters, including pressure in the tanks, openings of the control and dump valves, and insulation. We discuss the formulation of the multi-objective optimization problem and provide an example of the solution of such a problem.

  5. Optimizing Blocking and Nonblocking Reduction Operations for Multicore Systems: Hierarchical Design and Implementation

    SciTech Connect

    Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L; Ladd, Joshua S; Sampath, Rahul S

    2013-01-01

    Many scientific simulations, using the Message Passing Interface (MPI) programming model, are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions to perform mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design to implement the reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for various communication mechanisms in the system, 2) providing the ability to configure the depth of the hierarchy to match the system architecture, and 3) providing the ability to independently progress each level of the hierarchy. Using this design, we implement MPI Allreduce and MPI Reduce operations (and their nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate on multiple architectures including InfiniBand and Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, a framework for implementing hierarchical collective operations, to implement these reductions. The experimental results show that the Cheetah reduction operations outperform production-grade MPI implementations such as Open MPI default, Cray MPI, and MVAPICH2, demonstrating their efficiency, flexibility, and portability. On InfiniBand systems, with a microbenchmark, a 512-process Cheetah nonblocking Allreduce and Reduce achieve speedups of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms the Cray MPI by 145%. The evaluation with an application kernel, Conjugate
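    The two-tier structure described above can be illustrated without MPI. The sketch below simulates a hierarchical allreduce in-process: one reduction per "node" (standing in for the shared-memory tier), one across node leaders (standing in for the network tier), then a broadcast. The real Cheetah framework additionally pipelines and progresses these levels independently, which this toy version does not attempt.

```python
import functools

def hierarchical_allreduce(values, ranks_per_node, op=lambda a, b: a + b):
    """Two-level allreduce sketch: reduce inside each node, then across
    node leaders, then broadcast the result to every rank."""
    # Level 1: intra-node reduction (models the shared-memory tier)
    nodes = [values[i:i + ranks_per_node]
             for i in range(0, len(values), ranks_per_node)]
    node_partials = [functools.reduce(op, node) for node in nodes]
    # Level 2: inter-node reduction across node leaders (network tier)
    total = functools.reduce(op, node_partials)
    # "Broadcast": every rank receives the final value, as in Allreduce
    return [total] * len(values)

result = hierarchical_allreduce(list(range(8)), ranks_per_node=4)
```

    Splitting the reduction this way lets each tier use the algorithm best suited to its transport, which is the key idea behind the configurable hierarchy depth in the paper.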

  6. Full-zone spectral envelope function formalism for the optimization of line and point tunnel field-effect transistors

    SciTech Connect

    Verreck, Devin Groeseneken, Guido; Verhulst, Anne S.; Mocuta, Anda; Collaert, Nadine; Thean, Aaron; Van de Put, Maarten; Magnus, Wim; Sorée, Bart

    2015-10-07

    Efficient quantum mechanical simulation of tunnel field-effect transistors (TFETs) is indispensable to allow for an optimal configuration identification. We therefore present a full-zone 15-band quantum mechanical solver based on the envelope function formalism and employing a spectral method to reduce computational complexity and handle spurious solutions. We demonstrate the versatility of the solver by simulating a 40 nm wide In{sub 0.53}Ga{sub 0.47}As lineTFET and comparing it to p-n-i-n configurations with various pocket and body thicknesses. We find that the lineTFET performance is not degraded compared to semi-classical simulations. Furthermore, we show that a suitably optimized p-n-i-n TFET can obtain similar performance to the lineTFET.

  7. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user-supplied codes that contain problem-dependent formulations of the design variables, objective function, and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling, and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  8. Optimal laser wavelength for efficient laser power converter operation over temperature

    NASA Astrophysics Data System (ADS)

    Höhn, O.; Walker, A. W.; Bett, A. W.; Helmers, H.

    2016-06-01

    A temperature dependent modeling study is conducted on a GaAs laser power converter to identify the optimal incident laser wavelength for optical power transmission. Furthermore, the respective temperature dependent maximal conversion efficiencies in the radiative limit as well as in a practically achievable limit are presented. The model is based on the transfer matrix method coupled to a two-diode model, and is calibrated to experimental data of a GaAs photovoltaic device over laser irradiance and temperature. Since the laser wavelength does not strongly influence the open circuit voltage of the laser power converter, the optimal laser wavelength is determined to be in the range where the external quantum efficiency is maximal, but weighted by the photon flux of the laser.
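    The modeling chain above couples a transfer matrix optical model to a two-diode electrical model. The electrical half is easy to sketch: the snippet below evaluates a simplified two-diode I-V characteristic (series and shunt resistance ignored) and scans for the maximum power point; all parameter values are hypothetical placeholders, not the paper's calibrated GaAs data.

```python
import math

def iv_current(V, Iph, I01, I02, Vt):
    """Two-diode model current at voltage V: photocurrent minus the
    ideality-1 and ideality-2 diode currents (no series/shunt R)."""
    return (Iph
            - I01 * (math.exp(V / Vt) - 1.0)
            - I02 * (math.exp(V / (2.0 * Vt)) - 1.0))

def max_power(Iph, I01=1e-19, I02=1e-12, Vt=0.02585, steps=5000):
    """Scan voltage for the maximum power point of the converter.
    Default saturation currents are hypothetical GaAs-like values."""
    best_v = best_p = 0.0
    for k in range(steps):
        V = 1.3 * k / steps  # GaAs-like voltage range, 0..1.3 V
        P = V * iv_current(V, Iph, I01, I02, Vt)
        if P > best_p:
            best_v, best_p = V, P
    return best_v, best_p

vmp, pmp = max_power(Iph=0.1)  # 0.1 A photocurrent from the laser
```

    Since the photocurrent scales with the external quantum efficiency at the laser wavelength while the voltage terms barely move, this kind of sweep makes the paper's conclusion plausible: the optimal wavelength sits where the EQE-weighted photon flux peaks.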

  9. The human operator in manual preview tracking /an experiment and its modeling via optimal control/

    NASA Technical Reports Server (NTRS)

    Tomizuka, M.; Whitney, D. E.

    1976-01-01

    A manual preview tracking experiment and its results are presented. The preview drastically improves tracking performance compared to zero-preview tracking. Optimal discrete finite preview control is applied to determine the structure of a mathematical model of the manual preview tracking experiment. Variable parameters in the model are adjusted to values consistent with published data in manual control. The model with the adjusted parameters is found to correlate well with the experimental results.

  10. Optimizing Web-Based Instruction: A Case Study Using Poultry Processing Unit Operations

    ERIC Educational Resources Information Center

    O' Bryan, Corliss A.; Crandall, Philip G.; Shores-Ellis, Katrina; Johnson, Donald M.; Ricke, Steven C.; Marcy, John

    2009-01-01

    Food companies and supporting industries need inexpensive, revisable training methods for large numbers of hourly employees due to continuing improvements in Hazard Analysis Critical Control Point (HACCP) programs, new processing equipment, and high employee turnover. HACCP-based food safety programs have demonstrated their value by reducing the…

  11. Simulation-optimization model for water management in hydraulic fracturing operations

    NASA Astrophysics Data System (ADS)

    Hernandez, E. A.; Uddameri, V.

    2015-09-01

    A combined simulation-optimization model was developed to minimize the freshwater footprint at multi-well hydraulic fracturing sites. The model seeks to reduce freshwater use by blending it with brackish groundwater and recovered water. Time-varying water quality and quantity mass balance expressions and drawdown calculations using the Theis solution along with the superposition principle were embedded into the optimization model and solved using genetic algorithms. The model was parameterized for representative conditions in the Permian Basin oil and gas play region with the Dockum Formation serving as the brackish water source (Texas, USA). The results indicate that freshwater use can be reduced by 25-30 % by blending. Recovered water accounted for 2-3 % of the total blend or 10-15 % of total water recovered on-site. The concentration requirements of sulfate and magnesium limited blending. The evaporation in the frac pit constrained the amount blended during summer, while well yield of the brackish (Dockum) aquifer constrained the blending during winter. The Edwards-Trinity aquifer provided the best quality water compared to the Ogallala and Pecos Valley aquifers. However, the aquifer has low diffusivity causing the drawdown impacts to be felt over large areas. Speciation calculations carried out using PHREEQC indicated that precipitation of barium and strontium minerals is unlikely in the blended water. Conversely, the potential for precipitation of iron minerals is high. The developed simulation-optimization modeling framework is flexible and easily adapted for water management at other fracturing sites.
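    The drawdown machinery referenced above (Theis solution plus superposition) can be sketched directly. The snippet below uses the Cooper-Jacob series for the well function W(u), valid for small u, and superposes several pumping wells; all aquifer and well parameters are hypothetical, not the Dockum values from the study.

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u):
    """Cooper-Jacob series for the Theis well function W(u);
    accurate for small u (u < 1)."""
    return (-EULER_GAMMA - math.log(u)
            + u - u**2 / 4.0 + u**3 / 18.0 - u**4 / 96.0)

def theis_drawdown(Q, T, S, r, t):
    """Theis drawdown s = Q / (4 pi T) * W(u), u = r^2 S / (4 T t).
    Consistent units assumed (e.g. m, days)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

def total_drawdown(wells, obs_xy, t, T, S):
    """Superpose the drawdown of several pumping wells at one point."""
    x0, y0 = obs_xy
    return sum(theis_drawdown(Q, T, S, math.hypot(x0 - xw, y0 - yw), t)
               for (xw, yw, Q) in wells)

# Hypothetical site: two wells pumping 500 m^3/day each
wells = [(0.0, 0.0, 500.0), (200.0, 0.0, 500.0)]  # (x, y, Q)
s = total_drawdown(wells, obs_xy=(100.0, 0.0), t=30.0, T=50.0, S=1e-4)
```

    Because the Theis solution is linear in Q, superposition reduces a multi-well field to a sum of single-well terms, which is what lets the optimization model embed drawdown limits as simple constraints.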

  12. Short-term scheduling of crude oil operations in refinery with high-fusion-point oil and two transportation pipelines

    NASA Astrophysics Data System (ADS)

    Wu, NaiQi; Zhu, MengChu; Bai, LiPing; Li, ZhiWu

    2016-07-01

    In some refineries, storage tanks are located at two different sites, one for low-fusion-point crude oil and the other for high-fusion-point oil. Two pipelines are used to transport the different oil types. Due to the constraints resulting from the high-fusion-point oil transportation, it is challenging to schedule such a system. This work studies the scheduling problem from a control-theoretic perspective. It proposes to use a hybrid Petri net method to model the system. It then finds the schedulability conditions by analysing the dynamic behaviour of the net model. Next, it proposes an efficient scheduling method to minimize the cost of high-fusion-point oil transportation. Finally, it gives a complex industrial case study to show its application.

  13. 77 FR 66492 - Entergy Nuclear Operations, Inc., Entergy Nuclear Indian Point 2, LLC, and Entergy Nuclear Indian...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-05

    ... accident at the Fukushima Dai-ichi nuclear power plant in Japan resulting from the March 11, 2011, Great Tōhoku Earthquake and questioned whether plant operators would be physically capable of performing...

  14. Is it safe? Voles in an unfamiliar dark open-field divert from optimal security by abandoning a familiar shelter and not visiting a central start point.

    PubMed

    Eilam, David

    2010-01-01

    Open-field behavior is a common tool in studying exploration and navigation, as well as emotions and motivations. However, it has been suggested that this behavior might be parsimoniously interpreted as directed toward optimizing security, with no need to interpret the animal's mental state. This latter view was challenged here by providing voles with a presumed sense of optimal security. For this, voles were introduced into a dark open-field inside a familiar shelter in which they had previously lived in their home cage. Voles then emerged either to locomote only in the vicinity of the shelter, or to travel further out to explore the entire arena and only later return to the shelter. While staying near the shelter confirms the notion of optimizing security, traveling further out along the perimeter negates this notion. This divergence of behavior under the same security conditions illustrates that open-field behavior, which is a multi-faceted and dynamic process, is also affected by an emotional component. That is, safety is a subjective emotional state dictated by various inputs and, therefore, the resulting dynamic behavior, which is the ultimate output of the central nervous system, may vary beyond the possibility of being parsimoniously interpreted by only one factor. In a similar vein, we show that the impact of the start point on the paths of locomotion is not an intrinsic property of that point, but depends on its physical location. PMID:19744526

  15. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a high eccentric ellipse, or complex multimodal structure. To address this, we propose an enhanced ABC algorithm, called EABC, that introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. Simulation results on a suite of unimodal and multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023
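    For readers unfamiliar with the baseline, a bare-bones ABC skeleton (employed, onlooker, and scout phases) is sketched below on a simple sphere function. The paper's EABC layers its self-adaptive searching strategy and immune network operators on top of exactly this kind of skeleton; those enhancements are not reproduced here.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=1):
    """Bare-bones artificial bee colony sketch with employed, onlooker,
    and scout phases; illustrative only, not the paper's EABC."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbor(i):
        """Perturb one dimension of source i toward/away from a random peer."""
        k = rng.randrange(n_food)
        d = rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1.0, 1.0) * (foods[i][d] - foods[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        return cand

    for _ in range(iters):
        for phase in range(2):
            for j in range(n_food):
                if phase == 0:
                    i = j  # employed bees: one trial per food source
                else:      # onlookers: tournament bias toward better sources
                    a, b = rng.randrange(n_food), rng.randrange(n_food)
                    i = a if fits[a] < fits[b] else b
                cand = neighbor(i)
                fc = f(cand)
                if fc < fits[i]:
                    foods[i], fits[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        for i in range(n_food):     # scout phase: abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]

best_x, best_f = abc_minimize(lambda x: sum(v * v for v in x), 2, (-5.0, 5.0))
```

    The `limit` counter that triggers scouting is the knob the abandonment criticism in the abstract refers to: too small and good sources are discarded, too large and the colony stagnates in narrow valleys.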

  17. Optimization of Sinter Plant Operating Conditions Using Advanced Multivariate Statistics: Intelligent Data Processing

    NASA Astrophysics Data System (ADS)

    Fernández-González, Daniel; Martín-Duarte, Ramón; Ruiz-Bustinza, Íñigo; Mochón, Javier; González-Gasca, Carmen; Verdeja, Luis Felipe

    2016-08-01

    Blast furnace operators expect to get sinter with homogenous and regular properties (chemical and mechanical), necessary to ensure regular blast furnace operation. Blends for sintering also include several iron by-products and other wastes that are obtained in different processes inside the steelworks. Due to their source, the availability of such materials is not always consistent, but their total production should be consumed in the sintering process, to both save money and recycle wastes. The main scope of this paper is to obtain the least expensive iron ore blend for the sintering process, which will provide suitable chemical and mechanical features for the homogeneous and regular operation of the blast furnace. The systematic use of statistical tools was employed to analyze historical data, including linear and partial correlations applied to the data and fuzzy clustering based on the Sugeno Fuzzy Inference System to establish relationships among the available variables.
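    The least-cost blend idea can be illustrated with a toy search. The snippet below grid-searches blend fractions of three hypothetical ores for the cheapest mix meeting a minimum iron content; the paper instead mines plant history with multivariate statistics and fuzzy clustering, so this is only a sketch of the underlying cost-vs-quality trade-off.

```python
def cheapest_blend(ores, fe_min, step=0.05):
    """Exhaustive grid search over fractions of exactly three ores,
    each given as (name, cost_per_ton, fe_fraction), subject to a
    minimum Fe content. All ore data below are hypothetical."""
    best = None  # (cost, fractions)
    m = int(round(1.0 / step))
    for i in range(m + 1):
        for j in range(m + 1 - i):
            k = m - i - j
            fr = (i * step, j * step, k * step)
            fe = sum(f * ore[2] for f, ore in zip(fr, ores))
            if fe < fe_min:
                continue  # blend misses the chemistry target
            cost = sum(f * ore[1] for f, ore in zip(fr, ores))
            if best is None or cost < best[0]:
                best = (cost, fr)
    return best

ores = [("hematite", 90.0, 0.62),
        ("by-product", 20.0, 0.45),
        ("magnetite", 70.0, 0.58)]
cost, fractions = cheapest_blend(ores, fe_min=0.55)
```

    Even in this toy setting the optimum blends a large share of the cheap by-product with just enough rich ore to hold the Fe constraint, which mirrors the paper's goal of consuming in-plant wastes without compromising sinter quality.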

  19. Optimization of operating parameters for efficient photocatalytic inactivation of Escherichia coli based on a statistical design of experiments.

    PubMed

    Feilizadeh, Mehrzad; Alemzadeh, Iran; Delparish, Amin; Estahbanati, M R Karimi; Soleimani, Mahdi; Jangjou, Yasser; Vosoughi, Amin

    2015-01-01

    In this work, the individual and interaction effects of three key operating parameters of the photocatalytic disinfection process were evaluated and optimized using response surface methodology (RSM) for the first time. The chosen operating parameters were: reaction temperature, initial pH of the reaction mixture, and TiO2 P-25 photocatalyst loading. Escherichia coli concentration after 90 minutes of irradiation with UV-A light was selected as the response. Twenty sets of photocatalytic disinfection experiments were conducted by adjusting the operating parameters at five levels using a central composite design. Based on the experimental data, a semi-empirical expression was established and applied to predict the response. Analysis of variance revealed a strong correlation between predicted and experimental values of the response. The optimum values of the reaction temperature, initial pH, and photocatalyst loading were found to be 40.3 °C, 5.9, and 1.0 g/L, respectively. Under the optimized conditions, E. coli concentration was observed to fall from 10^7 to about 11 CFU/mL during the photocatalytic process. Moreover, these results show the great significance of RSM in developing high-performance processes for photocatalytic water disinfection.
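    The RSM fitting step can be sketched for the two-factor case. The snippet below fits a full quadratic response surface by least squares and solves the stationary-point condition (gradient = 0) for the fitted optimum, using synthetic data with a known optimum; the study itself fits three factors over a central composite design.

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Fit y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    by ordinary least squares (two-factor RSM sketch)."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    """Solve grad(y) = 0 for the fitted surface: H x = -b."""
    _, b1, b2, b12, b11, b22 = coef
    H = np.array([[2.0 * b11, b12], [b12, 2.0 * b22]])
    g = np.array([-b1, -b2])
    return np.linalg.solve(H, g)

# Synthetic demo with a known optimum at (1.0, -2.0)
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(30, 2))
y = -((X[:, 0] - 1.0)**2 + (X[:, 1] + 2.0)**2)
coef = fit_quadratic_surface(X, y)
opt = stationary_point(coef)
```

    In practice one also checks the Hessian's definiteness (here it is negative definite, so the stationary point is a maximum of the response) and validates the fitted optimum experimentally, as the study does.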

  20. Choosing Your Poison: Optimizing Simulator Visual System Selection as a Function of Operational Tasks

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Kaiser, Mary K.

    2013-01-01

    Although current technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.