Science.gov

Sample records for optimal operating points

  1. Engineering to Control Noise, Loading, and Optimal Operating Points

    SciTech Connect

    Mitchell R. Swartz

    2000-11-12

    Successful engineering of low-energy nuclear systems requires control of noise, loading, and optimum operating point (OOP) manifolds. The latter result from the biphasic system response of low-energy nuclear reaction (LENR)/cold fusion systems, and their ash production rate, to input electrical power. Knowledge of the optimal operating point manifold can improve the reproducibility and efficacy of these systems in several ways. Improved control of noise, loading, and peak production rates is available through the study, and use, of OOP manifolds. Engineering of systems toward the OOP-manifold drive-point peak may, with inclusion of geometric factors, permit more accurate uniform determinations of the calibrated activity of these materials/systems.

  2. Nonlinear Burn Control and Operating Point Optimization in ITER

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Schuster, Eugenio

    2013-10-01

    Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).

  3. Optimal operating points of oscillators using nonlinear resonators.

    PubMed

    Kenig, Eyal; Cross, M C; Villanueva, L G; Karabalin, R B; Matheny, M H; Lifshitz, Ron; Roukes, M L

    2012-11-01

    We demonstrate an analytical method for calculating the phase sensitivity of a class of oscillators whose phase does not affect the time evolution of the other dynamic variables. We show that such oscillators possess the possibility for complete phase noise elimination. We apply the method to a feedback oscillator which employs a high Q weakly nonlinear resonator and provide explicit parameter values for which the feedback phase noise is completely eliminated and others for which there is no amplitude-phase noise conversion. We then establish an operational mode of the oscillator which optimizes its performance by diminishing the feedback noise in both quadratures, thermal noise, and quality factor fluctuations. We also study the spectrum of the oscillator and provide specific results for the case of 1/f noise sources.

  4. Physical constraints, fundamental limits, and optimal locus of operating points for an inverted pendulum based actuated dynamic walker.

    PubMed

    Patnaik, Lalit; Umanand, Loganathan

    2015-10-26

    The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.

  5. [Receiver operating characteristic analysis and the cost--benefit analysis in determination of the optimal cut-off point].

    PubMed

    Vránová, J; Horák, J; Krátká, K; Hendrichová, M; Kovaírková, K

    2009-01-01

    An overview of the use of Receiver Operating Characteristic (ROC) analysis within medicine is provided. A survey of the theory behind the analysis is offered together with a presentation on how to create a ROC curve and how to use cost-benefit analysis to determine the optimal cut-off point or threshold. The use of ROC analysis is exemplified in the "Cost-Benefit analysis" section of the paper. In these examples, it can be seen that the determination of the optimal cut-off point is mainly influenced by the prevalence and the severity of the disease, by the risks and adverse events of treatment or the diagnostic testing, by the overall costs of treating true and false positives (TP and FP), and by the risk of deficient or non-treatment of false negative (FN) cases.
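
    The cut-off selection described above can be illustrated with a short numeric sketch (Python). The simulated test scores, the prevalence, and the unit costs of false positives and false negatives are assumptions for illustration, not data from the paper; the idea is simply to sweep candidate thresholds and keep the one with the lowest expected cost.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical test scores: diseased subjects score higher on average.
        prevalence = 0.10                      # assumed disease prevalence
        n = 10_000
        diseased = rng.random(n) < prevalence
        scores = np.where(diseased, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

        # Assumed unit costs: a missed case (FN) is costlier than a false alarm (FP).
        cost_fp, cost_fn = 1.0, 8.0

        thresholds = np.linspace(scores.min(), scores.max(), 501)
        expected_cost = [cost_fp * np.sum((scores >= t) & ~diseased)
                         + cost_fn * np.sum((scores < t) & diseased)
                         for t in thresholds]

        best_cutoff = thresholds[int(np.argmin(expected_cost))]
        print(f"cost-minimizing cut-off: {best_cutoff:.2f}")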

  6. LST data management and mission operations concept. [pointing control optimization for maximum data

    NASA Technical Reports Server (NTRS)

    Walker, R.; Hudson, F.; Murphy, L.

    1977-01-01

    A candidate design concept for an LST ground facility is described. The design objectives were to use NASA institutional hardware, software and facilities wherever practical, and to maximize efficiency of telescope use. The pointing control performance requirements of LST are summarized, and the major data interfaces of the candidate ground system are diagrammed.

  7. Optimal control problems with switching points

    NASA Astrophysics Data System (ADS)

    Seywald, Hans

    1991-09-01

    An overview is presented of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical plane, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with an active dynamic pressure limit, which, in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector-valued control functions.

  8. Sensitivity analysis, optimization, and global critical points

    SciTech Connect

    Cacuci, D.G.

    1989-11-01

    The title of this paper suggests that sensitivity analysis, optimization, and the search for critical points in phase-space are somehow related; the existence of such a kinship has been undoubtedly felt by many of the nuclear engineering practitioners of optimization and/or sensitivity analysis. However, a unified framework for displaying this relationship has so far been lacking, especially in a global setting. The objective of this paper is to present such a global and unified framework and to suggest, within this framework, a new direction for future developments for both sensitivity analysis and optimization of the large nonlinear systems encountered in practical problems.

  9. Collocation points distributions for optimal spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Fumenti, Federico; Circi, Christian; Romagnoli, Daniele

    2013-03-01

    The method of direct collocation with nonlinear programming (DCNLP) is a powerful tool for solving optimal control problems (OCP). In this method the solution time history is approximated with piecewise polynomials, which are constructed using interpolation points derived from the Jacobi polynomials. Within the Jacobi polynomial family, Legendre and Chebyshev polynomials are the most used, but there is no evidence that they offer the best performance with respect to other family members. By solving different OCPs with interpolation points taken both within and outside the Jacobi family, the behavior of the Jacobi polynomials in these optimization problems is discussed. This paper focuses on spacecraft trajectory optimization problems; in particular, orbit transfers, interplanetary transfers, and station keeping are considered.
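
    As a small illustration of the interpolation-point choices compared in this work, the sketch below (Python; the number of nodes is an arbitrary assumption) generates the two most commonly used Jacobi-family point sets, Legendre-Gauss and Chebyshev-Gauss nodes, on the reference interval [-1, 1].

        import numpy as np

        N = 10  # number of collocation nodes (assumed for illustration)

        # Legendre-Gauss nodes: roots of the degree-N Legendre polynomial.
        legendre_nodes, _ = np.polynomial.legendre.leggauss(N)

        # Chebyshev-Gauss nodes: roots of the degree-N Chebyshev polynomial.
        k = np.arange(1, N + 1)
        chebyshev_nodes = np.cos((2 * k - 1) * np.pi / (2 * N))

        print("Legendre nodes: ", np.sort(legendre_nodes))
        print("Chebyshev nodes:", np.sort(chebyshev_nodes))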

  10. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
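
    A minimal sketch of the binomial point-estimate calculation is given below (Python). It assumes the common 29-of-29 pass criterion for the demonstration set mentioned in the abstract; the true POD values are invented for illustration. The probability of passing the demonstration (PPD) is simply the probability that every flaw in the set is detected.

        from scipy.stats import binom

        n_flaws = 29          # size of the demonstration flaw set
        required_hits = 29    # assumed pass criterion: every flaw must be detected

        def prob_pass_demo(true_pod: float) -> float:
            """Probability of detecting at least required_hits of n_flaws flaws."""
            return binom.sf(required_hits - 1, n_flaws, true_pod)

        # A capable procedure versus marginal ones (illustrative true POD values).
        for pod in (0.98, 0.95, 0.90):
            print(f"true POD {pod:.2f}: probability of passing = {prob_pass_demo(pod):.3f}")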

  11. Infrared image mosaic using point feature operators

    NASA Astrophysics Data System (ADS)

    Huang, Zhen; Sun, Shaoyuan; Shen, Zhenyi; Hou, Junjie; Zhao, Haitao

    2016-10-01

    In this paper, we study infrared image mosaicking around a single point of rotation, aiming to expand the narrow field of view of infrared images. We propose an infrared image mosaic method using point feature operators that includes image registration and image synthesis. Traditional mosaic algorithms usually use global registration methods that extract feature points over the whole image, which costs too much time and introduces considerable matching errors. To address this issue, we first roughly calculate the image shift using phase correlation and determine the overlap region between images, and then extract image features only in the overlap region, which shortens the registration time and improves the quality of the feature points. We improve the traditional algorithm by adding point-matching constraints based on prior knowledge of the image shift, and the weighted blending map is computed using a fade-in/fade-out method. The experimental results verify that the proposed method has better real-time performance and robustness.
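
    The rough shift estimate via phase correlation that this method relies on can be sketched in a few lines (Python/NumPy). This is a generic textbook implementation, not the authors' code; the synthetic 128x128 image and the (5, -3) shift are assumptions used only to check the recovered offset.

        import numpy as np

        def phase_correlation_shift(img_a, img_b):
            """Estimate the integer (row, col) translation mapping img_b onto img_a."""
            fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
            cross_power = fa * np.conj(fb)
            cross_power /= np.abs(cross_power) + 1e-12      # keep phase information only
            correlation = np.abs(np.fft.ifft2(cross_power))
            peak = np.unravel_index(np.argmax(correlation), correlation.shape)
            # Shifts past half the image size wrap around to negative offsets.
            return tuple(int(p if p <= s // 2 else p - s)
                         for p, s in zip(peak, correlation.shape))

        rng = np.random.default_rng(1)
        base = rng.random((128, 128))
        shifted = np.roll(base, shift=(5, -3), axis=(0, 1))
        print(phase_correlation_shift(shifted, base))        # expected: (5, -3)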

  12. Automated design of image operators that detect interest points.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo

    2008-01-01

    This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.

  13. Linearization: Students Forget the Operating Point

    ERIC Educational Resources Information Center

    Roubal, J.; Husek, P.; Stecha, J.

    2010-01-01

    Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…

  14. Optimal Affine-Invariant Point Matching

    NASA Astrophysics Data System (ADS)

    Costa, Mauro S.; Haralick, Robert M.; Phillips, Tsaiyun I.; Shapiro, Linda G.

    1989-03-01

    The affine-transformation matching scheme proposed by Hummel and Wolfson (1988) is very efficient in a model-based matching system, not only in terms of the computational complexity involved, but also in terms of the simplicity of the method. This paper addresses the implementation of the affine-invariant point matching, applied to the problem of recognizing and determining the pose of sheet metal parts. It points out errors that can occur with this method due to quantization, stability, symmetry, and noise problems. By beginning with an explicit noise model which the Hummel and Wolfson technique lacks, we can derive an optimal approach which overcomes these problems. We show that results obtained with the new algorithm are clearly better than the results from the original method.

  15. Maximum power point tracking for optimizing energy harvesting process

    NASA Astrophysics Data System (ADS)

    Akbari, S.; Thang, P. C.; Veselov, D. S.

    2016-10-01

    There has been growing interest in using energy harvesting techniques to power wireless sensor networks. The motivation for this technology is the sensors' limited operating time, which results from the finite capacity of batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multi-source energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
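
    One widely used maximum power point tracking scheme is perturb-and-observe; the sketch below (Python) shows the bare control loop against a toy quadratic panel model. The panel model, starting voltage, and step size are assumptions for illustration and stand in for real voltage and power measurements.

        def perturb_and_observe(measure_power, set_voltage, v_start=12.0,
                                step=0.1, iterations=200):
            """Minimal perturb-and-observe MPPT loop."""
            v = v_start
            set_voltage(v)
            p_prev = measure_power()
            direction = +1
            for _ in range(iterations):
                v += direction * step
                set_voltage(v)
                p = measure_power()
                if p < p_prev:              # power dropped: reverse the perturbation
                    direction = -direction
                p_prev = p
            return v

        # Toy panel: concave power curve with its maximum power point near 17 V.
        state = {"v": 12.0}
        panel_power = lambda: max(0.0, 100.0 - 0.8 * (state["v"] - 17.0) ** 2)
        set_v = lambda v: state.update(v=v)
        print(f"tracked operating voltage: {perturb_and_observe(panel_power, set_v):.2f} V")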

  16. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Channels for point-to-point operation. 22.591... PUBLIC MOBILE SERVICES Paging and Radiotelephone Service Point-To-Point Operation § 22.591 Channels for point-to-point operation. The following channels are allocated for assignment to fixed transmitters...

  17. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Channels for point-to-point operation. 22.591... PUBLIC MOBILE SERVICES Paging and Radiotelephone Service Point-To-Point Operation § 22.591 Channels for point-to-point operation. The following channels are allocated for assignment to fixed transmitters...

  18. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Channels for point-to-point operation. 22.591... PUBLIC MOBILE SERVICES Paging and Radiotelephone Service Point-To-Point Operation § 22.591 Channels for point-to-point operation. The following channels are allocated for assignment to fixed transmitters...

  19. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false Channels for point-to-point operation. 22.591... PUBLIC MOBILE SERVICES Paging and Radiotelephone Service Point-To-Point Operation § 22.591 Channels for point-to-point operation. The following channels are allocated for assignment to fixed transmitters...

  20. 47 CFR 22.591 - Channels for point-to-point operation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Channels for point-to-point operation. 22.591... PUBLIC MOBILE SERVICES Paging and Radiotelephone Service Point-To-Point Operation § 22.591 Channels for point-to-point operation. The following channels are allocated for assignment to fixed transmitters...

  1. Operational equations for the five-point rectangle

    SciTech Connect

    Silver, G.L.

    1993-09-15

    Two operational polynomials are demonstrated for the four-point rectangle with center point. The equations are exact on the points, and the surfaces they describe ordinarily fit known monotonic surfaces better than the standard five-point equation, as judged by the L2 norm test. Equations for fitting the five-point rectangle by sines and cosines are presented.

  2. Transmittance-optimized, point-focus Fresnel lens solar concentrator

    SciTech Connect

    Oneill, M.J.; Goldberg, V.R.; Muzzy, D.B.

    1982-07-01

    The development of a point-focus Fresnel lens solar concentrator for high-temperature solar thermal energy system applications is discussed. The concentrator utilizes a transmittance-optimized, short-focal-length, dome-shaped refractive Fresnel lens as the optical element. This concentrator combines both good optical performance and a large tolerance for manufacturing, deflection, and tracking errors. The conceptual design of an 11-meter diameter concentrator which should provide an overall collector efficiency of about 70% at an 815 C (1500 F) receiver operating temperature and a 1500X geometric concentration ratio (lens aperture area/receiver aperture area) was completed. Results of optical and thermal analyses of the collector, a discussion of manufacturing methods for making the large lens, and an update on the current status and future plans of the development program are included.

  3. Transmittance-optimized, point-focus Fresnel lens solar concentrator

    SciTech Connect

    Oneill, M.J.

    1984-03-01

    The development of a point-focus Fresnel lens solar concentrator for high-temperature solar thermal energy system applications is discussed. The concentrator utilizes a transmittance-optimized, short-focal-length, dome-shaped refractive Fresnel lens as the optical element. This concentrator combines both good optical performance and a large tolerance for manufacturing, deflection, and tracking errors. The conceptual design of an 11-meter diameter concentrator which should provide an overall collector efficiency of about 70% at an 815 C (1500 F) receiver operating temperature and a 1500X geometric concentration ratio (lens aperture area/receiver aperture area) was completed. Results of optical and thermal analyses of the collector, a discussion of manufacturing methods for making the large lens, and an update on the current status and future plans of the development program are included.

  4. Biomechanical constraints and optimal posture of a human operator

    SciTech Connect

    Riffard, V.; Chedmail, P.

    1995-12-31

    In complex mechanical systems, an important feature of concurrent engineering is to take into account the operator's accessibility for assembly operations and maintenance checking in the earliest phases of assembly design. Accessibility can be viewed from geometric and biomechanical points of view. The first was described in a previous paper. The object of this paper is to integrate the biomechanical aspects of finding optimal postures of a human operator in an encumbered environment. Research on mechanical modeling of human operators deals with (1) geometric and kinematic models; (2) inertial characterizations; (3) static and muscular efforts; and (4) human-gesture characterization.

  5. A Study on Optimal Operation of Power Generation by Waste

    NASA Astrophysics Data System (ADS)

    Sugahara, Hideo; Aoyagi, Yoshihiro; Kato, Masakazu

    This paper proposes the optimal operation of power generation by waste. Refuse is treated as a new biomass energy resource. Although some refuse of fossil-fuel origin, such as plastic, may be mixed in, under the Kyoto Protocol CO2 emissions are counted only for that fossil-fuel-origin fraction. Incineration is indispensable for refuse disposal, and power generation by waste is both environment-friendly and power-system-friendly because it uses synchronous generators. Optimal planning is the key to making the most of this merit. The optimal plan includes a refuse incinerator operation plan with refuse collection and a maintenance schedule for the refuse incinerator plant. Numerical simulations make clear that the former plan increases the generated energy. Concerning the latter plan, a method to determine the maintenance schedule using a genetic algorithm has been established. In addition, when the environmental load of CO2 emissions is taken into account, larger merits are expected from both the environmental and energy-resource points of view.

  6. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  7. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  8. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  9. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  10. 47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations....

  11. Point Shifts in Rational Interpolation with Optimized Denominator

    DTIC Science & Technology

    2001-07-01

    estimate hr - f’Ik. and hr - f"Il. Schneider and Werner [14] have noticed that every rational interpolant R E lZNN, written in its barycentric form R...UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP013753 TITLE: Point Shifts in Rational Interpolation with Optimized...report: ADP013708 thru ADP013761 UNCLASSIFIED Point shifts in rational interpolation with optimized denominator Jean-Paul Berrut D~partement de

  12. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants only utilize about 30 percent of the primary energy that they consume, and the rest of the energy is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emission achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm designed to minimize operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy to implement, effective, and reliable hourly building load prediction algorithm.

  13. Optimization of the bank's operating portfolio

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.; Medvedev, M. A.

    2016-06-01

    The theory of efficient portfolios developed by Markowitz is used to optimize the structure of the types of financial operations of a bank (bank portfolio) in order to increase the profit and reduce the risk. The focus of this paper is to check the stability of the model to errors in the original data.
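
    A compact illustration of the Markowitz-style optimization applied here is given below (Python). The expected returns, covariance matrix, and target return are invented placeholders for four operation types, not data from the paper; the closed-form Lagrangian solution of the minimum-variance problem is used and short positions are not excluded.

        import numpy as np

        # Placeholder statistics for four types of bank operations.
        mu = np.array([0.04, 0.07, 0.10, 0.12])           # expected returns
        cov = np.array([[0.0010, 0.0002, 0.0003, 0.0001],
                        [0.0002, 0.0040, 0.0010, 0.0008],
                        [0.0003, 0.0010, 0.0090, 0.0020],
                        [0.0001, 0.0008, 0.0020, 0.0160]])
        target_return = 0.08

        # Classical Markowitz solution: minimize w'Cw subject to sum(w)=1, mu'w=target.
        inv = np.linalg.inv(cov)
        ones = np.ones(len(mu))
        A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
        lam, gam = np.linalg.solve([[A, B], [B, C]], [1.0, target_return])
        w = inv @ (lam * ones + gam * mu)

        print("weights:", np.round(w, 3), " portfolio variance:", float(w @ cov @ w))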

  14. Optimal allocation of point-count sampling effort

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Link, W.A.

    1993-01-01

    Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
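
    As a concrete illustration of this optimization (using quantity (3), the maximum expected total count, and the exponential detection model), the sketch below (Python/SciPy) finds the count duration that maximizes the expected total count for an assumed detection rate, travel time, and total survey time; all three numbers are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize_scalar

        total_time = 300.0      # minutes available for counting plus travel (assumed)
        travel_time = 10.0      # minutes to travel between count points (assumed)
        lam = 0.15              # per-minute detection rate in p(t) = 1 - exp(-lam*t) (assumed)

        def expected_total_count(t):
            """Number of points visited times the proportion of individuals detected."""
            n_points = total_time / (t + travel_time)
            return n_points * (1.0 - np.exp(-lam * t))

        res = minimize_scalar(lambda t: -expected_total_count(t),
                              bounds=(0.5, 60.0), method="bounded")
        print(f"optimal count duration: {res.x:.1f} minutes per point")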

  15. 77 FR 7184 - Entergy Nuclear Indian Point 2, LLC; Entergy Nuclear Operations, Inc.; Indian Point Nuclear...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Indian Point 2, LLC; Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit No. 2; Exemption 1.0 Background Entergy Nuclear Operations, Inc. (Entergy or the licensee)...

  16. Analysis of Performance and Optimization of Point Cloud Conversion in Spatial Databases

    NASA Astrophysics Data System (ADS)

    Chrószcz, Aleksandra; Łukasik, Piotr; Lupa, Michał

    2016-10-01

    This article compares popular relational database management systems and one non-relational database in the context of storing point clouds from LIDAR. The authors examine the efficient storage of point clouds in the database and the optimization of query execution time. Additionally, the impact of SSD versus traditional HDD technology on the time taken to perform operations on a point cloud is compared.

  17. Time optimal robotic manipulator motions and work places for point to point tasks

    NASA Technical Reports Server (NTRS)

    Dubowsky, S.; Blubaugh, T. D.

    1985-01-01

    High productivity requires that manipulators perform complex tasks quickly. Recently, optimal control algorithms have been developed which enable manipulators to move quickly, but only for simple motions. A method is presented here which combines simple time-optimal motions in an optimal manner to yield the minimum-time motions for an important class of complex manipulator tasks composed of point-to-point moves, such as assembly, electronic component insertion and spot welding. This method can also be used to design manipulator actions and work places so that tasks can be completed in minimum time. The method has been implemented in a CAD software package. Examples are presented which show the method's effectiveness.

  18. Optimizing robot placement for visit-point tasks

    SciTech Connect

    Hwang, Y.K.; Watterberg, P.A.

    1996-06-01

    We present a manipulator placement algorithm for minimizing the length of the manipulator motion performed during a visit-point task such as spot welding. Given a set of points for the tool of a manipulator to visit, our algorithm finds the shortest robot motion required to visit the points from each possible base configuration. The base configuration resulting in the shortest motion is selected as the optimal robot placement. The shortest robot motion required for visiting multiple points from a given base configuration is computed using a variant of the traveling salesman algorithm in the robot joint space and a point-to-point path planner that plans collision-free robot paths between two configurations. Our robot placement algorithm is expected to reduce the robot cycle time during visit-point tasks, as well as speed up the robot set-up process when building a manufacturing line.
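
    The placement search can be illustrated with a deliberately simplified sketch (Python). Instead of joint-space path planning, it scores each candidate base by the length of the best Euclidean visiting order in the plane, found by brute force; the coordinates are arbitrary and the exhaustive permutation search is only sensible for a handful of visit points.

        import itertools
        import numpy as np

        visit_points = np.array([[1.0, 2.0], [3.0, 1.0], [2.5, 3.5], [0.5, 3.0]])
        candidate_bases = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 2.0], [2.0, 4.0]])

        def shortest_visit_path(base):
            """Brute-force traveling-salesman search over the visit points from a base."""
            best = np.inf
            for order in itertools.permutations(range(len(visit_points))):
                pts = np.vstack([base, visit_points[list(order)]])
                best = min(best, np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
            return best

        lengths = [shortest_visit_path(b) for b in candidate_bases]
        best_base = candidate_bases[int(np.argmin(lengths))]
        print("best base placement:", best_base, " path length:", round(min(lengths), 3))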

  19. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    PubMed

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal and regularization of the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of the two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem where each model point has a counterpart in the scene set. By eliminating the transformation variables, we show that the original matching problem is reduced to a concave quadratic assignment problem where the objective function has a low-rank Hessian matrix. This facilitates the use of large-scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision where, in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.
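
    The linear assignment building block used by the paper's lower-bounding scheme is readily available in SciPy; the toy example below matches a small model set to a noisy, shuffled scene set with extra outlier points on a squared-distance cost. It omits the transformation variables and the branch-and-bound layer entirely and is only meant to show the assignment step; all data are synthetic.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(2)
        model = rng.random((6, 2))
        scene = np.vstack([model + rng.normal(0, 0.01, model.shape),   # noisy counterparts
                           rng.random((4, 2))])                        # extra scene points
        rng.shuffle(scene)

        # Globally optimal one-to-one assignment on a squared-distance cost matrix.
        cost = ((model[:, None, :] - scene[None, :, :]) ** 2).sum(axis=-1)
        rows, cols = linear_sum_assignment(cost)
        print("model -> scene:", list(zip(rows.tolist(), cols.tolist())))
        print("total matching cost:", float(cost[rows, cols].sum()))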

  20. Radar antenna pointing for optimized signal to noise ratio.

    SciTech Connect

    Doerry, Armin Walter; Marquette, Brandeis

    2013-01-01

    The Signal-to-Noise Ratio (SNR) of a radar echo signal will vary across a range swath, due to spherical wavefront spreading, atmospheric attenuation, and antenna beam illumination. The antenna beam illumination will depend on antenna pointing. Calculations of geometry are complicated by the curved earth, and atmospheric refraction. This report investigates optimizing antenna pointing to maximize the minimum SNR across the range swath.
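
    The optimization described here can be caricatured with a flat-earth toy model (Python/SciPy): a Gaussian beam pattern and a simple range-loss term define the SNR across the swath, and the pointing angle is chosen to maximize the minimum SNR. The geometry, beamwidth, and range-loss exponent below are assumptions; the actual report accounts for earth curvature, refraction, and atmospheric attenuation.

        import numpy as np
        from scipy.optimize import minimize_scalar

        altitude = 10_000.0                                    # platform altitude, m (assumed)
        ground_range = np.linspace(25_000.0, 35_000.0, 200)    # range swath, m (assumed)
        slant_range = np.hypot(altitude, ground_range)
        depression = np.arctan2(altitude, ground_range)        # depression angle to each cell
        beamwidth = np.radians(6.0)                            # 3 dB beamwidth (assumed)

        def min_snr_db(pointing):
            gain = np.exp(-2.772 * ((depression - pointing) / beamwidth) ** 2)  # ~Gaussian beam
            snr = gain / slant_range**3                        # illustrative range-loss term
            return 10.0 * np.log10(snr.min())

        res = minimize_scalar(lambda a: -min_snr_db(a),
                              bounds=(depression.min(), depression.max()), method="bounded")
        print(f"best pointing (depression angle): {np.degrees(res.x):.2f} deg")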

  1. Optimal Pumping Strategy with Conjunctive Operational Rules

    NASA Astrophysics Data System (ADS)

    Hsieh, C.-H.; Tan, C.-C.; Lin, C.-Y.; Tung, C.-P.

    2012-04-01

    Traditionally, the conjunctive use of surface water and groundwater supplies surface water first; if surface water is insufficient, then groundwater is used. This traditional operating strategy can lead to excessively concentrated groundwater pumping. In this study, we propose a new strategy and conjunctive operational rules to manage both surface water and groundwater and to allow pumping groundwater during non-drought periods. We link the groundwater simulation model with the management model, and use a global optimization algorithm to simultaneously optimize the spatial and temporal distribution curves subject to the constraints of available surface water and the safe yield of groundwater. The Lanyang River watershed, located in northeastern Taiwan, is chosen as the study area. The trends in the historical weather records show that the probabilities of higher-intensity rainfall and longer non-rainfall periods are increasing in the Lanyang River watershed. There is no reservoir in the Lanyang River watershed, and thus it may be more vulnerable to water shortage. In this study, we expect the conjunctive operational rule curve for surface water and groundwater to reduce water shortage effectively compared to utilizing surface water only. Keywords: Conjunctive Uses, Water Supply, Groundwater, Optimization, Water Management

  2. On Motivating Operations at the Point of Online Purchase Setting

    ERIC Educational Resources Information Center

    Fagerstrom, Asle; Arntzen, Erik

    2013-01-01

    Consumer behavior analysis can be applied over a wide range of economic topics in which the main focus is the contingencies that influence the behavior of the economic agent. This paper provides an overview on the work that has been done on the impact from motivating operations at the point of online purchase situation. Motivating operations, a…

  3. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.

  4. Martian Aerocapture Terminal Point Guidance: A Reference Path Optimization Study

    NASA Technical Reports Server (NTRS)

    Ro, Theodore U.; Queen, Eric M.; Striepe, Scott A.

    1999-01-01

    An effective method of terminal point guidance is to employ influence coefficients, which are solved from a set of differential equations adjoint to the linearized perturbations of the equations of motion about a reference trajectory. Hence, to optimize this type of guidance, one must first optimize the reference trajectory that the guidance is based upon. This study concentrates on various methods to optimize a reference trajectory for a Martian aerocapture maneuver, including a parametric analysis and first order gradient method. Resulting reference trajectories were tested in separate 2000 6-DOF Monte Carlo runs, using the Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 (MSP '01) Orbiter. These results were compared to an August 1998 study using the same terminal point control guidance algorithm and simulation testbed. Satisfactory improvements over the 1998 study are amply demonstrated.

  5. Planning time-optimal robotic manipulator motions and work places for point-to-point tasks

    NASA Technical Reports Server (NTRS)

    Dubowsky, S.; Blubaugh, T. D.

    1989-01-01

    A method is presented which combines simple time-optimal motions in an optimal manner to yield the minimum-time motions for an important class of complex manipulator tasks composed of point-to-point moves such as assembly, electronic component insertion, and spot welding. This method can also be used to design manipulator actions and work places so that tasks can be completed in minimum time. The method has been implemented in a computer-aided design software system. Several examples are presented. Experimental results show the method's validity and utility.

  6. Optimization of Regression Models of Experimental Data Using Confirmation Points

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
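
    The search metric can be reproduced in a few lines once the data are split into fit points and confirmation points. The sketch below (Python) is a synthetic illustration, not balance calibration data: it computes the PRESS (leave-one-out) residuals from the hat-matrix leverages, the confirmation-point residuals, and takes the larger of the two standard deviations as the metric for each candidate math model.

        import numpy as np

        def search_metric(X_fit, y_fit, X_conf, y_conf):
            """Larger of std(PRESS residuals of fit points) and std(confirmation residuals)."""
            beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
            leverages = np.diag(X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T)
            press = (y_fit - X_fit @ beta) / (1.0 - leverages)    # leave-one-out residuals
            conf_resid = y_conf - X_conf @ beta
            return max(np.std(press, ddof=1), np.std(conf_resid, ddof=1))

        # Synthetic data from a quadratic law; compare two candidate regression models.
        rng = np.random.default_rng(3)
        x = rng.uniform(-1.0, 1.0, 40)
        y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(0.0, 0.05, x.size)
        fit, conf = slice(0, 30), slice(30, 40)        # fit points vs confirmation points

        for name, cols in [("linear", (np.ones_like(x), x)),
                           ("quadratic", (np.ones_like(x), x, x**2))]:
            X = np.column_stack(cols)
            m = search_metric(X[fit], y[fit], X[conf], y[conf])
            print(f"{name:9s} model: search metric = {m:.4f}")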

  7. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.

  8. Testing single point incremental forming molds for thermoforming operations

    NASA Astrophysics Data System (ADS)

    Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo

    2016-10-01

    Low-pressure polymer processing processes such as thermoforming or rotational molding use much simpler molds than high-pressure processes like injection molding. However, despite the low forces involved, mold manufacturing for these operations is still a very material-, energy- and time-consuming activity. The goal of the research is to develop and validate a method for manufacturing plastically formed sheet-metal molds by the single point incremental forming (SPIF) operation for thermoforming. Stewart-platform-based SPIF machines allow the forming of thick metal sheets, granting the required structural stiffness for the mold surface while keeping the short manufacturing lead time and low thermal inertia.

  9. Design optimization of phase-shifting point diffraction interferometer

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Li, Yanqiu

    2008-12-01

    The phase-shifting point diffraction interferometer (PS/PDI) is so far the most accurate measurement tool in at-wavelength interferometry of projection optics for extreme ultraviolet lithography (EUVL). The complicated interrelationships between the configuration parameters of the PS/PDI call for an optimization to achieve high accuracy. In this paper, a novel system-level modeling approach is proposed to optimize the parameters of a PS/PDI designed for a visible-light (λ=632.8 nm) proof-of-concept experiment. The optimal reference pinhole size is selected by modeling the pinhole spatial filtering effect using the Diffraction-Based Beam Propagation (BPR) module of CODE V and in-house software. The result shows that pinhole diameters ranging from 1.6 um to 2.2 um should be used in our PS/PDI experiment. The test window size and grating duty cycle optimization, which is based on a spatial-frequency-domain analysis of the PS/PDI, is conducted by modeling the entire PS/PDI system using the Physical Optics Propagation (POP) module of Zemax and in-house software. The optimal window size is approximately 62 um for a given window-pinhole separation of 63.3 um. The optimal duty cycle of the grating is calculated to be 83%, which yields the maximum fringe contrast of 0.879.

  10. Improving Small Signal Stability through Operating Point Adjustment

    SciTech Connect

    Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Mittelstadt, William; Hauer, John F.; Dagle, Jeffery E.

    2010-09-30

    ModeMeter techniques for real-time small-signal stability monitoring continue to mature, and more and more phasor measurements are available in power systems. It is now practical to bring modal information into real-time power system operation. This paper proposes to establish a procedure for Modal Analysis for Grid Operations (MANGO). Complementary to PSSs and other traditional modulation-based controls, MANGO aims to provide suggestions, such as increasing generation or decreasing load, for operators to mitigate low-frequency oscillations. Different from modulation-based control, the MANGO procedure proactively maintains adequate damping at all times, instead of reacting to disturbances when they occur. The effect of operating points on small-signal stability is presented in this paper. Implementation alongside existing operating procedures is discussed. Several approaches for modal sensitivity estimation are investigated to associate modal damping with operating parameters. The effectiveness of the MANGO procedure is confirmed through simulation studies of several test systems.

  11. Optimal Hedging Rule for Reservoir Refill Operation

    NASA Astrophysics Data System (ADS)

    Wan, W.; Zhao, J.; Lund, J. R.; Zhao, T.; Lei, X.; Wang, H.

    2015-12-01

    This paper develops an optimal reservoir Refill Hedging Rule (RHR) for combined water supply and flood operation using mathematical analysis. A two-stage model is developed to formulate the trade-off between operations for conservation benefit and flood damage in the reservoir refill season. Based on the probability distribution of the maximum refill water availability at the end of the second stage, three zones are characterized according to the relationship among storage capacity, expected storage buffer (ESB), and maximum safety excess discharge (MSED). The Karush-Kuhn-Tucker conditions of the model show that the optimality of the refill operation involves making the expected marginal loss of conservation benefit from unfilling (i.e., ending storage of refill period less than storage capacity) as nearly equal to the expected marginal flood damage from levee overtopping downstream as possible while maintaining all constraints. This principle follows and combines the hedging rules for water supply and flood management. A RHR curve is drawn analogously to water supply hedging and flood hedging rules, showing the trade-off between the two objectives. The release decision result has a linear relationship with the current water availability, implying the linearity of RHR for a wide range of water conservation functions (linear, concave, or convex). A demonstration case shows the impacts of factors. Larger downstream flood conveyance capacity and empty reservoir capacity allow a smaller current release and more water can be conserved. Economic indicators of conservation benefit and flood damage compete with each other on release, the greater economic importance of flood damage is, the more water should be released in the current stage, and vice versa. Below a critical value, improving forecasts yields less water release, but an opposing effect occurs beyond this critical value. Finally, the Danjiangkou Reservoir case study shows that the RHR together with a rolling

  12. Multiple tipping points and optimal repairing in interacting networks

    PubMed Central

    Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo

    2016-01-01

    Systems composed of many interacting dynamical networks—such as the human body with its biological networks or the global economic network consisting of regional clusters—often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two ‘forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803

  13. Multiple tipping points and optimal repairing in interacting networks

    NASA Astrophysics Data System (ADS)

    Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo

    2016-03-01

    Systems composed of many interacting dynamical networks--such as the human body with its biological networks or the global economic network consisting of regional clusters--often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two `forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model.

  14. Detector characterization, optimization, and operation for ACTPol

    NASA Astrophysics Data System (ADS)

    Grace, Emily Ann

    2016-01-01

    Measurements of the temperature anisotropies of the Cosmic Microwave Background (CMB) have provided the foundation for much of our current knowledge of cosmology. Observations of the polarization of the CMB have already begun to build on this foundation and promise to illuminate open cosmological questions regarding the first moments of the universe and the properties of dark energy. The primary CMB polarization signal contains the signature of early universe physics including the possible imprint of inflationary gravitational waves, while a secondary signal arises due to late-time interactions of CMB photons which encode information about the formation and evolution of structure in the universe. The Atacama Cosmology Telescope Polarimeter (ACTPol), located at an elevation of 5200 meters in Chile and currently in its third season of observing, is designed to probe these signals with measurements of the CMB in both temperature and polarization from arcminute to degree scales. To measure the faint CMB polarization signal, ACTPol employs large, kilo-pixel detector arrays of transition edge sensor (TES) bolometers, which are cooled to a 100 mK operating temperature with a dilution refrigerator. Three such arrays are currently deployed, two with sensitivity to 150 GHz radiation and one dichroic array with 90 GHz and 150 GHz sensitivity. The operation of these large, monolithic detector arrays presents a number of challenges for both assembly and characterization. This thesis describes the design and assembly of the ACTPol polarimeter arrays and outlines techniques for their rapid characterization. These methods are employed to optimize the design and operating conditions of the detectors, select wafers for deployment, and evaluate the baseline array performance. The results of the application of these techniques to wafers from all three ACTPol arrays is described, including discussion of the measured thermal properties and time constants. Finally, aspects of the

  15. Post-operative consequences of hemodynamic optimization.

    PubMed

    Lazkani, A; Lebuffe, G

    2016-12-01

    Hemodynamic optimization begins with a medical assessment to identify the high-risk patients. This stratification is needed to customize the choice of hemodynamic support that is best adapted to the patient's level of risk, integrating the use of the least invasive procedures. The macro-circulatory hemodynamic approach aims to maintain a balance between oxygen supply (DO2) and oxygen demand (VO2). Volume replacement plays a crucial role based on the titration of fluid boluses according to their effect on measured stroke volume or indices of preload dependency. Good function of the microcirculatory system is the best guarantee to achieve this goal. An assessment of the DO2/VO2 ratio is needed for guidance in critical situations where tissue hypoxia may occur. Overall, all of these strategies are based on objective criteria to guide vascular replacement and/or tissue oxygenation in order to improve the patient's post-operative course by decreasing morbidity and hospital stay.

  16. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.
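
    A stripped-down version of such an evolutionary tune search is sketched below (Python). The figure of merit is a stand-in that merely penalizes proximity of the fractional betatron tunes to low-order resonance lines; the actual study evaluates candidate working points with beam dynamics simulations, so everything here other than the basic selection-and-mutation loop is an assumption.

        import numpy as np

        rng = np.random.default_rng(4)

        def figure_of_merit(tunes):
            """Penalize closeness of (qx, qy) to resonance lines m*qx + n*qy = integer."""
            qx, qy = tunes
            penalty = 0.0
            for m in range(-3, 4):
                for n in range(-3, 4):
                    if (m, n) != (0, 0):
                        drive = m * qx + n * qy
                        penalty += 1.0 / (abs(drive - round(drive)) + 1e-3)
            return penalty

        # Minimal (mu + lambda) evolutionary loop over the fractional tune square.
        pop = rng.random((20, 2))
        for _ in range(100):
            scores = np.array([figure_of_merit(p) for p in pop])
            parents = pop[np.argsort(scores)[:5]]                      # keep the 5 best
            children = parents[rng.integers(0, 5, 15)] + rng.normal(0, 0.02, (15, 2))
            pop = np.vstack([parents, np.mod(children, 1.0)])

        best = min(pop, key=figure_of_merit)
        print("candidate working point (fractional tunes):", np.round(best, 3))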

  17. Optimization-based multiple-point geostatistics: A sparse way

    NASA Astrophysics Data System (ADS)

    Kalantari, Sadegh; Abdollahifard, Mohammad Javad

    2016-10-01

    In multiple-point simulation, the image should be synthesized consistently with the given training image and the hard conditioning data. Existing sequential simulation methods usually lead to error accumulation which is hardly manageable in later steps. Optimization-based methods are capable of handling such inconsistencies by iteratively refining the simulation grid. In this paper, the multiple-point stochastic simulation problem is formulated in an optimization-based framework using a sparse model. The sparse model allows each patch to be constructed as a superposition of a few atoms of a dictionary formed from training patterns, leading to a significant increase in the variability of the patches. To control the creativity of the model, a local histogram matching method is proposed. Furthermore, effective solutions are proposed for several issues that arise in multiple-point simulation. To handle hard conditioning data, a weighted matching pursuit method is developed. Moreover, a simple and efficient thresholding method is developed which allows working with categorical variables. The experiments show that the proposed method produces acceptable realizations in terms of pattern reproduction, increases the variability of the realizations, and properly handles numerous conditioning data.
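
    To make the dictionary-based patch model concrete, here is a minimal plain matching pursuit sketch: each patch is approximated by greedily picking a few dictionary atoms. The dictionary, patch size, and sparsity level are illustrative, and the paper's weighted variant for honoring hard conditioning data is not reproduced here.

      import numpy as np

      def matching_pursuit(y, D, n_atoms):
          """Greedy matching pursuit: approximate patch y as a sparse combination of
          dictionary atoms (columns of D, assumed L2-normalized)."""
          coeffs = np.zeros(D.shape[1])
          residual = y.astype(float).copy()
          for _ in range(n_atoms):
              corr = D.T @ residual                 # correlation of every atom with the residual
              j = np.argmax(np.abs(corr))
              coeffs[j] += corr[j]
              residual -= corr[j] * D[:, j]
          return coeffs, residual

      rng = np.random.default_rng(0)
      D = rng.normal(size=(64, 256))                # toy dictionary of flattened 8x8 training patches
      D /= np.linalg.norm(D, axis=0)
      y = D[:, 10] * 2.0 + D[:, 42] * -1.5          # a patch that truly is a 2-atom combination
      coeffs, r = matching_pursuit(y, D, n_atoms=4)
      print(np.nonzero(coeffs)[0], np.linalg.norm(r))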

  18. Applications of operational calculus: equations for the five-point rectangle and robust center point estimators

    SciTech Connect

    Silver, Gary L

    2009-01-01

    Equations for interpolating five data points in a rectangular array are seldom encountered in textbooks. This paper describes a new method that renders polynomial and exponential equations for the design. Operational center point estimators are often more resistant to the effects of an outlying datum than the mean.

  19. Optimal periodic control for spacecraft pointing and attitude determination

    NASA Technical Reports Server (NTRS)

    Pittelkau, Mark E.

    1993-01-01

    A new approach to autonomous magnetic roll/yaw control of polar-orbiting, nadir-pointing momentum bias spacecraft is considered as the baseline attitude control system for the next Tiros series. It is shown that the roll/yaw dynamics with magnetic control are periodically time varying. An optimal periodic control law is then developed. The control design features a state estimator that estimates attitude, attitude rate, and environmental torque disturbances from Earth sensor and sun sensor measurements; no gyros are needed. The state estimator doubles as a dynamic attitude determination and prediction function. In addition to improved performance, the optimal controller allows a much smaller momentum bias than would otherwise be necessary. Simulation results are given.

  20. Attitude Control Optimization for ROCSAT-2 Operation

    NASA Astrophysics Data System (ADS)

    Chern, Jeng-Shing; Wu, A.-M.

    The second satellite of the Republic of China is named ROCSAT-2. It is a small satellite with a total mass of 750 kg for remote sensing and scientific purposes. The Remote Sensing Instrument (RSI) has resolutions of 2 m for the panchromatic band and 8 m for the multi-spectral bands. It is mainly designed for disaster monitoring and rescue, environment and pollution monitoring, forest and agriculture planning, and city and country planning for Taiwan and its surrounding islands and oceans. In order to monitor the Taiwan area constantly over a long time, the orbit is designed to be sun-synchronous with 14 revolutions per day. The scientific payload is the Imager of Sprites and Upper Atmospheric Lightning (ISUAL). Since it is a small satellite, the RSI, ISUAL, and solar panel are all body-fixed. Consequently, the satellite has to maneuver as a whole body so that the RSI, the ISUAL, or the solar panel can point in the desired direction. When ROCSAT-2 rises above the horizon and catches sunlight, it has to maneuver to face the sun so that the battery can be charged. As soon as it flies over the Taiwan area, several maneuvers must be made to cover the whole area for the remote sensing mission. Since the swath of ROCSAT-2 is 24 km, four stripes are needed to form the mosaic of the Taiwan area, and usually four maneuvers are required to fulfill the mission in one flight path. The maneuver sequence is very important from the point of view of saving energy; however, in some cases energy may have to be sacrificed in order to obtain good remote sensing data over a particular specified ground region. After that mission, the solar panel has to face the sun again. Then, when ROCSAT-2 sets below the horizon, it has to maneuver to point the ISUAL in the specified direction for the sprite imaging mission, the direction where scientists predict sprites are most likely to occur. A further maneuver may be required for downloading onboard data. When ROCSAT-2 rises above the horizon again, it completes

  1. Matrix product density operators: Renormalization fixed points and boundary theories

    NASA Astrophysics Data System (ADS)

    Cirac, J. I.; Pérez-García, D.; Schuch, N.; Verstraete, F.

    2017-03-01

    We consider the tensors generating matrix product states and density operators in a spin chain. For pure states, we revise the renormalization procedure introduced in (Verstraete et al., 2005) and characterize the tensors corresponding to the fixed points. We relate them to the states possessing zero correlation length, saturation of the area law, as well as to those which generate ground states of local and commuting Hamiltonians. For mixed states, we introduce the concept of renormalization fixed points and characterize the corresponding tensors. We also relate them to concepts like finite correlation length and saturation of the area law, as well as to the tensors which generate Gibbs states of local and commuting Hamiltonians. One of the main results of this work is that the resulting fixed points can be associated to the boundary theories of two-dimensional topological states, through the bulk-boundary correspondence introduced in (Cirac et al., 2011).

  2. PIV study of the wake of a model wind turbine transitioning between operating set points

    NASA Astrophysics Data System (ADS)

    Houck, Dan; Cowen, Edwin (Todd)

    2016-11-01

    Wind turbines are ideally operated at their most efficient tip speed ratio for a given wind speed. There is increasing interest, however, in operating turbines at other set points to increase the overall power production of a wind farm. Specifically, Goit and Meyers (2015) used LES to examine a wind farm optimized by unsteady operation of its turbines. In this study, the wake of a model wind turbine is measured in a water channel using PIV. We measure the wake response to a change in the operational set point of the model turbine, e.g., from low to high tip speed ratio or vice versa, to examine how it might influence a downwind turbine. A modified torque transducer after Kang et al. (2010) is used to calibrate the in situ voltage measured across a resistance at the model turbine's generator against the torque on the generator. Changes in operational set point are made by changing the resistance or the flow speed, which changes the rotation rate measured by an encoder. Single-camera PIV on vertical planes reveals statistics of the wake at various distances downstream as the turbine transitions from one set point to another. From these measurements, we infer how the unsteady operation of a turbine may affect the performance of a downwind turbine by altering its incoming flow. Funded by the National Science Foundation and the Atkinson Center for a Sustainable Future.

  3. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-05

    We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis.

  4. Optimization of the NIF Polar-Drive Ignition Point Design

    NASA Astrophysics Data System (ADS)

    Collins, T. J. B.; Delettrez, J. A.; Marozas, J. A.; Anderson, K. S.; McKenty, P. W.; Shvydky, A.; Cao, D.; Chenhall, J.; Prochaska, A.; Moses, G.

    2013-10-01

    Polar drive (PD) allows one to conduct direct-drive-ignition experiments at the National Ignition Facility while the facility is configured for x-ray drive. A PD-ignition design has been developed achieving high gain in simulations including single- and multibeam nonuniformities, and ice and outer-surface roughness. This design was optimized with Telios to reduce the in-flight aspect ratio (IFAR) and implosion speed, increasing target stability while maintaining moderately high thermonuclear gains. With the recent advent of new numerical models treating the effects of nonlocal thermal transport and cross-beam energy transfer, the design has undergone a re-evaluation. Results describing the effects of these processes on the drive and implosion uniformity of the design and the overall target gain will be described. Optimization of both polar and azimuthal beam pointing angles has also been investigated using the optimizer Telios. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  5. Optimization of wastewater treatment plant operation for greenhouse gas mitigation.

    PubMed

    Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C

    2015-11-01

    This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes the three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy the three different objectives. The complex nonlinear optimization problem was solved using the Nelder-Mead Simplex optimization algorithm. A sensitivity analysis was performed to identify the operational parameters most influential on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation.
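
    The abstract combines three objectives into one integrated index and minimizes it with the Nelder-Mead simplex. A minimal sketch of that pattern is shown below; the toy plant_performance surrogate, the two operational parameters, and the objective weights are assumptions standing in for the ASMN_G model and the paper's actual index.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical plant surrogate: maps operational parameters (e.g. aeration setpoint,
      # recycle ratio) to (GHG emissions, operating cost, effluent quality index).
      def plant_performance(x):
          aeration, recycle = x
          ghg      = 1.0 + 0.8 * aeration**2 + 0.1 / (0.1 + recycle)
          cost     = 0.5 + 0.6 * aeration + 0.3 * recycle
          effluent = 2.0 / (0.2 + aeration) + 0.5 * recycle
          return ghg, cost, effluent

      weights = np.array([0.4, 0.3, 0.3])       # relative importance of the three objectives (assumed)

      def integrated_index(x):
          return float(weights @ np.array(plant_performance(x)))

      res = minimize(integrated_index, x0=[1.0, 0.5], method="Nelder-Mead")
      print(res.x, res.fun)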

  6. Research on automatic optimization of ground control points in image geometric rectification based on Voronoi diagram

    NASA Astrophysics Data System (ADS)

    Li, Ying; Cheng, Bo

    2009-10-01

    With the development of remote sensing satellites, the quantity of remote sensing image data is increasing tremendously, which creates a huge workload for image geometric rectification through manual ground control point (GCP) selection. A GCP database is one of the effective ways to cut down on manual operation. The GCPs loaded from a database are generally redundant, which may slow down the rectification. How to automatically optimize these ground control points is therefore a problem that needs to be resolved urgently. Based on the basic theory of geometric rectification and the principles of GCP selection, this paper reviews existing methods for automatic optimization of GCPs and puts forward a new method, based on the Voronoi diagram, that filters the redundant ground control points without manual subjectivity for better accuracy. The paper is organized as follows. First, it clarifies the basic theory of polynomial geometric rectification of remote sensing images and the algorithm for computing the GCP error. Second, it introduces the Voronoi diagram, including its origin, development, and characteristics, and especially its construction process. Third, considering the deficiencies of the existing methods for automatic optimization of GCPs, the paper presents the idea of applying the Voronoi diagram to filter GCPs in order to achieve automatic optimization. In doing so, it advances the concept of a single GCP's importance value based on the Voronoi diagram. Then, by integrating the GCP error and the GCP's importance value, the paper gives the theory and the workflow of automatic GCP optimization, together with an example application of the method. The conclusion points out the advantages of automatic optimization of GCPs based on the Voronoi diagram.
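
    The abstract introduces a GCP "importance value" based on the Voronoi diagram but does not spell out its definition. The sketch below uses the area of each GCP's Voronoi cell as one plausible importance measure and combines it with a per-GCP rectification error to rank points for removal; the scoring rule and the toy data are assumptions for illustration only.

      import numpy as np
      from scipy.spatial import Voronoi

      def importance_by_voronoi(gcp_xy):
          """Assign each GCP an importance value: here, the area of its Voronoi cell
          (unbounded border cells get infinite importance so they are never dropped)."""
          vor = Voronoi(gcp_xy)
          areas = np.full(len(gcp_xy), np.inf)
          for i, region_idx in enumerate(vor.point_region):
              region = vor.regions[region_idx]
              if -1 in region or len(region) == 0:
                  continue                              # open cell at the convex-hull border
              poly = vor.vertices[region]
              x, y = poly[:, 0], poly[:, 1]
              areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
          return areas

      rng = np.random.default_rng(3)
      gcps = rng.random((40, 2)) * 100.0                # toy GCP image coordinates
      errors = rng.random(40)                           # toy rectification residual per GCP
      score = errors / importance_by_voronoi(gcps)      # drop large-error, small-cell points first
      keep = np.argsort(score)[:30]
      print(sorted(keep))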

  7. Classification and uptake of reservoir operation optimization methods

    NASA Astrophysics Data System (ADS)

    Dobson, Barnaby; Pianosi, Francesca; Wagener, Thorsten

    2016-04-01

    Reservoir operation optimization algorithms aim to improve the quality of reservoir release and transfer decisions. They achieve this by creating and optimizing the reservoir operating policy; a function that returns decisions based on the current system state. A range of mathematical optimization algorithms and techniques has been applied to the reservoir operation problem of policy optimization. In this work, we propose a classification of reservoir optimization approaches by focusing on the formulation of the water management problem rather than the optimization algorithm type. We believe that decision makers and operators will find it easier to navigate a classification system based on the problem characteristics, something they can clearly define, rather than the optimization algorithm. Part of this study includes an investigation regarding the extent of algorithm uptake and the possible reasons that limit real world application.

  8. Process Parameters Optimization in Single Point Incremental Forming

    NASA Astrophysics Data System (ADS)

    Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh

    2016-04-01

    This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array selected on the basis of DOF. The tests have been carried out on vertical machining center (DMC70V); using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius, three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered process responses. The influential process parameters for the formability and surface roughness have been identified with the help of statistical tool (response table, main effect plot and ANOVA). The parameter that has the utmost influence on formability and surface roughness is lubrication. In the case of formability, lubrication followed by the tool rotational speed, feed rate, sheet thickness, step size and tool radius have the influence in descending order. Whereas in surface roughness, lubrication followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed have the influence in descending order. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice and the value of wall angle and surface roughness were found to be 85.76° and 1.15 µm respectively.

  9. Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Tun, Min Thaw; Sakaguchi, Daisaku

    2016-06-01

    A high pressure ratio and a wide operating range are required for a turbocharger in diesel engines. A recirculation-flow-type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves, on a suction pipe and on the shroud casing wall, are connected by means of an annular passage, and a stable recirculation flow is formed at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation-flow-type casing is modified and optimized by using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization results give an optimized casing design that improves adiabatic efficiency over a wide range of operating flow rates. A sensitivity analysis of efficiency with respect to the design parameters has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate for which the increment of entropy rise at the grooves and in the passages of the rotating impeller is minimized.

  10. Solar array pointing control for the International Space Station electrical power subsystem to optimize power delivery

    SciTech Connect

    Hill, R.C.

    1998-07-01

    Precise orientation control of the International Space Station (ISS) Electrical Power System (EPS) photovoltaic (PV) solar arrays is required for a number of reasons, including the optimization of power delivery to ISS system loads and payloads. To maximize power generation and delivery in general, the PV arrays are pointed directly at the sun with some allowance for inaccuracies in determination of where to point and in the actuation of pointing the PV arrays. Control of PV array orientation in this sun pointing mode is performed automatically by onboard hardware and software. During certain conditions, maximum power cannot be generated in automatic sun tracking mode due to shadowing of the PV arrays cast by other ISS structures, primarily adjacent PV arrays. In order to maximize the power generated, the PV arrays must be pointed away from the ideal sun pointing targets to reduce the amount of shadowing. The amount of off-pointing to maximize power is a function of many parameters such as the physical configuration of the ISS structures during the assembly timeframe, the solar beta angle and vehicle attitude. Thus the off-pointing cannot be controlled automatically and must be determined by ground operators. This paper presents an overview of ISS PV array orientation control, PV array power performance under shadowed and off-pointing conditions, and a methodology to maximize power under those same conditions.

  11. Assembly, checkout, and operation optimization analysis technique for complex systems

    NASA Technical Reports Server (NTRS)

    1968-01-01

    Computerized simulation model of a launch vehicle/ground support equipment system optimizes assembly, checkout, and operation of the system. The model is used to determine performance parameters in three phases or modes - /1/ systems optimization techniques, /2/ operation analysis methodology, and /3/ systems effectiveness analysis technique.

  12. Charcoal bed operation for optimal organic carbon removal

    SciTech Connect

    Merritt, C.M.; Scala, F.R.

    1995-05-01

    Historically, evaporation, reverse osmosis, or charcoal-demineralizer systems have been used to remove impurities in liquid radwaste processing systems. At Nine Mile Point, we recently replaced our evaporators with charcoal-demineralizer systems to purify floor drain water. A comparison of the evaporator to the charcoal-demineralizer system has shown that the charcoal-demineralizer system is more effective in organic carbon removal. We also show the performance data of the Granulated Activated Charcoal (GAC) vessel as a mechanical filter. Actual data show that frequent backflushing and controlled flow rates through the GAC vessel dramatically increase Total Organic Carbon (TOC) removal efficiency. Recommendations are provided for operating the GAC vessel to ensure optimal performance.

  13. Optimal experimental design with the sigma point method.

    PubMed

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

    Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty, expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by iteratively minimising the parameter variances. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit on OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model.
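
    For readers unfamiliar with the SP method, the following is a minimal sketch of the underlying unscented transform: sigma points drawn from the current parameter mean and covariance are pushed through the model and recombined into an output mean and covariance without linearization. The scaling parameters and the toy two-parameter model are illustrative, not those of the biological model in the paper.

      import numpy as np

      def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
          """Standard scaled sigma points and weights for a Gaussian parameter estimate."""
          n = mean.size
          lam = alpha**2 * (n + kappa) - n
          S = np.linalg.cholesky((n + lam) * cov)
          pts = np.vstack([mean, mean + S.T, mean - S.T])       # 2n + 1 points
          wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
          wc = wm.copy()
          wm[0] = lam / (n + lam)
          wc[0] = wm[0] + (1 - alpha**2 + beta)
          return pts, wm, wc

      def propagate(f, mean, cov):
          """Approximate mean/covariance of f(theta) without linearizing f."""
          pts, wm, wc = sigma_points(mean, cov)
          y = np.array([f(p) for p in pts])
          y_mean = wm @ y
          y_cov = sum(w * np.outer(d, d) for w, d in zip(wc, y - y_mean))
          return y_mean, y_cov

      # toy model output as a nonlinear function of two uncertain parameters
      m, C = np.array([1.0, 0.5]), np.diag([0.04, 0.01])
      print(propagate(lambda p: np.array([p[0] * np.exp(-p[1] * 2.0)]), m, C))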

  14. Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.

    2013-01-01

    Collinear Earth-Moon libration points have emerged as locations with immediate applications. These libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to the relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission orbit as a platform, the fundamental behavior of the trajectories is explored using Poincare maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.

  15. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  16. Exact confidence interval estimation for the Youden index and its corresponding optimal cut-point.

    PubMed

    Lai, Chin-Ying; Tian, Lili; Schisterman, Enrique F

    2012-05-01

    In diagnostic studies, the receiver operating characteristic (ROC) curve and the area under the ROC curve are important tools in assessing the utility of biomarkers in discriminating between non-diseased and diseased populations. For classifying a patient into the non-diseased or diseased group, an optimal cut-point of a continuous biomarker is desirable. Youden's index (J), defined as the maximum vertical distance between the ROC curve and the diagonal line, serves as another global measure of overall diagnostic accuracy and can be used in choosing an optimal cut-point. The proposed method makes use of a generalized approach to estimate the confidence intervals of the Youden index and its corresponding optimal cut-point. Simulation results are provided for comparing the coverage probabilities of the confidence intervals based on the proposed method with those based on the large sample method and the parametric bootstrap method. Finally, the proposed method is illustrated via an application to a data set from a study on Duchenne muscular dystrophy (DMD).
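
    For concreteness, Youden's index for a continuous biomarker can equivalently be written as J = max over cut-points c of {sensitivity(c) + specificity(c) - 1}, and the maximizing c is the optimal cut-point. The sketch below computes the empirical J and cut-point from toy samples; it illustrates only the point estimate, not the paper's generalized confidence-interval construction.

      import numpy as np

      def youden(controls, cases):
          """Empirical Youden index J and its optimal cut-point for a continuous biomarker."""
          cuts = np.unique(np.concatenate([controls, cases]))
          # sensitivity: fraction of cases above the cut; specificity: fraction of controls at or below it
          sens = np.array([(cases > c).mean() for c in cuts])
          spec = np.array([(controls <= c).mean() for c in cuts])
          j = sens + spec - 1.0
          best = j.argmax()
          return j[best], cuts[best]

      rng = np.random.default_rng(0)
      J, cut = youden(rng.normal(0, 1, 200), rng.normal(1.5, 1, 200))   # toy non-diseased vs diseased samples
      print(J, cut)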

  17. Applying Dynamical Systems Theory to Optimize Libration Point Orbit Stationkeeping Maneuvers for WIND

    NASA Technical Reports Server (NTRS)

    Brown, Jonathan M.; Petersen, Jeremy D.

    2014-01-01

    NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or (delta)v along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the (delta)v in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.

  18. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds

    PubMed Central

    Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds. PMID:27974884

  19. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds.

    PubMed

    Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds.
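
    For contrast with the discrete-coded MDDE described above, the snippet below runs the classic continuous DE of Storn and Price via SciPy on a standard test function; the bounds, control parameters, and test function are illustrative and unrelated to the point-cloud experiments in the paper.

      from scipy.optimize import differential_evolution

      def rosenbrock(x):
          # Classic continuous benchmark used here only to exercise the optimizer.
          return sum(100.0 * (x[i + 1] - x[i]**2)**2 + (1 - x[i])**2 for i in range(len(x) - 1))

      result = differential_evolution(rosenbrock, bounds=[(-5, 5)] * 4,
                                      mutation=(0.5, 1.0), recombination=0.7, seed=0)
      print(result.x, result.fun)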

  20. 24 CFR 902.47 - Management operations portion of total PHAS points.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    24 CFR 902.47, Management operations portion of total PHAS points: Of the total 100 points available for a PHAS score, a PHA may receive up to 30 points based on the Management Operations Indicator....

  1. 75 FR 3856 - Drawbridge Operation Regulations; Great Egg Harbor Bay, Between Beesleys Point and Somers Point, NJ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-25

    ... SECURITY Coast Guard 33 CFR Part 117 RIN 1625-AA09 Drawbridge Operation Regulations; Great Egg Harbor Bay... Bridge over Great Egg Harbor Bay, at mile 3.5, between Beesleys Point and Somers Point, NJ. This rule... notice of proposed rulemaking (NPRM) entitled Drawbridge Operation Regulations; Great Egg Harbor...

  2. Sliding control of pointing and tracking with operator spline estimation

    NASA Technical Reports Server (NTRS)

    Dwyer, Thomas A. W., III; Fakhreddine, Karray; Kim, Jinho

    1989-01-01

    It is shown how a variable structure control technique could be implemented to achieve precise pointing and good tracking of a deformable structure subject to fast slewing maneuvers. The correction torque that has to be applied to the structure is based on estimates of upper bounds on the model errors. For a rapid rotation of the deformable structure, the elastic response can be modeled by oscillators driven by angular acceleration, and where stiffness and damping coefficients are also angular velocity and acceleration dependent. By transforming this slew-driven elastic dynamics into bilinear form (by regarding the vector made up of the angular velocity, squared angular velocity and angular acceleration components, which appear in the coefficients, as the input to the deformation dynamics), an operator spline can be constructed that gives a low order estimate of the induced disturbance. Moreover, a worst case error bound between the estimated deformation and the unknown exact deformation is also generated, which can be used where required in the sliding control correction.

  3. Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun

    2015-10-01

    Beam pointing angle (BPA) is one of the key parameters that affect the operational performance of a laser Doppler velocimetry (LDV) system. Considering both velocity sensitivity and echo power, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming that the mounting error is within ±1.0 deg and that the reflectivity and roughness vary between scenarios, the optimized BPA is obtained in the range from 29 to 43 deg. Velocity sensitivity is then in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA is greater than 53.49% of that at 0 deg. Laboratory experiments with a rotating table were conducted at BPAs of 10, 35, and 66 deg, and the results coincide with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg was conducted in comparison with a microwave radar (accuracy of ±0.5% full scale output). The root-mean-square error of the LDV's results is smaller than the Microstar II's, 0.0202 and 0.1495 m/s for the LDV and Microstar II, respectively, and the mean velocity discrepancy is 0.032 m/s. It is also shown that, with the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.

  4. Synergy optimization and operation management on syndicate complementary knowledge cooperation

    NASA Astrophysics Data System (ADS)

    Tu, Kai-Jan

    2014-10-01

    The number of multi-enterprise knowledge cooperations has grown steadily as a result of global innovation competition. In this article I have conducted research based on optimization and operation studies, and conclude that synergy management is an effective means to break through various management barriers and to resolve the chaotic dynamics of cooperation. Enterprises must communicate a system vision and access complementary knowledge. These are crucial considerations for enterprises to exert their optimization and operation knowledge cooperation synergy and meet global marketing challenges.

  5. Operational Optimization of Large-Scale Parallel-Unit SWRO Desalination Plant Using Differential Evolution Algorithm

    PubMed Central

    Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find out a scheduling of operation to minimize the total running cost when the change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180

  6. Communicating to Learn: Infants' Pointing Gestures Result in Optimal Learning.

    PubMed

    Lucca, Kelsey; Wilbourn, Makeba Parramore

    2016-12-29

    Infants' pointing gestures are a critical predictor of early vocabulary size. However, it remains unknown precisely how pointing relates to word learning. The current study addressed this question in a sample of 108 infants, testing one mechanism by which infants' pointing may influence their learning. In Study 1, 18-month-olds, but not 12-month-olds, more readily mapped labels to objects if they had first pointed toward those objects than if they had referenced those objects via other communicative behaviors, such as reaching or gaze alternations. In Study 2, when an experimenter labeled an object that the infant had not pointed to, 18-month-olds' pointing was no longer related to enhanced fast mapping. These findings suggest that infants' pointing gestures reflect a readiness and, potentially, a desire to learn.

  7. Improvements in floating point addition/subtraction operations

    DOEpatents

    Farmwald, P.M.

    1984-02-24

    Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.

  8. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  9. Loading concepts for Hoover Powerplant to optimize plant operating efficiency

    SciTech Connect

    Stitt, S.C.

    1983-08-01

    Plant efficiency gains that could be realized at Hoover Powerplant by the use of an algorithm to optimize plant efficiency are given. Comparisons are shown between the present plant operating conditions modeled on a digital computer, and the plant with the proposed unified bus operating under control of a GELA (Generator Efficiency Loading Algorithm) system. The basic concepts of that algorithm are given.

  10. Nickel-Cadmium Battery Operation Management Optimization Using Robust Design

    NASA Technical Reports Server (NTRS)

    Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador

    1996-01-01

    In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries was very important in order to maintain nominal space flight battery performance. The optimization of existing flight battery operational performance was viewed as a new kind of application for Taguchi Methods.

  11. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Zhao, Yi-Yuan

    1997-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate one-engine-inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, continued takeoff (CTO), rejected takeoff (RTO), balked landing (BL), and continued landing (CL) are investigated for both vertical-takeoff-and-landing (VTOL) and short-takeoff-and-landing (STOL) terminal-area operations. The formulation of the non-linear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  12. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  13. Optimizing post-operative Crohn's disease treatment.

    PubMed

    Domènech, Eugeni; Mañosa, Míriam; Lobatón, Triana; Cabré, Eduard

    2014-01-01

    Despite the availability of biological drugs and the widespread and earlier use of immunosuppressants, intestinal resection remains necessary in almost half of the patients with Crohn's disease. The development of new mucosal lesions in previously unaffected intestinal segments (a phenomenon known as post-operative recurrence, POR) occurs within the first year in up to 80% of patients if no preventive measure is started soon after resectional surgery, leading to clinical manifestations (clinical recurrence) and even requiring new intestinal resection (surgical recurrence) in some patients. That is the reason why endoscopic monitoring has been recommended within 6 to 12 months after surgery. Active smoking is the only indisputable risk factor for early POR development. Among several evaluated drugs, only thiopurine and anti-tumor necrosis factor therapy seem to be effective and feasible in the long term, both for preventing and even treating recurrent lesions, at least in a proportion of patients. However, to date, it is not clear which patients should start with one or the other drug right after surgery. It is also not well established how, and how often, POR should be assessed in patients with a normal ileocolonoscopy within the first 12 months.

  14. Deriving Optimal Operational Rules for Mitigating Inter-area Oscillations

    SciTech Connect

    Diao, Ruisheng; Huang, Zhenyu; Zhou, Ning; Chen, Yousu; Tuffner, Francis K.; Fuller, Jason C.; Jin, Shuangshuang; Dagle, Jeffery E.

    2011-05-23

    This paper introduces a new method to mitigate inter-area oscillations of a large-scale interconnected power system by means of generation re-dispatch. The optimal operational control procedures are derived as the shortest distance from the current operating condition to a desired damping ratio of the oscillation mode, achieved by adjusting generator outputs. A sensitivity-based method is used to select the most effective generators for generation re-dispatch, and a decision tree is trained to approximate the security boundary in a space characterized by the selected generators. The optimal operational rules can be found by solving an optimization problem in which the boundary constraints are provided by the decision tree rules. The method is tested on a Western Electricity Coordinating Council (WECC) 179-bus simplified network model, and simulation results demonstrate the concept and show promise for application in real-time operation.
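
    A hedged sketch of the decision-tree step is given below: candidate re-dispatch patterns for a few selected generators are labeled secure or insecure against a damping-ratio threshold, a small tree is fit, and its printed split thresholds play the role of boundary constraints for the re-dispatch optimization. The synthetic damping model, the 3% threshold, and the two-generator space are assumptions, not the WECC 179-bus results.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)

      # Synthetic stand-in: damping ratio (%) as an unknown function of re-dispatch on two
      # selected generators (MW deviations from the current operating condition).
      dispatch = rng.uniform(-200, 200, size=(2000, 2))
      damping = 3.5 + 0.004 * dispatch[:, 0] - 0.006 * dispatch[:, 1] + rng.normal(0, 0.1, 2000)
      secure = (damping >= 3.0).astype(int)          # label: 1 if the mode is adequately damped

      tree = DecisionTreeClassifier(max_depth=3).fit(dispatch, secure)
      print(export_text(tree, feature_names=["dP_gen_A", "dP_gen_B"]))

      # The thresholds printed above delimit an approximate security boundary; the re-dispatch
      # optimization then searches for the smallest adjustment that lands in a 'secure' leaf.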

  15. 76 FR 60733 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-30

    ... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY AGENCY... the Smith Point Bridge, 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. The.... SUPPLEMENTARY INFORMATION: The Smith Point Bridge, across Narrow Bay, mile 6.1, between Smith Point and...

  16. OPTIMIZATION OF THE PHASE ADVANCE BETWEEN RHIC INTERACTION POINTS.

    SciTech Connect

    TOMAS, R.; FISCHER, W.

    2005-05-16

    The authors consider a scenario of having two identical Interaction Points (IPs) in the Relativistic Heavy Ion Collider (RHIC). The strengths of beam-beam resonances strongly depend on the phase advance between these two IPs, and therefore certain phase advances could improve beam lifetime and luminosity. The authors compute the dynamic aperture (DA) as a function of the phase advance between these IPs to find the optimum settings. The beam-beam interaction is treated in the weak-strong approximation and a non-linear model of the lattice is used. For the current RHIC proton working point (0.69, 0.685) [1] the design lattice is found to have the optimum phase advance. However, this is not the case for other working points.

  17. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  18. Optimal Compression of Floating-Point FITS Images

    NASA Astrophysics Data System (ADS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2010-12-01

    Lossless compression (e.g., with GZIP) of floating-point format astronomical FITS images is ineffective and typically only reduces the file size by 10% to 30%. We describe a much more effective compression method that is supported by the publicly available fpack and funpack FITS image compression utilities that can compress floating point images by a factor of 10 without loss of significant scientific precision. A “subtractive dithering” technique is described which permits coarser quantization (and thus higher compression) than is possible with simple scaling methods.
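
    As a rough illustration of the subtractive dithering idea (not the actual fpack implementation), the snippet below quantizes floating-point pixels with a per-pixel uniform offset that is subtracted again on restoration, so the quantization error stays bounded without the restored values snapping onto a fixed grid. The quantization step and the toy data are assumptions; in practice the offsets must be reproducible at decompression, e.g. from a stored seed.

      import numpy as np

      rng = np.random.default_rng(1)
      pixels = rng.normal(loc=100.0, scale=5.0, size=8).astype(np.float32)  # toy floating-point image data

      q = 0.5                                  # quantization step, e.g. a fraction of the noise sigma (assumed)
      dither = rng.random(pixels.size)         # one uniform [0,1) offset per pixel

      quantized = np.round(pixels / q + dither).astype(np.int32)  # integers, now highly compressible (e.g. Rice coding)
      restored  = (quantized - dither) * q                        # subtract the same dither when decompressing

      print(np.abs(restored - pixels).max())   # error bounded by ~q/2, not biased toward grid values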

  19. Point-process principal components analysis via geometric optimization.

    PubMed

    Solo, Victor; Pasha, Syed Ahmed

    2013-01-01

    There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach.

  20. Application of trajectory optimization principles to minimize aircraft operating costs

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Morello, S. A.; Erzberger, H.

    1979-01-01

    This paper summarizes various applications of trajectory optimization principles that have been or are being devised by both government and industrial researchers to minimize aircraft direct operating costs (DOC). These costs (time and fuel) are computed for aircraft constrained to fly over a fixed range. Optimization theory is briefly outlined, and specific algorithms which have resulted from application of this theory are described. Typical results which demonstrate the use of these algorithms and the potential savings they can produce are given. Finally, the need for further trajectory optimization research is presented.

  1. Energy-optimal programming and scheduling of the manufacturing operations

    NASA Astrophysics Data System (ADS)

    Badea, N.; Frumuşanu, G.; Epureanu, A.

    2016-08-01

    The shop floor energy system covers the energy consumed by both the air conditioning and the manufacturing processes. At the same time, most of the energy consumed in manufacturing processes is converted into heat released into the shop floor interior, which has a significant influence on the microclimate. Both of these components of the energy consumption have a time variation that can be realistically assessed. Moreover, the consumed energy decisively determines the environmental sustainability of the manufacturing operation, while the expenditure for running the shop floor energy system is a significant component of the manufacturing operations cost. Last but not least, the energy consumption can be fundamentally influenced by proper programming and scheduling of the manufacturing operations. In this paper, we present a method for modeling and energy-optimal programming and scheduling of manufacturing operations. For this purpose, we first identified two optimization targets, namely environmental sustainability and economic efficiency. Then, we defined three optimization criteria which can assess the degree to which these targets are achieved. Finally, we modeled the relationship between the optimization criteria and the programming and scheduling parameters. In this way, it is revealed that by adjusting these parameters one can significantly improve the sustainability and efficiency of manufacturing operations. A numerical simulation has proved the feasibility and efficiency of the proposed method.

  2. Optimizing Reservoir Operation to Adapt to the Climate Change

    NASA Astrophysics Data System (ADS)

    Madadgar, S.; Jung, I.; Moradkhani, H.

    2010-12-01

    Climate change and upcoming variation in flood timing necessitate the adaptation of the current rule curves developed for the operation of water reservoirs so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing some possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either the designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model, including the standard operation policy and a global optimization method, tunes the current rule curve for 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project the streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and for the expected value of the ensemble, is conducted, and performance is assessed using statistical indices including reliability, resilience, vulnerability, and sustainability.

  3. A fixed point theorem for certain operator valued maps

    NASA Technical Reports Server (NTRS)

    Brown, D. R.; Omalley, M. J.

    1978-01-01

    In this paper, we develop a family of Neuberger-like results to find points z ∈ H satisfying L(z)z = z and P(z) = z. This family includes Neuberger's theorem and has the additional property that most of the sequences q_n converge to idempotent elements of B_1(H).

  4. 75 FR 3497 - Entergy Nuclear Operations, Inc., Entergy Nuclear Indian Point 2, LLC, Entergy Nuclear Indian...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-21

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc., Entergy Nuclear Indian Point 2, LLC, Entergy Nuclear Indian Point 3, LLC,: Indian Point Nuclear Generating Unit Nos. 2 and 3; Notice of Consideration of Issuance of Amendment to Facility Operating License...

  5. Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization

    NASA Astrophysics Data System (ADS)

    Civit Sabate, Carles

    In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The facility on which the optimization is based is the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks provide the ability to decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help to approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs, and it provides a tool for the future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied using simulation with historical data.
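
    A minimal linear-programming sketch in the spirit of the thesis is shown below: chiller production is scheduled against a time-varying electricity price while the tank balance stays within its capacity. The prices, loads, capacities, and the omission of an end-of-day storage target are all illustrative assumptions, not the UC Irvine plant model.

      import numpy as np
      from scipy.optimize import linprog

      # Hourly electricity price and chilled-water demand -- illustrative numbers only.
      price  = np.array([0.08] * 8 + [0.20] * 10 + [0.08] * 6)   # cheap at night, expensive by day
      demand = np.array([200] * 8 + [600] * 10 + [300] * 6)      # cooling load per hour
      T = len(price)

      p_max, cap, s0 = 800.0, 3000.0, 1500.0    # chiller capacity, tank capacity, initial storage (assumed)

      # Decision variables: chiller production p_t for each hour.  Storage evolves as
      #   s_t = s0 + sum_{k<=t} (p_k - demand_k)  and must stay within [0, cap].
      L = np.tril(np.ones((T, T)))              # lower-triangular cumulative-sum operator
      A_ub = np.vstack([L, -L])                 # encodes  s_t <= cap  and  s_t >= 0
      b_ub = np.concatenate([cap - s0 + np.cumsum(demand),
                             s0 - np.cumsum(demand)])
      res = linprog(c=price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * T, method="highs")
      print("optimal daily cost:", res.fun)
      print("hourly chiller schedule:", np.round(res.x, 1))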

  6. Approximating stationary points of stochastic optimization problems in Banach space

    NASA Astrophysics Data System (ADS)

    Balaji, Ramamurthy; Xu, Huifu

    2008-11-01

    In this paper, we present a uniform strong law of large numbers for random set-valued mappings in separable Banach space and apply it to analyze the sample average approximation of Clarke stationary points of a nonsmooth one stage stochastic minimization problem in separable Banach space. Moreover, under Hausdorff continuity, we show that with probability approaching one exponentially fast with the increase of sample size, the sample average of a convex compact set-valued mapping converges to its expected value uniformly. The result is used to establish exponential convergence of stationary sequence under some metric regularity conditions.

  7. Optimization of block-floating-point realizations for digital controllers with finite-word-length considerations.

    PubMed

    Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian

    2003-01-01

    The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation resulting from using a finite-word-length (FWL) block-floating-point representation scheme was analyzed. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and the precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced, and a method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.

  8. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all of the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
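
    The quantize-with-dither step described above can be illustrated with a short sketch. This is not the FITS tiled-image or Rice implementation; it only shows, on an invented nearly flat background, why subtracting the same random dither on decompression removes the systematic bias that plain rounding introduces.

```python
"""Sketch of quantization with subtractive dithering, as an illustration of
the idea described above (not the FITS/Rice implementation)."""
import numpy as np

rng = np.random.default_rng(0)
# A nearly flat, faint background: the case where plain rounding introduces bias.
image = np.full((64, 64), 100.23) + 0.02 * rng.standard_normal((64, 64))

q = 0.5                                   # quantization step (assumed)
dither = rng.random(image.shape)          # uniform offsets; only a seed would need storing

quantized = np.round(image / q + dither).astype(np.int32)   # lossy integer representation
restored = (quantized - dither) * q                         # subtract the same dither back

plain = np.round(image / q) * q           # quantization without dithering
print("mean bias with dithering   :", abs((restored - image).mean()))
print("mean bias without dithering:", abs((plain - image).mean()))
```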

  9. Trajectory optimization for intra-operative nuclear tomographic imaging.

    PubMed

    Vogel, Jakob; Lasser, Tobias; Gardiazabal, José; Navab, Nassir

    2013-10-01

    Diagnostic nuclear imaging modalities like SPECT typically employ gantries to ensure a densely sampled geometry of detectors in order to keep the inverse problem of tomographic reconstruction as well-posed as possible. In an intra-operative setting with mobile freehand detectors the situation changes significantly, and having an optimal detector trajectory during acquisition becomes critical. In this paper we propose an incremental optimization method based on the numerical condition of the system matrix of the underlying iterative reconstruction method to calculate optimal detector positions during acquisition in real-time. The performance of this approach is evaluated using simulations. A first experiment on a phantom using a robot-controlled intra-operative SPECT-like setup demonstrates the feasibility of the approach.
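
    The incremental criterion described above, choosing the next detector pose so that the numerical condition of the growing system matrix stays as small as possible, can be sketched as a greedy loop. The candidate rows below are random stand-ins for real SPECT detection-probability rows, so this illustrates only the selection rule.

```python
"""Toy sketch of condition-number-guided pose selection: at each step, pick the
candidate measurement row that minimizes the condition number of the growing
system matrix.  Candidate rows are random stand-ins, not a SPECT model."""
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_candidates, n_steps = 50, 200, 30

candidates = rng.random((n_candidates, n_voxels))   # hypothetical system-matrix rows
rows, used = [], set()

for _ in range(n_steps):
    best, best_cond = None, np.inf
    for j in range(n_candidates):
        if j in used:
            continue
        trial = np.array(rows + [candidates[j]])
        c = np.linalg.cond(trial)                    # numerical condition of the trial matrix
        if c < best_cond:
            best, best_cond = j, c
    rows.append(candidates[best])
    used.add(best)

print("final condition number after", n_steps, "poses:", round(best_cond, 2))
```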

  10. A transmittance-optimized, point-focus Fresnel lens solar concentrator

    NASA Technical Reports Server (NTRS)

    Oneill, M. J.; Goldberg, V. R.; Muzzy, D. B.

    1982-01-01

    The development of a point-focus Fresnel lens solar concentrator for high-temperature solar thermal energy system applications is discussed. The concentrator utilizes a transmittance-optimized, short-focal-length, dome-shaped refractive Fresnel lens as the optical element. This concentrator combines both good optical performance and a large tolerance for manufacturing, deflection, and tracking errors. The conceptual design of an 11-meter diameter concentrator which should provide an overall collector efficiency of about 70% at an 815 C (1500 F) receiver operating temperature and a 1500X geometric concentration ratio (lens aperture area/receiver aperture area) was completed. Results of optical and thermal analyses of the collector, a discussion of manufacturing methods for making the large lens, and an update on the current status and future plans of the development program are included.

  11. Using information Theory in Optimal Test Point Selection for Health Management in NASA's Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Mehr, Ali Farhang; Tumer, Irem

    2005-01-01

    In this paper, we will present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLVs that has been partially developed at the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be formulated as a constrained optimization problem in which the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
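
    One common way to make the information-theoretic "worth" of a test point concrete is to score each candidate sensor by the mutual information between its (discretized) reading and the health state. The sketch below uses invented two-state joint distributions, not the NASA Ames RLV model referenced above.

```python
"""Sketch of scoring candidate test points by mutual information between the
sensor reading and a binary health state.  The joint tables are invented."""
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits for a discrete joint probability table."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Rows: health state (nominal, degraded); columns: discretized sensor reading.
sensor_A = np.array([[0.45, 0.05],      # strongly correlated with health
                     [0.05, 0.45]])
sensor_B = np.array([[0.30, 0.20],      # weakly correlated with health
                     [0.20, 0.30]])

for name, joint in [("A", sensor_A), ("B", sensor_B)]:
    print(f"worth of test point {name}: {mutual_information(joint):.3f} bits")
```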

  12. A Transmittance-optimized, Point-focus Fresnel Lens Solar Concentrator

    NASA Technical Reports Server (NTRS)

    Oneill, M. J.

    1984-01-01

    The development of a point-focus Fresnel lens solar concentrator for high-temperature solar thermal energy system applications is discussed. The concentrator utilizes a transmittance-optimized, short-focal-length, dome-shaped refractive Fresnel lens as the optical element. This concentrator combines both good optical performance and a large tolerance for manufacturing, deflection, and tracking errors. The conceptual design of an 11-meter diameter concentrator which should provide an overall collector efficiency of about 70% at an 815 C (1500 F) receiver operating temperature and a 1500X geometric concentration ratio (lens aperture area/receiver aperture area) was completed. Results of optical and thermal analyses of the collector, a discussion of manufacturing methods for making the large lens, and an update on the current status and future plans of the development program are included.

  13. 78 FR 23845 - Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Narrow Bay, Smith Point, NY AGENCY... the Smith Point Bridge, mile 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. The deviation is necessary to facilitate the Smith Point Triathlon. This deviation allows the...

  14. Near-Optimal Operation of Dual-Fuel Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Chou, H. C.; Bowles, J. V.

    1996-01-01

    A near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, sensitivity to perturbations in volume needs to be taken into consideration along with weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.

  15. On Point: The United States Army in Operation Iraqi Freedom

    DTIC Science & Technology

    2004-01-01

    come to grips with what happened and what it meant. For all of these reasons, those of us who wrote On Point depended upon help from many people to...shepherds.88 The two battalions’ pincer movements into the city appeared to force some pro-Saddam forces to flee from the north end of As Samawah. However...the way, both instinctively raising their hands to wave as they turned. Shock and horror gripped the two as they realized they were waving at a pair

  16. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.

    PubMed

    Mohsen, Abdulqader M

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator provide the ability to escape local minima and support global convergence, while local search speeds up convergence. Therefore, this paper proposed a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO; SA and the mutation operator were used to increase the ant population's diversity from time to time, and the local search was used to exploit the current search area efficiently. Comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.

  17. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP

    PubMed Central

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator provide the ability to escape local minima and support global convergence, while local search speeds up convergence. Therefore, this paper proposed a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO; SA and the mutation operator were used to increase the ant population's diversity from time to time, and the local search was used to exploit the current search area efficiently. Comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality. PMID:27999590
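
    A compact sketch of the hybrid idea, ants building tours from pheromone and distance, a swap mutation accepted with a simulated-annealing rule, and pheromone reinforcement along the best tour, is shown below. It is a simplified illustration with made-up parameters, not the paper's exact algorithm, and it omits the dedicated local-search procedure.

```python
"""Simplified hybrid ACO sketch: ant tours from pheromone/distance, a swap
mutation accepted with a simulated-annealing rule, and pheromone reinforcement
along the best tour found so far.  Parameters and problem size are made up."""
import numpy as np

rng = np.random.default_rng(42)
n = 20
cities = rng.random((n, 2))
dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)

def tour_length(tour):
    return dist[tour, np.roll(tour, -1)].sum()

def build_tour(pheromone, alpha=1.0, beta=3.0):
    """One ant: next city chosen with probability ~ pheromone^alpha * (1/d)^beta."""
    tour = [int(rng.integers(n))]
    unvisited = set(range(n)) - {tour[0]}
    while unvisited:
        i, cand = tour[-1], np.array(sorted(unvisited))
        w = pheromone[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
        nxt = int(rng.choice(cand, p=w / w.sum()))
        tour.append(nxt)
        unvisited.remove(nxt)
    return np.array(tour)

pheromone = np.ones((n, n))
current = build_tour(pheromone)
current_len = tour_length(current)
best, best_len = current.copy(), current_len
temp = 1.0
for it in range(200):
    for _ in range(10):                                   # colony of 10 ants
        t = build_tour(pheromone)
        if tour_length(t) < current_len:
            current, current_len = t, tour_length(t)
    i, j = rng.choice(n, 2, replace=False)                # mutation: swap two cities
    cand = current.copy()
    cand[i], cand[j] = cand[j], cand[i]
    delta = tour_length(cand) - current_len
    if delta < 0 or rng.random() < np.exp(-delta / temp):  # SA acceptance rule
        current, current_len = cand, tour_length(cand)
    if current_len < best_len:
        best, best_len = current.copy(), current_len
    temp *= 0.99                                          # annealing schedule
    pheromone *= 0.9                                      # evaporation
    pheromone[best, np.roll(best, -1)] += 1.0 / best_len  # reinforce the best tour's edges
print("best tour length found:", round(best_len, 3))
```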

  18. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge, and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium-vapour-based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state, are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as the transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  19. AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM

    NASA Technical Reports Server (NTRS)

    Miko, J.

    1994-01-01

    Scientists at Goddard have developed an efficient and powerful program-- An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform-- which combines the performance of real and complex valued one-dimensional Fast Fourier Transforms (FFT's) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 mSec. Documentation is included
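
    The row-column decomposition at the heart of the program can be reproduced in a few lines of NumPy: a 1-D FFT over each row, a 1-D FFT over each column of the result, and the squared magnitude as the power spectrum. The radix-4 DSP implementation and real-time I/O of the original program are not reproduced.

```python
"""Sketch of the row-column decomposition described above on a 64x64 frame.
The input is random stand-in data, not the program's ASCII/real-time input."""
import numpy as np

rng = np.random.default_rng(7)
frame = rng.standard_normal((64, 64))        # stand-in for the 64x64 real-valued samples

step1 = np.fft.fft(frame, axis=1)            # 1-D FFT of each row
step2 = np.fft.fft(step1, axis=0)            # 1-D FFT of each column of the row results
power = step2.real**2 + step2.imag**2        # 64x64 power spectrum coefficients

# The two-pass result matches a direct 2-D FFT, which is the point of the decomposition.
assert np.allclose(step2, np.fft.fft2(frame))
print("peak power coefficient:", power.max())
```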

  20. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with a focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restrictive assumptions on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider

  1. Robust optimal sun-pointing control of a large solar power satellite

    NASA Astrophysics Data System (ADS)

    Wu, Shunan; Zhang, Kaiming; Peng, Haijun; Wu, Zhigang; Radice, Gianmarco

    2016-10-01

    The robust optimal sun-pointing control strategy for a large geostationary solar power satellite (SPS) is addressed in this paper. The SPS is considered as a huge rigid body, and the sun-pointing dynamics are first formulated in a state-space representation. The perturbation effects caused by gravity gradient, solar radiation pressure, and microwave reaction are investigated. To perform sun-pointing maneuvers, a periodically time-varying robust optimal LQR controller is designed to assess the pointing accuracy and the control inputs. It should be noted that, to reduce the pointing errors, a disturbance rejection technique is incorporated into the proposed LQR controller. A recursive algorithm is then proposed to solve for the optimal LQR control gain. Simulation results are finally provided to illustrate the performance of the proposed closed-loop system.
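
    As a minimal illustration of the LQR machinery referred to above, the sketch below computes a continuous-time LQR gain for a double-integrator stand-in for a single pointing axis. The SPS dynamics, the periodically time-varying gain, and the disturbance-rejection augmentation of the paper are not modeled; the weights are assumptions.

```python
"""Minimal LQR sketch: double-integrator stand-in for one pointing axis, gain
obtained from the continuous-time algebraic Riccati equation.  Weights are
assumed, and the paper's periodic/disturbance-rejection design is not modeled."""
import numpy as np
from scipy.linalg import solve_continuous_are

# x = [pointing error (rad), error rate (rad/s)], u = control torque per unit inertia
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1e4, 1e2])               # heavily penalize pointing error (assumed weights)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # optimal state-feedback gain, u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```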

  2. Optimal line drop compensation parameters under multi-operating conditions

    NASA Astrophysics Data System (ADS)

    Wan, Yuan; Li, Hang; Wang, Kai; He, Zhe

    2017-01-01

    Line Drop Compensation (LDC) is a main function of Reactive Current Compensation (RCC), which is developed to improve voltage stability. While LDC benefits voltage, it may deteriorate the small-disturbance rotor angle stability of the power system. In this paper, an intelligent algorithm combining a Genetic Algorithm (GA) with a Backpropagation Neural Network (BPNN) is proposed to optimize the parameters of LDC. The proposed objective function takes into consideration the voltage deviation and the minimal damping ratio of power system oscillations under multiple operating conditions. A simulation based on the middle area of the Jiangxi province power system is used to demonstrate the intelligent algorithm. The optimization results show that the coordinately optimized parameters can meet the multi-operating-condition requirements and improve voltage stability as much as possible while guaranteeing a sufficient damping ratio.

  3. On reducibility of degenerate optimization problems to regular operator equations

    NASA Astrophysics Data System (ADS)

    Bednarczuk, E. M.; Tretyakov, A. A.

    2016-12-01

    We present an application of the p-regularity theory to the analysis of non-regular (irregular, degenerate) nonlinear optimization problems. The p-regularity theory, also known as the p-factor analysis of nonlinear mappings, has been developed over the last thirty years. The p-factor analysis is based on the construction of the p-factor operator, which allows us to analyze optimization problems in the degenerate case. We investigate the reducibility of a non-regular optimization problem to a regular system of equations that does not depend on the objective function. As an illustration we consider applications of our results to non-regular complementarity problems of mathematical programming and to linear programming problems.

  4. Physics-Based Prognostics for Optimizing Plant Operation

    SciTech Connect

    Leonard J. Bond; Don B. Jarrell

    2005-03-01

    Scientists at the Pacific Northwest National Laboratory (PNNL) have examined the necessity for optimization of energy plant operation using DSOM® (Decision Support Operation and Maintenance), which has been deployed at several sites. This approach has been expanded to include a prognostics component and tested on a pilot-scale service water system, modeled on the design employed in a nuclear power plant. A key element in plant optimization is understanding and controlling the aging process of safety-specific nuclear plant components. This paper reports the development and demonstration of a physics-based approach to prognostic analysis that combines distributed computing, RF data links, the measurement of aging precursor metrics, and their correlation with degradation rate and projected machine failure.

  5. Optimization of the operating conditions of CO converters

    SciTech Connect

    Barreto, G.F.; Ferretti, O.A.; Farina, I.H.; Lemcoff, N.O.

    1981-10-01

    The optimization of the operating variables in the conversion of CO from natural gas reforming gases is carried out. The objective function takes into account the increase in production from the conversion of CO, the higher operating costs due to the pressure drop and steam, and the replacement cost of the low temperature catalyst (LTC). The deactivation of the LTC is also considered, and therefore a dynamic optimization problem arises which permits one to obtain an optimum value for the LTC replacement period. The improvement in the process associated with the introduction of a bed guard is also analyzed. The resulting values of the steam to dry gas ratio fall in the range 0.6 to 1.4, while the inlet temperatures of the high and low temperature catalyst beds fall, respectively, in the ranges 365-400 °C and 200-245 °C. These agree with the actual conditions in industrial plants. 17 refs.

  6. Optimization of operating conditions in tunnel drying of food

    SciTech Connect

    Dong Sun Lee (Dept. of Food Engineering); Yu Ryang Pyun (Dept. of Food Engineering)

    1993-01-01

    A food drying process in a tunnel dryer was modeled from Keey's drying model and an experimental drying curve, and optimized with respect to operating conditions consisting of inlet air temperature, air recycle ratio, and air flow rate. Radish was chosen as a typical food material to be dried, because it has the typical drying characteristics of food and the quality indexes of ascorbic acid destruction and browning during drying. Optimization results for cocurrent and countercurrent tunnel drying showed higher inlet air temperature, lower recycle ratio, and higher air flow rate with shorter total drying time. Compared with cocurrent operation, countercurrent drying used lower air temperature, lower recycle ratio, and lower air flow rate, and appeared to be more efficient in energy usage. Most of the consumed energy was shown to be used for air heating and was then lost from the dryer in the form of exhaust air.

  7. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model that divides the transfer trajectory into several segments and assigns dominant roles to invariant manifolds and low-thrust control in different segments has been established. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  8. 47 CFR 22.621 - Channels for point-to-multipoint operation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Channels for point-to-multipoint operation. 22... Channels for point-to-multipoint operation. The following channels are allocated for assignment to... service. Unless otherwise indicated, all channels have a bandwidth of 20 kHz and are designated by...

  9. 47 CFR 22.621 - Channels for point-to-multipoint operation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Channels for point-to-multipoint operation. 22... Channels for point-to-multipoint operation. The following channels are allocated for assignment to... service. Unless otherwise indicated, all channels have a bandwidth of 20 kHz and are designated by...

  10. Optimizing Long-Term Capital Planning for Special Operations Forces

    DTIC Science & Technology

    2015-06-01

    optimization, United States Special Operations Command, capital planning, modernization, Binary Knapsack Model, Bounded Integer Knapsack Model...procurement money over the entire planning horizon. This thesis presents proof-of-principle models to improve the LRCPT by incorporating goal programming...test our models with a 30-year planning horizon including 68 projects and one category of money. We validate the BK model by analyzing the effect of

  11. Optimal recovery of linear operators in non-Euclidean metrics

    SciTech Connect

    Osipenko, K Yu

    2014-10-31

    The paper looks at problems concerning the recovery of operators from noisy information in non-Euclidean metrics. A number of general theorems are proved and applied to recovery problems for functions and their derivatives from the noisy Fourier transform. In some cases, a family of optimal methods is found, from which the methods requiring the least amount of original information are singled out. Bibliography: 25 titles.

  12. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    NASA Astrophysics Data System (ADS)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival, and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling, departure scheduling, and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is

  13. 77 FR 56115 - Drawbridge Operation Regulations; Fort Point Channel, Boston, MA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... SECURITY Coast Guard 33 CFR Part 117 Drawbridge Operation Regulations; Fort Point Channel, Boston, MA... of the Northern Avenue Bridge, mile 0.1, across the Fort Point Channel, at Boston, Massachusetts...: The Northern Avenue Bridge, across the Fort Point Channel, mile 0.1, has a vertical clearance in...

  14. Point-to-Point! Validation of the Small Aircraft Transportation System Higher Volume Operations Concept

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.

    2006-01-01

    Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four phase building-block validation and verification process included multiple elements ranging from formal analysis of HVO procedures to flight test, to full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).

  15. Multi-objective nested algorithms for optimal reservoir operation

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Solomatine, Dimitri

    2016-04-01

    Optimal reservoir operation is in general a multi-objective problem, meaning that multiple objectives are to be considered at the same time. For solving multi-objective optimization problems there exists a large number of optimization algorithms, which generate a Pareto set of optimal solutions (typically containing a large number of them), or more precisely, an approximation of it. At the same time, due to the complexity and computational cost of solving full-fledged multi-objective optimization problems, some authors use a simplified approach generically called "scalarization". Scalarization transforms the multi-objective optimization problem into a single-objective optimization problem (or several of them), for example by (a) single-objective aggregated weighted functions, or (b) formulating some objectives as constraints. We use approach (a). A user can decide how many single-search multi-objective solutions will be generated, depending on the practical problem at hand, by choosing a particular number of weight vectors that are used to weigh the objectives. It is not guaranteed that these solutions are Pareto optimal, but they can be treated as a reasonably good and practically useful, albeit small, approximation of a Pareto set. It has to be mentioned that the weighted-sum approach has known shortcomings, because linear scalar weights will fail to find Pareto-optimal policies that lie in the concave region of the Pareto front. In this context the considered approach is implemented as follows: there are m sets of weights {w1i, …wni} (i runs from 1 to m), and n objectives applied to single-objective aggregated weighted-sum functions of nested dynamic programming (nDP), nested stochastic dynamic programming (nSDP), and nested reinforcement learning (nRL). By carrying out the multi-objective optimization as a sequence of single-objective optimization searches, these algorithms acquire the multi-objective properties
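
    The weighted-sum scalarization can be illustrated directly: pick m weight vectors, solve one single-objective problem per vector, and collect the solutions as a small approximation of the Pareto set. The two toy objectives below stand in for the reservoir objectives of the record.

```python
"""Sketch of weighted-sum scalarization: m weight vectors, each turning a
two-objective problem into one single-objective search.  The toy objectives
are placeholders for the reservoir release/level/hydropower objectives."""
import numpy as np
from scipy.optimize import minimize_scalar

def f1(x):            # e.g. water-supply deficit surrogate
    return (x - 1.0) ** 2

def f2(x):            # e.g. hydropower shortfall surrogate
    return (x + 1.0) ** 2

m = 5
weights = [(w, 1.0 - w) for w in np.linspace(0.0, 1.0, m)]
pareto_approx = []
for w1, w2 in weights:
    res = minimize_scalar(lambda x: w1 * f1(x) + w2 * f2(x))   # single-objective search
    pareto_approx.append((res.x, f1(res.x), f2(res.x)))

for x, a, b in pareto_approx:
    print(f"x={x:+.3f}  f1={a:.3f}  f2={b:.3f}")
```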

  16. Robust Optimization of Fixed Points of Nonlinear Discrete Time Systems with Uncertain Parameters

    NASA Astrophysics Data System (ADS)

    Kastsian, Darya; Monnigmann, Martin

    2010-01-01

    This contribution extends the normal vector method for the optimization of parametrically uncertain dynamical systems to a general class of nonlinear discrete time systems. Essentially, normal vectors are used to state constraints on dynamical properties of fixed points in the optimization of discrete time dynamical systems. In a typical application of the method, a technical dynamical system is optimized with respect to an economic profit function, while the normal vector constraints are used to guarantee the stability of the optimal fixed point. We derive normal vector systems for flip, fold, and Neimark-Sacker bifurcation points, because these bifurcation points constitute the stability boundary of a large class of discrete time systems. In addition, we derive normal vector systems for a related type of critical point that can be used to ensure a user-specified disturbance rejection rate in the optimization of parametrically uncertain systems. We illustrate the method by applying it to the optimization of a discrete time supply chain model and a discretized fermentation process model.

  17. Determination of the Optimal Operating Parameters for Jefferson Laboratory's Cryogenic Cold Compressor Systems

    SciTech Connect

    Wilson, Jr., Joe D.

    2003-01-01

    The technology of Jefferson Laboratory's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) and Free Electron Laser (FEL) requires cooling from one of the world's largest 2K helium refrigerators known as the Central Helium Liquefier (CHL). The key characteristic of CHL is the ability to maintain a constant low vapor pressure over the large liquid helium inventory using a series of five cold compressors. The cold compressor system operates with a constrained discharge pressure over a range of suction pressures and mass flows to meet the operational requirements of CEBAF and FEL. The research topic is the prediction of the most thermodynamically efficient conditions for the system over its operating range of mass flows and vapor pressures with minimum disruption to JLab operations. The research goal is to find the operating points for each cold compressor for optimizing the overall system at any given flow and vapor pressure.

  18. Optimizing the Point-In-Box Search Algorithm for the Cray Y-MP(TM) Supercomputer

    SciTech Connect

    Attaway, S.W.; Davis, M.E.; Heinstein, M.W.; Swegle, J.S.

    1998-12-23

    Determining the subset of points (particles) in a problem domain that are contained within certain spatial regions of interest can be one of the most time-consuming parts of some computer simulations. Examples where this 'point-in-box' search can dominate the computation time include (1) finite element contact problems; (2) molecular dynamics simulations; and (3) interactions between particles in numerical methods, such as discrete particle methods or smooth particle hydrodynamics. This paper describes methods to optimize a point-in-box search algorithm developed by Swegle that make optimal use of the architectural features of the Cray Y-MP Supercomputer.
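
    The basic point-in-box test itself is simple; the sketch below shows the vectorized baseline of finding all particles inside an axis-aligned box. The Cray-specific optimizations of Swegle's algorithm discussed in the paper are not reproduced here.

```python
"""Baseline point-in-box search: which particles fall inside an axis-aligned
box of interest.  Coordinates and box bounds are made-up examples."""
import numpy as np

rng = np.random.default_rng(3)
points = rng.random((100_000, 3))               # particle coordinates in [0, 1)^3
box_lo = np.array([0.2, 0.2, 0.2])
box_hi = np.array([0.4, 0.5, 0.3])

inside = np.all((points >= box_lo) & (points <= box_hi), axis=1)
indices = np.nonzero(inside)[0]                 # subset of points in the box
print(f"{indices.size} of {points.shape[0]} points are inside the box")
```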

  19. Analysis of conditions for operating the S193 Rad/Scat in the solar pointing mode

    NASA Technical Reports Server (NTRS)

    Pintar, J.; Sobti, A.

    1973-01-01

    The S193 Rad/Scat, although initially programmed for operating in the earth pointing mode, can be operated in the solar pointing mode as well. The usual coordinate systems for describing the S193 in orbit are defined. The instructions for the operation of the radiometer and scatterometer are presented in terms of standard Euler angles for these coordinate systems. A sample analysis for the scatterometer is described. The relationships between the various Euler angles and physically meaningful orbit parameters are defined.

  20. Parametric Optimization of Some Critical Operating System Functions--An Alternative Approach to the Study of Operating Systems Design

    ERIC Educational Resources Information Center

    Sobh, Tarek M.; Tibrewal, Abhilasha

    2006-01-01

    Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…

  1. Optimal reservoir operation policies using novel nested algorithms

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first one is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second one is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels, and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the required releases discretization for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP, and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP), and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP, or RL and 2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic Knapsack method in the case of nonlinear problems. The novel idea is to include the nested

  2. Registration of 2D point sets by complex translation and rotation operations.

    PubMed

    Sahin, Ismet

    2010-01-01

    Alignment of two sets containing two-dimensional vectors (2D points) constitutes an important problem in medical imaging, remote sensing, and computer vision. We assume that the points in one set, called the transformed set, are constructed by translating and rotating the points in the other set, called the original set. The points in both sets are represented by complex numbers. To translate and then rotate a point, we add a complex constant and then multiply by a complex exponential, respectively. We construct a cost function that measures the least-squares difference between the given transformed set and the original set transformed with candidate values of the optimization parameters. We implement the Newton-Raphson optimization algorithm with polynomial line search to minimize this cost function. Simulation results with multiple datasets demonstrate that the proposed method aligns two sets efficiently and reliably.
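
    A sketch of the complex-number formulation is shown below: the model is w = exp(i*theta)*(z + t), and the least-squares cost is minimized over the rotation angle and the real and imaginary parts of the translation. A general-purpose quasi-Newton minimizer is used here in place of the paper's Newton-Raphson with polynomial line search; the data are synthetic.

```python
"""Sketch of 2D point-set registration with complex numbers: minimize the
least-squares mismatch of w = exp(i*theta) * (z + t) over (theta, t).  A
general-purpose minimizer replaces the paper's Newton-Raphson scheme."""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
z = rng.random(50) + 1j * rng.random(50)                 # original point set
theta_true, t_true = 0.7, 0.3 - 0.2j
w = np.exp(1j * theta_true) * (z + t_true)               # transformed point set
w += 0.01 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))  # measurement noise

def cost(p):
    theta, tr, ti = p
    resid = np.exp(1j * theta) * (z + (tr + 1j * ti)) - w
    return np.sum(np.abs(resid) ** 2)                    # least-squares mismatch

res = minimize(cost, x0=np.zeros(3))
print("estimated rotation   :", res.x[0], " true:", theta_true)
print("estimated translation:", res.x[1] + 1j * res.x[2], " true:", t_true)
```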

  3. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    NASA Astrophysics Data System (ADS)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution in the field of Operational Modal Analysis to identify the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components whose amplitude and frequency are modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components and structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
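
    A common STFT-based spectral-kurtosis statistic, which the Optimized Spectral Kurtosis refines, already separates the two cases: for a stationary Gaussian (mode-like) response it is near 0, while a steady harmonic drives it toward -1. The sketch below demonstrates this on a synthetic signal; it is not the paper's optimized variant.

```python
"""Sketch of an STFT-based spectral-kurtosis statistic: near 0 for a Gaussian
broadband response, strongly negative for a steady harmonic component."""
import numpy as np
from scipy.signal import stft

fs = 1024.0
time = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(2)
# Broadband "structural" response plus a strong harmonic component at 100 Hz.
x = rng.standard_normal(time.size) + 2.0 * np.sin(2 * np.pi * 100.0 * time)

f, _, Z = stft(x, fs=fs, nperseg=256)
mag2 = np.abs(Z) ** 2
sk = mag2.var(axis=1) / mag2.mean(axis=1) ** 2 - 1.0   # equals E|X|^4/(E|X|^2)^2 - 2

print("SK at the 100 Hz bin:", round(float(sk[np.argmin(np.abs(f - 100.0))]), 2))  # strongly negative
print("median SK elsewhere :", round(float(np.median(sk)), 2))                     # near 0
```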

  4. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051

  5. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).

  6. Development of a supervisory control strategy for the optimal operation of grain dryers

    SciTech Connect

    Vasconcelos, L.G.S.; Filho, R.M.

    1998-10-01

    In spite of the importance and especially high energy demands of grain dryers, relatively few studies have been carried out to discover the optimal conditions for their operation. High-performance operation can only be achieved if an adequate operating strategy is developed. For its implementation, a reliable control structure is required, and some limitations of the conventional control strategies normally used in dryers are observed: these strategies are SISO, the control normally used shows low performance, and the disturbances are characterized by several amplitudes and frequencies. A possible way to mitigate this difficulty consists of defining a multilevel structure such that each level acts at a given amplitude and frequency. To implement this multilevel structure, an optimization problem was developed to function as supervisory control, and a predictive algorithm (DMC) was used for servo or regulatory control. The proposed DMC algorithm presented satisfactory results for load rejection and set-point variation only when a small disturbance was applied. For a larger disturbance an optimization procedure was necessary. The routine efficiently maintained the optimal operational conditions of the dryer and could be used in the supervisory control of the system.

  7. Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.

    2006-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.

  8. Hubble Space Telescope pointing control system: Designed for performance and mission operations

    NASA Technical Reports Server (NTRS)

    Bradley, A.; Ryan, J.

    1991-01-01

    The Hubble Space Telescope was designed to be an orbiting astronomical observatory which could be operated in the same manner as ground based observatories. The design drivers for the pointing control system's hardware and software were the requirements of an absolute pointing accuracy of 4.8E-8 radians and pointing stability (jitter) of 3.4E-8 radians. Of comparable importance was the objective of providing a flexible command methodology and structure to enable seven day operational planning employing stored program command and real time command capability. The pointing control system hardware, software, safemode control schemes, ground system monitoring capability, and in-orbit results are reviewed.

  9. Excited meson radiative transitions from lattice QCD using variationally optimized operators

    SciTech Connect

    Shultz, Christian J.; Dudek, Jozef J.; Edwards, Robert G.

    2015-06-02

    We explore the use of 'optimized' operators, designed to interpolate only a single meson eigenstate, in three-point correlation functions with a vector-current insertion. These operators are constructed as linear combinations in a large basis of meson interpolating fields using a variational analysis of matrices of two-point correlation functions. After performing such a determination at both zero and non-zero momentum, we compute three-point functions and are able to study radiative transition matrix elements featuring excited state mesons. The required two- and three-point correlation functions are efficiently computed using the distillation framework in which there is a factorization between quark propagation and operator construction, allowing for a large number of meson operators of definite momentum to be considered. We illustrate the method with a calculation using anisotropic lattices having three flavors of dynamical quark all tuned to the physical strange quark mass, considering form-factors and transitions of pseudoscalar and vector meson excitations. In conclusion, the dependence on photon virtuality for a number of form-factors and transitions is extracted and some discussion of excited-state phenomenology is presented.

  10. Excited meson radiative transitions from lattice QCD using variationally optimized operators

    NASA Astrophysics Data System (ADS)

    Shultz, Christian J.; Dudek, Jozef J.; Edwards, Robert G.; Hadron Spectrum Collaboration

    2015-06-01

    We explore the use of "optimized" operators, designed to interpolate only a single meson eigenstate, in three-point correlation functions with a vector-current insertion. These operators are constructed as linear combinations in a large basis of meson interpolating fields using a variational analysis of matrices of two-point correlation functions. After performing such a determination at both zero and nonzero momentum, we compute three-point functions and are able to study radiative transition matrix elements featuring excited-state mesons. The required two- and three-point correlation functions are efficiently computed using the distillation framework in which there is a factorization between quark propagation and operator construction, allowing for a large number of meson operators of definite momentum to be considered. We illustrate the method with a calculation using anisotopic lattices having three flavors of dynamical quark all tuned to the physical strange quark mass, considering form factors and transitions of pseudoscalar and vector meson excitations. The dependence on photon virtuality for a number of form factors and transitions is extracted, and some discussion of excited-state phenomenology is presented.

  11. Optimization of shared autonomy vehicle control architectures for swarm operations.

    PubMed

    Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R

    2010-08-01

    The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations.

  12. 78 FR 20144 - Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3 AGENCY: Nuclear... for public comment. SUMMARY: The U.S. Nuclear Regulatory Commission (NRC) is reconsidering...

  13. 78 FR 52987 - Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc., Indian Point Nuclear Generating Unit 3 AGENCY: Nuclear.... SUMMARY: The U.S. Nuclear Regulatory Commission (NRC) has concluded that existing exemptions from...

  14. 78 FR 39018 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-28

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating Unit Nos. 2 and 3 AGENCY: Nuclear Regulatory Commission. ACTION: Supplement to Final Supplement 38 to the Generic...

  15. An Efficient Operator for the Change Point Estimation in Partial Spline Model.

    PubMed

    Han, Sung Won; Zhong, Hua; Putt, Mary

    2015-05-01

    In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, the minimum operator with respect to a smoothing parameter has been widely used, but we showed that the minimum operator causes a large MSE in the change point estimates. In this paper, we proposed the summation operator with respect to a smoothing parameter, and our simulation study showed that the summation operator gives a smaller MSE for the estimated change points than the minimum operator. We also applied the proposed approach to experimental data on blood flow during photodynamic cancer therapy.

  16. 78 FR 33223 - Drawbridge Operation Regulation; York River, Between Yorktown and Gloucester Point, VA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ... Gloucester Point and Yorktown, VA. This deviation is necessary to facilitate electrical motor maintenance on... operating regulations set out in 33 CFR 117.1025, to facilitate electric motor maintenance on the...

  17. Optimization of Hydroacoustic Equipment Deployments at Lookout Point and Cougar Dams, Willamette Valley Project, 2010

    SciTech Connect

    Johnson, Gary E.; Khan, Fenton; Ploskey, Gene R.; Hughes, James S.; Fischer, Eric S.

    2010-08-18

    The goal of the study was to optimize performance of the fixed-location hydroacoustic systems at Lookout Point Dam (LOP) and the acoustic imaging system at Cougar Dam (CGR) by determining deployment and data acquisition methods that minimized structural, electrical, and acoustic interference. The general approach was a multi-step process from mount design to final system configuration. The optimization effort resulted in successful deployments of hydroacoustic equipment at LOP and CGR.

  18. Applications of Optimal Building Energy System Selection and Operation

    SciTech Connect

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy

    2011-04-01

    Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimization problems are set up on a Berkeley Lab server, and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by describing three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. The third is both a battery selection problem and a rolling operating-schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
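
    A tiny example of the kind of mixed-integer linear program involved, a single yes/no battery purchase plus its hourly dispatch against an assumed time-of-use tariff, is sketched below with the PuLP modeling library. The prices, load, and battery data are invented, and this is not the DER-CAM model or its SaaS interface.

```python
"""Tiny MILP sketch: a binary battery-purchase decision plus its hourly
charge/discharge schedule.  All numbers are invented; not the DER-CAM model."""
import pulp

hours = range(24)
price = [0.10]*8 + [0.30]*10 + [0.10]*6        # $/kWh, assumed time-of-use tariff
load = [30.0]*8 + [80.0]*10 + [40.0]*6         # kW building load, assumed
cap, power, annuity = 100.0, 25.0, 12.0        # kWh, kW, $/day capital recovery (assumed)

m = pulp.LpProblem("building_microgrid", pulp.LpMinimize)
buy = pulp.LpVariable("buy_battery", cat="Binary")
ch = pulp.LpVariable.dicts("charge", hours, lowBound=0, upBound=power)
dis = pulp.LpVariable.dicts("discharge", hours, lowBound=0, upBound=power)
soc = pulp.LpVariable.dicts("soc", hours, lowBound=0, upBound=cap)

# Objective: capital recovery if the battery is bought plus daily energy cost.
m += annuity * buy + pulp.lpSum(price[t] * (load[t] + ch[t] - dis[t]) for t in hours)
for t in hours:
    prev = soc[t - 1] if t > 0 else 0.5 * cap * buy          # start half full if bought
    m += soc[t] == prev + ch[t] - dis[t]                     # energy balance
    m += ch[t] <= power * buy                                # no battery, no charging
    m += dis[t] <= power * buy
    m += dis[t] <= load[t] + ch[t]                           # no export to the grid
m += soc[23] >= 0.5 * cap * buy                              # end the day at least half full

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("buy battery:", int(pulp.value(buy)),
      " daily cost:", round(pulp.value(m.objective), 2))
```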

  19. Optimal control problems with switching points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    An overview is presented of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with active dynamic pressure limit, which in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of the trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector valued control functions.

  20. Optimal Spectral Regions For Laser Excited Fluorescence Diagnostics For Point Of Care Application

    NASA Astrophysics Data System (ADS)

    Vaitkuviene, A.; Gėgžna, V.; Varanius, D.; Vaitkus, J.

    2011-09-01

    Tissue fluorescence reflects the signature of the light-emitting molecules and characterizes cell composition and the peculiarities of metabolism. Both are useful for biomedical diagnostics, as reported in our and others' previous works. The present work demonstrates the results of applying laser-excited autofluorescence to the diagnosis of pathology in genital tissues, and its feasibility for bedside, "point of care—off lab" application. A portable device using a USB spectrophotometer, a micro laser (355 nm Nd:YAG, 0.5 ns pulse, 10 kHz repetition rate, 15 mW output power), a three-channel optical fiber, and a computer with a diagnostic program was designed and made ready for clinical trials, for on-site diagnostics of cytology and biopsy specimens and for endoscopy/puncture procedures. The biopsy and cytology samples, as well as intervertebral disc specimens, were evaluated by pathology experts, and the fluorescence spectra were investigated in fresh and preserved specimens. The spectra were recorded in the spectral range 350-900 nm. At the initial stage the Gaussian components of the spectra were found, the Mann-Whitney test was used to differentiate the groups, and the spectral regions optimal for diagnostic purposes were identified. Then a formal division of the spectra into components, or into bands of definite width where the main differences between the group spectra were observed, was used to compare the groups. ROC-analysis-based diagnostic algorithms were created for medical prognosis. Positive and negative predictive values were determined for the diagnosis of cervical liquid PAP smear supernatant sediment as Cervicitis/Norma versus CIN2+. In the case of the intervertebral disc, the analysis provides additional information about the disc degeneration status. All these results demonstrated the efficiency of the proposed procedure, and the designed device could be tested at the point-of-care site or for
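    The statistical chain described above (band intensities, Mann-Whitney group comparison, ROC-based decision rule) can be sketched as follows on synthetic data; the group means, sample sizes, and the Youden-index cut-off rule are assumptions made for illustration, not the paper's actual spectra or algorithm.

```python
# Illustrative sketch (synthetic data) of the statistical steps described above:
# compare a fluorescence band intensity between two diagnostic groups with the
# Mann-Whitney test, then build a ROC-based decision threshold.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Hypothetical integrated band intensities (a.u.) for two groups of spectra.
norma_cervicitis = rng.normal(1.00, 0.15, 60)   # "Norma / Cervicitis" group
cin2_plus = rng.normal(1.30, 0.20, 40)          # "CIN2+" group

u_stat, p_value = mannwhitneyu(norma_cervicitis, cin2_plus, alternative="two-sided")

scores = np.concatenate([norma_cervicitis, cin2_plus])
labels = np.concatenate([np.zeros_like(norma_cervicitis), np.ones_like(cin2_plus)])
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                      # Youden index as a simple cut-off rule
print(f"p = {p_value:.3g}, AUC = {auc(fpr, tpr):.2f}, threshold = {thresholds[best]:.2f}")
```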

  1. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
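    A minimal sketch of the HMM-based task-prediction idea, assuming the hmmlearn package and synthetic hand-pose features (the original work used its own HMM implementation and recorded tele-operation data): train one model per task and classify a new trace by log-likelihood.

```python
# Minimal sketch: train per-task HMMs on tele-operation traces and pick the
# most likely task for a new trace. hmmlearn and the synthetic features are
# assumptions, not the original Robonaut implementation.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Hypothetical recorded hand-pose features (e.g., wrist x/y/z) for two basic tasks.
reach_traces = [rng.normal(0.0, 1.0, size=(100, 3)) for _ in range(5)]
grasp_traces = [rng.normal(0.5, 1.0, size=(100, 3)) for _ in range(5)]

def fit_task_model(traces, n_states=4):
    X = np.concatenate(traces)                    # hmmlearn takes stacked sequences
    lengths = [len(t) for t in traces]            # plus the length of each sequence
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    return model.fit(X, lengths)

models = {"reach": fit_task_model(reach_traces), "grasp": fit_task_model(grasp_traces)}

new_trace = rng.normal(0.5, 1.0, size=(80, 3))
# Predict the task as the model with the highest log-likelihood for the new trace.
predicted = max(models, key=lambda name: models[name].score(new_trace))
print("predicted task:", predicted)
```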

  2. Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.

    PubMed

    Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin

    2016-01-01

    Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, due to its high energy content, environmentally friendly technology, and zero carbon emissions. In particular, conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production by an anaerobic mixed culture using different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H(2)/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H(2)/mol glucose) was obtained at an initial pH value of 6.4, T=37 °C, and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield at all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H(2) production) to lactic acid production (where simultaneous H(2) production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates.

  3. The influence of transducer operating point on distortion generation in the cochlea

    NASA Astrophysics Data System (ADS)

    Sirjani, Davud B.; Salt, Alec N.; Gill, Ruth M.; Hale, Shane A.

    2004-03-01

    Distortion generated by the cochlea can provide a valuable indicator of its functional state. In the present study, the dependence of distortion on the operating point of the cochlear transducer and its relevance to endolymph volume disturbances has been investigated. Calculations have suggested that as the operating point moves away from zero, second harmonic distortion would increase. Cochlear microphonic waveforms were analyzed to derive the cochlear transducer operating point and to quantify harmonic distortions. Changes in operating point and distortion were measured during endolymph manipulations that included 200-Hz tone exposures at 115-dB SPL, injections of artificial endolymph into scala media at 80, 200, or 400 nl/min, and treatment with furosemide given intravenously or locally into the cochlea. Results were compared with other functional changes that included action potential thresholds at 2.8 or 8 kHz, summating potential, endocochlear potential, and the 2 f1-f2 and f2-f1 acoustic emissions. The results demonstrated that volume disturbances caused changes in the operating point that resulted in predictable changes in distortion. Understanding the factors influencing operating point is important in the interpretation of distortion measurements and may lead to tests that can detect abnormal endolymph volume states.
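    The stated relationship between operating point and second-harmonic distortion can be reproduced numerically with a generic saturating transducer curve; the tanh nonlinearity, stimulus level, and frequencies below are illustrative assumptions, not the authors' cochlear model.

```python
# Numerical illustration: for a saturating (Boltzmann-like) transducer, shifting
# the operating point away from zero raises the second-harmonic content of the
# output. Generic toy model, not the authors' cochlear-microphonic analysis.
import numpy as np

fs, f0 = 50_000.0, 200.0                      # sample rate and stimulus frequency (Hz)
t = np.arange(0, 0.5, 1 / fs)
stimulus = 0.5 * np.sin(2 * np.pi * f0 * t)   # normalized input amplitude

def second_harmonic_ratio(operating_point):
    output = np.tanh(operating_point + stimulus)      # saturating transducer curve
    spectrum = np.abs(np.fft.rfft(output * np.hanning(len(output))))
    freqs = np.fft.rfftfreq(len(output), 1 / fs)
    h1 = spectrum[np.argmin(np.abs(freqs - f0))]
    h2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
    return 20 * np.log10(h2 / h1)                     # 2nd harmonic re: fundamental, dB

for op in (0.0, 0.1, 0.3, 0.5):
    print(f"operating point {op:+.1f}: 2nd harmonic {second_harmonic_ratio(op):6.1f} dB")
```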

  4. Applications of operational calculus: trigonometric interpolating equation for the eight-point cube

    SciTech Connect

    Silver, Gary L

    2009-01-01

    A general method for obtaining a trigonometric-type interpolating equation for the eight-point cubical array is illustrated. It can often be used to reproduce a ninth datum at an arbitrary point near the center of the array by adjusting a variable exponent. The new method complements operational polynomial and exponential methods for the same design.

  5. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
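    A generic FORM-style sketch of the MPP search: minimize the distance to the origin in standard normal space subject to the limit state g(u) = 0, then approximate the failure probability from the reliability index. The limit-state function here is hypothetical, and the sketch omits the paper's sensitivity derivatives.

```python
# Generic FORM-style sketch of the MPP search described above: in standard
# normal space, find the point on the limit state g(u) = 0 closest to the
# origin; the distance beta gives Pf ~ Phi(-beta). Toy limit state only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u):
    # Hypothetical limit state: failure when g < 0.
    return 3.0 - u[0] - 0.5 * u[1] ** 2

res = minimize(fun=lambda u: np.dot(u, u),              # minimize squared distance
               x0=np.array([1.0, 1.0]),
               constraints=[{"type": "eq", "fun": g}],  # stay on g(u) = 0
               method="SLSQP")

mpp = res.x
beta = np.linalg.norm(mpp)                              # reliability index
pf = norm.cdf(-beta)                                    # FORM probability of failure
print(f"MPP = {mpp}, beta = {beta:.3f}, Pf ~ {pf:.2e}")
```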

  6. Optimizing and controlling earthmoving operations using spatial technologies

    NASA Astrophysics Data System (ADS)

    Alshibani, Adel

    This thesis presents a model designed for optimizing, tracking, and controlling earthmoving operations. The proposed model utilizes Genetic Algorithm (GA), Linear Programming (LP), and spatial technologies, including Global Positioning Systems (GPS) and Geographic Information Systems (GIS), to support the management functions of the developed model. The model assists engineers and contractors in selecting near-optimum crew formations in the planning phase and during construction, using GA and LP supported by the Pathfinder Algorithm developed in a GIS environment. GA is used in conjunction with a set of rules developed to accelerate the optimization process and to avoid generating and evaluating hypothetical and unrealistic crew formations. LP is used to determine the quantities of earth to be moved from different borrow pits and placed at different landfill sites to meet project constraints and to minimize the cost of these earthmoving operations. GPS is used for on-site data collection and for tracking construction equipment in near real-time, while GIS is employed to automate data acquisition and to analyze the collected spatial data. The model is also capable of reconfiguring crew formations dynamically during the construction phase while site operations are in progress. The optimization of the crew formation considers: (1) construction time, (2) construction direct cost, or (3) construction total cost. The model is also capable of generating crew formations to meet, as closely as possible, specified time and/or cost constraints. In addition, the model supports tracking and reporting of project progress utilizing the earned-value concept and the project ratio method, with modifications that allow for more accurate forecasting of project time and cost at set future dates and at completion. The model is capable of generating graphical and tabular reports. The developed model has been implemented in prototype software, using Object
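    The LP component described above can be isolated as a classical transportation problem: allocate earth volumes from borrow pits to placement sites at minimum haul cost. The capacities, demands, and unit costs below are hypothetical.

```python
# Sketch of the LP piece in isolation: allocate earth volumes from borrow pits
# to placement sites at minimum haul cost. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],                  # $/m^3 from pit i to site j
                 [5.0, 3.0, 7.0]])
pit_capacity = np.array([5000.0, 8000.0])          # m^3 available at each borrow pit
site_demand = np.array([3000.0, 4000.0, 5000.0])   # m^3 required at each placement site

n_pits, n_sites = cost.shape
c = cost.ravel()                                   # decision variables x[i, j], flattened

# Each pit ships no more than its capacity (inequality rows).
A_ub = np.zeros((n_pits, n_pits * n_sites))
for i in range(n_pits):
    A_ub[i, i * n_sites:(i + 1) * n_sites] = 1.0

# Each site receives exactly its demand (equality rows).
A_eq = np.zeros((n_sites, n_pits * n_sites))
for j in range(n_sites):
    A_eq[j, j::n_sites] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=pit_capacity, A_eq=A_eq, b_eq=site_demand,
              bounds=(0, None), method="highs")
print(res.x.reshape(n_pits, n_sites), res.fun)
```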

  7. MANGO – Modal Analysis for Grid Operation: A Method for Damping Improvement through Operating Point Adjustment

    SciTech Connect

    Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.

    2010-10-18

    Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
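    For orientation, the kind of quantity ModeMeter-class tools estimate can be illustrated by fitting a damped sinusoid to a synthetic post-disturbance ringdown and reporting the mode frequency and damping ratio; this toy fit is not the MANGO procedure or any production ModeMeter algorithm.

```python
# Minimal illustration of measurement-based mode estimation: fit a damped
# sinusoid to a synthetic ringdown and report mode frequency and damping ratio.
import numpy as np
from scipy.optimize import curve_fit

fs = 30.0                                    # PMU reporting rate (frames/s)
t = np.arange(0, 20, 1 / fs)

def ringdown(t, amp, sigma, f, phase):
    return amp * np.exp(sigma * t) * np.cos(2 * np.pi * f * t + phase)

# Synthetic 0.25 Hz inter-area mode with 5% damping plus measurement noise.
zeta = 0.05
true_sigma = -zeta * 2 * np.pi * 0.25 / np.sqrt(1 - zeta**2)
y = ringdown(t, 1.0, true_sigma, 0.25, 0.3) \
    + 0.02 * np.random.default_rng(2).normal(size=t.size)

popt, _ = curve_fit(ringdown, t, y, p0=[1.0, -0.1, 0.3, 0.0])
amp, sigma, f, phase = popt
damping_ratio = -sigma / np.sqrt(sigma**2 + (2 * np.pi * f) ** 2)
print(f"mode: {f:.3f} Hz, damping ratio: {100 * damping_ratio:.1f} %")
```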

  8. 47 CFR 90.471 - Points of operation in internal transmitter control systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... licensee for internal communications and transmitter control purposes. Operating positions in internal... 47 Telecommunication 5 2012-10-01 2012-10-01 false Points of operation in internal transmitter control systems. 90.471 Section 90.471 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  9. 47 CFR 90.471 - Points of operation in internal transmitter control systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... licensee for internal communications and transmitter control purposes. Operating positions in internal... 47 Telecommunication 5 2014-10-01 2014-10-01 false Points of operation in internal transmitter control systems. 90.471 Section 90.471 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  10. 47 CFR 90.471 - Points of operation in internal transmitter control systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... licensee for internal communications and transmitter control purposes. Operating positions in internal... 47 Telecommunication 5 2013-10-01 2013-10-01 false Points of operation in internal transmitter control systems. 90.471 Section 90.471 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  11. 47 CFR 90.473 - Operation of internal transmitter control systems through licensed fixed control points.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter control systems through licensed fixed control points. An internal transmitter control system may be operated... internal system from the transmitter control circuit or to close the system......

  12. 47 CFR 90.473 - Operation of internal transmitter control systems through licensed fixed control points.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter control systems through licensed fixed control points. An internal transmitter control system may be operated... internal system from the transmitter control circuit or to close the system......

  13. Fixed Points and Stability for a Sum of Two Operators in Locally Convex Spaces

    DTIC Science & Technology

    topological spaces is formulated in terms of specific topologies on the set of nonlinear operators, and a theorem on the stability of fixed points of a sum of two operators is given. As a byproduct, sufficient conditions for a mapping to be open or to be onto are

  14. To the point: teaching the obstetrics and gynecology medical student in the operating room.

    PubMed

    Hampton, Brittany S; Craig, LaTasha B; Abbott, Jodi F; Buery-Joyner, Samantha D; Dalrymple, John L; Forstein, David A; Hopkins, Laura; McKenzie, Margaret L; Page-Ramsey, Sarah M; Pradhan, Archana; Wolf, Abigail; Graziano, Scott C

    2015-10-01

    This article, from the "To the Point" series that is prepared by the Association of Professors of Gynecology and Obstetrics Undergraduate Medical Education Committee, is a review of considerations for teaching the medical student in the operating room during the obstetrics/gynecology clerkship. The importance of the medical student operating room experience and barriers to learning in the operating room are discussed. Specific considerations for the improvement of medical student learning and operating room experience, which include the development of operating room objectives and specific curricula, an increasing awareness regarding role modeling, and faculty development, are reviewed.

  15. Extension of the Operating Point of the Mercury IVA from 6 to 8 MV

    DTIC Science & Technology

    2013-06-01

    EXTENSION OF THE OPERATING POINT OF THE MERCURY IVA FROM 6 TO 8 MV. R. J. Allen, R. J. Commisso, G. Cooperstein, P. F. Ottinger, and J. W... over 200 shots at 8 MV. I. INTRODUCTION: The original design of the Mercury IVA allowed operation at 6 MV and 300 kA [1]. Although the...

  16. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration.

    PubMed

    Yang, Jiaolong; Li, Hongdong; Campbell, Dylan; Jia, Yunde

    2016-11-01

    The Iterative Closest Point (ICP) algorithm is one of the most widely used methods for point-set registration. However, being based on local iterative optimization, ICP is known to be susceptible to local minima. Its performance critically relies on the quality of the initialization and only local optimality is guaranteed. This paper presents the first globally optimal algorithm, named Go-ICP, for Euclidean (rigid) registration of two 3D point-sets under the L2 error metric defined in ICP. The Go-ICP method is based on a branch-and-bound scheme that searches the entire 3D motion space SE(3). By exploiting the special structure of SE(3) geometry, we derive novel upper and lower bounds for the registration error function. Local ICP is integrated into the BnB scheme, which speeds up the new method while guaranteeing global optimality. We also discuss extensions, addressing the issue of outlier robustness. The evaluation demonstrates that the proposed method is able to produce reliable registration results regardless of the initialization. Go-ICP can be applied in scenarios where an optimal solution is desirable or where a good initialization is not always available.
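    For context, the local method that Go-ICP wraps inside its branch-and-bound search is plain point-to-point ICP under the L2 metric; a minimal sketch is given below (nearest-neighbor correspondences plus a Kabsch/SVD rigid-transform step). Being purely local, this version is exactly the one that can fall into the local minima Go-ICP avoids.

```python
# Minimal local ICP under the L2 point-to-point metric -- the building block
# that Go-ICP globalizes; this plain local version can get stuck in local minima.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=50):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(n_iter):
        moved = source @ R.T + t
        _, idx = tree.query(moved)                 # closest-point correspondences
        matched = target[idx]
        # Kabsch step: best rigid transform for the current correspondences.
        mu_s, mu_m = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        R, t = R_step @ R, R_step @ (t - mu_s) + mu_m
    return R, t

rng = np.random.default_rng(3)
target = rng.normal(size=(500, 3))
theta = np.deg2rad(10.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
true_t = np.array([0.1, -0.2, 0.05])
source = (target - true_t) @ true_R              # misaligned copy of the target cloud
R, t = icp(source, target)
print(np.round(R, 3), np.round(t, 3), "(expected:", np.round(true_t, 3), ")")
```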

  17. An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification

    ERIC Educational Resources Information Center

    Wang, Jun; Samal, Ashok; Rong, Panying; Green, Jordan R.

    2016-01-01

    Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of…

  18. Optimization of a point-focusing, distributed receiver solar thermal electric system

    NASA Technical Reports Server (NTRS)

    Pons, R. L.

    1979-01-01

    This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.

  19. Comparison of Operator Aided Optimization with Iterative Manual Optimization in a Simulated Tactical Decision Aiding Task.

    DTIC Science & Technology

    1980-07-01

    [Scanned-report fragments only: the problem definition for the simulated tactical decision aiding task (strike launch point, target location, and a composite sensor capability model), a reference to an interaction plotted in Figure 12, and figure residue giving performance scores (approximately 99.60, 99.27, 93.18, and 89.24) for two replications under unaided and Operator Aided Optimization (OAO) conditions.]

  20. Decision Support Systems to Optimize the Operational Efficiency of Dams and Maintain Regulatory Compliance Criteria

    NASA Astrophysics Data System (ADS)

    Parkinson, S.; Morehead, M. D.; Conner, J. T.; Frye, C.

    2012-12-01

    Increasing demand for water and electricity, increasing variability in weather and climate, and stricter requirements for riverine ecosystem health have put ever more stringent demands on hydropower operations. Dam operators are being impacted by these constraints and are looking for methods to meet these requirements while retaining the benefits hydropower offers. Idaho Power owns and operates 17 hydroelectric plants in Idaho and Oregon, which have both Federal and State compliance requirements. Idaho Power has started building Decision Support Systems (DSS) to aid the hydroelectric plant operators in maximizing hydropower operational efficiency while meeting regulatory compliance constraints. Regulatory constraints on dam operations include: minimum in-stream flows, maximum ramp rate of river stage, reservoir volumes, and reservoir ramp rate for draft and fill. From the hydroelectric standpoint, the desire is to vary the plant discharge (ramping) such that generation matches electricity demand (load-following), but ramping is limited by the regulatory requirements. Idaho Power desires DSS that integrate real-time and historic data, simulate the river's behavior from the hydroelectric plants downstream to the compliance measurement point, and present the information in an easily understandable display that allows the operators to make informed decisions. Creating DSS like these poses a number of scientific and technical challenges. Real-time data are inherently noisy, and automated data cleaning routines are required to filter the data. The DSS must inform the operators when incoming data are outside of predefined bounds. Complex river morphologies can make the timing and shape of a discharge change traveling downstream from a power plant nearly impossible to represent with a predefined lookup table. These complexities require very fast hydrodynamic models of the river system that simulate river characteristics (e.g., stage, discharge) at the downstream compliance point

  1. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms

    PubMed Central

    Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris

    2016-01-01

    In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%. PMID:27792177
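    A minimal MEWMA change detector of the kind being tuned above is sketched below; the smoothing weight and control limit are precisely the sort of parameters the genetic algorithm would search over, and the values used here are arbitrary assumptions.

```python
# Minimal MEWMA change detector: the smoothing weight lam and threshold h are
# the kind of parameters a GA would tune; the values here are arbitrary.
import numpy as np

def mewma_change_point(X, lam=0.2, h=12.0):
    """Return the first index whose MEWMA T^2 statistic exceeds h, else None."""
    mu = X[:50].mean(axis=0)                     # baseline mean from an initial window
    sigma = np.cov(X[:50].T)
    # Asymptotic covariance of the EWMA vector: (lam / (2 - lam)) * Sigma.
    sigma_z_inv = np.linalg.inv(lam / (2.0 - lam) * sigma)
    z = np.zeros(X.shape[1])
    for i, x in enumerate(X):
        z = lam * (x - mu) + (1.0 - lam) * z
        t2 = z @ sigma_z_inv @ z
        if t2 > h:
            return i
    return None

rng = np.random.default_rng(4)
walking = rng.normal(0.0, 1.0, size=(300, 3))    # synthetic accelerometer axes
sitting = rng.normal(1.5, 1.0, size=(300, 3))    # mean shift marks the activity change
print(mewma_change_point(np.vstack([walking, sitting])))
```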

  2. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  3. Optimization of Insertion Cost for Transfer Trajectories to Libration Point Orbits

    NASA Technical Reports Server (NTRS)

    Howell, K. C.; Wilson, R. S.; Lo, M. W.

    1999-01-01

    The objective of this work is the development of efficient techniques to optimize the cost associated with transfer trajectories to libration point orbits in the Sun-Earth-Moon four body problem, that may include lunar gravity assists. Initially, dynamical systems theory is used to determine invariant manifolds associated with the desired libration point orbit. These manifolds are employed to produce an initial approximation to the transfer trajectory. Specific trajectory requirements such as, transfer injection constraints, inclusion of phasing loops, and targeting of a specified state on the manifold are then incorporated into the design of the transfer trajectory. A two level differential corrections process is used to produce a fully continuous trajectory that satisfies the design constraints, and includes appropriate lunar and solar gravitational models. Based on this methodology, and using the manifold structure from dynamical systems theory, a technique is presented to optimize the cost associated with insertion onto a specified libration point orbit.
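    The invariant manifolds mentioned above live in the circular restricted three-body problem (CR3BP); as background, the sketch below simply integrates the planar CR3BP equations in the rotating frame (nondimensional units, Sun-Earth mass parameter). The initial state is illustrative, and the sketch does not reproduce the two-level differential corrections process.

```python
# Planar circular restricted three-body problem in the rotating frame
# (nondimensional units), the dynamical setting for the invariant manifolds.
import numpy as np
from scipy.integrate import solve_ivp

mu = 3.003e-6                                  # Sun-Earth mass parameter

def cr3bp_planar(t, s):
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)                   # distance to the Sun
    r2 = np.hypot(x - 1 + mu, y)               # distance to the Earth
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

# Illustrative state in the Sun-Earth L1 region (not a corrected libration orbit).
s0 = [0.989, 0.0, 0.0, 0.0085]
sol = solve_ivp(cr3bp_planar, (0.0, 3.0), s0, rtol=1e-10, atol=1e-12, dense_output=True)
print(sol.y[:2, -1])                           # final (x, y) in rotating-frame units
```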

  4. Data Mining Method for Battery Operation Optimization in Photovoltaics

    NASA Astrophysics Data System (ADS)

    Sato, Katsunori; Wakao, Shinji

    Recently, photovoltaic (PV) systems have attracted attention because of serious environmental and energy problems. In the near future, PV systems intensively connected to the grid will bring about difficulties in power system operation. As a countermeasure, this paper deals with the introduction of a storage battery for making the unstable PV power controllable. When introducing a storage battery into a PV system, however, the advantages and disadvantages have to be weighed. In order to evaluate the system from various perspectives, we have carried out multi-objective optimization of battery operation in PV system design. However, as the number of objective functions increases, it becomes difficult to appropriately interpret the correlations among objective functions and design variables. With this background, this paper proposes a novel computational method for data mining of PV system design, in which we attempt to effectively extract the design information of the battery system with the use of a Self-Organizing Map (SOM).

  5. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    NASA Astrophysics Data System (ADS)

    Sue-Ann, Goh; Ponnambalam, S. G.

    This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply Chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. A mathematical model is formulated to determine the optimal sales quantity for each buyer in the TSVMBSC. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and the buyers. All of these parameters depend upon the revenue sharing arrangement between the vendor and the buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
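    A generic particle swarm optimizer of the type proposed above is sketched below, run on a stand-in concave channel-profit function of the buyers' sales quantities; the inertia and acceleration coefficients and the profit function are assumptions, not the paper's VMI model.

```python
# Generic PSO run on a stand-in channel-profit function of the buyers' sales
# quantities (the paper's VMI profit model is not reproduced here).
import numpy as np

def channel_profit(q):
    # Hypothetical: revenue grows with sales quantity, costs grow quadratically.
    return np.sum(20.0 * q - 0.5 * q**2, axis=-1)

def pso(objective, dim, n_particles=30, n_iter=200, lb=0.0, ub=25.0, seed=5):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, size=(n_particles, dim))      # positions = sales quantities
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), objective(x)
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        val = objective(x)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, objective(gbest)

best_q, best_profit = pso(channel_profit, dim=4)          # four buyers
print(np.round(best_q, 2), round(float(best_profit), 2))
```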

  6. Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy

    NASA Astrophysics Data System (ADS)

    Magee, T. M.; Clement, M. A.; Zagona, E. A.

    2012-12-01

    Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example target environmental flow levels may not be satisfied if it would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before optimizing power generation. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, it also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent with increased wind penetration levels requires an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location. Wind factors include wind penetration level

  7. Multiplicative approximations, optimal hypervolume distributions, and the choice of the reference point.

    PubMed

    Friedrich, Tobias; Neumann, Frank; Thyssen, Christian

    2015-01-01

    Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing [Formula: see text] points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.
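    To make the optimization goal concrete, the bi-objective hypervolume with respect to a reference point (minimization convention) can be computed with a simple sweep; the example front below is hypothetical, and moving the reference point changes every point's contribution.

```python
# Bi-objective hypervolume indicator (both objectives minimized) with respect
# to a reference point -- the quantity the indicator-based algorithms maximize.
import numpy as np

def hypervolume_2d(points, reference):
    """Area dominated by `points` and bounded by `reference`."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts <= reference, axis=1)]      # ignore points beyond the reference
    pts = pts[np.argsort(pts[:, 0])]                 # sweep along the first objective
    hv, best_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        if f2 < best_f2:                             # non-dominated during the sweep
            hv += (reference[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

front = [(0.1, 0.9), (0.3, 0.5), (0.6, 0.3), (0.9, 0.1)]
print(hypervolume_2d(front, reference=(1.0, 1.0)))   # changing the reference point
print(hypervolume_2d(front, reference=(2.0, 2.0)))   # changes each point's contribution
```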

  8. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  9. Road centerline extraction from airborne LiDAR point cloud based on hierarchical fusion and optimization

    NASA Astrophysics Data System (ADS)

    Hui, Zhenyang; Hu, Youjian; Jin, Shuanggen; Yevenyo, Yao Ziggah

    2016-08-01

    Road information acquisition is an important part of city informatization construction. Airborne LiDAR provides a new means of acquiring road information. However, the existing road extraction methods using LiDAR point clouds always decide the road intensity threshold based on experience, which cannot obtain the optimal threshold to extract a road point cloud. Moreover, these existing methods are deficient in removing the interference of narrow roads and several attached areas (e.g., parking lot and bare ground) to main roads extraction, thereby imparting low completeness and correctness to the city road network extraction result. Aiming at resolving the key technical issues of road extraction from airborne LiDAR point clouds, this paper proposes a novel method to extract road centerlines from airborne LiDAR point clouds. The proposed approach is mainly composed of three key algorithms, namely, Skewness balancing, Rotating neighborhood, and Hierarchical fusion and optimization (SRH). The skewness balancing algorithm used for the filtering was adopted as a new method for obtaining an optimal intensity threshold such that the "pure" road point cloud can be obtained. The rotating neighborhood algorithm on the other hand was developed to remove narrow roads (corridors leading to parking lots or sidewalks), which are not the main roads to be extracted. The proposed hierarchical fusion and optimization algorithm caused the road centerlines to be unaffected by certain attached areas and ensured the road integrity as much as possible. The proposed method was tested using the Vaihingen dataset. The results demonstrated that the proposed method can effectively extract road centerlines in a complex urban environment with 91.4% correctness and 80.4% completeness.
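    A compact rendering of the skewness-balancing idea for the intensity threshold is sketched below on synthetic intensities: peel off the highest values until the remaining distribution is no longer positively skewed, and use the last retained maximum as the cut. The intensity distributions are assumptions, not the Vaihingen data, and the other two algorithms (rotating neighborhood, hierarchical fusion and optimization) are not reproduced.

```python
# Compact version of the skewness-balancing idea for an intensity threshold:
# remove the current maximum until the remaining sample is no longer positively
# skewed; the last retained maximum serves as the threshold. Synthetic data.
import numpy as np
from scipy.stats import skew

def skewness_balancing_threshold(intensity):
    values = np.sort(np.asarray(intensity, dtype=float))
    while values.size > 3 and skew(values) > 0.0:
        values = values[:-1]                      # drop the current maximum
    return values[-1]                             # intensity cut separating the classes

rng = np.random.default_rng(6)
low_returns = rng.normal(30.0, 5.0, 5000)         # hypothetical non-road intensities
road_returns = rng.normal(60.0, 4.0, 800)         # hypothetical brighter road surface
all_returns = np.concatenate([low_returns, road_returns])
threshold = skewness_balancing_threshold(all_returns)
print(round(float(threshold), 1), int((all_returns > threshold).sum()))
```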

  10. Implementation of neural network hardware based on a floating point operation in an FPGA

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Seob; Jung, Seul

    2007-12-01

    This paper presents a hardware design and implementation of the radial basis function (RBF) neural network (NN) in a hardware description language. Due to its nonlinear characteristics, the RBF network is very difficult to implement on a system with integer-based operations. To realize nonlinear functions such as sigmoid or exponential functions, floating-point operations are required. The exponential function is designed based on the 32-bit single-precision floating-point format. In addition, to update the weights in the network, the back-propagation algorithm is also implemented in the hardware. Most operations are performed in the floating-point based arithmetic unit and accomplished sequentially according to the instruction order stored in ROM. The NN is implemented and tested on the Altera FPGA "Cyclone2 EP2C70F672C8" for nonlinear classifications.

  11. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

    2012-10-23

    Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.

  12. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  13. Pointing calibration of the MKIVA DSN antennas Voyager 2 Uranus encounter operations support

    NASA Technical Reports Server (NTRS)

    Stevens, R.; Riggs, R. L.; Wood, B.

    1986-01-01

    The MKIVA DSN introduced significant changes to the pointing systems of the 34-meter and 64-meter diameter antennas. To support the Voyager 2 Uranus Encounter, the systems had to be accurately calibrated. Reliable techniques for use of the calibrations during intense mission support activity had to be provided. This article describes the techniques used to make the antenna pointing calibrations and to demonstrate their operational use. The results of the calibrations are summarized.

  14. Jamming Transition of Point-To Traffic Through Co-Operative Mechanisms

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Qin, Zheng; Chen, Xiqun; Xu, Zhaohui

    2012-11-01

    We study the jamming transition of two-dimensional point-to-point traffic with decentralized co-operative mechanisms (DCM) using computer simulation. We propose two decentralized co-operative mechanisms (CM) which are incorporated into the point-to-point traffic models: stepping aside (CM-SA) and choosing alternative routes (CM-CAR). CM-SA is incorporated to prevent a type of ping-pong jump from occurring when two objects standing face-to-face want to move in opposite directions. CM-CAR is incorporated to handle the conflict when more than one object competes for the same point in a parallel update. We investigate and compare four models, mainly through fundamental diagrams, jam patterns, and the distribution of co-operation probability. It is found that although CM-SA decreases the average velocity a little, it increases the critical density and the average flow. Despite increasing the average velocity, CM-CAR decreases the average flow by creating substantially vacant areas inside jam clusters. We investigate the jam patterns of the four models carefully and explain this result qualitatively. In addition, we discuss the advantage and applicability of decentralized co-operation modeling.

  15. Phase-operation for conduction electron by atomic-scale scattering via single point-defect

    SciTech Connect

    Nagaoka, Katsumi Yaginuma, Shin; Nakayama, Tomonobu

    2014-03-17

    In order to propose a phase-operation technique for conduction electrons in solids, we have investigated, using scanning tunneling microscopy, an atomic-scale electron-scattering phenomenon in a 2D subband state formed in Si. In particular, we focused on a single surface point defect around which a standing-wave pattern is created, and measured the dispersion of the scattering phase shift produced by the defect potential as a function of electron energy. The behavior is well explained with appropriate scattering parameters: the potential height and radius. This result experimentally proves that atomic-scale potential scattering via the point defect enables phase operation of conduction electrons.

  16. Performing a scatterv operation on a hierarchical tree network optimized for collective operations

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Performing a scatterv operation on a hierarchical tree network optimized for collective operations including receiving, by the scatterv module installed on the node, from a nearest neighbor parent above the node a chunk of data having at least a portion of data for the node; maintaining, by the scatterv module installed on the node, the portion of the data for the node; determining, by the scatterv module installed on the node, whether any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child; and sending, by the scatterv module installed on the node, those portions of data to the nearest neighbor child if any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child.
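    The claim above concerns a tree-based way of implementing the scatterv collective inside the machine; for orientation only, the sketch below shows the standard MPI Scatterv operation it ultimately provides, expressed with mpi4py on ordinary MPI (not the patented class-routing mechanism).

```python
# Standard MPI Scatterv collective via mpi4py: the root distributes unevenly
# sized chunks of a buffer to all ranks. Run with: mpiexec -n 4 python scatterv_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    counts = np.array([i + 1 for i in range(size)])      # uneven chunk sizes
    displs = np.insert(np.cumsum(counts), 0, 0)[:-1]     # offsets into the send buffer
    sendbuf = np.arange(counts.sum(), dtype="d")
    sendspec = [sendbuf, counts, displs, MPI.DOUBLE]
else:
    counts, sendspec = None, None

counts = comm.bcast(counts, root=0)                      # each rank needs its own count
recvbuf = np.empty(counts[rank], dtype="d")
comm.Scatterv(sendspec, recvbuf, root=0)
print(f"rank {rank} received {recvbuf}")
```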

  17. Chemically optimizing operational efficiency of molecular rotary motors.

    PubMed

    Conyard, Jamie; Cnossen, Arjen; Browne, Wesley R; Feringa, Ben L; Meech, Stephen R

    2014-07-09

    Unidirectional molecular rotary motors that harness photoinduced cis-trans (E-Z) isomerization are promising tools for the conversion of light energy to mechanical motion in nanoscale molecular machines. Considerable progress has been made in optimizing the frequency of ground-state rotation, but less attention has been focused on excited-state processes. Here the excited-state dynamics of a molecular motor with electron donor and acceptor substituents located to modify the excited-state reaction coordinate, without altering its stereochemistry, are studied. The substituents are shown to modify the photochemical yield of the isomerization without altering the motor frequency. By combining 50 fs resolution time-resolved fluorescence with ultrafast transient absorption spectroscopy the underlying excited-state dynamics are characterized. The Franck-Condon excited state relaxes in a few hundred femtoseconds to populate a lower energy dark state by a pathway that utilizes a volume conserving structural change. This is assigned to pyramidalization at a carbon atom of the isomerizing bridging double bond. The structure and energy of the dark state thus reached are a function of the substituent, with electron-withdrawing groups yielding a lower energy longer lived dark state. The dark state is coupled to the Franck-Condon state and decays on a picosecond time scale via a coordinate that is sensitive to solvent friction, such as rotation about the bridging bond. Neither subpicosecond nor picosecond dynamics are sensitive to solvent polarity, suggesting that intramolecular charge transfer and solvation are not key driving forces for the rate of the reaction. Instead steric factors and medium friction determine the reaction pathway, with the sterically remote substitution primarily influencing the energetics. Thus, these data indicate a chemical method of optimizing the efficiency of operation of these molecular motors without modifying their overall rotational frequency.

  18. A unified treatment of some perturbed fixed point iterative methods with an infinite pool of operators

    NASA Astrophysics Data System (ADS)

    Nikazad, Touraj; Abbasi, Mokhtar

    2017-04-01

    In this paper, we introduce a subclass of strictly quasi-nonexpansive operators which consists of well-known operators as paracontracting operators (e.g., strictly nonexpansive operators, metric projections, Newton and gradient operators), subgradient projections, a useful part of cutter operators, strictly relaxed cutter operators and locally strongly Féjer operators. The members of this subclass, which can be discontinuous, may be employed by fixed point iteration methods; in particular, iterative methods used in convex feasibility problems. The closedness of this subclass, with respect to composition and convex combination of operators, makes it useful and remarkable. Another advantage with members of this subclass is the possibility to adapt them to handle convex constraints. We give convergence result, under mild conditions, for a perturbation resilient iterative method which is based on an infinite pool of operators in this subclass. The perturbation resilient iterative methods are relevant and important for their possible use in the framework of the recently developed superiorization methodology for constrained minimization problems. To assess the convergence result, the class of operators and the assumed conditions, we illustrate some extensions of existence research works and some new results.

  19. Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions

    NASA Astrophysics Data System (ADS)

    Hussain, N.

    2008-02-01

    The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.

  20. Optimal integration of gravity in trajectory planning of vertical pointing movements.

    PubMed

    Crevecoeur, Frédéric; Thonnard, Jean-Louis; Lefèvre, Philippe

    2009-08-01

    The planning and control of motor actions requires knowledge of the dynamics of the controlled limb to generate the appropriate muscular commands and achieve the desired goal. Such planning and control imply that the CNS must be able to deal with forces and constraints acting on the limb, such as the omnipresent force of gravity. The present study investigates the effect of hypergravity induced by parabolic flights on the trajectory of vertical pointing movements to test the hypothesis that motor commands are optimized with respect to the effect of gravity on the limb. Subjects performed vertical pointing movements in normal gravity and hypergravity. We use a model based on optimal control to identify the role played by gravity in the optimal arm trajectory with minimal motor costs. First, the simulations in normal gravity reproduce the asymmetry in the velocity profiles (the velocity reaches its maximum before half of the movement duration), which typically characterizes the vertical pointing movements performed on Earth, whereas the horizontal movements present symmetrical velocity profiles. Second, according to the simulations, the optimal trajectory in hypergravity should present an increase in the peak acceleration and peak velocity despite the increase in the arm weight. In agreement with these predictions, the subjects performed faster movements in hypergravity with significant increases in the peak acceleration and peak velocity, which were accompanied by a significant decrease in the movement duration. This suggests that movement kinematics change in response to an increase in gravity, which is consistent with the hypothesis that motor commands are optimized and the action of gravity on the limb is taken into account. The results provide evidence for an internal representation of gravity in the central planning process and further suggest that an adaptation to altered dynamics can be understood as a reoptimization process.

  1. Genetic algorithm optimization of point charges in force field development: challenges and insights.

    PubMed

    Ivanov, Maxim V; Talipov, Marat R; Timerghazin, Qadir K

    2015-02-26

    Evolutionary methods, such as genetic algorithms (GAs), provide powerful tools for optimization of the force field parameters, especially in the case of simultaneous fitting of the force field terms against extensive reference data. However, GA fitting of the nonbonded interaction parameters that includes point charges has not been explored in the literature, likely due to numerous difficulties with even a simpler problem of the least-squares fitting of the atomic point charges against a reference molecular electrostatic potential (MEP), which often demonstrates an unusually high variation of the fitted charges on buried atoms. Here, we examine the performance of the GA approach for the least-squares MEP point charge fitting, and show that the GA optimizations suffer from a magnified version of the classical buried atom effect, producing highly scattered yet correlated solutions. This effect can be understood in terms of the linearly independent, natural coordinates of the MEP fitting problem defined by the eigenvectors of the least-squares sum Hessian matrix, which are also equivalent to the eigenvectors of the covariance matrix evaluated for the scattered GA solutions. GAs quickly converge with respect to the high-curvature coordinates defined by the eigenvectors related to the leading terms of the multipole expansion, but have difficulty converging with respect to the low-curvature coordinates that mostly depend on the buried atom charges. The performance of the evolutionary techniques dramatically improves when the point charge optimization is performed using the Hessian or covariance matrix eigenvectors, an approach with a significant potential for the evolutionary optimization of the fixed-charge biomolecular force fields.
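    The least-squares MEP fitting problem and the Hessian eigen-analysis described above can be sketched on a toy linear molecule: fit charges to a reference potential under a total-charge constraint, then inspect the eigenvalues of A^T A, whose small (flat) directions correspond to the buried-atom effect. The geometry, grid, and charges are all hypothetical.

```python
# Toy version of the analysis above: constrained least-squares fit of atomic
# point charges to a reference electrostatic potential, plus eigendecomposition
# of the least-squares Hessian (A^T A) defining the fit's natural coordinates.
import numpy as np

rng = np.random.default_rng(7)
atoms = np.array([[0.0, 0.0, -1.2], [0.0, 0.0, 0.0], [0.0, 0.0, 1.2]])  # middle atom buried
true_q = np.array([-0.4, 0.1, 0.3])                                     # sums to zero

grid = rng.normal(scale=4.0, size=(400, 3))
dists_to_atoms = np.linalg.norm(grid[:, None] - atoms[None], axis=2)
grid = grid[np.min(dists_to_atoms, axis=1) > 1.5]                       # drop points near nuclei

A = 1.0 / np.linalg.norm(grid[:, None] - atoms[None], axis=2)           # MEP design matrix (a.u.)
V = A @ true_q + 1e-3 * rng.normal(size=A.shape[0])                     # "reference" potential

# Constrained least squares via a Lagrange multiplier: total charge fixed to 0.
H, g = A.T @ A, A.T @ V
n = len(true_q)
KKT = np.block([[H, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
q_fit = np.linalg.solve(KKT, np.append(g, 0.0))[:n]

# Eigen-analysis of the Hessian: small eigenvalues mark flat, buried-atom directions.
eigvals, eigvecs = np.linalg.eigh(H)
print(np.round(q_fit, 3), np.round(eigvals, 2))
```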

  2. Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology

    NASA Astrophysics Data System (ADS)

    Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu

    2013-08-01

    From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are attracting attention in distribution systems. Additionally, all-electric apartment houses and residences such as the DC smart house have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, balancing supply and demand in the power system becomes problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for the optimal operation of a smart grid to minimize interconnection-point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as batteries and heat pumps. By minimizing the interconnection-point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, a heat pump, a battery, a solar collector, and loads. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.

  3. Optimizing Wellfield Operation in a Variable Power Price Regime.

    PubMed

    Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus

    2016-01-01

    Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m(3) ). However, power systems in most countries are moving in the direction of deregulated markets and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated EFP of pumped water as a function of wellfield pumping rate (EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a Stochastic Dynamic Programming (SDP) framework to minimize total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period based on a time series of observed price on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between the scenario and a constant-rate pumping benchmark. Minor savings up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings up to 40%. Key factors determining potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate and the capacity of available storage facilities.

  4. Research on the modeling of the missile's disturbance motion and the initial control point optimization

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Dalin; Tang, Shengjing

    2012-11-01

    The initial trajectory design of a missile is an important part of the overall design, but it is often a tedious calculation and analysis process because of the large-dimension nonlinear differential equations and the traditional statistical analysis methods involved. To improve on the traditional design methods, a robust optimization concept and method are introduced in this paper to deal with the determination of the initial control point. First, a Gaussian Radial Basis Network is adopted to establish an approximate model of the missile's disturbance motion, based on an analysis of the disturbance motion and disturbance factors. Then, a direct analytical relationship between the disturbance input and the statistical results is deduced on the basis of the Gaussian Radial Basis Network model. Subsequently, a robust optimization model is established for the initial control point design problem, and the niche Pareto genetic algorithm for multi-objective optimization is adopted to solve this optimization model. An integral design example is given at the end, and the simulation results verify the validity of this method.

  5. Optimized Operation and Electrical Power Supply System of Ignitor*

    NASA Astrophysics Data System (ADS)

    Coletti, A.; Candela, G.; Coletti, R.; Costa, P.; Maffia, G.; Santinelli, M.; Starace, F.; Sforna, M.; Allegra, G.; Trevisan, L.; Florio, A.; Novaro, R.; Coppi, B.

    2006-10-01

    Two reference sets of parameters for the operation of Ignitor have been identified. One, the main set, involves plasma currents up to 11 MA and toroidal fields up to 13 T. The reduced parameter set corresponds to 7 MA with fields of 9 T and considerably longer pulse flat-tops. The evolution of the relevant currents in the toroidal and the poloidal field magnet systems has been optimized in order to minimize the requirements on the electrical power supply and cryogenic cooling systems. Thyristor amplifiers are adapted to drive both the toroidal and poloidal field magnet systems. The total installed power for these systems is 2400 MVA. The connection of this to the terminals, involving two nodes of the 400 kV grid at the Caorso site, which houses a dismantled nuclear power station, has been analyzed and authorized by the TERNA-GRTN Agency. Particular consideration has been given to the problems involving the control of both the position and the shaping of the elongated, tight aspect ratio plasma column. *Sponsored in part by ENEA of Italy and by the U.S. DOE.

  6. 77 FR 40091 - Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-06

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc.; Indian Point Nuclear Generating, Units 2 and 3 AGENCY: Nuclear... statement for license renewal of nuclear plants; availability. SUMMARY: The U.S. Nuclear...

  7. Office of Naval Research (ONR) Support for R/V Point Sur Ship Operations

    DTIC Science & Technology

    2016-01-07

    WORK UNIT NUMBER N/A 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) San Jose State University Research Foundation 210 N. Fourth St... Discussion of Work: The ONR Ship Operations Award, N00014-15-1-2123, provided support for the Research Vessel Point Sur from January 1, 2014 through June

  8. 47 CFR 90.471 - Points of operation in internal transmitter control systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... control systems. 90.471 Section 90.471 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Transmitter Control Internal Transmitter Control Systems § 90.471 Points of operation in internal transmitter control systems....

  9. 47 CFR 90.471 - Points of operation in internal transmitter control systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... control systems. 90.471 Section 90.471 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Transmitter Control Internal Transmitter Control Systems § 90.471 Points of operation in internal transmitter control systems....

  10. 76 FR 65118 - Drawbridge Operation Regulation; Bear Creek, Sparrows Point, MD

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ... SECURITY Coast Guard 33 CFR Part 117 RIN 1625-AA09 Drawbridge Operation Regulation; Bear Creek, Sparrows... Avenue) highway toll drawbridge across Bear Creek, mile 1.5, Sparrows Point, MD was replaced with a fixed... Bear Creek, mile 1.5 was removed and replaced with a fixed bridge in 1998. Prior to 1998, a...

  11. ATP-gamma-S shifts the operating point of outer hair cell transduction towards scala tympani.

    PubMed

    Bobbin, Richard P; Salt, Alec N

    2005-07-01

    ATP receptor agonists and antagonists alter cochlear mechanics as measured by changes in distortion product otoacoustic emissions (DPOAE). Some of the effects on DPOAEs are consistent with the hypothesis that ATP affects mechano-electrical transduction and the operating point of the outer hair cells (OHCs). This hypothesis was tested by monitoring the effect of ATP-gamma-S on the operating point of the OHCs. Guinea pigs anesthetized with urethane and with sectioned middle ear muscles were used. The cochlear microphonic (CM) was recorded differentially (scala vestibuli referenced to scala tympani) across the basal turn before and after perfusion (20 min) of the perilymph compartment with artificial perilymph (AP) and ATP-gamma-S dissolved in AP. The operating point was derived from the CM recorded in response to low-frequency (200 Hz) tones at high levels (106, 112 and 118 dB SPL). The analysis procedure used a Boltzmann function to simulate the CM waveform and the Boltzmann parameters were adjusted to best fit the calculated waveform to the CM. Compared to the initial perfusion with AP, ATP-gamma-S (333 microM) enhanced peak clipping of the positive peak of the CM (that occurs during organ of Corti displacements towards scala tympani), which was in keeping with ATP-induced displacement of the transducer towards scala tympani. CM waveform analysis quantified the degree of displacement and showed that the changes were consistent with the stimulus being centered on a different region of the transducer curve. The change of operating point meant that the stimulus was applied to a region of the transducer curve where there was greater saturation of the output on excursions towards scala tympani and less saturation towards scala vestibuli. A significant degree of recovery of the operating point was observed after washing with AP. Dose response curves generated by perfusing ATP-gamma-S (333 microM) in a cumulative manner yielded an EC50 of 19.8 micro
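
    The waveform-analysis step described above can be illustrated with a small fit of a Boltzmann transducer function to a (here synthetic) cochlear microphonic trace; the fitted offset plays the role of the operating point. All parameter values below are illustrative, and the single first-order Boltzmann is a simplification of the analysis used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

fs, f0 = 20000.0, 200.0
t = np.arange(0, 0.05, 1 / fs)

def cm_model(tt, amp, op_point, slope, offset):
    """First-order Boltzmann transducer driven by a 200-Hz tone; op_point shifts the curve."""
    x = np.sin(2 * np.pi * f0 * tt) + op_point
    return amp / (1.0 + np.exp(-x / slope)) + offset

# Synthetic "measured" CM with an operating point shifted towards one side, plus noise.
cm_true = cm_model(t, 1.0, -0.35, 0.25, 0.0)
cm_meas = cm_true + 0.02 * np.random.default_rng(1).normal(size=t.size)

popt, _ = curve_fit(cm_model, t, cm_meas, p0=[1.0, 0.0, 0.3, 0.0])
print("fitted operating point: %.3f (true -0.350)" % popt[1])
```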

  12. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    NASA Astrophysics Data System (ADS)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g., investigations of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly lower than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
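
    For readers who want to see the two-point estimation spelled out, the sketch below recovers the Tyler and Wheatcraft (1990) parameters (fractal dimension and air-entry value) in closed form from two retention points, assuming the saturated water content is known. The numerical values are illustrative, not data from the study.

```python
import numpy as np

theta_s = 0.45                       # saturated volumetric water content (assumed known)
psi1, theta1 = 33.0, 0.30            # first measured point: matric potential (kPa), water content
psi2, theta2 = 1500.0, 0.12          # second measured point

# Model: theta/theta_s = (psi_a/psi)**(3 - D) for psi >= psi_a.
D = 3.0 + np.log(theta1 / theta2) / np.log(psi1 / psi2)          # fractal dimension
psi_a = psi1 / (theta1 / theta_s) ** (1.0 / (D - 3.0))           # air-entry value

def swrc(psi):
    """Predicted water content at matric potential psi (kPa)."""
    return np.where(psi <= psi_a, theta_s, theta_s * (psi_a / psi) ** (3.0 - D))

print("D = %.3f, psi_a = %.2f kPa" % (D, psi_a))
print("theta(100 kPa) = %.3f" % swrc(100.0))
```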

  13. Operationally optimal maneuver strategy for spacecraft injected into sub-geosynchronous transfer orbit

    NASA Astrophysics Data System (ADS)

    Kiran, B. S.; Singh, Satyendra; Negi, Kuldeep

    The GSAT-12 spacecraft is providing communication services from the INSAT/GSAT system in the Indian region. The spacecraft carries 12 extended C-band transponders. GSAT-12 was launched by ISRO’s PSLV from Sriharikota into a sub-geosynchronous transfer orbit (sub-GTO) of 284 x 21000 km with inclination 18 deg. This mission successfully accomplished a combined optimization of launch vehicle and satellite capabilities to maximize the operational life of the spacecraft. This paper describes the mission analysis carried out for GSAT-12, comprising the launch window, an orbital events study and orbit-raising maneuver strategies, considering various mission operational constraints. GSAT-12 is equipped with two earth sensors (ES), three gyroscopes and a digital sun sensor. The launch window was generated considering the mission requirement of a minimum of 45 minutes of ES data for calibration of the gyros in a Roll-sun-pointing orientation in the transfer orbit (T.O.). Since the T.O. period was a rather short 6.1 hr, the required pitch biases were worked out to meet the gyro-calibration requirement. A 440 N Liquid Apogee Motor (LAM) is used for orbit raising. The objective of the maneuver strategy is to achieve the desired drift orbit while satisfying mission constraints and minimizing propellant expenditure. In the case of a sub-GTO, the optimal strategy is to first perform an in-plane maneuver at perigee to raise the apogee to the synchronous level and then perform combined maneuvers at the synchronous apogee to achieve the desired drift orbit. The perigee burn opportunities were examined considering the ground station visibility requirement for monitoring the burn. Two maneuver strategies were proposed: an optimal five-burn strategy with two perigee burns centered around perigee#5 and perigee#8 with partial ground station visibility and three apogee burns with dual station visibility; and a near-optimal five-burn strategy with two off-perigee burns at perigee#5 and perigee#8 with single ground station visibility and three apogee burns with dual station visibility
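
    The first step of the strategy, raising apogee with an in-plane perigee burn, can be illustrated with a short vis-viva calculation. The sketch below assumes an impulsive burn, no plane change and rounded illustrative radii; it is not the mission's actual maneuver budget.

```python
import numpy as np

mu = 398600.4418          # km^3/s^2, Earth's gravitational parameter
Re = 6378.137             # km, Earth equatorial radius

r_p = Re + 284.0          # perigee radius of the sub-GTO
r_a1 = Re + 21000.0       # initial apogee radius
r_a2 = 42164.0            # target apogee radius (geosynchronous)

def v_at(r, a):
    """Vis-viva speed at radius r on an orbit with semi-major axis a."""
    return np.sqrt(mu * (2.0 / r - 1.0 / a))

a1 = 0.5 * (r_p + r_a1)   # semi-major axis before the burn
a2 = 0.5 * (r_p + r_a2)   # semi-major axis after the burn
dv_perigee = v_at(r_p, a2) - v_at(r_p, a1)
print("perigee burn to raise apogee to the synchronous level: %.1f m/s" % (1000 * dv_perigee))
```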

  14. The Hubble Space Telescope fine guidance system operating in the coarse track pointing control mode

    NASA Technical Reports Server (NTRS)

    Whittlesey, Richard

    1993-01-01

    The Hubble Space Telescope (HST) Fine Guidance System has set new standards in pointing control capability for earth orbiting spacecraft. Two precision pointing control modes are implemented in the Fine Guidance System; one being a Coarse Track Mode which employs a pseudo-quadrature detector approach and the second being a Fine Mode which uses a two axis interferometer implementation. The Coarse Track Mode was designed to maintain FGS pointing error to within 20 milli-arc seconds (rms) when guiding on a 14.5 Mv star. The Fine Mode was designed to maintain FGS pointing error to less than 3 milli-arc seconds (rms). This paper addresses the HST FGS operating in the Coarse Track Mode. An overview of the implementation, the operation, and both the predicted and observed on orbit performance is presented. The discussion includes a review of the Fine Guidance System hardware which uses two beam steering Star Selector servos, four photon counting photomultiplier tube detectors, as well as a 24 bit microprocessor, which executes the control system firmware. Unanticipated spacecraft operational characteristics are discussed as they impact pointing performance. These include the influence of spherically aberrated star images as well as the mechanical shocks induced in the spacecraft during and following orbital day/night terminator crossings. Computer modeling of the Coarse Track Mode verifies the observed on orbit performance trends in the presence of these optical and mechanical disturbances. It is concluded that the coarse track pointing control function is performing as designed and is providing a robust pointing control capability for the Hubble Space Telescope.

  15. Turbine Reliability and Operability Optimization through the use of Direct Detection Lidar Final Technical Report

    SciTech Connect

    Johnson, David K; Lewis, Matthew J; Pavlich, Jane C; Wright, Alan D; Johnson, Kathryn E; Pace, Andrew M

    2013-02-01

    The goal of this Department of Energy (DOE) project is to increase wind turbine efficiency and reliability with the use of a Light Detection and Ranging (LIDAR) system. The LIDAR provides wind speed and direction data that can be used to help mitigate the fatigue stress on the turbine blades and internal components caused by wind gusts, sub-optimal pointing and reactionary speed or RPM changes. This effort will have a significant impact on the operation and maintenance costs of turbines across the industry. During the course of the project, Michigan Aerospace Corporation (MAC) modified and tested a prototype direct detection wind LIDAR instrument; the resulting LIDAR design considered all aspects of wind turbine LIDAR operation from mounting, assembly, and environmental operating conditions to laser safety. Additionally, in co-operation with our partners, the National Renewable Energy Lab and the Colorado School of Mines, progress was made in LIDAR performance modeling as well as LIDAR feed forward control system modeling and simulation. The results of this investigation showed that using LIDAR measurements to change between baseline and extreme event controllers in a switching architecture can reduce damage equivalent loads on blades and tower, and produce higher mean power output due to fewer overspeed events. This DOE project has led to continued venture capital investment and engagement with leading turbine OEMs, wind farm developers, and wind farm owner/operators.

  16. A Scalable, Parallel Approach for Multi-Point, High-Fidelity Aerostructural Optimization of Aircraft Configurations

    NASA Astrophysics Data System (ADS)

    Kenway, Gaetan K. W.

    This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine accurate derivative techniques and is verified to yield fully consistent derivatives by comparing against the complex step method. The fully-coupled large-scale coupled adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW

  17. Li/CFx Cells Optimized for Low-Temperature Operation

    NASA Technical Reports Server (NTRS)

    Smart, Marshall C.; Whitacre, Jay F.; Bugga, Ratnakumar V.; Prakash, G. K. Surya; Bhalla, Pooja; Smith, Kiah

    2009-01-01

    Some developments reported in prior NASA Tech Briefs articles on primary electrochemical power cells containing lithium anodes and fluorinated carbonaceous (CFx) cathodes have been combined to yield a product line of cells optimized for relatively-high-current operation at low temperatures at which commercial lithium-based cells become useless. These developments have involved modifications of the chemistry of commercial Li/CFx cells and batteries, which are not suitable for high-current and low-temperature applications because they are current-limited and their maximum discharge rates decrease with decreasing temperature. One of two developments that constitute the present combination is, itself, a combination of developments: (1) the use of sub-fluorinated carbonaceous (CFx wherein x<1) cathode material, (2) making the cathodes thinner than in most commercial units, and (3) using non-aqueous electrolytes formulated especially to enhance low-temperature performance. This combination of developments was described in more detail in High-Energy-Density, Low-Temperature Li/CFx Primary Cells (NPO-43219), NASA Tech Briefs, Vol. 31, No. 7 (July 2007), page 43. The other development included in the present combination is the use of an anion receptor as an electrolyte additive, as described in the immediately preceding article, "Additive for Low-Temperature Operation of Li-(CF)n Cells" (NPO-43579). A typical cell according to the present combination of developments contains an anion-receptor additive solvated in an electrolyte that comprises LiBF4 dissolved at a concentration of 0.5 M in a mixture of four volume parts of 1,2 dimethoxyethane with one volume part of propylene carbonate. The proportion, x, of fluorine in the cathode in such a cell lies between 0.5 and 0.9. The best of such cells fabricated to date have exhibited discharge capacities as large as 0.6 A h per gram at a temperature of -50 C when discharged at a rate of C/5 (where C is the magnitude of the

  18. Renormalization of quark bilinear operators in a momentum-subtraction scheme with a nonexceptional subtraction point

    SciTech Connect

    Sturm, C.; Soni, A.; Aoki, Y.; Christ, N. H.; Izubuchi, T.; Sachrajda, C. T. C.

    2009-07-01

    We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS-bar scheme and can be used to convert results obtained in lattice calculations into the MS-bar scheme. Such a symmetric subtraction point involves nonexceptional momenta implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operator, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.

  19. Optimal Orbital Coverage of Theater Operations and Targets

    DTIC Science & Technology

    2007-03-01

    satellites. Several different approaches to coverage optimization are used. For the case of a single satellite, the number of daylight passes made... optimization. The third approach is to prevent a gap in coverage by placing the satellites in orbits spaced evenly by longitude of the ascending node. One... balance the tradeoffs between the number of passes and slant range, an optimization algorithm was developed and implemented as a computer program

  20. Planned LMSS propagation experiment using ACTS: Preliminary antenna pointing results during mobile operations

    NASA Technical Reports Server (NTRS)

    Rowland, John R.; Goldhirsh, Julius; Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1991-01-01

    An overview and a status description of the planned LMSS mobile K band experiment with ACTS is presented. As a precursor to the ACTS mobile measurements at 20.185 GHz, measurements at 19.77 GHz employing the Olympus satellite were originally planned. However, because of the demise of Olympus in June of 1991, the efforts described here are focused towards the ACTS measurements. In particular, we describe the design and testing results of a gyro controlled mobile-antenna pointing system. Preliminary pointing measurements during mobile operations indicate that the present system is suitable for measurements employing a 15 cm aperture (beamwidth at approximately 7 deg) receiving antenna operating with ACTS in the high gain transponder mode. This should enable measurements with pattern losses smaller than plus or minus 1 dB over more than 95 percent of the driving distance. Measurements with the present mount system employing a 60 cm aperture (beamwidth at approximately 1.7 deg) results in pattern losses smaller than plus or minus 3 dB for 70 percent of the driving distance. Acceptable propagation measurements may still be made with this system by employing developed software to flag out bad data points due to extreme pointing errors. The receiver system including associated computer control software has been designed and assembled. Plans are underway to integrate the antenna mount with the receiver on the University of Texas mobile receiving van and repeat the pointing tests on highways employing a recently designed radome system.

  1. Planned LMSS propagation experiment using ACTS: Preliminary antenna pointing results during mobile operations

    NASA Astrophysics Data System (ADS)

    Rowland, John R.; Goldhirsh, Julius; Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1991-07-01

    An overview and a status description of the planned LMSS mobile K band experiment with ACTS is presented. As a precursor to the ACTS mobile measurements at 20.185 GHz, measurements at 19.77 GHz employing the Olympus satellite were originally planned. However, because of the demise of Olympus in June of 1991, the efforts described here are focused towards the ACTS measurements. In particular, we describe the design and testing results of a gyro controlled mobile-antenna pointing system. Preliminary pointing measurements during mobile operations indicate that the present system is suitable for measurements employing a 15 cm aperture (beamwidth at approximately 7 deg) receiving antenna operating with ACTS in the high gain transponder mode. This should enable measurements with pattern losses smaller than plus or minus 1 dB over more than 95 percent of the driving distance. Measurements with the present mount system employing a 60 cm aperture (beamwidth at approximately 1.7 deg) results in pattern losses smaller than plus or minus 3 dB for 70 percent of the driving distance. Acceptable propagation measurements may still be made with this system by employing developed software to flag out bad data points due to extreme pointing errors. The receiver system including associated computer control software has been designed and assembled. Plans are underway to integrate the antenna mount with the receiver on the University of Texas mobile receiving van and repeat the pointing tests on highways employing a recently designed radome system.

  2. Zero-point energies, the uncertainty principle, and positivity of the quantum Brownian density operator.

    PubMed

    Tameshtit, Allan

    2012-04-01

    High-temperature and white-noise approximations are frequently invoked when deriving the quantum Brownian equation for an oscillator. Even if this white-noise approximation is avoided, it is shown that if the zero-point energies of the environment are neglected, as they often are, the resultant equation will violate not only the basic tenet of quantum mechanics that requires the density operator to be positive, but also the uncertainty principle. When the zero-point energies are included, asymptotic results describing the evolution of the oscillator are obtained that preserve positivity and, therefore, the uncertainty principle.

  3. Performance of FORTRAN floating-point operations on the Flex/32 multicomputer

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1987-01-01

    A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.

  4. The method of optimization of neuro-based concurrent operations in neurocomputers

    NASA Astrophysics Data System (ADS)

    Romanchuk, V. A.

    2017-02-01

    The article deals with the optimization of neuro-based concurrent operations implemented in neurocomputers. We define the mathematical tools of this optimization, which employ a set-theoretic approach to such concepts as task, operation, and microcommand. We consider segmentation and parallelization of operations as the methods to use, depending on the precedence relations among the operations that constitute these segments. The resulting optimization of neuro-based concurrent operations can be applied to a whole class of neurocomputers, regardless of the manufacturer, the model or the product line, since we only address the general properties and principles of neurocomputer operation. We select criteria and define methods for evaluating the effectiveness of parallelizing concurrent operations when they are implemented in neurocomputers. We describe our empirical research in the form of a software system that automatically optimizes neuro-based concurrent operations in neurocomputers on the NP Studio platform.

  5. Outage Capacity Optimization for Free-Space Optical Links With Pointing Errors

    NASA Astrophysics Data System (ADS)

    Farid, Ahmed A.; Hranilovic, Steve

    2007-07-01

    We investigate the performance and design of free-space optical (FSO) communication links over slow fading channels from an information theory perspective. A statistical model for the optical intensity fluctuation at the receiver due to the combined effects of atmospheric turbulence and pointing errors is derived. Unlike earlier work, our model considers the effect of beam width, detector size, and jitter variance explicitly. Expressions for the outage probability are derived for a variety of atmospheric conditions. For given weather and misalignment conditions, the beam width is optimized to maximize the channel capacity subject to outage. Large gains in achievable rate are realized versus using a nominal beam width. In light fog, by optimizing the beam width, the achievable rate is increased by 80% over the nominal beam width at an outage probability of 10^-5. Well-known error control codes are then applied to the channel and shown to realize much of the achievable gains.
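
    The beam-width trade-off can be illustrated with a small Monte Carlo experiment: a wider beam tolerates pointing jitter better but concentrates less power on the detector. The simplified channel model below (lognormal turbulence times a Gaussian-beam pointing factor for a small detector) and every numerical value are assumptions for illustration, not the statistical model derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

sigma_jitter = 0.30        # radial pointing jitter at the receiver plane, m (assumed)
sigma_x = 0.25             # log-amplitude standard deviation of turbulence (assumed)
h_req = 0.15               # minimum normalized gain needed to support the target rate (assumed)

# Rayleigh-distributed radial displacement and unit-mean lognormal turbulence fading.
r = sigma_jitter * np.sqrt(rng.standard_normal(n) ** 2 + rng.standard_normal(n) ** 2)
h_a = np.exp(2 * sigma_x * rng.standard_normal(n) - 2 * sigma_x ** 2)

def outage(w):
    """Outage probability for beam radius w (normalized units)."""
    h_p = np.exp(-2.0 * r ** 2 / w ** 2)      # pointing loss at displacement r
    h = (1.0 / w ** 2) * h_p * h_a            # collected power scales roughly as 1/w^2
    return np.mean(h < h_req)

widths = np.linspace(0.4, 3.0, 27)
p_out = np.array([outage(w) for w in widths])
print("beam radius minimizing outage: %.2f (P_out = %.2e)" % (widths[np.argmin(p_out)], p_out.min()))
```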

  6. An Efficient and Optimal Filter for Identifying Point Sources in Millimeter/Submillimeter Wavelength Sky Maps

    NASA Astrophysics Data System (ADS)

    Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.

    2013-07-01

    A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.
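
    A minimal version of the Fourier-domain filtering idea is sketched below: the map is filtered with conj(beam)/P_noise, where P_noise is a frequency-dependent noise power spectrum. The coverage-gradient correction that distinguishes the filter described above is omitted, and the map, beam and noise spectrum are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)

beam = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))        # point-source response
beam_k = np.fft.fft2(np.fft.ifftshift(beam))                 # beam centered at pixel (0, 0)

truth = np.zeros((n, n))
truth[60, 70] = truth[180, 200] = 1.0                        # two unit point sources
signal = np.real(np.fft.ifft2(np.fft.fft2(truth) * beam_k))

# Colored noise with a known power spectrum: a white floor plus a low-frequency component.
k2 = np.fft.fftfreq(n)[:, None] ** 2 + np.fft.fftfreq(n)[None, :] ** 2
p_noise = 0.05 ** 2 * (1.0 + 0.02 / (k2 + 1e-4))
noise = np.real(np.fft.ifft2(np.sqrt(p_noise) * np.fft.fft2(rng.normal(size=(n, n)))))

m = signal + noise

# Matched filter in Fourier space: conj(beam) / noise power spectrum.
w = np.conj(beam_k) / p_noise
filtered = np.real(np.fft.ifft2(np.fft.fft2(m) * w))

peaks = np.argwhere(filtered > 0.7 * filtered.max())
print("pixels above 70% of the filter peak (row, col):\n", peaks[:5])
```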

  7. Schrödinger Operator with Non-Zero Accumulation Points of Complex Eigenvalues

    NASA Astrophysics Data System (ADS)

    Bögli, Sabine

    2016-11-01

    We study Schrödinger operators {H=-Δ + V} in {L2(Ω)} where {Ω} is {R^d} or the half-space {{R+d}} , subject to (real) Robin boundary conditions in the latter case. For {p > d} we construct a non-real potential {V in Lp(Ω) \\cap L^{∞}(Ω)} that decays at infinity so that H has infinitely many non-real eigenvalues accumulating at every point of the essential spectrum {σ_ess(H)=[0,∞)} . This demonstrates that the Lieb-Thirring inequalities for selfadjoint Schrödinger operators are no longer true in the non-selfadjoint case.

  8. Estimation of the global average temperature with optimally weighted point gauges

    NASA Technical Reports Server (NTRS)

    Hardin, James W.; Upson, Robert B.

    1993-01-01

    This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.
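
    The weight derivation can be made concrete with a small example: for an assumed covariance model, the minimum-MSE weights solve the linear system C w = c, where C is the gauge-gauge covariance and c the covariance of each gauge with the domain average. A 1-D periodic domain stands in for the globe, no unbiasedness constraint is imposed, and all numbers are illustrative.

```python
import numpy as np

L = 0.15                                             # correlation length (assumed)
grid = np.linspace(0.0, 1.0, 1000, endpoint=False)   # fine grid approximating the domain
gauges = np.array([0.05, 0.30, 0.55, 0.62, 0.90])    # gauge locations (assumed)

def cov(a, b):
    """Exponential covariance on a periodic unit interval."""
    d = np.abs(a[:, None] - b[None, :])
    d = np.minimum(d, 1.0 - d)
    return np.exp(-d / L)

C = cov(gauges, gauges)                  # gauge-gauge covariance matrix
c = cov(gauges, grid).mean(axis=1)       # covariance of each gauge with the domain average
var_mean = cov(grid, grid).mean()        # variance of the true domain average

w_opt = np.linalg.solve(C, c)            # minimum-MSE weights
w_eq = np.full(gauges.size, 1.0 / gauges.size)

mse_opt = var_mean - c @ w_opt
mse_eq = var_mean - 2 * w_eq @ c + w_eq @ C @ w_eq
print("optimal weights:", np.round(w_opt, 3))
print("MSE: optimal %.4f vs equal %.4f" % (mse_opt, mse_eq))
```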

  9. Optimized shape semantic graph representation for object understanding and recognition in point clouds

    NASA Astrophysics Data System (ADS)

    Ning, Xiaojuan; Wang, Yinghui; Meng, Weiliang; Zhang, Xiaopeng

    2016-10-01

    To understand and recognize the three-dimensional (3-D) objects represented as point cloud data, we use an optimized shape semantic graph (SSG) to describe 3-D objects. Based on the decomposed components of an object, the boundary surface of different components and the topology of components, the SSG gives a semantic description that is consistent with human vision perception. The similarity measurement of the SSG for different objects is effective for distinguishing the type of object and finding the most similar one. Experiments using a shape database show that the SSG is valuable for capturing the components of the objects and the corresponding relations between them. The SSG is not only suitable for an object without any loops but also appropriate for an object with loops to represent the shape and the topology. Moreover, a two-step progressive similarity measurement strategy is proposed to effectively improve the recognition rate in the shape database containing point-sample data.

  10. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  11. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  12. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-point frogs and derails shall be selected through circuit controller operated directly by switch points... switch, movable-point frog, and derail in the routes governed by such signal. Circuits shall be arranged... when each switch, movable-point frog, and derail in the route is in proper position....

  13. Optimization of a catchment-scale coupled surface-subsurface hydrological model using pilot points

    NASA Astrophysics Data System (ADS)

    Danapour, Mehrdis; Stisen, Simon; Lajer Højberg, Anker

    2016-04-01

    Transient coupled surface-subsurface models are usually complex and contain a large amount of spatio-temporal information. In the traditional calibration approach, model parameters are adjusted against only a few spatially aggregated observations of discharge or individual point observations of groundwater head. However, this approach does not enable an assessment of spatially explicit predictive model capabilities at the intermediate scale relevant for many applications. The overall objective of this project is to develop a new model calibration and evaluation framework by combining distributed model parameterization and regularization with new types of objective functions focusing on optimizing spatial patterns rather than individual points or catchment scale features. Inclusion of detailed observed spatial patterns of hydraulic head gradients or relevant information obtained from remote sensing data in the calibration process could allow for a better representation of spatial variability of hydraulic properties. Pilot points, as an alternative to classical parameterization approaches, introduce great flexibility when calibrating heterogeneous systems without neglecting expert knowledge (Doherty, 2003). A highly parameterized optimization of complex distributed hydrological models at catchment scale is challenging due to the computational burden that comes with it. In this study the physically-based coupled surface-subsurface model MIKE SHE is calibrated for the 8,500 km² area of central Jylland (Denmark) that is characterized by heterogeneous geology and considerable groundwater flow across topographical catchment boundaries. The calibration of the distributed conductivity fields is carried out with a pilot point-based approach, implemented using the PEST parameter estimation tool. To reduce the high number of calibration parameters, PEST's advanced singular value decomposition combined with regularization was utilized and a reduction of the model's complexity was
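
    The pilot-point parameterization itself can be sketched in a few lines: calibration adjusts log-conductivity at a small set of points, and those values are interpolated to the model grid. The sketch below uses simple inverse-distance weighting rather than the kriging typically used in PEST workflows, and all locations and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
nx, ny = 120, 100
gx, gy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")

pilot_xy = rng.uniform([0, 0], [nx, ny], size=(25, 2))        # 25 pilot-point locations
pilot_logk = rng.normal(loc=-4.0, scale=0.8, size=25)         # "calibrated" log10(K) values (assumed)

def interpolate_logk(power=2.0):
    """Inverse-distance interpolation of pilot-point log-K to every grid cell."""
    dx = gx[..., None] - pilot_xy[:, 0]
    dy = gy[..., None] - pilot_xy[:, 1]
    dist = np.sqrt(dx ** 2 + dy ** 2) + 1e-6
    wts = dist ** -power
    return (wts * pilot_logk).sum(axis=-1) / wts.sum(axis=-1)

logk_field = interpolate_logk()
print("log10(K) field range: %.2f to %.2f" % (logk_field.min(), logk_field.max()))
```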

  14. Laser pointing camera: a valuable tool for the LGS-AO operations

    NASA Astrophysics Data System (ADS)

    Centrone, M.; Bonaccini Calia, D.; Pedichini, F.; Cerruto, A.; Ricciardi, A.; Ambrosino, F.

    2016-07-01

    We describe the design, functionalities and commissioning results of the Laser Pointing Camera (LPC), developed at INAF-OAR in collaboration with ESO and Astrel for the 4LGSF of the ESO Adaptive Optics Facility. The LPC has proven a fundamental tool during commissioning and operation of the 4LGSF. It allows the pointing and focusing models of the four LGS to be calibrated and reduces to zero the overhead time for the open-loop acquisition of the LGS in the wavefront sensor. During LGS-AO operation it regularly collects the LGS photometry, the LGS FWHM and the cirrus cloud scattering levels. By recognizing the field stars as well as the multiple LGS via astrometric software, the LPC is insensitive to flexures of the laser launch telescope or of the receiver telescope opto-mechanics. We present the commissioning results of the Laser Pointing Camera, obtained at the ESO VLT during the commissioning of all 4LGSF Laser Guide Star Units, and discuss its possible extension to ELT operations.

  15. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.

  16. Muscle motor point identification is essential for optimizing neuromuscular electrical stimulation use.

    PubMed

    Gobbo, Massimiliano; Maffiuletti, Nicola A; Orizio, Claudio; Minetto, Marco A

    2014-02-25

    Transcutaneous neuromuscular electrical stimulation applied in clinical settings is currently characterized by a wide heterogeneity of stimulation protocols and modalities. Practitioners usually refer to anatomic charts (often provided with the user manuals of commercially available stimulators) for electrode positioning, which may lead to inconsistent outcomes, poor tolerance by the patients, and adverse reactions. Recent evidence has highlighted the crucial importance of stimulating over the muscle motor points to improve the effectiveness of neuromuscular electrical stimulation. Nevertheless, the correct electrophysiological definition of muscle motor point and its practical significance are not always fully comprehended by therapists and researchers in the field. The commentary describes a straightforward and quick electrophysiological procedure for muscle motor point identification. It consists of muscle surface mapping using a stimulation pen-electrode and is aimed at identifying the skin area above the muscle where the motor threshold is the lowest for a given electrical input, that is, the skin area most responsive to electrical stimulation. After the motor point mapping procedure, a proper placement of the stimulation electrode(s) allows neuromuscular electrical stimulation to maximize the evoked tension, while minimizing the dose of the injected current and the level of discomfort. If routinely applied, we expect this procedure to improve both stimulation effectiveness and patient adherence to the treatment. The aims of this clinical commentary are to present an optimized procedure for the application of neuromuscular electrical stimulation and to highlight the clinical implications related to its use.

  17. Weaving time into system architecture: satellite cost per operational day and optimal design lifetime

    NASA Astrophysics Data System (ADS)

    Saleh, Joseph H.; Hastings, Daniel E.; Newman, Dava J.

    2004-03-01

    An augmented perspective on system architecture is proposed (diachronic) that complements the traditional views on system architecture (synchronic). This paper proposes to view in a system architecture the flow of service (or utility) that the system will provide over its design lifetime. It suggests that the design lifetime is a fundamental component of system architecture although one cannot see it or touch it. Consequently, cost, utility, and value per unit time metrics are introduced. A framework is then developed that identifies optimal design lifetimes for complex systems in general, and space systems in particular, based on this augmented perspective of system architecture and on these metrics. It is found that an optimal design lifetime for a satellite exists, even in the case of constant expected revenues per day over the system's lifetime, and that it changes substantially with the expected Time to Obsolescence of the system and the volatility of the market the system is serving in the case of a commercial venture. The analysis thus proves that it is essential for a system architect to match the design lifetime with the dynamical characteristics of the environment the system is/will be operating in. It is also shown that as the uncertainty in the dynamical characteristics of the environment the system is operating in increases, the value of having the option to upgrade, modify, or extend the lifetime of a system at a later point in time increases depending on how events unfold.

  18. Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Paiva, Kleber; Mantelli, Marcia

    2011-01-01

    Loop heat pipes (LHPs) have been used for thermal control of several NASA and commercial orbiting spacecraft. The LHP operating temperature is governed by the saturation temperature of its compensation chamber (CC). Most LHPs use the CC temperature for feedback control of the operating temperature. There exists a thermal resistance between the heat source to be cooled by the LHP and the LHP's CC. Even if the CC set point temperature is controlled precisely, the heat source temperature will still vary with its heat output. For most applications, controlling the heat source temperature is of most interest. A logical question to ask is: "Can the heat source temperature be used for feedback control of the LHP operation?" A test program has been implemented to answer the above question. The objective is to investigate the LHP performance using the CC temperature and the heat source temperature for feedback control

  19. Science Operations for the 2008 NASA Lunar Analog Field Test at Black Point Lava Flow, Arizona

    NASA Technical Reports Server (NTRS)

    Garry W. D.; Horz, F.; Lofgren, G. E.; Kring, D. A.; Chapman, M. G.; Eppler, D. B.; Rice, J. W., Jr.; Nelson, J.; Gernhardt, M. L.; Walheim, R. J.

    2009-01-01

    Surface science operations on the Moon will require merging lessons from Apollo with new operation concepts that exploit the Constellation Lunar Architecture. Prototypes of lunar vehicles and robots are already under development and will change the way we conduct science operations compared to Apollo. To prepare for future surface operations on the Moon, NASA, along with several supporting agencies and institutions, conducted a high-fidelity lunar mission simulation with prototypes of the small pressurized rover (SPR) and unpressurized rover (UPR) (Fig. 1) at Black Point lava flow (Fig. 2), 40 km north of Flagstaff, Arizona from Oct. 19-31, 2008. This field test was primarily intended to evaluate and compare the surface mobility afforded by unpressurized and pressurized rovers, the latter critically depending on the innovative suit-port concept for efficient egress and ingress. The UPR vehicle transports two astronauts who remain in their EVA suits at all times, whereas the SPR concept enables astronauts to remain in a pressurized shirt-sleeve environment during long translations and while making contextual observations and enables rapid (less than or equal to 10 minutes) transfer to and from the surface via suit-ports. A team of field geologists provided realistic science scenarios for the simulations and served as crew members, field observers, and operators of a science backroom. Here, we present a description of the science team's operations and lessons learned.

  20. An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification

    PubMed Central

    Samal, Ashok; Rong, Panying; Green, Jordan R.

    2016-01-01

    Purpose The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. Results When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (including T2: the tongue-body front; and T3: the tongue-body front). Conclusion We identified a 4-sensor set—that is, T1, T4, UL, LL—that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements. PMID:26564030

  1. Improved optimization algorithm for proximal point-based dictionary updating methods

    NASA Astrophysics Data System (ADS)

    Zhao, Changchen; Hwang, Wen-Liang; Lin, Chun-Liang; Chen, Weihai

    2016-09-01

    Proximal K-singular value decomposition (PK-SVD) is a dictionary updating algorithm that incorporates the proximal point method into K-SVD. The combination of the proximal method and K-SVD has achieved promising results in such areas as sparse approximation, image denoising, and image compression. However, the optimization procedure of PK-SVD is complicated and, therefore, limits the algorithm in both theoretical analysis and practical use. This article proposes a simple but effective optimization approach to the formulation of PK-SVD. We cast this formulation as a fitting problem and relax the constraint on the direction of the k'th row in the sparse coefficient matrix. This relaxation strengthens the regularization effect of the proximal point. The proposed algorithm needs fewer steps to implement and further boosts the performance of PK-SVD while maintaining the same computational complexity. Experimental results demonstrate that the proposed algorithm outperforms conventional algorithms in reconstruction error, recovery rate, and convergence speed for sparse approximation and achieves better results in image denoising.
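
    The flavour of a proximal-point-regularized atom update can be illustrated as below: each atom is refitted to the residual of the signals that use it, with a quadratic term that keeps it near its previous value. This is an illustrative rank-1 alternating update under assumed parameters, not the exact PK-SVD algorithm or the reformulation proposed in the article.

```python
import numpy as np

def update_atom(Y, D, X, k, lam=0.5, iters=3):
    """Proximal-regularized update of dictionary column k and the matching row of X, in place."""
    users = np.flatnonzero(np.abs(X[k]) > 1e-12)      # signals that actually use atom k
    if users.size == 0:
        return
    # Residual with atom k's contribution added back in.
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    d, g = D[:, k].copy(), X[k, users].copy()
    for _ in range(iters):
        d_new = (E @ g + lam * d) / (g @ g + lam)     # least squares plus proximal pull towards d
        d_new /= np.linalg.norm(d_new) + 1e-12        # keep atoms unit norm
        g = E.T @ d_new                               # refit the coefficients for the new atom
        d = d_new
    D[:, k], X[k, users] = d, g

# Tiny usage example with random data (dimensions are arbitrary).
rng = np.random.default_rng(5)
Y = rng.normal(size=(20, 200))
D = rng.normal(size=(20, 40)); D /= np.linalg.norm(D, axis=0)
X = rng.normal(size=(40, 200)) * (rng.random((40, 200)) < 0.1)
before = np.linalg.norm(Y - D @ X)
for k in range(D.shape[1]):
    update_atom(Y, D, X, k)
print("residual before/after one sweep: %.1f / %.1f" % (before, np.linalg.norm(Y - D @ X)))
```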

  2. Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Paiva, Kleber; Mantelli, Marcia

    2011-01-01

    The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly accomplished by cold biasing the reservoir and using electrical heaters to provide the required control power. Using this method, the loop operating temperature can be controlled within +/- 0.5K. However, because of the thermal resistance that exists between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of most interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP has been carried out to investigate the effects on the LHP operation when the control temperature sensor is placed on the heat source instead of the reservoir. In these tests, the LHP reservoir is cold-biased and is heated by a control heater. Test results show that it is feasible to use the heat source temperature for feedback control of the LHP operation. Using this method, the heat source temperature can be maintained within a tight range for moderate and high powers. At low powers, however, temperature oscillations may occur due to interactions among the reservoir control heater power, the heat source mass, and the heat output from the heat source. In addition, the heat source temperature could temporarily deviate from its set point during fast thermal transients. The implication is that more sophisticated feedback control algorithms need to be implemented for LHP transient operation when the heat source temperature is used for feedback control.
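
    The feedback idea can be illustrated with a lumped two-node simulation in which a PI controller drives the reservoir control heater from the heat-source temperature error rather than from the reservoir temperature. The thermal model, gains and load profile below are illustrative assumptions, not LHP test data, and the anti-windup and transient safeguards implied above are omitted.

```python
import numpy as np

dt, t_end = 1.0, 3600.0                  # time step and duration, s
set_point = 40.0                         # desired heat-source temperature, deg C
T_source, T_cc = 35.0, 20.0              # initial node temperatures, deg C
C_source, C_cc = 500.0, 200.0            # lumped heat capacities, J/K
R_evap = 0.05                            # source-to-reservoir thermal resistance, K/W
G_loop = 8.0                             # loop heat-rejection conductance, W/K (sink at 0 C)

Kp, Ki, integ = 15.0, 0.05, 0.0          # PI gains for the reservoir control heater

for step in range(int(t_end / dt)):
    Q_source = 200.0 if step * dt < 1800.0 else 120.0           # heat load with a step change, W
    err = set_point - T_source                                  # feedback on the heat-source temperature
    integ += err * dt
    Q_ctrl = float(np.clip(Kp * err + Ki * integ, 0.0, 400.0))  # control heater power, W

    Q_in = (T_source - T_cc) / R_evap                           # heat carried from source to loop
    T_source += dt / C_source * (Q_source - Q_in)
    T_cc += dt / C_cc * (Q_in + Q_ctrl - G_loop * T_cc)

print("after 1 h: heat source %.2f C (set point %.1f C), control heater %.1f W"
      % (T_source, set_point, Q_ctrl))
```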

  3. Optimization of the Operation of Green Buildings applying the Facility Management

    NASA Astrophysics Data System (ADS)

    Somorová, Viera

    2014-06-01

    Nowadays, in the field of civil engineering there is an upward trend towards environmental sustainability. It relates mainly to the achievement of energy efficiency and to emission reduction throughout the whole life cycle of the building, i.e., in the course of its implementation, use and liquidation. These requirements are fulfilled, to a large extent, by green buildings. The characteristic feature of green buildings is primarily the highly sophisticated technical and technological equipment installed therein. Sophisticated systems of technological equipment also need sophisticated management. From this point of view, facility management has all the prerequisites to meet this requirement. The paper aims to define facility management as an effective method that enables the optimization of the management of supporting activities by creating conditions for the optimum operation of green buildings viewed from the aspect of environmental conditions.

  4. The Optimized Operation of Gas Turbine Combined Heat and Power Units Oriented for the Grid-Connected Control

    NASA Astrophysics Data System (ADS)

    Xia, Shu; Ge, Xiaolin

    2016-04-01

    In this study, according to various grid-connection demands, optimization scheduling models of Combined Heat and Power (CHP) units are established for three scheduling modes: tracking the total generation schedule, tracking a steady output schedule, and tracking a peaking curve. To reduce the solution difficulty, linearizing techniques based on the principles of modern algebraic integers are developed to handle the complex nonlinear constraints of the variable operating conditions, and the optimized operation problem of the CHP units is converted into a mixed-integer linear programming problem. Finally, with specific examples, the 96-point day-ahead heat and power supply plans of the systems are optimized. The results show that the proposed models and methods can produce appropriate coordinated heat and power optimization programs according to different grid-connected control requirements.
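
    A toy version of the tracking-mode mixed-integer linear program can be written down directly, assuming the PuLP package (with its bundled CBC solver) is available. The unit data, the dispatch target and the simple linear heat-power coupling are illustrative, and the full variable-condition linearization of the study is not reproduced.

```python
import pulp  # assumed available; any MILP modeler would do

units = {"chp1": {"pmin": 10, "pmax": 50, "heat_per_mw": 1.2},
         "chp2": {"pmin": 5,  "pmax": 30, "heat_per_mw": 1.5}}
names = list(units)
T = range(6)
target = [60, 70, 75, 65, 55, 50]          # MW, dispatch target to track (assumed)
heat_demand = [40, 45, 50, 55, 60, 55]     # MWth (assumed)

prob = pulp.LpProblem("chp_tracking", pulp.LpMinimize)
p = pulp.LpVariable.dicts("p", (names, T), lowBound=0)        # electric output, MW
on = pulp.LpVariable.dicts("on", (names, T), cat="Binary")    # unit commitment
dev = pulp.LpVariable.dicts("dev", T, lowBound=0)             # |total output - target|

prob += pulp.lpSum(dev[t] for t in T)                         # minimize total tracking deviation

for t in T:
    total = pulp.lpSum(p[u][t] for u in names)
    prob += total - target[t] <= dev[t]
    prob += target[t] - total <= dev[t]
    prob += pulp.lpSum(units[u]["heat_per_mw"] * p[u][t] for u in names) >= heat_demand[t]
    for u in names:
        prob += p[u][t] <= units[u]["pmax"] * on[u][t]        # output only when committed
        prob += p[u][t] >= units[u]["pmin"] * on[u][t]        # minimum stable output

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status])
print("total output per period:",
      [round(sum(pulp.value(p[u][t]) for u in names), 1) for t in T])
```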

  5. Target point correction optimized based on the dose distribution of each fraction in daily IGRT

    NASA Astrophysics Data System (ADS)

    Stoll, Markus; Giske, Kristina; Stoiber, Eva M.; Schwarz, Michael; Bendl, Rolf

    2014-03-01

    Purpose: To use daily re-calculated dose distributions for optimization of target point corrections (TPCs) in image guided radiation therapy (IGRT). This aims to adapt fractioned intensity modulated radiation therapy (IMRT) to changes in the dose distribution induced by anatomical changes. Methods: Daily control images from an in-room on-rail spiral CT-Scanner of three head-and-neck cancer patients were analyzed. The dose distribution was re-calculated on each control CT after an initial TPC, found by a rigid image registration method. The clinical target volumes (CTVs) were transformed from the planning CT to the rigidly aligned control CTs using a deformable image registration method. If at least 95% of each transformed CTV was covered by the initially planned D95 value, the TPC was considered acceptable. Otherwise the TPC was iteratively altered to maximize the dose coverage of the CTVs. Results: In 14 (out of 59) fractions the criterion was already fulfilled after the initial TPC. In 10 fractions the TPC can be optimized to fulfill the coverage criterion. In 31 fractions the coverage can be increased but the criterion is not fulfilled. In another 4 fractions the coverage cannot be increased by the TPC optimization. Conclusions: The dose coverage criterion allows selection of patients who would benefit from replanning. Using the criterion to include daily re-calculated dose distributions in the TPC reduces the replanning rate in the analysed three patients from 76% to 59% compared to the rigid image registration TPC.
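
    The coverage-driven TPC optimization can be illustrated with a brute-force search over integer-voxel shifts that maximizes the fraction of CTV voxels at or above the planned D95 dose. The dose grid, CTV mask and threshold below are synthetic stand-ins for the recalculated dose and deformably mapped CTVs described above.

```python
import numpy as np

shape = (60, 60, 40)
zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")

# Planned dose: a 70 Gy plateau with a short penumbra, slightly offset from today's CTV.
dist = np.sqrt((zz - 32) ** 2 + (yy - 29) ** 2 + (xx - 21) ** 2)
dose = 70.0 * np.clip((9.0 - dist) / 3.0, 0.0, 1.0)
ctv = ((zz - 30) ** 2 + (yy - 30) ** 2 + (xx - 20) ** 2) <= 6 ** 2   # CTV mask on the control CT
d95 = 63.0                                                            # planned D95 threshold, Gy

def coverage(shift):
    """Fraction of CTV voxels receiving at least d95 after shifting the dose grid."""
    shifted = np.roll(dose, shift, axis=(0, 1, 2))
    return np.mean(shifted[ctv] >= d95)

best = max(((coverage((dz, dy, dx)), (dz, dy, dx))
            for dz in range(-3, 4) for dy in range(-3, 4) for dx in range(-3, 4)))
print("initial coverage: %.3f" % coverage((0, 0, 0)))
print("optimized TPC %s gives coverage %.3f" % (best[1], best[0]))
```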

  6. Design optimization of composite structures operating in acoustic environments

    NASA Astrophysics Data System (ADS)

    Chronopoulos, D.

    2015-10-01

    The optimal mechanical and geometric characteristics for layered composite structures subject to vibroacoustic excitations are derived. A Finite Element description coupled to Periodic Structure Theory is employed for the considered layered panel. Structures of arbitrary anisotropy as well as geometric complexity can thus be modelled by the presented approach. Damping can also be incorporated in the calculations. Initially, a numerical continuum-discrete approach for computing the sensitivity of the acoustic wave characteristics propagating within the modelled periodic composite structure is exhibited. The first- and second-order sensitivities of the acoustic transmission coefficient expressed within a Statistical Energy Analysis context are subsequently derived as a function of the computed acoustic wave characteristics. Having formulated the gradient vector as well as the Hessian matrix, the optimal mechanical and geometric characteristics satisfying the considered mass, stiffness and vibroacoustic performance criteria are sought by employing Newton's optimization method.

  7. Optimization strategy integrity for watershed agricultural non-point source pollution control based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yu, Y. J.; Zhang, W. Y.

    2016-08-01

    This study has established a set of methodological systems, by simulating loads and analyzing the integrity of optimization strategies, for optimizing watershed non-point source pollution control. First, the sources of watershed agricultural non-point source pollution are divided into four categories: agricultural land, natural land, livestock breeding, and rural residential land. Secondly, different pollution control measures at the source, midway, and ending stages are chosen. Thirdly, the optimization effect of pollution load control in the three stages is simulated, based on Monte Carlo simulation. The method described above is applied to the Ashi River watershed in Heilongjiang Province of China. Case study results indicate that the three types of control measures can be implemented in combination only if the government promotes the optimized plan and gradually improves implementation efficiency. This method for optimizing strategy integrity for watershed non-point source pollution control has significant reference value.
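
    A minimal Monte Carlo sketch of the staged-control simulation is given below: source, midway and ending-stage measures each remove an uncertain fraction of the load, and repeated sampling yields the distribution of the remaining load. The source loads and removal-efficiency ranges are illustrative assumptions, not values for the Ashi River watershed.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Annual loads (t/yr) from the four source categories named above (assumed values).
loads = {"agricultural land": 120.0, "natural land": 25.0,
         "livestock breeding": 80.0, "rural residential land": 35.0}
total_load = sum(loads.values())

# Removal efficiencies of the three stages, sampled from assumed uniform ranges.
eta_source = rng.uniform(0.10, 0.30, n)     # source-stage measures
eta_midway = rng.uniform(0.20, 0.50, n)     # midway-stage measures
eta_ending = rng.uniform(0.30, 0.60, n)     # ending-stage measures

remaining = total_load * (1 - eta_source) * (1 - eta_midway) * (1 - eta_ending)
print("mean remaining load: %.1f t/yr (of %.1f t/yr)" % (remaining.mean(), total_load))
print("5th-95th percentile: %.1f - %.1f t/yr" % tuple(np.percentile(remaining, [5, 95])))
```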

  8. Stochastic Network Interdiction for Optimizing Defensive Counter Air Operations Planning

    DTIC Science & Technology

    2011-12-01

    the interdictor reveals his defensive strategy. Washburn and Wood [8] view the network interdiction problem as a simultaneous, two-person, zero-sum... the distance to the nearest refueling point. For example, the cost for an area (i, j) ∈ AI is one if an aircraft formation can stay on combat air... combination of area defense and point defense allows the defender to deploy more efficient tactics and protect more friendly assets with fewer resources

  9. Newtonian Imperialist Competitve Approach to Optimizing Observation of Multiple Target Points in Multisensor Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Afghan-Toloee, A.; Heidari, A. A.; Joibari, Y.

    2013-09-01

    The problem of specifying the minimum number of sensors to deploy in a certain area to face multiple targets has been studied extensively in the literature. In this paper, we address the multi-sensor deployment problem (MDP). The multi-sensor placement problem can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more practical method for the multi-sensor placement problem. Our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to decrease the computation time needed to find a satisfactory solution compared with other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to represent the multiple target and placement points, with x, y, and z computations provided in the observation algorithm. A model structure for the multi-sensor placement problem is proposed: the problem is formulated as an optimization problem whose objective is to minimize the cost while covering all multiple target points subject to a given probability-of-observation tolerance.

  10. Design, Performance and Optimization for Multimodal Radar Operation

    PubMed Central

    Bhat, Surendra S.; Narayanan, Ram M.; Rangaswamy, Muralidhar

    2012-01-01

    This paper describes the underlying methodology behind an adaptive multimodal radar sensor that is capable of progressively optimizing its range resolution depending upon the target scattering features. It consists of a test-bed that enables the generation of linear frequency modulated waveforms of various bandwidths. This paper discusses a theoretical approach to optimizing the bandwidth used by the multimodal radar. It also discusses the various experimental results obtained from measurement. The resolution predicted from theory agrees quite well with that obtained from experiments for different target arrangements.

  11. Performance prediction of the high head Francis-99 turbine for steady operation points

    NASA Astrophysics Data System (ADS)

    Casartelli, E.; Mangani, L.; Ryan, O.; Del Rio, A.

    2017-01-01

    Steady-state numerical investigations are still the reference computational method for the prediction of the global machine performance during the design phase. Accordingly, steady state CFD simulations of the complete high head Francis-99 turbine, from spiral casing to draft tube, have been performed at three operating conditions, namely at part load (PL), best efficiency point (BEP), and high load (HL). In addition, simulations with a moving runner for the three operating points are conducted and compared to the steady state results. The prediction accuracy of the numerical results is assessed by comparing global and local data to the available experimental results. A full 360°-model is applied for the unsteady simulations, while for the steady state simulations a reduced domain was used for the periodic components, with only one guide vane passage and one runner passage, respectively. The steady state rotor-stator interactions were modeled with a mixing-plane. All CFD simulations were performed at model scale with an in-house 3D, unstructured, object-oriented finite volume code designed to solve the incompressible RANS equations. The steady and unsteady solvers both predict similar values for torque and head at design and off-design conditions. Flow features in off-design operation, such as a vortex rope in PL operation, can be predicted by both simulation types, though all simulations tend to overestimate head and torque. Differences between steady and unsteady simulations can mainly be attributed to the averaging process used in the mixing plane interface in the steady state simulations. Measured efficiency agrees best with the unsteady simulations for BEP and PL operation, though the steady state simulations also provide a cost-effective alternative with comparable accuracy.

  12. Program for Research on Dietary Supplements in Military Operations and Healthcare Metabolically Optimized Brain - JWF

    DTIC Science & Technology

    2014-05-01

    ABSTRACT “The Program for Research on Dietary Supplements in Military Operations and Healthcare: The Metabolically Optimized Brain (MOB) Study targets a more specific aspect of dietary nutrition, feeding policy and...psychological consequences of brain injury from high intensity training, and combat operations exposures. The MOB Study has 3 specific aims: 1. Convene a

  13. Optimizing Air Force Depot Programming to Maximize Operational Capability

    DTIC Science & Technology

    2014-03-27

    LINGO Component... LINGO Code with Notional Data by Model... RAND Formulation to Maximize Operational Capability... Minimize Cost... Appendix B – Final LINGO Code by Model

  14. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more...

  15. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more...

  16. Interactive method for planning constrained, fuel-optimal orbital proximity operations

    NASA Technical Reports Server (NTRS)

    Abramovitz, Adrian; Grunwald, Arthur J.

    1993-01-01

    An interactive graphical method for planning fuel-efficient rendezvous trajectories in the multi-spacecraft environment of the space station is presented. The method allows the operator to compose a multi-burn transfer trajectory between arbitrary initial chaser and target trajectories. The available task time of the mission is limited and the maneuver is subject to various operational constraints, such as departure, arrival, plume impingement and spatial constraints. The maneuvers are described in terms of the relative motion experienced in a Space-Station-centered coordinate system. The optimization method is based on the primer vector and its extension to non-optimal trajectories. The visual feedback of trajectory shapes, operational constraints, and optimization functions, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes which rapidly converge to fuel-efficient solutions. The optimization functions are presented. A variety of simple design examples is presented to demonstrate the usefulness of the method. In many cases the addition of a properly positioned intermediate waypoint resulted in fuel savings of up to 30%. Furthermore, due to the counter-intuitive character of the optimization functions, most fuel-optimal solutions could not have been found without the aid of the optimization tools. Operating the system was found to be very easy, and did not require any previous in-depth knowledge of orbital dynamics or trajectory optimization. The planning tool is an example of operator-assisted optimization of nonlinear cost-functions.

  17. On the Scaling Limits of Determinantal Point Processes with Kernels Induced by Sturm-Liouville Operators

    NASA Astrophysics Data System (ADS)

    Bornemann, Folkmar

    2016-08-01

    By applying an idea of Borodin and Olshanski [J. Algebra 313 (2007), 40-60], we study various scaling limits of determinantal point processes with trace class projection kernels given by spectral projections of selfadjoint Sturm-Liouville operators. Instead of studying the convergence of the kernels as functions, the method directly addresses the strong convergence of the induced integral operators. We show that, for this notion of convergence, the Dyson, Airy, and Bessel kernels are universal in the bulk, soft-edge, and hard-edge scaling limits. This result allows us to give a short and unified derivation of the known formulae for the scaling limits of the classical random matrix ensembles with unitary invariance, that is, the Gaussian unitary ensemble (GUE), the Wishart or Laguerre unitary ensemble (LUE), and the MANOVA (multivariate analysis of variance) or Jacobi unitary ensemble (JUE).

  18. Critical Point Facility (CPE) Group in the Spacelab Payload Operations Control Center (SL POCC)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPE) group in the SL POCC during the STS-42 IML-1 mission.

  19. Critical Point Facility (CPF) Team in the Spacelab Payload Operations Control Center (SL POCC)

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Critical Point Facility (CPF) team in the SL POCC during the IML-1 mission.

  20. Optimization of the Nano-Dust Analyzer (NDA) for operation under solar UV illumination

    NASA Astrophysics Data System (ADS)

    O'Brien, L.; Grün, E.; Sternovsky, Z.

    2015-12-01

    The performance of the Nano-Dust Analyzer (NDA) instrument is analyzed for close pointing to the Sun, finding the optimal field-of-view (FOV), arrangement of internal baffles and measurement requirements. The laboratory version of the NDA instrument was recently developed (O'Brien et al., 2014) for the detection and elemental composition analysis of nano-dust particles. These particles are generated near the Sun by the collisional breakup of interplanetary dust particles (IDP), and delivered to Earth's orbit through interaction with the magnetic field of the expanding solar wind plasma. NDA operates on the basis of impact ionization of the particle, collecting the generated ions in a time-of-flight fashion. The challenge in the measurement is that nano-dust particles arrive from a direction close to that of the Sun and thus the instrument is exposed to intense ultraviolet (UV) radiation. The performed optical ray-tracing analysis shows that it is possible to suppress the number of UV photons scattering into NDA's ion detector to levels that allow both high signal-to-noise ratio measurements and long-term instrument operation. Analysis results show that by avoiding direct illumination of the target, the photon flux reaching the detector is reduced by a factor of about 10^3. Furthermore, by avoiding the target and also implementing a low-reflective coating, as well as an optimized instrument geometry consisting of an internal baffle system and a conical detector housing, the photon flux can be reduced by a factor of 10^6, bringing it well below the operation requirement. The instrument's FOV is optimized for the detection of nano-dust particles, while excluding the Sun. With the Sun in the FOV, the instrument can operate with reduced sensitivity and for a limited duration. The NDA instrument is suitable for future space missions to provide the unambiguous detection of nano-dust particles, to understand the conditions in the inner heliosphere and its temporal

  1. Optimal biliary access point and learning curve for endoscopic ultrasound-guided hepaticogastrostomy with transmural stenting

    PubMed Central

    Oh, Dongwook; Park, Do Hyun; Song, Tae Jun; Lee, Sang Soo; Seo, Dong-Wan; Lee, Sung Koo; Kim, Myung-Hwan

    2016-01-01

    Background: Although endoscopic ultrasound-guided hepaticogastrostomy (EUS-HGS) with transmural stenting has been increasingly used for biliary decompression in patients with an inaccessible papilla, the optimal biliary access point and the learning curve of EUS-HGS have not been studied. We evaluated the optimal biliary access point and learning curve for technically successful EUS-HGS. Methods: 129 consecutive patients (male n = 81, 62.3%; malignant n = 113, 87.6%) who underwent EUS-HGS due to an inaccessible papilla were enrolled. EUS findings and procedure times according to each needle puncture attempt in EUS-HGS were prospectively measured. Learning curves of EUS-HGS were calculated for two main outcome measurements (procedure time and adverse events) by using the moving average method and cumulative sum (CUSUM) analysis, respectively. Results: A total of 174 EUS-HGS attempts were performed in 129 patients. The mean number of needle punctures was 1.35 ± 0.57. Using the logistic regression model, bile duct diameter of the puncture site ⩽ 5 mm [odds ratio (OR) 3.7, 95% confidence interval (CI): 1.71–8.1, p < 0.01] and hepatic portion length [linear distance from the mural wall to the punctured bile duct wall on EUS; mean hepatic portion length was 27 mm (range 10–47 mm)] > 3 cm (OR 5.7, 95% CI: 2.7–12, p < 0.01) were associated with low technical success. Procedure time decreased after 24 cases, and adverse events stabilized after 33 cases of EUS-HGS, respectively. Conclusions: Our data suggest that a bile duct diameter > 5 mm and hepatic portion length 1 cm to ⩽ 3 cm on EUS may be suitable for successful EUS-HGS. In our learning curve analysis, over 33 cases might be required to achieve the plateau phase for successful EUS-HGS. PMID:28286558

  2. Operational Analysis of Time-Optimal Maneuvering for Imaging Spacecraft

    DTIC Science & Technology

    2013-03-01

    EO (Electro Optical), IR (Infrared), SAR (Synthetic Aperture Radar), CMG (Control Moment Gyroscope), FOV (Field of View), GSD (Ground Sample Distance), STK...Earth in LEO, the slewing capability of the spacecraft will affect the speed of the imaging satellite's target acquisition for satellite imagery...sensor can then acquire the desired target for imagery capture [11]. Optimal control theory can also be applied towards enabling rapid target-to

  3. Cost optimization for series-parallel execution of a collection of intersecting operation sets

    NASA Astrophysics Data System (ADS)

    Dolgui, Alexandre; Levin, Genrikh; Rozin, Boris; Kasabutski, Igor

    2016-05-01

    A collection of intersecting sets of operations is considered. These sets of operations are performed successively. The operations of each set are activated simultaneously. Operation durations can be modified. The cost of each operation decreases with the increase in operation duration. In contrast, the additional expenses for each set of operations are proportional to its time. The problem of selecting the durations of all operations that minimize the total cost under a constraint on the completion time for the whole collection of operation sets is studied. The mathematical model and method to solve this problem are presented. The proposed method is based on a combination of Lagrangian relaxation and dynamic programming. The results of numerical experiments that illustrate the performance of the proposed method are presented. This approach was used for the optimization of multi-spindle machines and machining lines, but the problem is common in engineering optimization and thus the techniques developed could be useful for other applications.

  4. WE-B-304-00: Point/Counterpoint: Biological Dose Optimization

    SciTech Connect

    2015-06-15

    The ultimate goal of radiotherapy treatment planning is to find a treatment that will yield a high tumor control probability (TCP) with an acceptable normal tissue complication probability (NTCP). Yet most treatment planning today is not based upon optimization of TCPs and NTCPs, but rather upon meeting physical dose and volume constraints defined by the planner. It has been suggested that treatment planning evaluation and optimization would be more effective if they were biologically and not dose/volume based, and this is the claim debated in this month’s Point/Counterpoint. After a brief overview of biologically and DVH based treatment planning by the Moderator Colin Orton, Joseph Deasy (for biological planning) and Charles Mayo (against biological planning) will begin the debate. Some of the arguments in support of biological planning include: (1) this will result in more effective dose distributions for many patients; (2) DVH-based measures of plan quality are known to have little predictive value; (3) there is little evidence that either D95 or D98 of the PTV is a good predictor of tumor control; (4) sufficient validated outcome prediction models are now becoming available and should be used to drive planning and optimization. Some of the arguments against biological planning include: (1) several decades of experience with DVH-based planning should not be discarded; (2) we do not know enough about the reliability and errors associated with biological models; (3) the radiotherapy community in general has little direct experience with side-by-side comparisons of DVH vs biological metrics and outcomes; (4) it is unlikely that a clinician would accept extremely cold regions in a CTV or hot regions in a PTV, despite having acceptable TCP values. Learning Objectives: To understand dose/volume based treatment planning and its potential limitations; to understand biological metrics such as EUD, TCP, and NTCP; to understand biologically based treatment planning and its potential limitations.

  5. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

    The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FL067 to estimate the lift and drag forces. A 1.675%-scale wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A data base at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free stream Mach number, M(sub infinity), of 2.55 as well as the design Mach number, M(sub infinity)=2.4. Data from a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Data at transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were gathered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.

  6. Optimal control of a spinning double-pyramid Earth-pointing tethered formation

    NASA Astrophysics Data System (ADS)

    Williams, Paul

    2009-06-01

    The dynamics and control of a tethered satellite formation for Earth-pointing observation missions is considered. For most practical applications in Earth orbit, a tether formation must be spinning in order to maintain tension in the tethers. It is possible to obtain periodic spinning solutions for a triangular formation whose initial conditions are close to the orbit normal. However, these solutions contain significant deviations of the satellites on a sphere relative to the desired Earth-pointing configuration. To maintain a plane of satellites spinning normal to the orbit plane, it is necessary to utilize "anchors". Such a configuration resembles a double-pyramid. In this paper, control of a double-pyramid tethered formation is studied. The equations of motion are derived in a floating orbital coordinate system for the general case of an elliptic reference orbit. The motion of the satellites is derived assuming inelastic tethers that can vary in length in a controlled manner. Cartesian coordinates in a rotating reference frame attached to the desired spin frame provide a simple means of expressing the equations of motion, together with a set of constraint equations for the tether tensions. Periodic optimal control theory is applied to the system to determine sets of controlled periodic trajectories by varying the lengths of all interconnecting tethers (nine in total), as well as retrieval and simple reconfiguration trajectories. A modal analysis of the system is also performed using a lumped mass representation of the tethers.

  7. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers

    NASA Astrophysics Data System (ADS)

    Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément

    2015-07-01

    3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
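
    One concrete way to implement the neighborhood-selection component is to pick, for each point, the neighborhood size that minimizes the Shannon entropy of the normalized eigenvalues of the local structure tensor (eigenentropy). The sketch below assumes that criterion and an arbitrary set of candidate k values; it illustrates the idea only and does not reproduce the paper's full framework.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def optimal_neighborhood_size(points, k_candidates=(10, 25, 50, 75, 100)):
        """For each 3D point, pick the k that minimizes the eigenentropy of the
        local structure tensor (lower entropy = more distinct geometric behaviour)."""
        tree = cKDTree(points)
        best_k = np.empty(len(points), dtype=int)
        for i, p in enumerate(points):
            best_entropy = np.inf
            for k in k_candidates:
                _, idx = tree.query(p, k=k)
                nbrs = points[idx] - points[idx].mean(axis=0)
                cov = nbrs.T @ nbrs / k                     # 3x3 structure tensor
                ev = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
                ev = ev / ev.sum()                          # normalized eigenvalues
                entropy = -(ev * np.log(ev)).sum()          # Shannon eigenentropy
                if entropy < best_entropy:
                    best_entropy, best_k[i] = entropy, k
        return best_k

    pts = np.random.default_rng(1).normal(size=(500, 3))
    k_opt = optimal_neighborhood_size(pts)
    print(dict(zip(*np.unique(k_opt, return_counts=True))))
    ```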

  8. Optimization of the thermogauge furnace for realizing high temperature fixed points

    SciTech Connect

    Wang, T.; Dong, W.; Liu, F.

    2013-09-11

    The thermogauge furnace is commonly used in many NMIs as a blackbody source for the calibration of radiation thermometers. It can also be used for realizing high temperature fixed points (HTFPs). In our experience, when realizing an HTFP the furnace must provide relatively good temperature uniformity to avoid possible damage to the HTFP. To improve temperature uniformity in the furnace, the furnace tube was machined near the tube ends with the help of a simulation analysis in ANSYS Workbench. Temperature distributions before and after optimization were measured and compared at 1300 °C, 1700 °C, and 2500 °C, which roughly correspond to Co-C (1324 °C), Pt-C (1738 °C) and Re-C (2474 °C), respectively. The results clearly indicate that machining the tube can remarkably improve the temperature uniformity of the thermogauge furnace. A Pt-C high temperature fixed point was subsequently realized in the modified thermogauge furnace; the plateaus were compared with those obtained using the old heater, and the results are presented in this paper.

  9. Free-time and fixed end-point optimal control theory in quantum mechanics: application to entanglement generation.

    PubMed

    Mishima, K; Yamashita, K

    2009-01-21

    We have constructed free-time and fixed end-point optimal control theory for quantum systems and applied it to entanglement generation between rotational modes of two polar molecules coupled by dipole-dipole interaction. The motivation of the present work is to solve optimal control problems more flexibly by extending the popular fixed time and fixed end-point optimal control theory for quantum systems to free-time and fixed end-point optimal control theory. As a demonstration, the theory that we have constructed in this paper will be applied to entanglement generation in rotational modes of NaCl-NaBr polar molecular systems that are sensitive to the strength of entangling interactions. Our method will significantly be useful for the quantum control of nonlocal interaction such as entangling interaction, which depends crucially on the strength of the interaction or the distance between the two molecules, and other general quantum dynamics, chemical reactions, and so on.

  10. Free-time and fixed end-point optimal control theory in quantum mechanics: Application to entanglement generation

    NASA Astrophysics Data System (ADS)

    Mishima, K.; Yamashita, K.

    2009-01-01

    We have constructed free-time and fixed end-point optimal control theory for quantum systems and applied it to entanglement generation between rotational modes of two polar molecules coupled by dipole-dipole interaction. The motivation of the present work is to solve optimal control problems more flexibly by extending the popular fixed time and fixed end-point optimal control theory for quantum systems to free-time and fixed end-point optimal control theory. As a demonstration, the theory that we have constructed in this paper will be applied to entanglement generation in rotational modes of NaCl-NaBr polar molecular systems that are sensitive to the strength of entangling interactions. Our method will significantly be useful for the quantum control of nonlocal interaction such as entangling interaction, which depends crucially on the strength of the interaction or the distance between the two molecules, and other general quantum dynamics, chemical reactions, and so on.

  11. Optimizing Resources of United States Navy for Humanitarian Operations

    DTIC Science & Technology

    2014-08-26

    A Notional Scenario...Humanitarian Operations. The vessels that the USN deployed for HADR in the 2004 Indian Ocean tsunami were the entire Abraham Lincoln Carrier Strike...officers. We studied every ship that was deployed to respond to certain disasters. Apte et al. (2013) studied the 2004 Indian Ocean tsunami, the 2005

  12. Optimizing operational flexibility and enforcement liability in Title V permits

    SciTech Connect

    McCann, G.T.

    1997-12-31

    Now that most states have interim or full approval of the portions of their state implementation plans (SIPs) implementing Title V (40 CFR Part 70) of the Clean Air Act Amendments (CAAA), most sources which require a Title V permit have submitted or are well on the way to submitting a Title V operating permit application. Numerous hours have been spent preparing applications to ensure the administrative completeness of the application and operational flexibility for the facility. Although much time and effort has been spent on Title V permit applications, the operating permit itself is the final goal. This paper outlines the major Federal requirements for Title V permits as given in the CAAA at 40 CFR 70.6, Permit Content. These Federal requirements and how they will affect final Title V permits and facilities will be discussed. This paper will provide information concerning the Federal requirements for Title V permits and suggestions on how to negotiate a Title V permit to maximize operational flexibility and minimize enforcement liability.

  13. Study on Operation Optimization of Pumping Station's 24 Hours Operation under Influences of Tides and Peak-Valley Electricity Prices

    NASA Astrophysics Data System (ADS)

    Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang

    2010-06-01

    Using an optimization objective function and an optimization model for a single pump unit's 24-hour operation, and taking the JiangDu No. 4 Pumping Station as an example, this paper determines the optimal start-up time for a pumping station's 24-hour operation under different tide processes and peak-valley electricity prices, for both the rated state and the blade-angle-adjusting state. It also proposes the following regularities relating the optimal start-up time of the pumping station to the daily processes of tides and peak-valley electricity prices within a month: (1) In both the rated and blade-angle-adjusting states, the optimal start-up time for the pumping station's 24-hour operation, which depends on the time of tide generation on the same day, varies with the tide process. There are mainly two kinds of optimal start-up time: the time of tide generation and 12 hours after it. (2) In the rated state, the optimal start-up time on each day of a month exhibits a rule of symmetry from the 29th to the 28th of the next month in the lunar calendar. The time of tide generation usually falls within the period of peak or valley electricity price. A higher electricity price corresponds to a higher minimum unit cost of water pumping, which means that the minimum unit cost of water pumping depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the blade-angle-adjusting state, the minimum unit cost of water pumping in the pumping station's 24-hour operation depends on the process of peak-valley electricity prices, and 4.85%-5.37% of the minimum unit cost of water pumping is saved compared with the rated state.

  14. Enabling a viable technique for the optimization of LNG carrier cargo operations

    NASA Astrophysics Data System (ADS)

    Alaba, Onakoya Rasheed; Nwaoha, T. C.; Okwu, M. O.

    2016-09-01

    In this study, we optimize the loading and discharging operations of the Liquefied Natural Gas (LNG) carrier. First, we identify the required precautions for LNG carrier cargo operations. Next, we prioritize these precautions using the analytic hierarchy process (AHP) and experts' judgments, in order to optimize the operational loading and discharging exercises of the LNG carrier, prevent system failure and human error, and reduce the risk of marine accidents. Thus, the objective of our study is to increase the level of safety during cargo operations.
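
    A minimal sketch of the AHP step is shown below: a pairwise comparison matrix of precautions is turned into priority weights via its principal eigenvector, with a consistency check. The three precaution names and all matrix entries are invented for illustration and do not come from the study.

    ```python
    import numpy as np

    # Pairwise comparison matrix for three illustrative precautions (e.g. mooring
    # checks, ESD system tests, cargo tank pressure monitoring), on Saaty's 1-9
    # scale: A[i, j] = how much more important precaution i is than precaution j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # Priority vector = principal eigenvector of A, normalized to sum to one.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = eigvecs[:, k].real
    w = w / w.sum()

    # Consistency ratio (CR < 0.1 is the usual acceptance threshold); the random
    # index RI for a 3x3 matrix is 0.58.
    lam_max = eigvals[k].real
    ci = (lam_max - len(A)) / (len(A) - 1)
    print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))
    ```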

  15. 77 FR 6601 - Facility Operating License Amendment From Nine Mile Point Nuclear Station, LLC.; Nine Mile Point...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... of the beam to apply a downward clamping force on each inlet subassembly to resist the elbow and nozzle hydraulic reaction forces during normal operation. The material change does not affect the... request for hearing, a petition for leave to intervene, any motion or other document filed in...

  16. Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS

    SciTech Connect

    Carlos A. Fernandez

    2014-09-15

    EGS field projects have not sustained production at rates greater than ½ of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project to study and optimize the physicochemical properties of these fracturing fluids in a range of reservoir conditions. Barrier G is the primarily intended GTO barrier to be addressed as well as support addressing barriers D, E and I.

  17. Reservoir Stimulation Optimization with Operational Monitoring for Creation of EGS

    SciTech Connect

    Fernandez, Carlos A.

    2013-09-25

    EGS field projects have not sustained production at rates greater than ½ of what is needed for economic viability. The primary limitation that makes commercial EGS infeasible is our current inability to cost-effectively create high-permeability reservoirs from impermeable, igneous rock within the 3,000-10,000 ft depth range. Our goal is to develop a novel fracturing fluid technology that maximizes reservoir permeability while reducing stimulation cost and environmental impact. Laboratory equipment development to advance laboratory characterization/monitoring is also a priority of this project to study and optimize the physicochemical properties of these fracturing fluids in a range of reservoir conditions. Barrier G is the primarily intended GTO barrier to be addressed as well as support addressing barriers D, E and I.

  18. Integrating event detection system operation characteristics into sensor placement optimization.

    SciTech Connect

    Hart, William Eugene; McKenna, Sean Andrew; Phillips, Cynthia Ann; Murray, Regan Elizabeth; Hart, David Blaine

    2010-05-01

    We consider the problem of placing sensors in a municipal water network when we can choose both the location of sensors and the sensitivity and specificity of the contamination warning system. Sensor stations in a municipal water distribution network continuously send sensor output information to a centralized computing facility, and event detection systems at the control center determine when to signal an anomaly worthy of response. Although most sensor placement research has assumed perfect anomaly detection, signal analysis software has parameters that control the tradeoff between false alarms and false negatives. We describe a nonlinear sensor placement formulation, which we heuristically optimize with a linear approximation that can be solved as a mixed-integer linear program. We report the results of initial experiments on a real network and discuss tradeoffs between early detection of contamination incidents, and control of false alarms.
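
    The paper formulates placement as a mixed-integer linear program; as a lighter-weight illustration of the same trade-off, the sketch below uses a greedy heuristic that adds sensors one at a time to minimize expected impact over random contamination scenarios, with an imperfect detection probability folded in. All data, names, and parameter values are synthetic assumptions, not the network or formulation from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_nodes, n_scenarios, budget = 40, 200, 5
    p_detect = 0.9            # detection-system sensitivity (assumed)
    undetected_impact = 1e6   # penalty when no sensor catches the scenario (assumed)

    # impact[s, j]: damage if scenario s is first detected by a sensor at node j.
    # In practice this comes from hydraulic/water-quality simulation; here it is random.
    impact = rng.uniform(10, 1000, size=(n_scenarios, n_nodes))

    def expected_impact(sensors):
        if not sensors:
            return undetected_impact
        best = impact[:, sensors].min(axis=1)   # best (lowest-impact) sensor per scenario
        # simplification: if the best sensor misses (prob 1 - p_detect), nothing detects
        return float(np.mean(p_detect * best + (1 - p_detect) * undetected_impact))

    placed = []
    for _ in range(budget):
        candidates = [j for j in range(n_nodes) if j not in placed]
        j_best = min(candidates, key=lambda j: expected_impact(placed + [j]))
        placed.append(j_best)
        print(f"placed sensor at node {j_best:2d}, "
              f"expected impact {expected_impact(placed):10.1f}")
    ```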

  19. Methods and devices for optimizing the operation of a semiconductor optical modulator

    DOEpatents

    Zortman, William A.

    2015-07-14

    A semiconductor-based optical modulator includes a control loop to control and optimize the modulator's operation for relatively high data rates (above 1 GHz) and/or relatively high voltage levels. Both the amplitude of the modulator's driving voltage and the bias of the driving voltage may be adjusted using the control loop. Such adjustments help to optimize the operation of the modulator by reducing the number of errors present in a modulated data stream.

  20. Using Response Surface Methodology as an Approach to Understand and Optimize Operational Air Power

    DTIC Science & Technology

    2010-01-01

    Introduction to Taguchi Methodology. In Taguchi Methods: Proceedings of the 1988 European Conference, 1-14. London: Elsevier Applied Science. Box, G. E. and N...Using Response Surface Methodology As an Approach to Understand and Optimize Operational Air Power, Marvin L. Simpson, Jr. and Resit Unal

  1. Knowledge Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration

    DTIC Science & Technology

    2015-06-01

    and pedigree possess additive implications toward the quality of the data utilized within the DSS. F. SUMMARY Decision-making theories such as...VISUALIZATIONS: A TOOL TO ACHIEVE OPTIMIZED OPERATIONAL DECISION MAKING AND DATA INTEGRATION, by Paul C. Hudson and Jeffrey A. Rzasa, June 2015 Thesis...

  2. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regime, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, single reservoir, and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem. The global solution equals 1.213 for this same problem. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
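
    For readers unfamiliar with the GSA, the following is a minimal implementation applied to the sphere benchmark function. The population size, G0, alpha, bounds, and iteration count are assumed values chosen for illustration, not the parameters used in the study.

    ```python
    import numpy as np

    def sphere(x):                     # benchmark function to minimize
        return np.sum(x ** 2, axis=-1)

    def gsa(func, dim=4, n_agents=30, iters=200, G0=100.0, alpha=20.0,
            lo=-10.0, hi=10.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, size=(n_agents, dim))   # agent positions
        V = np.zeros_like(X)                            # agent velocities
        for t in range(iters):
            fit = func(X)
            best, worst = fit.min(), fit.max()
            m = (fit - worst) / (best - worst + 1e-12)  # best agent gets mass ~1
            M = m / (m.sum() + 1e-12)
            G = G0 * np.exp(-alpha * t / iters)         # decaying gravitational constant
            kbest = max(1, int(round(n_agents * (1 - t / iters))))
            elite = np.argsort(fit)[:kbest]             # only the heaviest agents attract
            A = np.zeros_like(X)
            for j in elite:
                diff = X[j] - X
                R = np.linalg.norm(diff, axis=1, keepdims=True)
                A += rng.random((n_agents, 1)) * G * M[j] * diff / (R + 1e-12)
            V = rng.random((n_agents, 1)) * V + A
            X = np.clip(X + V, lo, hi)
        fit = func(X)
        return X[np.argmin(fit)], float(fit.min())

    x_best, f_best = gsa(sphere)
    print("best solution:", np.round(x_best, 4), " objective:", f_best)
    ```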

  3. Multi-objective teaching-learning-based optimization algorithm for reducing carbon emissions and operation time in turning operations

    NASA Astrophysics Data System (ADS)

    Lin, Wenwen; Yu, D. Y.; Wang, S.; Zhang, Chaoyong; Zhang, Sanqiang; Tian, Huiyu; Luo, Min; Liu, Shengqiang

    2015-07-01

    In addition to energy consumption, the use of cutting fluids, deposition of worn tools and certain other manufacturing activities can have environmental impacts. All these activities cause carbon emission directly or indirectly; therefore, carbon emission can be used as an environmental criterion for machining systems. In this article, a direct method is proposed to quantify the carbon emissions in turning operations. To determine the coefficients in the quantitative method, real experimental data were obtained and analysed in MATLAB. Moreover, a multi-objective teaching-learning-based optimization algorithm is proposed, and two objectives to minimize carbon emissions and operation time are considered simultaneously. Cutting parameters were optimized by the proposed algorithm. Finally, the analytic hierarchy process was used to determine the optimal solution, which was found to be more environmentally friendly than the cutting parameters determined by the design of experiments method.

  4. A Foundation for Operational Planning: The Concepts of Center of Gravity, Decisive Point, and the Culminating Point.

    DTIC Science & Technology

    1987-04-21

    and the spring thaw. Southwestern Front forces included the 6th Army (40,566 men, 46 tanks), the 1st Guards Army (70,811 men), Mobile Group Popov...the paucity of aircraft, technology, or doctrine, concentrated armored forces produced the...with respect to theoretical concepts provides insights for operational planning on the contemporary battlefield. Technological and doctrinal

  5. Optimal Operation of Data Centers in Future Smart Grid

    NASA Astrophysics Data System (ADS)

    Ghamkhari, Seyed Mahdi

    The emergence of cloud computing has established a growing trend towards building massive, energy-hungry, and geographically distributed data centers. Due to their enormous energy consumption, data centers are expected to have a major impact on the electric grid by significantly increasing the load at locations where they are built. However, data centers also provide opportunities to help the grid with respect to robustness and load balancing. For instance, as data centers are major and yet flexible electric loads, they can be proper candidates to offer ancillary services, such as voluntary load reduction, to the smart grid. Also, data centers may better stabilize the price of energy in the electricity markets, and at the same time reduce their electricity cost by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. In this thesis, such potentials are investigated within an analytical profit maximization framework by developing new mathematical models based on queuing theory. The proposed models capture the trade-off between quality-of-service and power consumption in data centers. They are not only accurate, but they also possess convexity characteristics that facilitate joint optimization of data centers' service rates, demand levels and demand bids to different electricity markets. The analysis is further expanded to also develop a unified comprehensive energy portfolio optimization for data centers in the future smart grid. Specifically, it is shown how utilizing one energy option may affect selecting other energy options that are available to a data center. For example, we will show that the use of on-site storage and the deployment of geographical workload distribution can particularly help data centers in utilizing high-risk energy options such as renewable generation. The analytical approach in this thesis takes into account service-level-agreements, risk management constraints, and also the statistical

  6. Point-of-care ultrasonography during rescue operations on board a Polish Medical Air Rescue helicopter.

    PubMed

    Darocha, Tomasz; Gałązkowski, Robert; Sobczyk, Dorota; Żyła, Zbigniew; Drwiła, Rafał

    2014-12-01

    Point-of-care ultrasound examination has become increasingly widely used in pre-hospital care. The use of ultrasound in rescue medicine allows for a quick differential diagnosis, identification of the most important medical emergencies and immediate introduction of targeted treatment. Performing and interpreting a pre-hospital ultrasound examination can improve the accuracy of diagnosis and thus reduce mortality. The authors' own experience, consisting of the use of a portable, hand-held ultrasound apparatus during rescue operations on board a Polish Medical Air Rescue helicopter, is presented in this paper. The possibility of using an ultrasound apparatus during helicopter rescue service allows for a full professional evaluation of the patient's health condition and enables the patient to be brought to a center with the most appropriate facilities for their condition.

  7. Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point

    PubMed Central

    Sánchez, Joan-Pau; McInnes, Colin R.

    2015-01-01

    Within the context of anthropogenic climate change, but also considering the Earth’s natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth’s radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth’s climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes. PMID:26309047
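
    A zero-dimensional back-of-the-envelope estimate (much cruder than the globally resolved energy balance model used in the paper) already shows the order of magnitude involved: offsetting a doubled-CO2 forcing requires blocking roughly 1.5-2% of incident sunlight. The constants below are standard approximate values, and the area figure is only a lower bound that ignores the L1 occulter geometry.

    ```python
    import math

    # Blocking a fraction f of incident sunlight reduces the globally averaged
    # absorbed solar power by f * S0 * (1 - albedo) / 4, which must cancel the
    # greenhouse forcing dF.
    S0 = 1361.0     # solar constant, W m^-2
    albedo = 0.29   # planetary albedo (approximate)
    dF = 3.7        # radiative forcing of doubled CO2, W m^-2 (approximate)

    absorbed = S0 * (1.0 - albedo) / 4.0       # ~242 W m^-2 global-mean absorbed solar
    f = dF / absorbed
    print(f"required shading fraction: {f:.2%}")

    # Equivalent shaded cross-section of the Earth, ignoring penumbra effects and
    # the L1 geometry that make a real occulter larger than this lower bound.
    R_earth = 6.371e6                          # m
    shaded_area = f * math.pi * R_earth ** 2
    print(f"shaded cross-section: {shaded_area / 1e12:.2f} million km^2")
    ```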

  8. Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point.

    PubMed

    Sánchez, Joan-Pau; McInnes, Colin R

    2015-01-01

    Within the context of anthropogenic climate change, but also considering the Earth's natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth's radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth's climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes.

  9. Spontaneous fluctuation indices of the cardiovagal baroreflex accurately measure the baroreflex sensitivity at the operating point during upright tilt.

    PubMed

    Schwartz, Christopher E; Medow, Marvin S; Messer, Zachary; Stewart, Julian M

    2013-06-15

    Spontaneous fluctuation indices of cardiovagal baroreflex have been suggested to be inaccurate measures of baroreflex function during orthostatic stress compared with alternate open-loop methods (e.g. neck pressure/suction, modified Oxford method). We therefore tested the hypothesis that spontaneous fluctuation measurements accurately reflect local baroreflex gain (slope) at the operating point measured by the modified Oxford method, and that apparent differences between these two techniques during orthostasis can be explained by a resetting of the baroreflex function curve. We computed the sigmoidal baroreflex function curves supine and during 70° tilt in 12 young, healthy individuals. With the use of the modified Oxford method, slopes (gains) of supine and upright curves were computed at their maxima (Gmax) and operating points. These were compared with measurements of spontaneous indices in both positions. Supine spontaneous analyses of operating point slope were similar to calculated Gmax of the modified Oxford curve. In contrast, upright operating point was distant from the centering point of the reset curve and fell on the nonlinear portion of the curve. Whereas spontaneous fluctuation measurements were commensurate with the calculated slope of the upright modified Oxford curve at the operating point, they were significantly lower than Gmax. In conclusion, spontaneous measurements of cardiovagal baroreflex function accurately estimate the slope near operating points in both supine and upright position.
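
    The key point, that a local gain measured at the operating point can be far below Gmax once the reset sigmoid places that operating point on its shoulder, can be illustrated with a four-parameter logistic baroreflex curve. The parameter values below are illustrative only, not the fitted values from this study.

    ```python
    import numpy as np

    # Four-parameter logistic baroreflex curve, RR interval (ms) vs pressure (mmHg):
    #   RR(P) = A4 + A1 / (1 + exp(A2 * (A3 - P))),  A3 = centering point.
    A1, A2, A4 = 600.0, 0.12, 700.0            # illustrative values only

    def gain(P, A3):                           # analytic slope dRR/dP
        e = np.exp(A2 * (A3 - P))
        return A1 * A2 * e / (1.0 + e) ** 2

    g_max = A1 * A2 / 4.0                      # slope at the centering point P = A3

    # Supine: operating pressure sits near the centering point, local gain ~ Gmax.
    print("supine  gain at P = 110:", round(gain(110.0, A3=110.0), 2), " Gmax:", g_max)

    # Upright: the curve resets rightward (A3 -> 130) while the operating pressure
    # stays near 115 mmHg, on the shoulder of the sigmoid, so the local gain that a
    # spontaneous-fluctuation index measures is well below Gmax.
    print("upright gain at P = 115:", round(gain(115.0, A3=130.0), 2), " Gmax:", g_max)
    ```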

  10. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

    The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraint effects, designers adopt deployable configurations on the spacecraft that result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, design optimization is rarely integrated with operational optimization and with the maximization of spacecraft utility in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes the design and operation of the EPS, Attitude Determination and Control System (ADCS), and communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue

  11. Using Interior Point Method Optimization Techniques to Improve 2- and 3-Dimensional Models of Earth Structures

    NASA Astrophysics Data System (ADS)

    Zamora, A.; Gutierrez, A. E.; Velasco, A. A.

    2014-12-01

    2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly impact the resulting model. Through the implementation of an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures that represent known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by discarding unacceptable solutions (those that do not satisfy the required constraints or are geologically infeasible) through the reduction of the solution space.
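
    As a toy illustration of constrained gravity inversion in this spirit, the sketch below fits the density contrast, depth, and radius of a single buried horizontal cylinder to synthetic noisy data, with bound constraints standing in for the geological feasibility constraints. SciPy's trust-constr method is used here as a readily available interior-point-style solver; it is not the authors' implementation, and all data, bounds, and parameter values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import Bounds, minimize

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def cylinder_gz(x, drho, depth, radius):
        """Vertical gravity anomaly (mGal) of an infinite horizontal cylinder."""
        return 1e5 * 2.0 * np.pi * G * drho * radius**2 * depth / (x**2 + depth**2)

    # Synthetic "observed" profile: true model drho=500 kg/m^3, depth=300 m, R=100 m.
    x_obs = np.linspace(-1000.0, 1000.0, 41)
    rng = np.random.default_rng(3)
    g_obs = cylinder_gz(x_obs, 500.0, 300.0, 100.0) + rng.normal(0.0, 0.005, x_obs.size)

    def misfit(p):
        drho, depth, radius = p
        return np.sum((cylinder_gz(x_obs, drho, depth, radius) - g_obs) ** 2)

    # Bound constraints play the role of the geological feasibility constraints.
    bounds = Bounds([100.0, 50.0, 10.0], [1000.0, 2000.0, 500.0])
    res = minimize(misfit, x0=[300.0, 800.0, 200.0], method="trust-constr", bounds=bounds)

    # Note the classic non-uniqueness: only the product drho * radius^2 is well
    # constrained by the data, which is exactly why extra constraints are needed.
    print("recovered (drho, depth, radius):", np.round(res.x, 1))
    ```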

  12. An Optimized Multicolor Point-Implicit Solver for Unstructured Grid Applications on Graphics Processing Units

    NASA Technical Reports Server (NTRS)

    Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana

    2016-01-01

    In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructuredgrid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
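
    The coloring idea itself can be shown on a much simpler scalar problem: in a red-black ordering of a 2D Laplace stencil, points of one color do not couple to each other, so each color can be updated simultaneously. The sketch below is only that illustration; it does not reproduce the block-sparse, variable-block-size solver, the BLAS reformulation, or the GPU implementation discussed above.

    ```python
    import numpy as np

    def red_black_relax(u, f, h, sweeps=2000):
        """Two-color (red-black) point-implicit Gauss-Seidel for -Laplace(u) = f on a
        unit square; Dirichlet boundary values must already be set in u."""
        i, j = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]), indexing="ij")
        red = (i + j) % 2 == 0           # points of one color never neighbor each other
        interior = np.zeros_like(red)
        interior[1:-1, 1:-1] = True
        for _ in range(sweeps):
            for color in (red, ~red):
                # "point-implicit" update: solve the (here 1x1) diagonal block at each
                # point of the current color; all such points can update in parallel.
                u_new = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                                np.roll(u, 1, 1) + np.roll(u, -1, 1) + h * h * f)
                mask = color & interior
                u[mask] = u_new[mask]
        return u

    n = 33
    h = 1.0 / (n - 1)
    u = np.zeros((n, n))
    u[:, -1] = 1.0                       # one hot edge, three cold edges
    f = np.zeros((n, n))
    u = red_black_relax(u, f, h)
    print("value at the center:", round(u[n // 2, n // 2], 4))   # -> ~0.25 by symmetry
    ```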

  13. Optimal point of insertion of the needle in neuraxial blockade using a midline approach: study in a geometrical model

    PubMed Central

    Vogt, Mark; van Gerwen, Dennis J; van den Dobbelsteen, John J; Hagenaars, Martin

    2016-01-01

    Performance of neuraxial blockade using a midline approach can be technically difficult. It is therefore important to optimize factors that are under the influence of the clinician performing the procedure. One of these factors might be the chosen point of insertion of the needle. Surprisingly few data exist on where between the tips of two adjacent spinous processes the needle should be introduced. A geometrical model was adopted to gain more insight into this issue. Spinous processes were represented by parallelograms. The length, the steepness relative to the skin, and the distance between the parallelograms were varied. The influence of the chosen point of insertion of the needle on the range of angles at which the epidural and subarachnoid space could be reached was studied. The optimal point of insertion was defined as the point where this range is the widest. The geometrical model clearly demonstrated that the range of angles at which the epidural or subarachnoid space can be reached depends on the point of insertion between the tips of the adjacent spinous processes. The steeper the spinous processes run, the more cranial the point of insertion should be. Assuming that the model is representative of patients, the performance of neuraxial blockade using a midline approach might be improved by choosing the optimal point of insertion. PMID:27570462

  14. Tethered Balloon Operations at ARM AMF3 Site at Oliktok Point, AK

    NASA Astrophysics Data System (ADS)

    Dexheimer, D.; Lucero, D. A.; Helsel, F.; Hardesty, J.; Ivey, M.

    2015-12-01

    Oliktok Point has been the home of the Atmospheric Radiation Measurement Program's (ARM) third ARM Mobile Facility, or AMF3, since October 2013. The AMF3 is operated through Sandia National Laboratories and hosts instrumentation collecting continuous measurements of clouds, aerosols, precipitation, energy, and other meteorological variables. The Arctic region is warming more quickly than any other region due to climate change and Arctic sea ice is declining to record lows. Sparsity of atmospheric data from the Arctic leads to uncertainty in process comprehension, and atmospheric general circulation models (AGCM) are understood to underestimate low cloud presence in the Arctic. Increased vertical resolution of meteorological properties and cloud measurements will improve process understanding and help AGCMs better characterize Arctic clouds. SNL is developing a tethered balloon system capable of regular operation at AMF3 in order to provide increased vertical resolution atmospheric data. The tethered balloon can be operated within clouds at altitudes up to 7,000' AGL within DOE's R-2204 restricted area. Pressure, relative humidity, temperature, wind speed, and wind direction are recorded at multiple altitudes along the tether. These data were validated against stationary met tower data in Albuquerque, NM. The altitudes of the sensors were determined by GPS, calculated using a line counter and a clinometer, and the two estimates were compared. Wireless wetness sensors and supercooled liquid water content sensors have also been deployed and their data have been compared with those of other sensors. This presentation will provide an overview of the balloons, sensors, and test flights flown, and will give a preliminary look at data from sensor validation campaigns and test flights.

  15. System and method of cylinder deactivation for optimal engine torque-speed map operation

    SciTech Connect

    Sujan, Vivek A; Frazier, Timothy R; Follen, Kenneth; Moon, Suk-Min

    2014-11-11

    This disclosure provides a system and method for determining cylinder deactivation in a vehicle engine to optimize fuel consumption while providing the desired or demanded power. In one aspect, data indicative of terrain variation is utilized in determining a vehicle target operating state. An optimal active cylinder distribution and the corresponding fueling are determined from a recommendation, by a supervisory agent monitoring the operating state of the vehicle, of a subset of the total number of cylinders, together with a determination of which number of cylinders provides the optimal fuel consumption. Once the optimal cylinder number is determined, a transmission gear shift recommendation is provided in view of the determined active cylinder distribution and target operating state.
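
    A minimal sketch of the selection logic described above, assuming a made-up fuel-rate map and per-cylinder torque limit (the patent's supervisory agent, terrain data and gear-shift logic are not modeled): enumerate the feasible active-cylinder counts for the demanded power and pick the one with the lowest fuel rate.

```python
import math

def fuel_rate_g_per_s(n_active, torque_nm, speed_rpm):
    """Hypothetical fuel-rate map: specific consumption worsens at light load per cylinder."""
    load_per_cyl = torque_nm / n_active
    bsfc = 200.0 + 60.0 * math.exp(-load_per_cyl / 40.0)      # g/kWh, made-up shape
    power_kw = torque_nm * speed_rpm * 2.0 * math.pi / 60.0 / 1000.0
    return bsfc * power_kw / 3600.0

def best_cylinder_count(demanded_power_kw, speed_rpm, total_cyl=6, max_torque_per_cyl=60.0):
    """Pick the active-cylinder count that meets the demand at the lowest fuel rate."""
    torque_nm = demanded_power_kw * 1000.0 * 60.0 / (2.0 * math.pi * speed_rpm)
    best = None
    for n in range(1, total_cyl + 1):
        if torque_nm / n > max_torque_per_cyl:
            continue                                          # this count cannot meet demand
        rate = fuel_rate_g_per_s(n, torque_nm, speed_rpm)
        if best is None or rate < best[1]:
            best = (n, rate)
    return best

n, rate = best_cylinder_count(demanded_power_kw=40.0, speed_rpm=1500.0)
print(f"activate {n} cylinders, approx. {rate:.2f} g/s fuel")
```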

  16. Haar wavelet operational matrix method for solving constrained nonlinear quadratic optimal control problem

    NASA Astrophysics Data System (ADS)

    Swaidan, Waleeda; Hussin, Amran

    2015-10-01

    Most direct methods solve finite-time-horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. The method uses the quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted into quadratic programming constraints by the Haar wavelet collocation method. The proposed method has been applied to the optimal control of a multi-item inventory model. The accuracy of the states, controls and cost can be improved by increasing the Haar wavelet resolution.
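
    To make the idea of an operational matrix concrete, the sketch below builds the Haar basis at collocation points and a matrix P such that integration of a Haar expansion becomes multiplication by P. The matrix is obtained numerically by projection rather than by the closed-form recursion usually quoted in such papers, and the test signal is arbitrary; this illustrates the machinery only, not the authors' implementation.

```python
import numpy as np

def haar_matrix(m):
    """Rows = the first m Haar functions sampled at collocation points t_i = (i+0.5)/m (m = 2^J)."""
    t = (np.arange(m) + 0.5) / m
    H = np.zeros((m, m))
    H[0] = 1.0                                   # scaling function
    row = 1
    for j in range(int(np.log2(m))):
        for k in range(2 ** j):
            a, b, c = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
            H[row] = np.where((t >= a) & (t < b), 1.0, 0.0) - np.where((t >= b) & (t < c), 1.0, 0.0)
            row += 1
    return H, t

def integration_operational_matrix(m):
    """P such that  integral_0^t h(s) ds  ~  P h(t)  at the collocation points."""
    H, t = haar_matrix(m)
    dt = 1.0 / m
    # cumulative integral of each Haar function, midpoint rule at the collocation points
    I = np.cumsum(H, axis=1) * dt - 0.5 * dt * H
    return I @ np.linalg.inv(H)

m = 16
H, t = haar_matrix(m)
P = integration_operational_matrix(m)

f = np.cos(2 * np.pi * t)                        # test signal sampled at collocation points
c = np.linalg.solve(H.T, f)                      # Haar coefficients: f ~ c^T H  =>  H^T c = f
approx = c @ P @ H                               # ~ integral_0^t f(s) ds
exact = np.sin(2 * np.pi * t) / (2 * np.pi)
print("max integration error:", np.max(np.abs(approx - exact)))
```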

  17. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  18. Optimizing post-operative Crohn’s disease treatment

    PubMed Central

    Domènech, Eugeni; Mañosa, Míriam; Lobatón, Triana; Cabré, Eduard

    2014-01-01

    Despite the availability of biological drugs and the widespread and earlier use of immunosuppressants, intestinal resection remains necessary in almost half of the patients with Crohn's disease. The development of new mucosal lesions in previously unaffected intestinal segments (a phenomenon known as post-operative recurrence, POR) occurs within the first year in up to 80% of patients if no preventive measure is started soon after resectional surgery, leading to clinical manifestations (clinical recurrence) and even requiring a new intestinal resection (surgical recurrence) in some patients. That is the reason why endoscopic monitoring has been recommended within 6 to 12 months after surgery. Active smoking is the only indisputable risk factor for early POR development. Among several evaluated drugs, only thiopurines and anti-tumor necrosis factor therapy seem to be effective and feasible in the long term, both for preventing and for treating recurrent lesions, at least in a proportion of patients. However, to date, it is not clear which patients should start with one or another drug right after surgery. It is also not well established how and how often POR should be assessed in patients with a normal ileocolonoscopy within the first 12 months. PMID:25331779

  19. Optimal operational modes for frameless space radiators with organosilicon ultrahigh coolant

    NASA Astrophysics Data System (ADS)

    Bondareva, N. V.; Koroteev, A. A.; Safronov, A. A.; Filatov, N. I.; Shishkanov, I. I.

    2016-12-01

    Optimal modes of operation of frameless space radiators with an organosilicon ultrahigh-vacuum working medium have been determined. Recommendations for increasing the efficiency and intensity of sheet radiation cooling under different modes of operation of the droplet cooler-radiator in space are formulated. A method for determining the optimal number of droplet planes within the fine droplet sheet structure is presented. The influence of flow rarefaction on the radiator's main thermal characteristics is investigated. The operational modes of the frameless radiator with the "cross" configuration are substantiated.

  20. 77 FR 66492 - Entergy Nuclear Operations, Inc., Entergy Nuclear Indian Point 2, LLC, and Entergy Nuclear Indian...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-05

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION [Docket Nos.: 50-003, 50-247, 50-286; NRC-2012-0265: License Nos.: DPR- 5, DPR-26, and DPR-64] Entergy Nuclear Operations, Inc., Entergy Nuclear Indian Point 2, LLC, and Entergy Nuclear Indian Point 3, LLC; Issuance of Director's Decision Notice...

  1. SG-t optimization and processing technology of the points cloud of the railway tank car (container)

    NASA Astrophysics Data System (ADS)

    Zhang, Zhipeng

    2016-10-01

    Based on the application of 3D laser scanning technology to railway tank cars and tank containers (the railway tank car (container) for short), the incompleteness and noise found in the point cloud during the scanning process were analyzed. For the massive scanning point cloud of the railway tank car (container), a fast and effective SG-t point cloud optimization and processing method is proposed. The SG-t method comprises the sp-H point cloud pre-processing method and Eti-G model reconstruction. Tests showed that the new method can optimize a noisy and incomplete point cloud in a relatively short time, reconstruct the model quickly and efficiently, and greatly improve the efficiency and precision of scanning. Comparison with the results of the capacity comparison method showed that the measurement uncertainty improved from 4×10-3 (k=2) to 3×10-3 (k=2). This optimization and processing method for point clouds from 3D laser scanning of railway tank cars (containers) provides a reference for the development of related technologies.

  2. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  3. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  4. 78 FR 4879 - Nine Mile Point 3 Nuclear Project, LLC and UniStar Nuclear Operating Services, LLC Combined...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-23

    ... COMMISSION Nine Mile Point 3 Nuclear Project, LLC and UniStar Nuclear Operating Services, LLC Combined... Nuclear Project, LLC, and UniStar Nuclear Operating Services, LLC (UniStar), submitted a Combined License...) application for UniStar's Calvert Cliffs Nuclear Power Plant, Unit 3 (CCNPP3). The NRC docketed the...

  5. Seasonal-Scale Optimization of Conventional Hydropower Operations in the Upper Colorado System

    NASA Astrophysics Data System (ADS)

    Bier, A.; Villa, D.; Sun, A.; Lowry, T. S.; Barco, J.

    2011-12-01

    Sandia National Laboratories is developing the Hydropower Seasonal Concurrent Optimization for Power and the Environment (Hydro-SCOPE) tool to examine basin-wide conventional hydropower operations at seasonal time scales. This tool is part of an integrated, multi-laboratory project designed to explore different aspects of optimizing conventional hydropower operations. The Hydro-SCOPE tool couples a one-dimensional reservoir model with a river routing model to simulate hydrology and water quality. An optimization engine wraps around this model framework to solve for long-term operational strategies that best meet the specific objectives of the hydrologic system while honoring operational and environmental constraints. The optimization routines are provided by Sandia's open source DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) software. Hydro-SCOPE allows for multi-objective optimization, which can be used to gain insight into the trade-offs that must be made between objectives. The Hydro-SCOPE tool is being applied to the Upper Colorado Basin hydrologic system. This system contains six reservoirs, each with its own set of objectives (such as maximizing revenue, optimizing environmental indicators, meeting water use needs, or other objectives) and constraints. This leads to a large optimization problem with strong connectedness between objectives. The systems-level approach used by the Hydro-SCOPE tool allows simultaneous analysis of these objectives, as well as understanding of potential trade-offs related to different objectives and operating strategies. The seasonal-scale tool will be tightly integrated with the other components of this project, which examine day-ahead and real-time planning, environmental performance, hydrologic forecasting, and plant efficiency.

  6. Optimizing transformations of stencil operations for parallel cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.

    1999-06-28

    This paper describes a new technique for optimizing serial and parallel stencil- and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations. It is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partitioning. For the parallel case, both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
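
    A rough sketch of the traversal pattern, assuming a simple right-hand side and grid size: one Jacobi sweep of the 5-point Poisson stencil, visiting the interior in 1-D row blocks. In Python the cache effect is not observable; the point is only to show the tiled loop structure that the source-to-source transformation would generate in compiled code.

```python
import numpy as np

def jacobi_sweep_blocked(u, f, h2, block_rows=64):
    """One Jacobi sweep for -laplace(u) = f on a uniform grid, 5-point stencil,
    visiting the interior in 1-D row blocks (tiles) to keep the working set small."""
    new = u.copy()
    n = u.shape[0]
    for r0 in range(1, n - 1, block_rows):            # 1-D tiling over rows
        r1 = min(r0 + block_rows, n - 1)
        new[r0:r1, 1:-1] = 0.25 * (u[r0 - 1:r1 - 1, 1:-1] + u[r0 + 1:r1 + 1, 1:-1]
                                   + u[r0:r1, :-2] + u[r0:r1, 2:]
                                   + h2 * f[r0:r1, 1:-1])
    return new

n = 256
h = 1.0 / (n - 1)
f = np.ones((n, n))                                   # right-hand side of the Poisson problem
u = np.zeros((n, n))                                  # zero Dirichlet boundary values
for _ in range(200):
    u = jacobi_sweep_blocked(u, f, h * h)
print("max |u| after 200 sweeps:", np.abs(u).max())
```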

  7. The optimal operating temperature of the collector of an irreversible solar-driven refrigerator

    NASA Astrophysics Data System (ADS)

    Lin, Guoxing; Yan, Zijun

    1999-01-01

    A universal irreversible solar-driven refrigerator model is presented, in which not only the irreversibility of heat conduction but also the irreversibilities resulting from friction, eddies and other irreversible effects inside the working fluid are considered. On the basis of this model and the linear heat-loss model of a solar collector, one of the important parameters, called the optimal operating temperature of the collector of a solar-driven refrigerator, is derived by using finite-time thermodynamic theory. From the result, the maximum overall coefficient of performance of the refrigerator is determined and some significant problems are discussed. The results obtained here are quite realistic and universal, insofar as all the corresponding results derived by using the reversible and endoreversible models, and the model considering only the internal irreversibility of the cycle, can be deduced from them. Thus, they may provide some new theoretical bases for further exploitation of solar-driven refrigerators. Furthermore, some shortcomings in the related literature are pointed out.

  8. A hybrid-algorithm-based parallel computing framework for optimal reservoir operation

    NASA Astrophysics Data System (ADS)

    Li, X.; Wei, J.; Li, T.; Wang, G.

    2012-12-01

    To date, various optimization models have been developed to provide optimal operating policies for reservoirs. Each optimization model has its own merits and limitations, and no general algorithm exists even today. At times, several optimization models have to be combined to obtain the desired results. In this paper, we present a parallel computing framework that combines various optimization models in a different way than traditional serial computing. This framework consists of three functional processor types: a master processor, slave processors, and a transfer processor. The master processor holds the full computation scheme and allocates optimization models to the slave processors; the slave processors run the allocated optimization models; the transfer processor is in charge of solution communication among all slave processors. On this basis, the proposed framework can run various optimization models in parallel. Because of the solution communication, the framework can also integrate the merits of the optimization models involved during iteration, so the performance of each optimization model can be improved. Moreover, the framework can effectively improve the solution quality and increase the solution speed by making full use of the computing power of parallel computers.

  9. Tuning operating point of extrinsic Fabry-Perot interferometric fiber-optic sensors using microstructured fiber and gas pressure.

    PubMed

    Tian, Jiajun; Zhang, Qi; Fink, Thomas; Li, Hong; Peng, Wei; Han, Ming

    2012-11-15

    Intensity-based demodulation of extrinsic Fabry-Perot interferometric (EFPI) fiber-optic sensors requires the light wavelength to be on the quadrature point of the interferometric fringes for maximum sensitivity. In this Letter, we propose a novel and remote operating-point tuning method for EFPI fiber-optic sensors using microstructured fibers (MFs) and gas pressure. We demonstrated the method using a diaphragm-based EFPI sensor with a microstructured lead-in fiber. The holes in the MF were used as gas channels to remotely control the gas pressure inside the Fabry-Perot cavity. Because of the deformation of the diaphragm with gas pressure, the cavity length and consequently the operating point can be remotely tuned for maximum sensitivity. The proposed operating-point tuning method has the advantage of reduced complexity and cost compared to previously reported methods.

  10. Estimating the operating point of the cochlear transducer using low-frequency biased distortion products

    PubMed Central

    Brown, Daniel J.; Hartsock, Jared J.; Gill, Ruth M.; Fitzgerald, Hillary E.; Salt, Alec N.

    2009-01-01

    Distortion products in the cochlear microphonic (CM) and in the ear canal in the form of distortion product otoacoustic emissions (DPOAEs) are generated by nonlinear transduction in the cochlea and are related to the resting position of the organ of Corti (OC). A 4.8 Hz acoustic bias tone was used to displace the OC, while the relative amplitude and phase of distortion products evoked by a single tone [most often 500 Hz, 90 dB SPL (sound pressure level)] or two simultaneously presented tones (most often 4 kHz and 4.8 kHz, 80 dB SPL) were monitored. Electrical responses recorded from the round window, scala tympani and scala media of the basal turn, and acoustic emissions in the ear canal were simultaneously measured and compared during the bias. Bias-induced changes in the distortion products were similar to those predicted from computer models of a saturating transducer with a first-order Boltzmann distribution. Our results suggest that biased DPOAEs can be used to non-invasively estimate the OC displacement, producing a measurement equivalent to the transducer operating point obtained via Boltzmann analysis of the basal turn CM. Low-frequency biased DPOAEs might provide a diagnostic tool to objectively diagnose abnormal displacements of the OC, as might occur with endolymphatic hydrops. PMID:19354389

  11. Application of PSO algorithm in short-term optimization of reservoir operation.

    PubMed

    SaberChenari, Kazem; Abghari, Hirad; Tabari, Hossein

    2016-12-01

    The optimization of the operation of existing water systems such as dams is very important for water resource planning and management, especially in arid and semi-arid lands. Due to budget and water resource limitations and environmental problems, the construction of new systems is gradually being replaced by the operation optimization of existing ones. The operation optimization of water systems is a complex, nonlinear, multi-constraint, and multidimensional problem that needs robust techniques. In this article, particle swarm optimization (PSO) was adopted for solving the operation problem of the multipurpose Mahabad reservoir dam in the northwest of Iran. The objective function is to minimize the difference between downstream monthly demand and release. The method was applied considering the inflow reduction probabilities for four scenarios of normal and drought conditions. The results showed that in most of the scenarios for normal and drought conditions, the released water obtained by the PSO model was equal to the downstream demand, and the reservoir volume decreased with the inflow reduction probabilities. The PSO model revealed a good performance in minimizing the reservoir water loss, and this operation policy can be an appropriate policy for the reservoir under drought conditions.
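
    A bare-bones particle swarm loop for a toy version of this problem (monthly releases tracking demand, with storage bounds enforced by a penalty). Inflows, demands, reservoir bounds and the PSO parameters are all invented placeholders, not values from the Mahabad study.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 12
inflow = rng.uniform(20.0, 60.0, T)          # hypothetical monthly inflows (Mm3)
demand = rng.uniform(25.0, 55.0, T)          # hypothetical downstream demands (Mm3)
s0, s_min, s_max, r_max = 100.0, 20.0, 200.0, 80.0

def cost(release):
    """Squared deficit between release and demand plus storage-bound penalties."""
    s = s0 + np.cumsum(inflow - release)
    penalty = np.sum(np.maximum(0.0, s_min - s) ** 2 + np.maximum(0.0, s - s_max) ** 2)
    return np.sum((release - demand) ** 2) + 1e3 * penalty

n_particles, n_iter = 40, 300
w, c1, c2 = 0.7, 1.5, 1.5                     # common PSO parameter choices
x = rng.uniform(0.0, r_max, (n_particles, T))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, r_max)
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best objective:", pbest_cost.min().round(2))
print("release vs demand (first 3 months):", gbest[:3].round(1), demand[:3].round(1))
```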

  12. Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2014-10-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of a global ECV (essential climate variables) climate monitoring architecture is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tools suggest and support global collaboration pathways and hopefully elicit responses from the audience and climate science shareholders.

  13. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy change from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
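
    A small sketch of the model-selection idea using scipy least-squares spline fits: B-spline curves with increasing numbers of control points are fitted to noisy synthetic data and scored with the usual Gaussian-residual forms of AIC and BIC. The data, noise level and exact criterion variants are assumptions, and the structural-risk-minimization alternative from the paper is not reproduced.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 400)
y = np.sin(6 * np.pi * x) * np.exp(-2 * x) + rng.normal(0.0, 0.05, x.size)
n, k = x.size, 3                                   # cubic B-splines

def score(n_ctrl):
    """Fit a least-squares cubic B-spline with n_ctrl control points; return (AIC, BIC)."""
    n_interior = n_ctrl - k - 1                    # control points = interior knots + k + 1
    t = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    spl = LSQUnivariateSpline(x, y, t, k=k)
    rss = float(np.sum((spl(x) - y) ** 2))
    p = n_ctrl                                     # parameters = number of control points
    aic = n * np.log(rss / n) + 2 * p
    bic = n * np.log(rss / n) + p * np.log(n)
    return aic, bic

candidates = range(6, 41)
aics = {m: score(m)[0] for m in candidates}
bics = {m: score(m)[1] for m in candidates}
print("AIC-optimal number of control points:", min(aics, key=aics.get))
print("BIC-optimal number of control points:", min(bics, key=bics.get))
```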

  14. Partial constraint satisfaction approaches for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Ferreira, Andre R.; Teegavarapu, Ramesh S. V.

    2012-09-01

    Optimal operation models for a hydropower system using partial constraint satisfaction (PCS) approaches are proposed and developed in this study. The models use mixed integer nonlinear programming (MINLP) formulations with binary variables. The models also integrate a turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream water quality impairment. New PCS-based models for hydropower optimization formulations are developed using binary and continuous evaluator functions to maximize the constraint satisfaction. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to solve the optimization formulations. Decision maker's preferences towards power production targets and water quality improvements are incorporated using partial satisfaction constraints to obtain compromise operating rules for a multi-objective reservoir operation problem dominated by conflicting goals of energy production, water quality and consumptive water uses.

  15. Dimension reduction of decision variables for multireservoir operation: A spectral optimization model

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian

    2016-01-01

    Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables that lead to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from time domain to frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called Karhunen-Loeve (KL) expansion within the routine of Nondominated Sorting Genetic Algorithm (NSGA-II). The KL expansion is used to represent the decision variables as a series of terms that are deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated into fewer significant terms, and consequently, fewer coefficients by a predetermined number. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system in the Columbia River of the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both conventional optimization model (i.e., NSGA-II without KL) and the SOM with different number of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM model is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM model is obtained with 11 KL terms.
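
    A numpy-only illustration of the dimension-reduction step, under the assumption of an exponential covariance kernel over the operating horizon: a release trajectory of 140 decision variables is represented by its leading KL modes, so any search operator only has to act on a handful of coefficients. The kernel, horizon, objective and the random search standing in for NSGA-II are all placeholders.

```python
import numpy as np

T = 140                                            # decision variables per reservoir trajectory
t = np.arange(T)
corr_len = 20.0
C = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)   # assumed covariance kernel

eigval, eigvec = np.linalg.eigh(C)                 # KL basis = eigenvectors of the kernel
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

K = 6                                              # number of retained KL terms
basis = eigvec[:, :K] * np.sqrt(eigval[:K])        # scaled modes

def to_trajectory(coeffs, mean_release=50.0, r_max=100.0):
    """Map K KL coefficients to a full T-step release trajectory."""
    return np.clip(mean_release + basis @ coeffs, 0.0, r_max)

def objective(release):
    """Placeholder single objective: track a smooth seasonal demand curve."""
    demand = 50.0 + 25.0 * np.sin(2 * np.pi * t / T)
    return float(np.sum((release - demand) ** 2))

# A crossover/mutation step in the GA would now act on 6 numbers instead of 140.
rng = np.random.default_rng(2)
best = min((rng.normal(0.0, 3.0, K) for _ in range(2000)),
           key=lambda c: objective(to_trajectory(c)))
print("objective of best random KL coefficient vector:", round(objective(to_trajectory(best)), 1))
print("captured variance fraction:", round(float(eigval[:K].sum() / eigval.sum()), 3))
```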

  16. Methodological approach for the optimization of drinking water treatment plants' operation: a case study.

    PubMed

    Sorlini, Sabrina; Collivignarelli, Maria Cristina; Castagnola, Federico; Crotti, Barbara Marianna; Raboni, Massimo

    2015-01-01

    Critical barriers to safe and secure drinking water may include sources (e.g. groundwater contamination), treatments (e.g. treatment plants not properly operating) and/or contamination within the distribution system (infrastructure not properly maintained). The performance assessment of these systems, based on monitoring, process parameter control and experimental tests, is a viable tool for the process optimization and water quality control. The aim of this study was to define a procedure for evaluating the performance of full-scale drinking water treatment plants (DWTPs) and for defining optimal solutions for plant upgrading in order to optimize operation. The protocol is composed of four main phases (routine and intensive monitoring programmes - Phases 1 and 2; experimental studies - Phase 3; plant upgrade and optimization - Phase 4). The protocol suggested in this study was tested in a full-scale DWTP placed in the North of Italy (Mortara, Pavia). The results outline some critical aspects of the plant operation and permit the identification of feasible solutions for the DWTP upgrading in order to optimize water treatment operation.

  17. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.

  18. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    SciTech Connect

    He, Yi; Scheraga, Harold A.; Liwo, Adam

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  19. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    PubMed Central

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-01-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field. PMID:26723596

  20. Evaluation of the CS-200 analyzer for optimization of amine unit operations. Final report, December 1995

    SciTech Connect

    Skinner, F.D.; Carlson, R.L.; Ellis, P.F.; Fisher, K.S.

    1995-12-01

    Radian Corporation, under contract to GRI, has performed an evaluation of the performance and benefits of using the CS-200 on-line analyzer, which measures H2S and CO2 loadings in amine solutions, to optimize sweetening unit operation. The evaluation included (1) validation of the analyzer's acid gas measurements; (2) monitoring the impacts of changes in operating conditions on corrosion rates; and (3) estimation of the cost savings that may be realized by using the analyzer to optimize sweetening unit operation. Data obtained from three commercial sweetening units were used in performing the evaluation. This report presents the results of the study.

  1. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries, considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear programming (MILP) problem. Numerical simulation results show the effectiveness of the proposed model.
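
    A stripped-down, single-period LP relaxation of the kind of dispatch problem described above, using scipy's linprog; the costs, capacities and load are invented, and the paper's multi-period MILP terms (voltage deviation, losses, power factor, demand charge, integer variables) are omitted. It only shows the basic cost-minimizing power-balance structure.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: [P_dg1, P_dg2, P_grid, P_batt_discharge]  (kW, one period)
cost = np.array([0.12, 0.15, 0.20, 0.05])      # $/kWh marginal costs (hypothetical)
load = 180.0                                   # kW demand in this period

# Power balance: P_dg1 + P_dg2 + P_grid + P_batt = load
A_eq = np.array([[1.0, 1.0, 1.0, 1.0]])
b_eq = np.array([load])

bounds = [(0.0, 100.0),    # DG1 capacity
          (0.0, 80.0),     # DG2 capacity
          (0.0, 150.0),    # feeder import limit
          (0.0, 40.0)]     # battery discharge limit this period

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("dispatch [DG1, DG2, grid, battery] kW:", np.round(res.x, 1))
print("operating cost this period: $", round(res.fun, 2))
```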

  2. Parameter optimization capability in the trajectory code PMAST (Point-Mass Simulation Tool)

    SciTech Connect

    Outka, D.E.

    1987-01-28

    Trajectory optimization capability has been added to PMAST through addition of the Recursive Quadratic Programming code VF02AD. The scope of trajectory optimization problems the resulting code can solve is very broad, as it takes advantage of the versatility of the original PMAST code. Most three-degree-of-freedom flight-vehicle problems can be simulated with PMAST, and up to 25 parameters specifying initial conditions, weights, control histories and other problem-deck inputs can be used to meet trajectory constraints in some optimal manner. This report outlines the mathematical formulation of the optimization technique, describes the input requirements and suggests guidelines for problem formulation. An example problem is presented to demonstrate the use and features of the optimization portions of the code.

  3. Acousto-optic, point receiver hydrophone probe for operation up to 100 MHz.

    PubMed

    Lewin, P A; Mu, C; Umchid, S; Daryoush, A; El-Sherif, M

    2005-12-01

    This work describes the results of initial evaluation of a wideband acousto-optic hydrophone probe designed to operate as point receiver in the frequency range up to 100 MHz. The hydrophone was implemented as a tapered fiber optic (FO) probe sensor with a tip diameter of approximately 7 microm. Such small physical dimensions of the sensor eliminate the need for spatial averaging corrections so that true pressure-time (p-t) waveforms can be faithfully recorded. The theoretical considerations that predicted the FO probe sensitivity to be equal to 4.3 mV/MPa are presented along with a brief description of the manufacturing process. The calibration results that verified the theoretically predicted sensitivity are also presented along with a brief description of the improvements being currently implemented to increase this sensitivity level by approximately 20 dB. The results of preliminary measurements indicate that the fiber optic probes will exhibit a uniform frequency response and a zero phase shift in the frequency range considered. These features might be very useful in rapid complex calibration i.e. determining both magnitude and phase response of other hydrophones by the substitution method. Also, because of their robust design and linearity, these fiber optic hydrophones could also meet the challenges posed by high intensity focused ultrasound (HIFU) and other therapeutic applications. Overall, the outcome of this work shows that when fully developed, the FO probes will be well suited for high frequency measurements of ultrasound fields and will be able to complement the data collected by the current finite aperture piezoelectric PVDF hydrophones.

  4. A highly sensitive and simply operated protease sensor toward point-of-care testing.

    PubMed

    Park, Seonhwa; Shin, Yu Mi; Seo, Jeongwook; Song, Ji-Joon; Yang, Haesik

    2016-04-21

    Protease sensors for point-of-care testing (POCT) require simple operation, a detection period of less than 20 minutes, and a detection limit of less than 1 ng mL(-1). However, it is difficult to meet these requirements with protease sensors that are based on proteolytic cleavage. This paper reports a highly reproducible protease sensor that allows the sensitive and simple electrochemical detection of the botulinum neurotoxin type E light chain (BoNT/E-LC), which is obtained using (i) low nonspecific adsorption, (ii) high signal-to-background ratio, and (iii) one-step solution treatment. The BoNT/E-LC detection is based on two-step proteolytic cleavage using BoNT/E-LC (endopeptidase) and l-leucine-aminopeptidase (LAP, exopeptidase). Indium-tin oxide (ITO) electrodes are modified partially with reduced graphene oxide (rGO) to increase their electrocatalytic activities. Avidin is then adsorbed on the electrodes to minimize the nonspecific adsorption of proteases. Low nonspecific adsorption allows a highly reproducible sensor response. Electrochemical-chemical (EC) redox cycling involving p-aminophenol (AP) and dithiothreitol (DTT) is performed to obtain a high signal-to-background ratio. After adding a C-terminally AP-labeled oligopeptide, DTT, and LAP simultaneously to a sample solution, no further treatment of the solution is necessary during detection. The detection limits of BoNT/E-LC in phosphate-buffered saline are 0.1 ng mL(-1) for an incubation period of 15 min and 5 fg mL(-1) for an incubation period of 4 h. The detection limit in commercial bottled water is 1 ng mL(-1) for an incubation period of 15 min. The developed sensor is selective to BoNT/E-LC among the four types of BoNTs tested. These results indicate that the protease sensor meets the requirements for POCT.

  5. Free-Time and Fixed End-Point Optimal Control Theory in Quantum Mechanics: Application to Entanglement Generation

    NASA Astrophysics Data System (ADS)

    Mishima, Kenji; Yamashita, Koichi

    2009-03-01

    We have constructed free-time and fixed end-point optimal control theory for quantum systems and applied it to entanglement generation between rotational modes of two polar molecules coupled by dipole-dipole interaction. The motivation of the present work is to solve optimal control problems more flexibly by extending the popular fixed-time and fixed end-point optimal control theory for quantum systems to free-time and fixed end-point optimal control theory. Our theory can not only achieve high transition probabilities but also determine exact temporal duration of the laser pulses. As a demonstration, our theory is applied to entanglement generation in rotational modes of NaCl-NaBr polar molecular systems that are sensitive to the strength of entangling interactions. Using the tailored laser pulses, we discuss the fidelity of entanglement distillation and quantum teleportation. Our method will significantly be useful for the quantum control of non-local interaction such as entangling interaction, and other time-sensitive general quantum dynamics, chemical reactions.

  6. Optimization of Measurement Points Choice in Preparation of Green Areas Acoustic Map

    NASA Astrophysics Data System (ADS)

    Sztubecka, Małgorzata; Bujarkiewicz, Adam; Sztubecki, Jacek

    2016-12-01

    The aim of the article is to analyze the selection of measuring points of the sustainable sound level in the spa park. The set of points should make it possible to produce, using available tools, an acoustic climate map of the park at certain times of day. The practical part contains a comparative analysis of the developed noise maps, taking into account different variants of the distribution and number of measuring points in the selected area of the park.

  7. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques

    SciTech Connect

    Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.

    2009-09-01

    A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.

  8. Free-time and fixed end-point optimal control theory in dissipative media: application to entanglement generation and maintenance.

    PubMed

    Mishima, K; Yamashita, K

    2009-07-07

    We develop monotonically convergent free-time and fixed end-point optimal control theory (OCT) in the density-matrix representation to deal with quantum systems showing dissipation. Our theory is more general and flexible for tailoring optimal laser pulses in order to control quantum dynamics with dissipation than the conventional fixed-time and fixed end-point OCT in that the optimal temporal duration of laser pulses can also be optimized exactly. To show the usefulness of our theory, it is applied to the generation and maintenance of the vibrational entanglement of carbon monoxide adsorbed on the copper (100) surface, CO/Cu(100). We demonstrate the numerical results and clarify how to combat vibrational decoherence as much as possible by the tailored shapes of the optimal laser pulses. It is expected that our theory will be general enough to be applied to a variety of dissipative quantum dynamics systems because the decoherence is one of the quantum phenomena sensitive to the temporal duration of the quantum dynamics.

  9. A novel auto-bias control scheme for stabilizing lithium niobate Mach-Zehnder modulator at any operating point

    NASA Astrophysics Data System (ADS)

    Tao, Jin-jing; Zhang, Yang-an; Zhang, Jin-nan; Yuan, Xue-guang; Huang, Yong-qing; Li, Yu-peng

    2014-01-01

    In this paper, we propose and experimentally demonstrate an auto-bias control scheme for stabilizing a lithium niobate (LN) Mach-Zehnder modulator (MZM) at any operating point along the power transmission curve. It is based on the fact that bias drift changes the operating point and thereby varies the output optical average power of the Mach-Zehnder modulator and its first and second derivatives. The ratio of the first to the second derivative of the output optical average power is used in the proposed scheme as the key parameter. The experimental results show that the output optical average power of the LN MZM hardly changes at the desired operating point, and the maximum deviation of the output optical average power is less than ±4%.
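
    A numerical toy illustrating why the first-to-second-derivative ratio identifies the operating point of a cosine-shaped transfer curve regardless of the input power: for P(V) = (P0/2)(1 + cos(pi V/Vpi + phi)), the ratio P'/P'' equals (Vpi/pi) tan(theta), so theta can be recovered from dithered power measurements and driven to a target. The modulator parameters, dither size and the simple proportional correction loop are assumptions, not the scheme's hardware implementation.

```python
import numpy as np

P0, Vpi, drift = 1.0, 5.0, 0.37                # hypothetical modulator parameters and bias drift

def avg_power(v_bias):
    """Cosine power-transfer curve of the MZM (normalized input power P0)."""
    theta = np.pi * v_bias / Vpi + drift
    return 0.5 * P0 * (1.0 + np.cos(theta))

def estimated_phase(v_bias, dv=0.05):
    """Estimate the operating-point phase from finite-difference 1st/2nd derivatives.

    ratio = P'/P'' = (Vpi/pi) * tan(theta), so theta = arctan(pi*ratio/Vpi) up to a multiple of pi."""
    p_m, p_0, p_p = avg_power(v_bias - dv), avg_power(v_bias), avg_power(v_bias + dv)
    d1 = (p_p - p_m) / (2 * dv)
    d2 = (p_p - 2 * p_0 + p_m) / dv ** 2
    return np.arctan(np.pi * d1 / (Vpi * d2))

target_theta = np.pi / 2                       # quadrature operating point
v = 1.0                                        # initial bias guess (V)
for _ in range(20):                            # simple proportional bias correction loop
    theta_est = estimated_phase(v) % np.pi
    v -= 0.5 * (theta_est - target_theta) * Vpi / np.pi
print("settled bias:", round(v, 3), "V, phase:",
      round((np.pi * v / Vpi + drift) % np.pi, 3), "rad (target", round(target_theta, 3), ")")
```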

  10. Short-term optimal operation of water systems using ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Raso, L.; Schwanenberg, D.; van de Giesen, N. C.; van Overloop, P. J.

    2014-09-01

    Short-term water system operation can be realized using Model Predictive Control (MPC). MPC is a method for operational management of complex dynamic systems. Applied to open water systems, MPC provides integrated, optimal, and proactive management when forecasts are available. Notwithstanding these properties, if forecast uncertainty is not properly taken into account, the system performance can critically deteriorate. Ensemble forecast is a way to represent short-term forecast uncertainty. An ensemble forecast is a set of possible future trajectories of a meteorological or hydrological system. The growing availability and accuracy of ensemble forecasts raise the question of how to use them for operational management. The theoretical innovation presented here is the use of ensemble forecasts for optimal operation. Specifically, we introduce a tree-based approach. We call the new method Tree-Based Model Predictive Control (TB-MPC). In TB-MPC, a tree is used to set up a Multistage Stochastic Programming problem, which finds a different optimal strategy for each branch and enhances the adaptivity to forecast uncertainty. Adaptivity reduces the sensitivity to wrong forecasts and improves the operational performance. TB-MPC is applied to the operational management of the Salto Grande reservoir, located at the border between Argentina and Uruguay, and compared to other methods.

  11. Iterative most-likely point registration (IMLP): a robust algorithm for computing optimal shape alignment.

    PubMed

    Billings, Seth D; Boctor, Emad M; Taylor, Russell H

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes.
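
    For orientation, the sketch below is the classic ICP skeleton that IMLP generalizes (closest-point correspondences from a KD-tree, then an SVD/Kabsch registration step), applied to a synthetic cloud with a known transform. IMLP itself replaces both steps with most-likely, noise-model-weighted versions and a PD-tree search, none of which is reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (N x 3 arrays of paired points)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, n_iter=30):
    """Plain ICP: alternate closest-point correspondence and rigid registration."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(current)              # correspondence step: closest target point
        R, t = kabsch(current, target[idx])       # registration step: SVD / Kabsch
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: recover a known small rotation and translation.
rng = np.random.default_rng(3)
source = rng.normal(size=(500, 3))
angle = np.radians(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, -0.03, 0.02])
target = source @ R_true.T + t_true

R_est, t_est = icp(source, target)
rot_err = np.degrees(np.arccos(np.clip((np.trace(R_est @ R_true.T) - 1.0) / 2.0, -1.0, 1.0)))
print("rotation error (deg):", round(float(rot_err), 4),
      " translation error:", round(float(np.linalg.norm(t_est - t_true)), 4))
```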

  12. Operational equations for the five-point rectangle, the geometric mean, and data in prismatic array

    SciTech Connect

    Silver, Gary L

    2009-01-01

    This paper describes the results of three applications of operational calculus: new representations of five data in a rectangular array, new relationships among data in a prismatic array, and the operational analog of the geometric mean.

  13. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management approach for structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem whose objective is to minimize the overall cost of the integrated DHS while satisfying the operation constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show the increase in energy efficiency due to the introduction of the present optimal management system.

  14. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
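
    A numpy-only sketch of the screening step, assuming a toy objective in which only a few decision periods matter: first-order Sobol indices are estimated with the standard pick-freeze (Saltelli-type) estimator, and variables with negligible indices are dropped from the reduced optimization. The objective, dimensions, sample size and threshold are all illustrative choices.

```python
import numpy as np

def objective(x):
    """Toy reservoir objective: only the first three decision periods really matter."""
    return (5.0 * x[:, 0] ** 2 + 3.0 * x[:, 1]
            + 2.0 * np.sin(np.pi * x[:, 2]) + 0.01 * x[:, 3:].sum(axis=1))

d, N = 12, 20000
rng = np.random.default_rng(4)
A = rng.uniform(0.0, 1.0, (N, d))                 # two independent sample matrices
B = rng.uniform(0.0, 1.0, (N, d))
fA, fB = objective(A), objective(B)
var = np.var(np.concatenate([fA, fB]))

S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # "pick-freeze": column i taken from B
    S1[i] = np.mean(fB * (objective(ABi) - fA)) / var   # Saltelli first-order estimator

keep = np.where(S1 > 0.01)[0]                     # screen out insensitive decision variables
print("first-order Sobol indices:", np.round(S1, 3))
print("decision variables kept for the reduced optimization:", keep.tolist())
```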

  15. Dynamic simulation and optimal real-time operation of CHP systems for buildings

    NASA Astrophysics Data System (ADS)

    Cho, Hee Jin

    Combined Cooling, Heating, and Power (CHP) systems have been widely recognized as a key alternative for electric and thermal energy generation because of their outstanding energy efficiency, reduced environmental emissions, and relative independence from centralized power grids. The systems provide simultaneous onsite or near-site electric and thermal energy generation in a single, integrated package. As CHP becomes increasingly popular worldwide and its total capacity increases rapidly, the research on the topics of CHP performance assessment, design, and operational strategy become increasingly important. Following this trend of research activities to improve energy efficiency, environmental emissions, and operational cost, this dissertation focuses on the following aspects: (a) performance evaluation of a CHP system using a transient simulation model; (b) development of a dynamic simulation model of a power generation unit that can be effectively used in transient simulations of CHP systems; (c) investigation of real-time operation of CHP systems based on optimization with respect to operational cost, primary energy consumption, and carbon dioxide emissions; and (d) development of optimal supervisory feed-forward control that can provide realistic real-time operation of CHP systems with electric and thermal energy storages using short-term weather forecasting. The results from a transient simulation of a CHP system show that technical and economical performance can be readily evaluated using the transient model and that the design, component selection, and control of a CHP system can be improved using this model. The results from the case studies using optimal real-time operation strategies demonstrate that CHP systems with an energy dispatch algorithm have the potential to yield savings in operational cost, primary energy consumption, and carbon dioxide emissions with respect to a conventional HVAC system. Finally, the results from the case study using a

  16. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K.

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  17. Choosing the optimal number of B-spline control points (Part 2: Approximation of surfaces and applications)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2017-03-01

    Freeform surfaces like B-splines have proven to be a suitable tool to model laser scanner point clouds and to form the basis for an areal data analysis, for example an areal deformation analysis. A variety of parameters determine the B-spline's appearance, the B-spline's complexity being mostly determined by the number of control points. Usually, this parameter type is chosen by intuitive trial-and-error-procedures. In [10] the problem of finding an alternative to these trial-and-error-procedures was addressed for the case of B-spline curves: The task of choosing the optimal number of control points was interpreted as a model selection problem. Two model selection criteria, the Akaike and the Bayesian Information Criterion, were used to identify the B-spline curve with the optimal number of control points from a set of candidate B-spline models. In order to overcome the drawbacks of the information criteria, an alternative approach based on statistical learning theory was developed. The criteria were evaluated by means of simulated data sets. The present paper continues these investigations. If necessary, the methods proposed in [10] are extended to areal approaches so that they can be used to determine the optimal number of B-spline surface control points. Furthermore, the methods are evaluated by means of real laser scanner data sets rather than by simulated ones. The application of those methods to B-spline surfaces reveals the datum problem of those surfaces, meaning that location and number of control points of two B-spline surfaces are only comparable if they are based on the same parameterization. First investigations to solve this problem are presented.

  18. First regularized trace of integro-differential Sturm-Liouville operator on a segment with punctured points at generalized conditions of bonding in deleted points

    NASA Astrophysics Data System (ADS)

    Sarsenbi, Abdisalam A.; Zhumanova, Lyazzat K.

    2016-12-01

    The present work is devoted to calculating the first regularized trace of an integro-differential operator whose main part is of Sturm-Liouville type, on a segment with punctured points, under integral perturbation of the "transmission" conditions. The integro-differential Sturm-Liouville equation -y″(x) + q(x)y(x) + γ∫₀^π y(t) dt = λy(x) is considered on the segments π(k-1)/n < x < πk/n, and the derivatives of the solutions have jumps at the points x = πk/n. The value of the jumps is expressed by the formula y′(πk/n − 0) = y′(πk/n + 0) − β_k ∫₀^π y(t) dt, k = 1, …, n−1. The basic result of the paper is the exact formula of the first regularized trace of the considered operator.

  19. Optimal operating policy of the ultrafiltration membrane bioreactor for enzymatic hydrolysis of cellulose

    SciTech Connect

    Lee, SeungGoo; Kim, HakSung (Dept. of Biotechnology)

    1993-09-05

    The dilution rate of an ultrafiltration membrane bioreactor in the enzymatic hydrolysis of cellulose was optimized using the kinetic model developed by Fan and Lee. The sequence of optimal dilution rates was found to generally consist of an initial period at a minimal value (batch period), a subsequent period at the maximum dilution rate, a second batch period, and a final period at a singular dilution rate. The effects of operating conditions, such as β-glucosidase activity, operating time, maximum dilution rate, substrate feeding rate, and enzyme-to-substrate ratio, on both the conversion yield and the sequence of optimal dilution rates were investigated. To evaluate the validity of the kinetic model employed in this work, enzymatic hydrolysis was carried out using -cellulose as a substrate in the ultrafiltration membrane bioreactor. The experimental data agreed well with the simulation results.

  20. Optimal maneuvering and fine pointing control of large space telescope with a new magnetically suspended, single gimballed momentum storage device

    NASA Technical Reports Server (NTRS)

    Nadkarni, A. A.; Joshi, S. M.

    1976-01-01

    This paper considers the application of an Annular Momentum Control Device (AMCD) to both fine pointing and large-angle maneuvering of a large space telescope (LST). The AMCD, which consists principally of a spinning rim suspended in noncontacting electromagnetic bearings, represents a new development in momentum storage devices. A nonlinear mathematical model of the AMCD/LST system is derived. An optimal stochastic fine-pointing controller is designed via LQG theory, and the minimum-energy maneuvering problem is solved via a gradient technique. A number of state-variable and control-variable constraints, as well as all trigonometric nonlinearities, are considered in the latter problem.

  1. Optimization of geometry of elastic bodies in the vicinity of singular points on the example of an adhesive lap joint

    NASA Astrophysics Data System (ADS)

    Matveenko, V. P.; Sevodina, N. V.; Fedorov, A. Yu.

    2013-09-01

    The stress state in adhesive lap joints with various geometric shapes of spew fillet is studied. It is noted that the applied design models of the considered problem include singular points at which infinite stress values are possible if one uses the linear elasticity theory to calculate the stress state. Based on the conclusions of the solution of the geometry optimization problem in the vicinity of the singular points of elastic bodies, variants of the geometry of spew fillet, which provide the most significant decrease in the concentration of stresses in adhesive lap joints, are proposed.

  2. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  3. Application of the dynamic ant colony algorithm on the optimal operation of cascade reservoirs

    NASA Astrophysics Data System (ADS)

    Tong, X. X.; Xu, W. S.; Wang, Y. F.; Zhang, Y. W.; Zhang, P. C.

    2016-08-01

    Due to the lack of dynamic adjustment between global search and local optimization, it is difficult for Ant Colony Algorithms (ACA) to maintain high diversity and to escape local optima. This paper therefore proposes an improved ACA, the Dynamic Ant Colony Algorithm (DACA). DACA dynamically adjusts the heuristic factor, which decreases following a cosine law, to balance global search and local optimization. At the same time, by exploiting the randomness and ergodicity of chaotic search, DACA applies a chaotic disturbance to the path found in each iteration to improve the algorithm's ability to escape local optima and avoid premature convergence. We conducted a case study with DACA for the optimal joint operation of the Dadu River cascade reservoirs. The simulation results were compared with the results of the gradual optimization method and the standard ACA, which demonstrated the advantages of DACA in speed and precision.

  4. A Hydro System Modeling Hierarchy to Optimize the Operation of the BC Hydroelectric System

    NASA Astrophysics Data System (ADS)

    Shawwash, Z.

    2012-12-01

    We present the Hydro System Modeling Hierarchy that we have developed to optimize the operation of the BC Hydro system in British Columbia, Canada. The Hierarchy consists of a number of simulation and optimization models that we have developed over the past twelve years in a research program under the Grant-in-Aid Agreement between BC Hydro and the Department of Civil Engineering at UBC. We first provide an overview of the BC Hydro system, then present our modeling framework and discuss a number of optimization modeling tools that we have developed and that are currently in use at BC Hydro, and briefly outline ongoing research and model development work supported by BC Hydro and leveraged by Natural Sciences and Engineering Research Council (NSERC) Collaborative Research and Development (CRD) grants.

  5. Optimization of identity operation in NMR spectroscopy via genetic algorithm: Application to the TEDOR experiment

    NASA Astrophysics Data System (ADS)

    Manu, V. S.; Veglia, Gianluigi

    2016-12-01

    Identity operation in the form of π pulses is widely used in NMR spectroscopy. For an isolated single spin system, a sequence of even number of π pulses performs an identity operation, leaving the spin state essentially unaltered. For multi-spin systems, trains of π pulses with appropriate phases and time delays modulate the spin Hamiltonian to perform operations such as decoupling and recoupling. However, experimental imperfections often jeopardize the outcome, leading to severe losses in sensitivity. Here, we demonstrate that a newly designed Genetic Algorithm (GA) is able to optimize a train of π pulses, resulting in a robust identity operation. As proof-of-concept, we optimized the recoupling sequence in the transferred-echo double-resonance (TEDOR) pulse sequence, a key experiment in biological magic angle spinning (MAS) solid-state NMR for measuring multiple carbon-nitrogen distances. The GA modified TEDOR (GMO-TEDOR) experiment with improved recoupling efficiency results in a net gain of sensitivity up to 28% as tested on a uniformly 13C, 15N labeled microcrystalline ubiquitin sample. The robust identity operation achieved via GA paves the way for the optimization of several other pulse sequences used for both solid- and liquid-state NMR used for decoupling, recoupling, and relaxation experiments.
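
    A hedged sketch of the underlying idea, not the authors' GA or the TEDOR sequence: a simple genetic algorithm that evolves the phases of a train of imperfect π pulses for a single spin-1/2 so that the net propagator stays close to the identity under flip-angle errors. The pulse count, error range, and GA parameters are illustrative assumptions.

```python
# Illustrative GA that tunes pulse phases for a robust identity operation (sketch).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def pulse(phase, flip_error):
    """Propagator of a nominal pi pulse with fractional flip-angle error."""
    theta = np.pi * (1.0 + flip_error)
    axis = np.cos(phase) * sx + np.sin(phase) * sy
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def infidelity(phases, errors=(-0.05, 0.0, 0.05)):
    """Average deviation of the pulse-train propagator from the identity."""
    cost = 0.0
    for eps in errors:
        U = np.eye(2, dtype=complex)
        for ph in phases:
            U = pulse(ph, eps) @ U
        cost += 1.0 - abs(np.trace(U)) / 2.0   # 0 when U equals identity up to phase
    return cost / len(errors)

rng = np.random.default_rng(0)
n_pulses, pop_size, n_gen = 8, 60, 300
pop = rng.uniform(0, 2 * np.pi, (pop_size, n_pulses))

for _ in range(n_gen):
    fitness = np.array([infidelity(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[: pop_size // 2]]          # truncation selection
    cut = rng.integers(1, n_pulses, pop_size // 2)
    mates = parents[rng.permutation(len(parents))]
    children = np.where(np.arange(n_pulses) < cut[:, None], parents, mates)  # crossover
    children += rng.normal(0, 0.1, children.shape) * (rng.random(children.shape) < 0.2)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([infidelity(ind) for ind in pop])]
print("best average infidelity:", infidelity(best))
```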

  6. Transient operation and shape optimization of a single PEM fuel cell

    NASA Astrophysics Data System (ADS)

    Chen, Sheng; Ordonez, Juan C.; Vargas, Jose V. C.; Gardolinski, Jose E. F.; Gomes, Maria A. B.

    Geometric design, including the internal structure and external shape, considerably affects the thermal, fluid, and electrochemical characteristics of a polymer electrolyte membrane (PEM) fuel cell, which determine the polarization curves as well as the thermal and power inertias. Shape optimization is a natural alternative to improve fuel cell performance and make fuel cells more attractive for power generation. This paper investigates the effects of the internal and external structure on steady and transient fuel cell operation with consideration of stoichiometric ratios, pumping power, and working temperature limits. The maximal steady-state net power output and the fuel cell start-up time under a step-changed current load characterize the steady and transient performance, respectively. The one-dimensional PEM fuel cell (PEMFC) thermal model introduced in a previous work [J.V.C. Vargas, J.C. Ordonez, A. Bejan, Constructal flow structure for a PEM fuel cell, Int. J. Heat Mass Transfer 47 (2004) 4177-4193] is amended to simulate the fuel cell transient start-up process. The shape optimization consists of internal and external PEMFC structure optimization. The internal optimization focuses on the optimal allocation of fuel cell compartment thicknesses, while the external optimization seeks the optimal external aspect ratios of the PEM fuel cell. These two levels of optimization pursue a geometric design with quick response to step loads and large power densities. Appropriate dimensionless groups are identified and the numerical results are presented in dimensionless charts for general engineering design. The universality of the general optimal shape found is also discussed.

  7. Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio

    2013-04-01

    In 2012, typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving hundreds dead and causing significant destruction across the country. Deadly cyclone-related events occur almost every year in the region, and such extremes are expected to increase both in frequency and magnitude around Southeast Asia during the course of global climate change. Our ability to confront such hazardous events is limited by the available engineering infrastructure and the performance of weather prediction. One countermeasure strategy is, for instance, early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from an impending flood. However, over-release of reservoir water adversely affects the regional economy by losing water resources that still have value for power generation and for agricultural and industrial water use. Furthermore, accurate precipitation forecasting is itself a difficult task, because the chaotic nature of the atmosphere yields uncertainty in model predictions over time. Under these circumstances we present a novel approach to optimize the contradicting objectives of preventing flood damage via a priori dam release while sustaining sufficient water supply during predicted storm events. By evaluating the forecast performance of the Meso-Scale Model Grid Point Value against observed rainfall, uncertainty in model prediction is probabilistically taken into account, and it is then applied to the next GPV issuance to generate ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed-hydrological model to derive the ensemble flood forecast. With dam status information taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and

  8. Ant Colony Optimization with Genetic Operation and Its Application to Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Wang, Rong-Long; Zhou, Xiao-Fan; Okazaki, Kozo

    Ant colony optimization (ACO) algorithms are a recently developed, population-based approach that has been successfully applied to optimization problems. However, in ACO algorithms it is difficult to adjust the balance between intensification and diversification, so performance is not always satisfactory. In this work, we propose an improved ACO algorithm in which some of the ants can evolve by performing a genetic operation, and the balance between intensification and diversification can be adjusted by the number of ants that perform the genetic operation. The proposed algorithm is tested on the Traveling Salesman Problem (TSP). Experimental studies show that the proposed ACO algorithm with genetic operation has superior performance compared to other existing ACO algorithms.
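
    The sketch below is a minimal baseline ACO loop for a small random TSP instance; it does not reproduce the genetic operation proposed in the paper, and all parameter values (colony size, alpha, beta, evaporation rate) are illustrative assumptions.

```python
# Minimal ant colony optimization for a toy TSP instance (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((15, 2))
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)

n, n_ants, n_iter = len(cities), 20, 200
alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0
tau = np.ones((n, n))            # pheromone trails
eta = 1.0 / dist                 # heuristic desirability (inverse distance)
best_tour, best_len = None, np.inf

for _ in range(n_iter):
    tours = []
    for _ in range(n_ants):
        tour = [rng.integers(n)]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
            tour.append(rng.choice(cand, p=w / w.sum()))   # probabilistic state transition
            unvisited.remove(tour[-1])
        tours.append(tour)
    tau *= (1.0 - rho)           # evaporation
    for tour in tours:
        length = sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
        for i in range(n):       # pheromone deposit proportional to tour quality
            a, b = tour[i], tour[(i + 1) % n]
            tau[a, b] += Q / length
            tau[b, a] += Q / length

print("best tour length:", round(best_len, 3))
```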

  9. Optimization of Operation Sequence in CNC Machine Tools Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Abu Qudeiri, Jaber; Yamamoto, Hidehiko; Ramli, Rizauddin

    The productivity of machine tools is significantly improved by using microcomputer-based CAD/CAM systems for NC program generation. Currently, many commercial CAD/CAM packages that provide automatic NC programming have been developed and applied to various cutting processes, many of which are machined by CNC machine tools. In this paper, we attempt to find an efficient solution approach to determine the best sequence of operations for a set of operations located at asymmetrical positions and different levels. In order to find the sequence of operations that achieves the shortest cutting tool travel path (CTTP), a genetic algorithm is introduced. After the sequence is optimized, the G-code that encodes the tool path is created. CTTP can be formulated as a special case of the traveling salesman problem (TSP). The incorporation of the genetic algorithm and the TSP formulation can be included in commercial CAD/CAM packages to optimize the CTTP during automatic generation of NC programs.

  10. Optimizing the Long-Term Operating Plan of Railway Marshalling Station for Capacity Utilization Analysis

    PubMed Central

    Zhou, Wenliang; Yang, Xia; Deng, Lianbo

    2014-01-01

    The operating plan is not only the basis for organizing a marshalling station's operation, but it is also used to analyze in detail the capacity utilization of each facility in the station. In this paper, a long-term operating plan is optimized mainly for capacity utilization analysis. Firstly, a model is developed to minimize railcars' average staying time under constraints such as minimum time intervals and marshalling track capacity. Secondly, an algorithm is designed to solve this model based on a genetic algorithm (GA) and simulation. It divides the plan for the whole planning horizon into many subplans and optimizes them with the GA one by one in order to obtain a satisfactory plan with less computing time. Finally, some numerical examples are constructed to analyze (1) the convergence of the algorithm, (2) the effect of some algorithm parameters, and (3) the influence of arriving train flow on the algorithm. PMID:25525614

  11. Operational Excellence through Schedule Optimization and Production Simulation of Application Specific Integrated Circuits.

    SciTech Connect

    Flory, John Andrew; Padilla, Denise D.; Gauthier, John H.; Zwerneman, April Marie; Miller, Steven P

    2016-05-01

    Upcoming weapon programs require an aggressive increase in Application Specific Integrated Circuit (ASIC) production at Sandia National Laboratories (SNL). SNL has developed unique modeling and optimization tools that have been instrumental in improving ASIC production productivity and efficiency, identifying optimal operational and tactical execution plans under resource constraints, and providing confidence in successful mission execution. With ten products and unprecedented levels of demand, a single set of shared resources, highly variable processes, and the need for external supplier task synchronization, scheduling is an integral part of successful manufacturing. The scheduler uses an iterative multi-objective genetic algorithm and a multi-dimensional performance evaluator. Schedule feasibility is assessed using a discrete event simulation (DES) that incorporates operational uncertainty, variability, and resource availability. The tools provide rapid scenario assessments and responses to variances in the operational environment, and have been used to inform major equipment investments and workforce planning decisions in multiple SNL facilities.

  12. Energy and operation management of a microgrid using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Radosavljević, Jordan; Jevtić, Miroljub; Klimenta, Dardan

    2016-05-01

    This article presents an efficient algorithm based on particle swarm optimization (PSO) for energy and operation management (EOM) of a microgrid including different distributed generation units and energy storage devices. The proposed approach employs PSO to minimize the total energy and operating cost of the microgrid via optimal adjustment of the control variables of the EOM, while satisfying various operating constraints. Owing to the stochastic nature of energy produced from renewable sources, i.e. wind turbines and photovoltaic systems, as well as load uncertainties and market prices, a probabilistic approach in the EOM is introduced. The proposed method is examined and tested on a typical grid-connected microgrid including fuel cell, gas-fired microturbine, wind turbine, photovoltaic and energy storage devices. The obtained results prove the efficiency of the proposed approach to solve the EOM of the microgrids.
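
    As a hedged illustration of how PSO can be applied to such a dispatch problem, the sketch below minimizes a toy quadratic generation cost for three units subject to unit limits and a supply-demand balance handled as a penalty; the cost coefficients, limits, and demand are made-up values, not the paper's microgrid model.

```python
# Minimal PSO for a toy economic-dispatch-style problem (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
demand = 120.0                                   # kW to be supplied this interval
p_min = np.array([10.0, 5.0, 0.0])               # unit lower limits (kW)
p_max = np.array([80.0, 60.0, 40.0])             # unit upper limits (kW)
a = np.array([0.02, 0.05, 0.01])                 # quadratic cost coefficients
b = np.array([1.5, 1.0, 2.0])                    # linear cost coefficients

def cost(p):
    # Quadratic generation cost plus a large penalty for supply-demand mismatch.
    return np.sum(a * p**2 + b * p) + 1e3 * abs(np.sum(p) - demand)

n_particles, n_iter, dim = 40, 300, 3
w, c1, c2 = 0.7, 1.5, 1.5
x = rng.uniform(p_min, p_max, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, p_min, p_max)             # enforce unit limits
    vals = np.array([cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("dispatch:", np.round(gbest, 2), "cost:", round(cost(gbest), 2))
```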

  13. Optimal Pumping Strategy with Conjunctive Operation Rule for the Water Supply System

    NASA Astrophysics Data System (ADS)

    Tan, C. A.; Chiu, Y.; Chen, Y.; Tung, C.

    2011-12-01

    In the traditional conjunctive use of surface water and groundwater, the water demand is first satisfied by the reservoirs, operated according to their rule curves, and the groundwater resource, constrained by its safe yield, is used to cover any deficit between demand and supply. However, this operation strategy may concentrate groundwater pumping excessively during drought periods and result in serious damage to the aquifers. Hence, in this study we propose an alternative strategy, named the conjunctive operation rule, which extends the concept of reservoir operation rule curves to the groundwater resources and allows pumping groundwater during non-drought periods. The conjunctive operation rule curves, which include two kinds of operation rule curves, one used by the reservoir and the other used by the groundwater, are defined. The objective function to be minimized was the total water deficit, and the operation rule curves were assumed to be regularly distributed and stationary. To account for the damage to the aquifers that results from large pumping concentrated in a short period, the safe yield and the recovery rates of groundwater levels are included in the constraint set. A global-local optimization scheme is used to simultaneously optimize the spatial and temporal distributions of pumpage together with the associated reservoir operation rules. Hence, the optimized pumping distribution not only effectively reduces the risk of severe water shortage but also minimizes the damage to the aquifers due to excessively concentrated pumpage. The developed conjunctive-use model can also provide managers with a tool for decision-making with regard to conjunctive use of surface water and groundwater and achieve the goal of sustainable use of water resources.

  14. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    NASA Astrophysics Data System (ADS)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of

  15. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose distribution and avoid boiling. First, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimensions of the whitened area based on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved in order to take in vivo conditions into account and be extensively validated.
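
    The thermal dose referred to above is conventionally the Sapareto-Dewey cumulative equivalent minutes at 43 °C (CEM43); the short sketch below computes it for a made-up temperature history (the heating profile and sampling step are illustrative assumptions).

```python
# Standard CEM43 thermal dose for a sampled temperature history (illustrative data).
import numpy as np

def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 degC for a sampled temperature history."""
    temps_c = np.asarray(temps_c, dtype=float)
    R = np.where(temps_c >= 43.0, 0.5, 0.25)      # conventional piecewise base
    return np.sum(R ** (43.0 - temps_c)) * dt_s / 60.0

# Example: 10 s heating ramp to 55 degC followed by cooling, sampled every 0.1 s.
t = np.arange(0, 30, 0.1)
temps = 37.0 + 18.0 * np.clip(t / 10.0, 0, 1) * np.exp(-np.maximum(t - 10.0, 0) / 8.0)
dose = cem43(temps, dt_s=0.1)
print(f"thermal dose: {dose:.1f} CEM43 (>= 240 CEM43 is often taken as an ablation threshold)")
```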

  16. Optimal use of buffer volumes for the measurement of atmospheric gas concentration in multi-point systems

    NASA Astrophysics Data System (ADS)

    Cescatti, Alessandro; Marcolla, Barbara; Goded, Ignacio; Gruening, Carsten

    2016-09-01

    Accurate multi-point monitoring systems are required to derive atmospheric measurements of greenhouse gas concentrations, both for the calculation of surface fluxes with inversion transport models and for the estimation of non-turbulent components of the mass balance equation (i.e. advection and storage fluxes) at eddy covariance sites. When a single analyser is used to monitor multiple sampling points, the deployment of buffer volumes (BVs) along the sampling lines can reduce the uncertainty due to the discrete temporal sampling of the signal. In order to optimize the use of buffer volumes we explored various set-ups by simulating their effect on time series of high-frequency CO2 concentration collected at three Fluxnet sites. In addition, we propose a novel scheme to calculate half-hourly weighted arithmetic means from discrete point samples, accounting for the probabilistic fraction of the signal generated in the averaging period. Results show that the use of BVs with the new averaging scheme reduces the mean absolute error (MAE) by up to 80 % compared to a set-up without BVs and by up to 60 % compared to the case with BVs and a standard, non-weighted averaging scheme. The MAE of CO2 concentration measurements was observed to depend on the variability of the concentration field and on the size of the BVs, which therefore have to be carefully dimensioned. The optimal volume size depends on two main features of the instrumental set-up: the number of measurement points and the time needed to sample at one point (i.e. line purging plus sampling time). A linear and consistent relationship was observed at all sites between the sampling frequency, which summarizes the two features mentioned above, and the renewal frequency associated with the volume. Ultimately, this empirical relationship can be applied to estimate the optimal volume size according to the technical specifications of the sampling system.
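
    For intuition, a buffer volume is commonly modeled as a well-mixed volume, i.e. a first-order low-pass filter with time constant tau = V/Q; the sketch below shows this smoothing effect on a synthetic CO2 signal. It illustrates the standard mixing model only and does not reproduce the weighted averaging scheme proposed in the paper; the volume, flow, and signal are illustrative assumptions.

```python
# Well-mixed buffer volume as a first-order low-pass filter (illustrative parameters).
import numpy as np

dt = 1.0                         # s, simulation step
t = np.arange(0, 1800, dt)       # one half-hour
rng = np.random.default_rng(0)
c_in = 400.0 + np.cumsum(rng.normal(0, 0.2, t.size))   # synthetic CO2 signal (ppm)

V = 8.0                          # L, buffer volume (assumed)
Q = 0.5 / 60.0                   # L/s, line flow (0.5 L/min, assumed)
tau = V / Q                      # s, renewal time constant

c_bv = np.empty_like(c_in)
c_bv[0] = c_in[0]
for i in range(1, t.size):
    # dC/dt = (C_in - C) / tau, integrated with explicit Euler
    c_bv[i] = c_bv[i - 1] + dt * (c_in[i] - c_bv[i - 1]) / tau

print("true half-hour mean                  :", round(c_in.mean(), 2), "ppm")
print("single end-of-period sample, no BV   :", round(c_in[-1], 2), "ppm")
print("single end-of-period sample, with BV :", round(c_bv[-1], 2), "ppm")
```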

  17. Early Mission Maneuver Operations for the Deep Space Climate Observatory Sun-Earth L1 Libration Point Mission

    NASA Technical Reports Server (NTRS)

    Roberts, Craig; Case, Sara; Reagoso, John; Webster, Cassandra

    2015-01-01

    The Deep Space Climate Observatory mission launched on February 11, 2015, and inserted onto a transfer trajectory toward a Lissajous orbit around the Sun-Earth L1 libration point. This paper presents an overview of the baseline transfer orbit and early mission maneuver operations leading up to the start of nominal science orbit operations. In particular, the analysis and performance of the spacecraft insertion, mid-course correction maneuvers, and the deep-space Lissajous orbit insertion maneuvers are discussed, comparing the baseline orbit with actual mission results and highlighting mission and operations constraints.

  18. 77 FR 22361 - Entergy Nuclear Operations, Inc., (Indian Point Nuclear Generating Units 2 and 3); Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-13

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Entergy Nuclear Operations, Inc., (Indian Point Nuclear Generating Units 2 and 3); Notice of Atomic Safety and Licensing Board Reconstitution Pursuant to 10 CFR 2.313(c) and 2.321(b), the...

  19. RESEARCH PAPERS : Eigendecomposition of the two-point correlation tensor for optimal characterization of mantle convection

    NASA Astrophysics Data System (ADS)

    Balachandar, S.

    1998-01-01

    The two-point correlation tensor provides complete information on mantle convection accurate up to second-order statistics. Unfortunately, the two-point spatial correlation tensor is in general a data-intensive quantity. In the case of mantle convection, a simplified representation of the two-point spatial correlation tensor can be obtained by using spherical symmetry. The two-point correlation can be expressed in terms of a planar correlation tensor, which reduces the correlation's dependence to only three independent variables: the radial locations of the two points and their angular separation. The eigendecomposition of the planar correlation tensor provides a rational methodology for further representing the second-order statistics contained within the two-point correlation in a compact manner. As an illustration, results on the planar correlation are presented for the thermal anomaly obtained from the tomographic model of Su, Woodward & Dziewonski (1994) and the corresponding velocity field obtained from a simple constant-viscosity convection model (Zhang & Christensen 1993). The first 10 most energetic eigensolutions of the planar correlation, which constitute an almost three orders of magnitude reduction in the data, capture the two-point correlation to 97 per cent accuracy. Furthermore, the energetic eigenfunctions efficiently characterize the thermal and flow structures of the mantle. The signature of the transition zone is clearly evident in the most energetic temperature eigenfunction, which clearly shows a reversal of thermal fluctuations at a depth of around 830 km. In addition, a local peak in the thermal fluctuations can be observed around a depth of 600 km. In contrast, due to the simplicity of the convection model employed, the velocity eigenfunctions exhibit a simple cellular structure that extends over the entire depth of the mantle and do not exhibit transition-zone signatures.
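
    A generic sketch of the eigendecomposition step on synthetic data: build a two-point correlation matrix from an ensemble of radial profiles, diagonalize it, and count how many eigenfunctions capture 97 per cent of the energy. The synthetic ensemble is an illustrative assumption, not the tomographic or convection-model data used in the paper.

```python
# Eigendecomposition of a two-point correlation matrix built from synthetic profiles.
import numpy as np

rng = np.random.default_rng(0)
n_r, n_samples = 64, 500
r = np.linspace(0.0, 1.0, n_r)                 # normalized radial coordinate

# Synthetic anomalies: a few smooth radial structures with random amplitudes plus noise
modes = np.array([np.sin((k + 1) * np.pi * r) for k in range(4)])
amps = rng.normal(0, [3.0, 2.0, 1.0, 0.5], (n_samples, 4))
ensemble = amps @ modes + 0.1 * rng.standard_normal((n_samples, n_r))

C = ensemble.T @ ensemble / n_samples          # two-point correlation matrix C(r, r')
evals, evecs = np.linalg.eigh(C)               # symmetric eigendecomposition
evals, evecs = evals[::-1], evecs[:, ::-1]     # sort by decreasing energy

energy = np.cumsum(evals) / np.sum(evals)
n_keep = int(np.searchsorted(energy, 0.97) + 1)
print(f"{n_keep} eigenfunctions capture {energy[n_keep - 1]:.1%} of the energy")
```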

  20. Ultrasensitive optical microfiber coupler based sensors operating near the turning point of effective group index difference

    NASA Astrophysics Data System (ADS)

    Li, Kaiwei; Zhang, Ting; Liu, Guigen; Zhang, Nan; Zhang, Mengying; Wei, Lei

    2016-09-01

    We propose and study an optical microfiber coupler (OMC) sensor working near the turning point of effective group index difference between the even supermode and odd supermode to achieve high refractive index (RI) sensitivity. Theoretical calculations reveal that infinite sensitivity can be obtained when the measured RI is close to the turning point value. This diameter-dependent turning point corresponds to the condition that the effective group index difference equals zero. To validate our proposed sensing mechanism, we experimentally demonstrate an ultrahigh sensitivity of 39541.7 nm/RIU at a low ambient RI of 1.3334 based on an OMC with the diameter of 1.4 μm. An even higher sensitivity can be achieved by carrying out the measurements at RI closer to the turning point. The resulting ultrasensitive RI sensing platform offers a substantial impact on a variety of applications from high performance trace analyte detection to small molecule sensing.

  1. 47 CFR 90.473 - Operation of internal transmitter control systems through licensed fixed control points.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Operation of internal transmitter control... COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter...

  2. 47 CFR 90.473 - Operation of internal transmitter control systems through licensed fixed control points.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Operation of internal transmitter control... COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter...

  3. 47 CFR 90.473 - Operation of internal transmitter control systems through licensed fixed control points.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Operation of internal transmitter control... COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter...

  4. 78 FR 44881 - Drawbridge Operation Regulation; York River, Between Yorktown and Gloucester Point, VA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-25

    ... structure. Under the regular operating schedule, the Coleman Memorial Bridge, mile 7.0, between Gloucester... operation of the Coleman Memorial Bridge (US 17/George P. Coleman Memorial Swing Bridge) across the York... maintenance work on the moveable spans on the Coleman Memorial Bridge. This temporary deviation allows...

  5. Optimum structural properties for an anode current collector used in a polymer electrolyte membrane water electrolyzer operated at the boiling point of water

    NASA Astrophysics Data System (ADS)

    Li, Hua; Fujigaya, Tsuyohiko; Nakajima, Hironori; Inada, Akiko; Ito, Kohei

    2016-11-01

    This study attempts to optimize the properties of the anode current collector of a polymer electrolyte membrane water electrolyzer at high temperatures, particularly at the boiling point of water. Different titanium meshes (4 commercial ones and 4 modified ones) with various properties are experimentally examined by operating a cell with each mesh under different conditions. The average pore diameter, thickness, and contact angle of the anode current collector are controlled in the ranges of 10-35 μm, 0.2-0.3 mm, and 0-120°, respectively. These results showed that increasing the temperature from the conventional temperature of 80 °C to the boiling point could reduce both the open circuit voltage and the overvoltages to a large extent without notable dehydration of the membrane. These results also showed that decreasing the contact angle and the thickness suppresses the electrolysis overvoltage largely by decreasing the concentration overvoltage. The effect of the average pore diameter was not evident until the temperature reached the boiling point. Using operating conditions of 100 °C and 2 A/cm2, the electrolysis voltage is minimized to 1.69 V with a hydrophilic titanium mesh with an average pore diameter of 21 μm and a thickness of 0.2 mm.

  6. A study on the influence of operating circuit on the position of emission point of fluorescent lamp

    NASA Astrophysics Data System (ADS)

    Uetsuki, Tadao; Genba, Yuki; Kanda, Takashi

    2009-10-01

    High efficiency fluorescent lamp systems driven by high frequency are very popular for general lighting. Therefore it is very beneficial to be able to predict the lamp's life before the lamp dying, because people can buy a new lamp just before the lamp dying and need not have stocks. In order to judge the lifetime of a lamp it is very useful to know where the emission point is on the electrode filament. With regard to a method for grasping the emission point, it has been reported that the distance from the emission point to the end of the filament can be calculated by measuring the voltage across the filament and the currents flowing in both ends of the filament. The lamp's life can be predicted by grasping the movement of the emission point with operating time. Therefore it is very important to confirm whether the movement of the emission point changes or not when the operating circuit is changed. The authors investigated the difference in the way the emission points moved for two lamp systems which are very popular. One system had an electronic ballast having an auxiliary power source for the heating cathode. Another system had an electronic ballast with no power source, but with a capacitor connected to the lamp in parallel. In this presentation these measurement results will be reported.

  7. Optimal Design and Operation of Helium Refrigeration Systems Using the Ganni Cycle

    NASA Astrophysics Data System (ADS)

    Ganni, V.; Knudsen, P.

    2010-04-01

    The constant pressure ratio process, as implemented in the floating pressure—Ganni cycle, is a new variation to prior cryogenic refrigeration and liquefaction cycle designs that allows for optimal operation and design of helium refrigeration systems. This cycle is based upon the traditional equipment used for helium refrigeration system designs, i.e., constant volume displacement compression and critical flow expansion devices. It takes advantage of the fact that for a given load, the expander sets the compressor discharge pressure and the compressor sets its own suction pressure. This cycle not only provides an essentially constant system Carnot efficiency over a wide load range, but invalidates the traditional philosophy that the (`TS') design condition is the optimal operating condition for a given load using the as-built hardware. As such, the Floating Pressure-Ganni Cycle is a solution to reduce the energy consumption while increasing the reliability, flexibility and stability of these systems over a wide operating range and different operating modes and is applicable to most of the existing plants. This paper explains the basic theory behind this cycle operation and contrasts it to the traditional operational philosophies presently used.

  8. OPTIMAL DESIGN AND OPERATION OF HELIUM REFRIGERATION SYSTEMS USING THE GANNI CYCLE

    SciTech Connect

    Venkatarao Ganni, Peter Knudsen

    2010-04-01

    The constant pressure ratio process, as implemented in the floating pressure - Ganni cycle, is a new variation to prior cryogenic refrigeration and liquefaction cycle designs that allows for optimal operation and design of helium refrigeration systems. This cycle is based upon the traditional equipment used for helium refrigeration system designs, i.e., constant volume displacement compression and critical flow expansion devices. It takes advantage of the fact that for a given load, the expander sets the compressor discharge pressure and the compressor sets its own suction pressure. This cycle not only provides an essentially constant system Carnot efficiency over a wide load range, but invalidates the traditional philosophy that the (‘TS’) design condition is the optimal operating condition for a given load using the as-built hardware. As such, the Floating Pressure- Ganni Cycle is a solution to reduce the energy consumption while increasing the reliability, flexibility and stability of these systems over a wide operating range and different operating modes and is applicable to most of the existing plants. This paper explains the basic theory behind this cycle operation and contrasts it to the traditional operational philosophies presently used.

  9. Solving optimum operation of single pump unit problem with ant colony optimization (ACO) algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Liu, C.

    2012-11-01

    For pumping stations, effective scheduling of daily pump operations, obtained by solving the optimum operation problem, is one of the greatest potential sources of energy cost savings; however, traditional optimization methods have difficulty with this problem because of the multimodality of the solution region. In this paper, an ACO model for the optimum operation of a single pump unit is proposed, and a solution method based on ant searching is presented by rationally setting the objective function and constraints. A weighted directed graph is constructed, feasible solutions are found by the iterative searching of artificial ants, and the optimal solution is then obtained by applying the state-transition rule and pheromone updating. An example calculation was conducted and the minimum cost was found to be 4.9979. The result of the ant colony algorithm was compared with the results from dynamic programming and an evolutionary solver in commercial software under the same discretization. The ACO result is better and the computing time is shorter, which indicates that the ACO algorithm can provide high application value for the optimal operation of pumping stations and related fields.

  10. Optimal Reservoir Operation for Hydropower Generation using Non-linear Programming Model

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Jothiprakash, V.

    2012-05-01

    Hydropower generation is one of the vital components of reservoir operation, especially for a large multi-purpose reservoir. Deriving optimal operational rules for such a large multi-purpose reservoir serving various purposes like irrigation, hydropower, and flood control is complex because of the large dimension of the problem, and the complexity is greater if hydropower production is not merely incidental. Thus optimizing the operations of a reservoir serving various purposes requires a systematic study. In the present study, the operations of one such large multi-purpose reservoir, the Koyna reservoir, are optimized to maximize hydropower production subject to satisfying the irrigation demands, using a non-linear programming model. The hydropower production from the reservoir is analysed for three different dependable inflow conditions, representing wet, normal, and dry years. For each dependable inflow condition, various scenarios have been analyzed based on the constraints on the releases, and the results are compared. The annual power production, combined monthly power production from all the powerhouses, end-of-month storage levels, evaporation losses, and surplus are discussed. From the different scenarios, it is observed that more hydropower can be generated for the various dependable inflow conditions if the restrictions on releases are slightly relaxed. The study shows that Koyna dam has the potential to generate more hydropower.
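
    A simplified sketch in the spirit of the study (not the Koyna model): monthly powerhouse releases are chosen with a non-linear programming solver to maximize hydropower while meeting a fixed irrigation demand and storage limits; inflows, coefficients, and limits are illustrative assumptions.

```python
# Toy non-linear program for monthly reservoir releases (illustrative data only).
import numpy as np
from scipy.optimize import minimize

inflow = np.array([20, 15, 10, 8, 5, 60, 150, 180, 120, 70, 40, 25], float)  # Mm3/month
irrigation = np.full(12, 12.0)            # Mm3/month firm irrigation demand
s0, s_min, s_max, r_max = 400.0, 150.0, 800.0, 90.0

def storage(release):
    return s0 + np.cumsum(inflow - irrigation - release)

def neg_energy(release):
    s = storage(release)
    head = 60.0 + 0.05 * s                # head assumed to rise with storage
    return -np.sum(9.81 * 0.9 * release * head) / 1e3   # negative energy (arbitrary units)

cons = [{"type": "ineq", "fun": lambda r: storage(r) - s_min},
        {"type": "ineq", "fun": lambda r: s_max - storage(r)}]
res = minimize(neg_energy, x0=np.full(12, 20.0), method="SLSQP",
               bounds=[(0.0, r_max)] * 12, constraints=cons)

print("monthly releases:", np.round(res.x, 1))
print("annual energy (arb. units):", round(-res.fun, 1))
```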

  11. Online Optimization Method for Operation of Generators in a Micro Grid

    NASA Astrophysics Data System (ADS)

    Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi

    Recently, many studies and developments concerning distributed generators such as photovoltaic systems, wind turbines, and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the microgrid, which consists of several distributed generators, loads, and storage batteries, is expected to be one of the new operating frameworks for distributed generation. However, since sharp load fluctuations occur in a microgrid because of its smaller capacity compared with a conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. That is, it is necessary to improve the precision of microgrid operation by observing load fluctuations and correcting the start-stop schedule and output of the generators online. However, it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down, and output of each generator in a microgrid is a mixed-integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a microgrid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, all unit commitment patterns that satisfy the minimum up-time and minimum down-time constraints are first enumerated; the optimal schedule and output of the generators are then determined under the remaining operational constraints using PSO. A numerical simulation is carried out for a microgrid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.

  12. Optimizing Canal Structure Operation Using Meta-heuristic Algorithms in the Treasure Valley, Idaho

    NASA Astrophysics Data System (ADS)

    Hernandez, J.; Ha, W.; Campbell, A.

    2012-12-01

    A computer program previously shown to produce optimal operational solutions for open-channel irrigation conveyance and distribution networks with synthetic data was tested with real-world data. Data gathered from databases and the field by the Boise Project, Idaho, provided input to the hydraulic model for the physical characteristics of the conveyance system. We selected three reaches of the Deer Flat Low Line in the Treasure Valley for optimizing actual gate operations. The 59.1 km of canal, with a maximum capacity of 34 m3/s, irrigates mainly corn, wheat, sugar-beet, and potato crops. The computer model uses an accuracy-based learning classifier system (XCS) with an embedded genetic algorithm to produce optimal rules for gate structure operation in irrigation canals. Rules are generated through the exploration and exploitation of the genetic algorithm population, with the support of RootCanal, an unsteady-state hydraulic simulation model. The objective function was set to satisfy variable demand along the three reaches while minimizing water level deviations from target. All canal gate structures operate simultaneously while maintaining water depth near target values during variable-demand periods, with a hydraulically stabilized system. It is noteworthy that even this simple three-reach problem requires several thousand simulations, run over continuous days, to find plausible solutions. The model is currently simulating the Deer Flat Low Line Canal in Caldwell, Idaho with promising results. The population evolution is measured by a fitness parameter, which shows that canal structure operations generated by the model are improving toward plausible solutions. This research is one step forward in optimizing the way we use and manage water resources. Relying on management practices of the past will no longer work in a world that is impacted by global climate variability.

  13. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the collective needs of policy makers, scientific communities, and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture serves as a surrogate for global operational climate monitoring architecture(s). These rule-based tools provide valuable insight into global climate architectures through the comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail with dialogue on appropriate rule-based valuations. These optimization tools suggest advantages of global collaboration and elicit responses from the audience and climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study. This reflects work from the SPIE RS Conferences in 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, there has been a complete reversal: should agencies consider disaggregation as the answer? We'll discuss what some academic research suggests. Second, using the GCOS requirements for earth climate observations via ECVs (essential climate variables), many collected from space-based sensors; and accepting their

  14. Information Points and Optimal Discharging Speed: Effects on the Saturation Flow at Signalized Intersections

    ERIC Educational Resources Information Center

    Gao, Lijun

    2015-01-01

    An information point was defined in this study as any object, structure, or activity located outside of a traveling vehicle that could potentially attract the visual attention of the driver. Saturation flow rates were studied for three pairs of signalized intersections in Toledo, Ohio. Each pair of intersections consisted of one intersection with…

  15. Towards an Optimal Interest Point Detector for Measurements in Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Zukal, Martin; Beneš, Radek; Číka, Petr; Říha, Kamil

    2013-12-01

    This paper focuses on the comparison of different interest point detectors and their utilization for measurements in ultrasound (US) images. Certain medical examinations are based on speckle tracking which strongly relies on features that can be reliably tracked frame to frame. Only significant features (interest points) resistant to noise and brightness changes within US images are suitable for accurate long-lasting tracking. We compare three interest point detectors - Harris-Laplace, Difference of Gaussian (DoG) and Fast Hessian - and identify the most suitable one for use in US images on the basis of an objective criterion. Repeatability rate is assumed to be an objective quality measure for comparison. We have measured repeatability in images corrupted by different types of noise (speckle noise, Gaussian noise) and for changes in brightness. The Harris-Laplace detector outperformed its competitors and seems to be a sound option when choosing a suitable interest point detector for US images. However, it has to be noted that Fast Hessian and DoG detectors achieved better results in terms of processing speed.
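
    A rough sketch of a repeatability measurement for a DoG-type detector (OpenCV's SIFT detector) under additive Gaussian noise, in the spirit of the comparison above; the synthetic speckle-like image, noise level, and pixel tolerance are illustrative assumptions, and OpenCV with SIFT support is assumed to be available.

```python
# Repeatability of keypoint detections between a clean and a noisy image (sketch).
import numpy as np
import cv2

rng = np.random.default_rng(0)
base = (rng.random((256, 256)) * 255).astype(np.uint8)
img = cv2.GaussianBlur(base, (0, 0), 2)                           # speckle-like texture
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)           # stretch contrast
noisy = np.clip(img.astype(float) + rng.normal(0, 10, img.shape), 0, 255).astype(np.uint8)

detector = cv2.SIFT_create()
kp_ref = detector.detect(img, None)
kp_noisy = detector.detect(noisy, None)

ref_pts = np.array([kp.pt for kp in kp_ref])
noisy_pts = np.array([kp.pt for kp in kp_noisy])
tol = 2.0                                                          # px tolerance
if len(ref_pts) and len(noisy_pts):
    repeated = sum(np.min(np.linalg.norm(noisy_pts - p, axis=1)) < tol for p in ref_pts)
    repeatability = repeated / len(ref_pts)
else:
    repeatability = 0.0
print(f"{len(kp_ref)} reference points, repeatability under noise: {repeatability:.2f}")
```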

  16. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  17. Determining the optimal operator allocation in SME's food manufacturing company using computer simulation and data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Rahman, Asmahanim Ab

    2014-09-01

    In a labor-intensive manufacturing system, optimal operator allocation is one of the most important decisions in determining the efficiency of the system. In this paper, ten operator allocation alternatives are identified using the computer simulation software ARENA. Two inputs (average wait time and average cycle time) and two outputs (average operator utilization and total packet value) are generated for each alternative. Four Data Envelopment Analysis (DEA) models, CCR, BCC, MCDEA, and AHP/DEA, are used to determine the optimal operator allocation at an SME food manufacturing company in Selangor. The results of all four DEA models showed that the optimal allocation is six operators at the peeling process, three operators at the washing and slicing process, three operators at the frying process, and two operators at the packaging process.
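
    To make the DEA step concrete, the sketch below solves the input-oriented CCR envelopment model as a linear program for a handful of hypothetical alternatives with two inputs (wait time, cycle time) and two outputs (utilization, packets); the data matrix is made up, and the CCR model is only one of the four models used in the paper.

```python
# Input-oriented CCR DEA model solved per DMU as a linear program (illustrative data).
import numpy as np
from scipy.optimize import linprog

# rows = operator-allocation alternatives (DMUs); columns = [wait time, cycle time]
X = np.array([[4.2, 11.0], [3.8, 10.5], [5.1, 12.3], [3.5, 9.8], [4.9, 11.9]])
# columns = [average utilization, total packets]
Y = np.array([[0.78, 950], [0.81, 990], [0.70, 900], [0.85, 1020], [0.72, 910]])

n, m = X.shape
s = Y.shape[1]
scores = []
for o in range(n):
    # variables: [theta, lambda_1..lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[o][:, None], X.T])          # sum_j lambda_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])      # sum_j lambda_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    scores.append(res.x[0])

for o, score in enumerate(scores):
    print(f"alternative {o + 1}: CCR efficiency {score:.3f}")
```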

  18. Better Redd than Dead: Optimizing Reservoir Operations for Wild Fish Survival During Drought

    NASA Astrophysics Data System (ADS)

    Adams, L. E.; Lund, J. R.; Quiñones, R.

    2014-12-01

    Extreme droughts are difficult to predict and may incur large economic and ecological costs. Dam operations in drought usually consider minimizing economic costs. However, dam operations also offer an opportunity to increase wild fish survival under difficult conditions. Here, we develop a probabilistic optimization approach to developing reservoir release schedules to maximize fish survival in regulated rivers. A case study applies the approach to wild Fall-run Chinook Salmon below Folsom Dam on California's American River. Our results indicate that releasing more water early in the drought will, on average, save more wild fish over the long term.

  19. How does network design constrain optimal operation of intermittent water supply?

    NASA Astrophysics Data System (ADS)

    Lieb, Anna; Wilkening, Jon; Rycroft, Chris

    2015-11-01

    Urban water distribution systems do not always supply water continuously or reliably. As pipes fill and empty, pressure transients may contribute to degraded infrastructure and poor water quality. To help understand and manage this undesirable side effect of intermittent water supply--a phenomenon affecting hundreds of millions of people in cities around the world--we study the relative contributions of fixed versus dynamic properties of the network. Using a dynamical model of unsteady transition pipe flow, we study how different elements of network design, such as network geometry, pipe material, and pipe slope, contribute to undesirable pressure transients. Using an optimization framework, we then investigate to what extent network operation decisions such as supply timing and inflow rate may mitigate these effects. We characterize some aspects of network design that make them more or less amenable to operational optimization.

  20. Optimization of automotive Rankine cycle waste heat recovery under various engine operating condition

    NASA Astrophysics Data System (ADS)

    Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi

    2017-02-01

    An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle here is studied as a waste heat recovery system that uses the engine exhaust gases as its heat source. The engine exhaust gas parameters (temperature, mass flow, and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. Previously, the engine simulation model was validated and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were numerically estimated by means of a simulation code in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working fluid mass flow and evaporation pressure for each heat source condition. Thus, the optimal Rankine cycle performance was obtained over the engine operating map.
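
    A rough sketch of the kind of single-operating-point Rankine cycle evaluation described above, assuming the CoolProp property library is available; the working fluid, pressures, component efficiencies, and mass flow are illustrative assumptions, not the paper's values.

```python
# Simplified Rankine cycle power and efficiency for one operating point (sketch).
from CoolProp.CoolProp import PropsSI

fluid = "R245fa"
p_evap, p_cond = 25e5, 2.5e5          # Pa, evaporation / condensation pressures
eta_pump, eta_exp = 0.7, 0.65         # isentropic efficiencies
mdot_wf = 0.05                        # kg/s working-fluid mass flow (decision variable)

h1 = PropsSI("H", "P", p_cond, "Q", 0, fluid)          # saturated liquid at condenser
rho1 = PropsSI("D", "P", p_cond, "Q", 0, fluid)
w_pump = (p_evap - p_cond) / (rho1 * eta_pump)         # J/kg pump work
h2 = h1 + w_pump

h3 = PropsSI("H", "P", p_evap, "Q", 1, fluid)          # saturated vapour at evaporator
s3 = PropsSI("S", "P", p_evap, "Q", 1, fluid)
h4s = PropsSI("H", "P", p_cond, "S", s3, fluid)        # isentropic expansion end state
w_exp = eta_exp * (h3 - h4s)                           # J/kg expander work

q_in = h3 - h2
net_power = mdot_wf * (w_exp - w_pump)                 # W
print(f"net power: {net_power:.0f} W, cycle efficiency: {(w_exp - w_pump) / q_in:.1%}")
```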

  1. The use of experimental design to find the operating maximum power point of PEM fuel cells

    SciTech Connect

    Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria; Olteanu, Valentin; Pitorac, Cristina; Drugan, Elena

    2015-03-10

    Proton Exchange Membrane (PEM) fuel cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a PEM fuel cell mathematical model based on the Design of Experiments methodology is described. Design of Experiments provides a very efficient methodology for obtaining a mathematical model of the studied multivariable system with only a few experiments. The obtained results can be used for optimization and control of PEM fuel cell systems.
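
    A minimal sketch of the Design of Experiments idea: fit a linear-plus-interaction model of fuel cell power from a two-factor, two-level factorial design by least squares. The factor names, coded levels, and responses are made-up illustrations, not the authors' experimental data.

```python
# Two-level factorial model fit for fuel-cell power (illustrative data only).
import numpy as np

# coded levels (-1/+1) for, e.g., operating temperature and air stoichiometry
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], float)
power = np.array([41.2, 46.8, 44.1, 52.3])            # measured power (W), made up

X = np.column_stack([np.ones(4), design[:, 0], design[:, 1], design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
b0, b1, b2, b12 = coef
print(f"power ≈ {b0:.1f} + {b1:.1f}*T + {b2:.1f}*lambda_air + {b12:.1f}*T*lambda_air (coded units)")

# Predicted power at the high-temperature, high-stoichiometry corner of the design
print("prediction at (+1, +1):", round(b0 + b1 + b2 + b12, 1), "W")
```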

  2. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    PubMed Central

    Mora-Pascual, Jerónimo M.; García-García, Alberto; Martínez-González, Pablo

    2016-01-01

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration and has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some of its applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low computational cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results show that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results. PMID:27768714
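
    The sketch below illustrates the matching step with interchangeable point-to-point metrics (Euclidean, Manhattan, Chebyshev) on synthetic data; it is a brute-force nearest-neighbour search written for clarity, not the optimized implementation evaluated in the paper.

    ```python
    # Hedged sketch: ICP matching step with swappable distance metrics.
    import numpy as np

    def nearest_indices(src, dst, metric="euclidean"):
        diff = src[:, None, :] - dst[None, :, :]          # (Ns, Nd, 3) pairwise differences
        if metric == "euclidean":
            d = np.sqrt((diff ** 2).sum(-1))
        elif metric == "manhattan":
            d = np.abs(diff).sum(-1)
        elif metric == "chebyshev":
            d = np.abs(diff).max(-1)
        else:
            raise ValueError(metric)
        return d.argmin(axis=1)                           # index of closest model point

    rng = np.random.default_rng(0)
    model = rng.normal(size=(500, 3))
    scene = model + rng.normal(scale=0.01, size=model.shape)   # slightly perturbed copy
    for m in ("euclidean", "manhattan", "chebyshev"):
        idx = nearest_indices(scene, model, m)
        print(m, "fraction of correct matches:", (idx == np.arange(len(model))).mean())
    ```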

  3. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    PubMed

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-11-06

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams with data-centric annotations derived from our existing data warehouse.

  4. PLIO: a generic tool for real-time operational predictive optimal control of water networks.

    PubMed

    Cembrano, G; Quevedo, J; Puig, V; Pérez, R; Figueras, J; Verdejo, J M; Escaler, I; Ramón, G; Barnet, G; Rodríguez, P; Casas, M

    2011-01-01

    This paper presents a generic tool, named PLIO, that allows the implementation of real-time operational control of water networks. Control strategies are generated using predictive optimal control techniques. The tool supports flow management in a large water supply and distribution system including reservoirs, open-flow channels for water transport, water treatment plants, pressurized water pipe networks, tanks, flow/pressure control elements and a telemetry/telecontrol system. Predictive optimal control is used to generate flow control strategies from the sources to the consumer areas to meet future demands with appropriate pressure levels, optimizing operational goals such as network safety volumes and flow control stability. PLIO allows the user to build the network model graphically and then to automatically generate the model equations used by the predictive optimal controller. Additionally, PLIO can work off-line (in simulation) and on-line (in real-time mode). The case study of Santiago, Chile is presented to exemplify the control results obtained using PLIO off-line (in simulation).

  5. A survey of ground operations tools developed to simulate the pointing of space telescopes and the design for WISE

    NASA Technical Reports Server (NTRS)

    Fabinsky, Beth

    2006-01-01

    WISE, the Wide Field Infrared Survey Explorer, is scheduled for launch in June 2010. The mission operations system for WISE requires a software modeling tool to help plan, integrate and simulate all spacecraft pointing and verify that no attitude constraints are violated. In the course of developing the requirements for this tool, an investigation was conducted into the design of similar tools for other space-based telescopes. This paper summarizes the ground software and processes used to plan and validate pointing for a selection of space telescopes; with this information as background, the design for WISE is presented.

  6. A Rapid Method for Optimizing Running Temperature of Electrophoresis through Repetitive On-Chip CE Operations

    PubMed Central

    Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo

    2011-01-01

    In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077

  7. Optimization of magnetic refrigerators by tuning the heat transfer medium and operating conditions

    NASA Astrophysics Data System (ADS)

    Ghahremani, Mohammadreza; Aslani, Amir; Bennett, Lawrence; Della Torre, Edward

    A new reciprocating Active Magnetic Regenerator (AMR) experimental device has been designed, built and tested to evaluate the effect of the system's parameters on AMR performance near room temperature. Gadolinium turnings were used as the refrigerant, silicon oil as the heat transfer medium, and a magnetic field of 1.3 T was cycled. This study focuses on single-stage AMR operating conditions that yield a higher temperature span near room temperature. Herein, the main objective is not to report the absolute maximum attainable temperature span seen in an AMR system, but rather to find the system's optimal operating conditions to reach that maximum span. The results of this work show that there are optimal values of operating frequency, heat transfer fluid flow rate, flow duration, and displaced volume ratio in an AMR system. It is expected that such optimization and the results provided herein will permit the future design and development of more efficient room-temperature magnetic refrigeration systems.

  8. Choosing the Optimal Trigger Point for Analysis of Movements after Stroke Based on Magnetoencephalographic Recordings

    PubMed Central

    Waldmann, Guido; Schauer, Michael; Woldag, Hartwig; Hummelsheim, Horst

    2010-01-01

    The aim of this study was to select the optimal procedure for analysing motor fields (MF) and motor evoked fields (MEF) measured from brain-injured patients. Behavioural pretests with patients showed that most of them cannot tolerate measurements longer than 30 minutes and that they prefer to move the hand with rather short breaks between movements. Therefore, we were unable to measure the motor field (MF) optimally. Furthermore, we planned to use MEF to monitor cortical plasticity in a motor rehabilitation procedure. Classically, MF analysis refers to rather long epochs around the movement onset (M-onset). We shortened the analysis epoch to a range from 1000 milliseconds before until 500 milliseconds after M-onset to fulfil the needs of the patients. Additionally, we recorded the muscular activity (EMG) with surface electrodes on the extensor carpi ulnaris and flexor carpi ulnaris muscles. Magnetoencephalographic (MEG) data were recorded from 9 healthy subjects, who executed brisk horizontal extension and flexion of the right wrist. Significantly higher MF dipole strength was found in data based on EMG-onset than in M-onset-based data. There was no difference in MEF I dipole strength between the two trigger latencies. In conclusion, we recommend averaging with respect to the EMG-onset for the analysis of both the MF and MEF components. PMID:20700420

  9. Optimization principle of operating parameters of heat exchanger by using CFD simulation

    NASA Astrophysics Data System (ADS)

    Mičieta, Jozef; Vondál, Jiří; Jandačka, Jozef; Lenhard, Richard

    2016-03-01

    The design of effective heat transfer devices and the minimization of costs are key goals in industry, important for both engineers and users due to the wide-scale use of heat exchangers. The traditional approach to design is based on an iterative process in which design parameters are gradually changed until a satisfactory solution is achieved. The design process of the heat exchanger is very dependent on the experience of the engineer, so the use of computational software is a major advantage in terms of time. Determination of the operating parameters of the heat exchanger and the subsequent estimation of operating costs have a major impact on the expected profitability of the device. On the one hand, there are the material and production costs, which are immediately reflected in the cost of the device. On the other hand, there are somewhat hidden costs related to the economic operation of the heat exchanger. The economic balance of operation significantly affects the technical solution and accompanies the design of the heat exchanger from its inception. It is therefore important not to underestimate the choice of operating parameters. The article describes an optimization procedure for choosing cost-effective operational parameters for a simple double pipe heat exchanger by using CFD software, and a subsequent proposal to modify its design for more economical operation.

  10. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
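
    A minimal sketch of the core idea, assuming synthetic data: inside one flattened image subsection, the optimal reference is the least-squares linear combination of the available reference frames. Frame counts and noise levels are arbitrary placeholders; this is not the published pipeline.

    ```python
    # Hedged sketch: least-squares combination of reference frames in one subsection.
    import numpy as np

    rng = np.random.default_rng(1)
    n_ref, npix = 20, 400                        # 20 reference frames, one flattened subsection
    refs = rng.normal(size=(n_ref, npix))        # reference-frame pixels in the subsection
    target = 0.6 * refs[3] - 0.2 * refs[7] + rng.normal(scale=0.05, size=npix)

    # Solve min_c || target - refs.T @ c ||^2 for the combination coefficients c
    coeffs, *_ = np.linalg.lstsq(refs.T, target, rcond=None)
    optimal_ref = refs.T @ coeffs
    residual = target - optimal_ref              # speckle-subtracted subsection
    print("residual rms:", residual.std().round(4))
    ```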

  11. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    NASA Astrophysics Data System (ADS)

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below those of near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found using minimization of the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell

  12. Partial difference operators on weighted graphs for image processing on surfaces and point clouds.

    PubMed

    Lozes, Francois; Elmoataz, Abderrahim; Lezoray, Olivier

    2014-09-01

    Partial difference equations (PDEs) and variational methods for image processing on Euclidean domains are very well established because they permit solving a large range of real computer vision problems. With the recent advent of many 3D sensors, there is a growing interest in transposing and solving PDEs on surfaces and point clouds. In this paper, we propose a simple method to solve such PDEs using the framework of PDEs on graphs. This latter approach enables us to transcribe, for surfaces and point clouds, many models and algorithms designed for image processing. To illustrate our proposal, three problems are considered: (1) p-Laplacian restoration and inpainting; (2) PDE-based mathematical morphology; and (3) active contour segmentation.
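
    As a rough illustration of the graph framework, the sketch below builds a Gaussian-weighted k-nearest-neighbour graph on a random point cloud and runs a few explicit graph-Laplacian (p = 2) smoothing steps on a noisy scalar signal; the weights, step size, and signal are illustrative choices, not the paper's formulation.

    ```python
    # Hedged sketch: signal smoothing on a point cloud via a weighted k-NN graph.
    import numpy as np

    rng = np.random.default_rng(2)
    pts = rng.uniform(size=(300, 3))                              # point cloud coordinates
    clean = pts[:, 0]                                             # smooth scalar field
    signal = clean + rng.normal(scale=0.1, size=len(pts))         # noisy observation

    # k-nearest-neighbour graph with Gaussian edge weights
    k, sigma = 8, 0.1
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]                     # skip self (index 0)
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(pts)), k)
    W[rows, nbrs.ravel()] = np.exp(-d2[rows, nbrs.ravel()] / sigma**2)
    W = np.maximum(W, W.T)                                        # symmetrise
    deg = W.sum(1).clip(min=1e-12)

    # explicit graph-Laplacian (p = 2) diffusion steps
    f = signal.copy()
    for _ in range(50):
        f -= 0.1 * (deg * f - W @ f) / deg

    rms = lambda e: float(np.sqrt((e ** 2).mean()))
    print("rms error before/after:", round(rms(signal - clean), 3), round(rms(f - clean), 3))
    ```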

  13. Space tug point design study. Volume 2: Operations, performance and requirements

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design study to determine the configuration and characteristics of a space tug was conducted. Among the subjects analyzed in the study are: (1) flight and ground operations, (2) vehicle flight performance and performance enhancement techniques, (3) flight requirements, (4) basic design criteria, and (5) functional and procedural interface requirements between the tug and other systems.

  14. Particulate emissions calculations from fall tillage operations using point and remote sensors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Preparation of soil for agricultural crops produces aerosols that may significantly contribute to seasonal atmospheric loadings of particulate matter (PM). Efforts to reduce PM emissions from tillage operations through a variety of conservation management practices (CMP) have been made but the reduc...

  15. 78 FR 21064 - Drawbridge Operation Regulations; York River, between Yorktown and Gloucester Point, VA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-09

    ... draw of the US 17/George P. Coleman Memorial Swing Bridge across the York River, at mile 7.0, between.... Coleman Memorial Swing Bridge. This temporary deviation allows the drawbridge to remain in the closed- to... Transportation, who owns and operates this swing bridge, has requested a temporary deviation from the...

  16. The QACITS pointing sensor: from theory to on-sky operation on Keck/NIRC2

    NASA Astrophysics Data System (ADS)

    Huby, Elsa; Absil, Olivier; Mawet, Dimitri; Baudoz, Pierre; Femenía Castellà, Bruno; Bottom, Michael; Ngo, Henry; Serabyn, Eugene

    2016-07-01

    Small inner working angle coronagraphs are essential to benefit from the full potential of large and future extremely large ground-based telescopes, especially in the context of the detection and characterization of exoplanets. Among existing solutions, the vortex coronagraph stands as one of the most effective and promising options. However, for a focal-plane coronagraph, a small inner working angle necessarily comes at the cost of a high sensitivity to pointing errors. This is the reason why a pointing control system is imperative to stabilize the star on the vortex center against pointing drifts due to mechanical flexures, which generally occur during observation due, for instance, to temperature and/or gravity variations. We have therefore developed a technique called QACITS (Quadrant Analysis of Coronagraphic Images for Tip-tilt Sensing), which is based on the analysis of the coronagraphic image shape to infer the amount of pointing error. It has been shown that the flux gradient in the image is directly related to the amount of tip-tilt affecting the beam. The main advantage of this technique is that it does not require any additional setup and can thus be easily implemented on all current facilities equipped with a vortex phase mask. In this paper, we focus on the implementation of the QACITS sensor at Keck/NIRC2, where an L-band AGPM was recently commissioned (June and October 2015), successfully validating the QACITS estimator in the case of a centrally obstructed pupil. The algorithm has been designed to be easily handled by any user observing in vortex mode, which has been available for science in shared-risk mode since 2016B.
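
    The sketch below illustrates the flux-asymmetry idea behind quadrant-style tip-tilt sensing on a crude simulated image: a small tip-tilt offset shifts the residual flux, and the normalized asymmetry between image halves is roughly proportional to it. The Gaussian "leakage" image and scaling are placeholders, not the published QACITS estimator or the NIRC2 implementation.

    ```python
    # Hedged sketch: flux-asymmetry estimate of tip-tilt from a fake coronagraphic image.
    import numpy as np

    def fake_coro_image(tip, tilt, n=64):
        """Toy post-coronagraph residual: a Gaussian blob displaced by the tip-tilt."""
        y, x = np.mgrid[:n, :n] - n / 2
        return np.exp(-((x - 20 * tip) ** 2 + (y - 20 * tilt) ** 2) / (2 * 8.0 ** 2))

    def asymmetry(img):
        n = img.shape[0] // 2
        total = img.sum()
        dx = (img[:, n:].sum() - img[:, :n].sum()) / total   # left/right flux asymmetry
        dy = (img[n:, :].sum() - img[:n, :].sum()) / total   # bottom/top flux asymmetry
        return dx, dy

    img = fake_coro_image(tip=0.05, tilt=-0.02)
    print("asymmetry signal (roughly proportional to tip-tilt):", asymmetry(img))
    ```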

  17. Temperature Effects of Point Sources, Riparian Shading, and Dam Operations on the Willamette River, Oregon

    USGS Publications Warehouse

    Rounds, Stewart A.

    2007-01-01

    Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate

  18. Antiemetic efficacy of capsicum plaster on acupuncture points in patients undergoing thyroid operation

    PubMed Central

    Koo, Min Seok; Lee, Hee-Jong; Jeong, Ji Seon; Lee, Jung-Won

    2013-01-01

    Background Postoperative nausea and vomiting (PONV) occurs in up to 63-84% of patients after thyroid surgery. This study aims to assess the effects of using a capsicum plaster to reduce PONV after thyroid surgery at either the Chinese acupuncture point (acupoint) Pericardium 6 (P6) or the Korean hand acupuncture point K-D2. Methods One hundred eighty-four patients who underwent thyroid surgery were randomized into four groups (n = 46 each): control group = inactive tape at P6 acupoints and on both shoulders as a nonacupoint; P6 group = capsicum plaster at P6 points and inactive tape on both shoulders; K-D2 group = capsicum plaster at K-D2 acupoints and inactive tape on both shoulders; Sham group = capsicum plaster on both shoulders and inactive tape at P6 acupoints. The capsicum plaster was applied before the induction of anesthesia and removed 8 hr after surgery. Results The incidence and severity of nausea and vomiting and the need for rescue antiemetics were decreased in the patients in the P6 and K-D2 groups compared to the patients in the control and sham groups (P < 0.001). The patients in the P6 and K-D2 groups also reported that they were more satisfied (P < 0.05). Conclusions We conclude that the capsicum plaster at the P6 and K-D2 acupoints was a promising antiemetic method for patients undergoing thyroid surgery. PMID:24427460

  19. Requests to an optimal process and plant management from a production point of view

    NASA Astrophysics Data System (ADS)

    Heller, Matthias; Hsu, Jack; Terhuerne, Joerg

    2000-10-01

    Well-executed designs and well-equipped machines are only half the way to reproducible and stable quality in coating production. To achieve this aim, it is also necessary to have a comprehensive production management system at one's disposal, including recipe management, machinery management and quality assurance. Most production errors and rejects are caused by incorrect handling, which can be traced back to a lack of up-to-date information, or by errors of measurement systems and failures of equipment during a coating process. So, the very simple rules one has to observe are: 1) Transfer all necessary information about the process to the operator and to the machine, and require the operator to read this information, by using an online checklist while charging a batch. 2) Never trust a single measurement result from your plant equipment without a cross-check against independently generated data; redundancy is the magic word for process assurance. 3) Check the status of your equipment as often as possible; integrate a maintenance plan into your plant control and let the machine record all parameters that are relevant for wearing parts or media. This paper shows how to organize recipe parameters, transfer information to the plant and the operator, apply methods for redundancy and cross-checks of parameters, and gives an example of a complex coating system based on an LH-A700QE.

  20. Optimal fine pointing control of a large space telescope using an Annular Momentum Control Device

    NASA Technical Reports Server (NTRS)

    Nadkarni, A. A.; Joshi, S. M.; Groom, N. J.

    1977-01-01

    This paper discusses the application of an Annular Momentum Control Device (AMCD) to fine pointing control of a large space telescope (LST). The AMCD represents a new development in the field of momentum storage devices. A linearized mathematical model is developed for the AMCD/LST system, including the magnetic suspension actuators. Two approaches to control system design are considered. The first approach uses a stochastic linear-quadratic Gaussian controller which utilizes feedback of all states. The second approach considers a more practical control system design in which the axial and radial loops are designed independently.

  1. Defining optimal DEM resolutions and point densities for modelling hydrologically sensitive areas in agricultural catchments dominated by microtopography

    NASA Astrophysics Data System (ADS)

    Thomas, I. A.; Jordan, P.; Shine, O.; Fenton, O.; Mellander, P.-E.; Dunlop, P.; Murphy, P. N. C.

    2017-02-01

    Defining critical source areas (CSAs) of diffuse pollution in agricultural catchments depends upon the accurate delineation of hydrologically sensitive areas (HSAs) at highest risk of generating surface runoff pathways. In topographically complex landscapes, this delineation is constrained by digital elevation model (DEM) resolution and the influence of microtopographic features. To address this, optimal DEM resolutions and point densities for spatially modelling HSAs were investigated, for onward use in delineating CSAs. The surface runoff framework was modelled using the Topographic Wetness Index (TWI) and maps were derived from 0.25 m LiDAR DEMs (40 bare-earth points m-2), resampled 1 m and 2 m LiDAR DEMs, and a radar generated 5 m DEM. Furthermore, the resampled 1 m and 2 m LiDAR DEMs were regenerated with reduced bare-earth point densities (5, 2, 1, 0.5, 0.25 and 0.125 points m-2) to analyse effects on elevation accuracy and important microtopographic features. Results were compared to surface runoff field observations in two 10 km2 agricultural catchments for evaluation. Analysis showed that the accuracy of modelled HSAs using different thresholds (5%, 10% and 15% of the catchment area with the highest TWI values) was much higher using LiDAR data compared to the 5 m DEM (70-100% and 10-84%, respectively). This was attributed to the DEM capturing microtopographic features such as hedgerow banks, roads, tramlines and open agricultural drains, which acted as topographic barriers or channels that diverted runoff away from the hillslope scale flow direction. Furthermore, the identification of 'breakthrough' and 'delivery' points along runoff pathways where runoff and mobilised pollutants could be potentially transported between fields or delivered to the drainage channel network was much higher using LiDAR data compared to the 5 m DEM (75-100% and 0-100%, respectively). Optimal DEM resolutions of 1-2 m were identified for modelling HSAs, which balanced the need
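
    For reference, the Topographic Wetness Index used above is TWI = ln(a / tan β), where a is the specific upslope contributing area and β the local slope. The sketch below computes it on a toy DEM with a placeholder flow-accumulation grid; a real HSA analysis would derive both from the LiDAR DEMs with a proper flow-routing algorithm.

    ```python
    # Hedged sketch: TWI = ln(a / tan(beta)) on a synthetic DEM with a placeholder
    # flow-accumulation grid (real work would derive both from the LiDAR data).
    import numpy as np

    rng = np.random.default_rng(3)
    cell = 1.0                                                  # grid resolution [m]
    dem = np.add.outer(np.linspace(10.0, 0.0, 50), np.zeros(50))
    dem += rng.normal(0.0, 0.05, dem.shape)                     # gentle slope plus microtopographic noise
    flow_acc = rng.integers(1, 500, size=dem.shape)             # placeholder upslope cell counts

    gy, gx = np.gradient(dem, cell)
    tan_beta = np.sqrt(gx ** 2 + gy ** 2)                       # local slope
    a = flow_acc * cell                                         # specific catchment area [m]
    twi = np.log(a / np.maximum(tan_beta, 1e-3))                # higher TWI -> wetter, runoff-prone
    print("TWI 5th/95th percentiles:", np.percentile(twi, [5, 95]).round(2))
    ```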

  2. Optimizing the Point-Source Emission Rates and Geometries of Pheromone Mating Disruption Mega-Dispensers.

    PubMed

    Baker, T C; Myrick, A J; Park, K C

    2016-09-01

    High-emission-rate "mega-dispensers" have come into increasing use for sex pheromone mating disruption of moth pests over the past two decades. These commercially available dispensers successfully suppress mating and reduce crop damage when they are deployed at very low to moderate densities, ranging from 1 to 5/ha to 100-1000/ha, depending on the dispenser types and their corresponding pheromone emission rates. Whereas traditionally the emission rates for successful commercial mating disruption formulations have been measured in terms of amounts (usually milligram) emitted by the disruptant application per acre or hectare per day, we suggest that emission rates should be measured on a per-dispenser per-minute basis. In addition we suggest, because of our knowledge concerning upwind flight of male moths being dependent on contact with pheromone plume strands, that more attention needs to be paid to optimizing the flux within plume strands that shear off of any mating disruption dispenser's surface. By measuring the emission rates on a per-minute basis and measuring the plume strand concentrations emanating from the dispensers, it may help improve the ability of the dispensers to initiate upwind flight from males and initiate their habituation to the pheromone farther downwind than can otherwise be achieved. In addition, by optimizing plume strand flux by paying attention to the geometries and compactness of mating disruption mega-dispensers may help reduce the cost of mega-dispenser disruption formulations by improving their behavioral efficacy while maintaining field longevity and using lower loading rates per dispenser.

  3. Point-Counterpoint: What Is the Optimal Approach for Detection of Clostridium difficile Infection?

    PubMed

    Fang, Ferric C; Polage, Christopher R; Wilcox, Mark H

    2017-03-01

    INTRODUCTIONIn 2010, we published an initial Point-Counterpoint on the laboratory diagnosis of Clostridium difficile infection (CDI). At that time, nucleic acid amplification tests (NAATs) were just becoming commercially available, and the idea of algorithmic approaches to CDI was being explored. Now, there are numerous NAATs in the marketplace, and based on recent proficiency test surveys, they have become the predominant method used for CDI diagnosis in the United States. At the same time, there is a body of literature that suggests that NAATs lack clinical specificity and thus inflate CDI rates. Hospital administrators are taking note of institutional CDI rates because they are publicly reported. They have become an important metric impacting hospital safety ratings and value-based purchasing; hospitals may have millions of dollars of reimbursement at risk. In this Point-Counterpoint using a frequently asked question approach, Ferric Fang of the University of Washington, who has been a consistent advocate for a NAAT-only approach for CDI diagnosis, will discuss the value of a NAAT-only approach, while Christopher Polage of the University of California Davis and Mark Wilcox of Leeds University, Leeds, United Kingdom, each of whom has recently written important articles on the value of toxin detection in the diagnosis, will discuss the impact of toxin detection in CDI diagnosis.

  4. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. This approach aims to better reconcile riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management at the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that meet both human and ecosystem needs. The methodology is attractive to water resources managers because the wide spread of Pareto-front (optimal) solutions allows decision makers to easily determine the best compromise in the trade-off between reservoir operational strategies for human and ecosystem needs.
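
    A minimal sketch of the non-dominated (Pareto) filtering at the core of NSGA-II, applied to random candidate release schedules scored on two toy objectives (a fish-diversity proxy to maximize and a supply-shortage proxy to minimize); both objective functions are invented and unrelated to the Shihmen Reservoir model.

    ```python
    # Hedged sketch: Pareto filtering of candidate reservoir release schedules.
    import numpy as np

    rng = np.random.default_rng(4)
    candidates = rng.uniform(0.0, 1.0, size=(200, 12))     # 200 schedules x 12 months (release fractions)

    def objectives(x):
        diversity = -np.abs(x - 0.4).mean()                # toy proxy: releases near 40% favour fish
        shortage = np.maximum(0.7 - x, 0.0).mean()         # toy proxy: deficit below 70% of demand
        return diversity, shortage                         # maximize the first, minimize the second

    vals = [objectives(c) for c in candidates]

    def dominates(a, b):
        return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

    pareto = [i for i, vi in enumerate(vals)
              if not any(dominates(vj, vi) for j, vj in enumerate(vals) if j != i)]
    print("non-dominated schedules:", len(pareto), "of", len(candidates))
    ```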

  5. Hazard Analysis and Critical Control Points among Chinese Food Business Operators

    PubMed Central

    Amadei, Paolo; Masotti, Gianfranco; Condoleo, Roberto; Guidi, Alessandra

    2014-01-01

    The purpose of the present paper is to highlight some critical situations that emerged during the implementation of long-term projects, locally managed by Prevention Services, to control some manufacturing companies in Rome and Prato, Central Italy. In particular, some critical issues in the application of self-control in marketing and catering businesses run by Chinese operators are underlined. The study showed serious flaws in the preparation and control of manuals for good hygiene practice, and in the participation of consultants among food business operators (FBOs) in the control of the procedures. Only after regular actions by the Prevention Services were satisfactory results obtained. This confirms the need to have qualified and expert partners able to act promptly among FBOs and to give adequate support to the authorities in charge in order to guarantee food safety. PMID:27800356

  6. Parameters Optimization for Operational Storm Surge/Tide Forecast Model using a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, W.; You, S.; Ryoo, S.; Global Environment System Research Laboratory

    2010-12-01

    Typhoons generated in the northwestern Pacific Ocean affect the Korean Peninsula every year, and storm surges generated by strong low pressure and sea winds often cause serious damage to property in coastal regions. To predict storm surges, much research using numerical models has been conducted over many years. Numerical models based on the laws of physics use various parameters to represent physical processes, but the values of these parameters are not known accurately. Because these parameters affect model performance, their uncertainty can strongly influence model results. Therefore, optimization of the parameters used in a numerical model is essential for accurate storm surge prediction. A genetic algorithm (GA) is used here to estimate optimized values of these parameters. The GA is a stochastic search method modeled on natural genetic inheritance and competition for survival; it evolves a population of candidate solutions using genetic operators such as inheritance, mutation, selection and crossover. In this study, we have improved the operational storm surge/tide forecast model (STORM) of NIMR/KMA (National Institute of Meteorological Research/Korea Meteorological Administration), which covers 115E - 150E, 20N - 52N and is based on POM (Princeton Ocean Model) with 8 km horizontal resolution, using the GA. Optimized values were estimated for four main parameters of STORM: the bottom drag coefficient, the background horizontal diffusivity coefficient, the Smagorinsky horizontal viscosity coefficient and the sea level pressure scaling coefficient. These parameters were estimated using typhoon MAEMI in 2003 and nine typhoons that affected the Korean Peninsula from 2005 to 2007. The four estimated parameters were also used to compare one-month predictions in February and August 2008. During the 48 h forecast time, the mean and median model accuracies improved by 25 and 51%, respectively.

  7. Generalized Wilson-Fisher Critical Points from the Conformal Operator Product Expansion

    NASA Astrophysics Data System (ADS)

    Gliozzi, Ferdinando; Guerrieri, Andrea L.; Petkou, Anastasios C.; Wen, Congkao

    2017-02-01

    We study possible smooth deformations of the generalized free conformal field theory in arbitrary dimensions by exploiting the singularity structure of the conformal blocks dictated by the null states. We derive in this way, at the first nontrivial order in the ɛ expansion, the anomalous dimensions of an infinite class of scalar local operators, without using the equations of motion. In the cases where other computational methods apply, the results agree.

  8. Optimizing the CEBAF Injector for Beam Operation with a Higher Voltage Electron Gun

    SciTech Connect

    F.E. Hannon, A.S. Hofler, R. Kazimi

    2011-03-01

    Recent developments in the DC gun technology used at CEBAF have allowed an increase in operational voltage from 100 kV to 130 kV. In the near future this will be extended further to 200 kV with the purchase of a new power supply. The injector components and layout at this time have been designed specifically for 100 kV operation. It is anticipated that, with an increase in gun voltage and optimization of the layout and components for 200 kV operation, the electron bunch length and beam brightness can be improved. This paper explores some upgrade possibilities for a 200 kV gun CEBAF injector through beam dynamics simulations.

  9. Optimization of non-aqueous electrolytes for Primary lithium/air batteries operated in Ambient Enviroment

    SciTech Connect

    Xu, Wu; Xiao, Jie; Zhang, Jian; Wang, Deyu; Zhang, Jiguang

    2009-07-07

    The selection and optimization of non-aqueous electrolytes for ambient operation of lithium/air batteries have been studied. Organic solvents with low volatility and low moisture absorption are necessary to minimize changes in electrolyte composition and the reaction between the lithium anode and water during the discharge process. It is critical to formulate electrolytes with high polarity so that wetting and flooding of the carbon-based air electrode are reduced, leading to improved battery performance. For ambient operation, the viscosity, ionic conductivity, and oxygen solubility of the electrolyte are less important than the polarity of the organic solvents once the electrolyte has reasonable viscosity, conductivity, and oxygen solubility. It has been found that a PC/EC mixture is the best solvent system and LiTFSI is the most feasible salt for ambient operation of Li/air batteries. Battery performance is not very sensitive to the PC/EC ratio or the salt concentration.

  10. Long-term energy capture and the effects of optimizing wind turbine operating strategies

    NASA Technical Reports Server (NTRS)

    Miller, A. H.; Formica, W. J.

    1982-01-01

    Methods of increasing energy capture without affecting the turbine design were investigated. The emphasis was on optimizing the wind turbine operating strategy. The operating strategy embodies the startup and shutdown algorithm as well as the algorithm for determining when to yaw (rotate) the axis of the turbine more directly into the wind. Using data collected at a number of sites, the time-dependent simulation of a MOD-2 wind turbine using various, site-dependent operating strategies provided evidence that site-specific fine tuning can produce significant increases in long-term energy capture as well as reduce the number of start-stop cycles and yawing maneuvers, which may result in reduced fatigue and subsequent maintenance.

  11. Correlation of part-span damper losses through transonic rotors operating near design point

    NASA Technical Reports Server (NTRS)

    Roberts, W. B.

    1977-01-01

    The design-point losses caused by part-span dampers (PSD) were correlated for 21 transonic axial flow fan rotors that had tip speeds varying from 350 to 488 meters per second and design pressure ratios of 1.5 to 2.0. For these rotors a correlation using mean inlet Mach number at the damper location, along with relevant geometric and aerodynamic loading parameters, predicts the variation of total pressure loss coefficient in the region of the damper to a good approximation.

  12. Optimal cut points of waist circumference (WC) and visceral fat area (VFA) predicting for metabolic syndrome (MetS) in elderly population in the Korean Longitudinal Study on Health and Aging (KLoSHA).

    PubMed

    Lim, Soo; Kim, Jung Hee; Yoon, Ji Won; Kang, Seon Mee; Choi, Sung Hee; Park, Young Joo; Kim, Ki Woong; Cho, Nam Han; Shin, Hayley; Park, Kyong Soo; Jang, Hak Chul

    2012-01-01

    Ethnic-specific optimal cut points of central obesity measures for identifying subjects at risk of MetS have been proposed, but have not yet been established. Of particular interest are the values for elderly persons, which have not been identified previously. We investigated the appropriate cut points of WC and VFA for the elderly in a community-based cohort in Korea. We recruited 294 men and 313 women aged 65 or over who participated in the KLoSHA. A receiver operating characteristic (ROC) curve analysis was used to estimate the optimal cut points of WC and VFA indicative of MetS. The optimal cut points for predicting MetS were 87 cm for WC and 140 cm(2) for VFA in men, and 85 cm for WC and 100 cm(2) for VFA in women, based on the Youden index. Similar cut points were obtained with the closest-to-(0, 1) criterion, except for VFA in men, which was 122 cm(2). When adjusted for age, exercise, smoking, and alcohol consumption, men with ≥122 cm(2) and women with ≥100 cm(2) of VFA had a higher risk of MetS than subjects with lower values. The cut points of VFA and WC at risk for MetS were higher in men than in women. In this community-based elderly cohort, the optimal cut points of WC at risk for MetS were lower than the Western criteria. Compared with the cut points in middle-aged Koreans, the cut points for the elderly were lower in men and similar in women.
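
    The sketch below shows how an optimal cut point is chosen with the Youden index, J = sensitivity + specificity - 1, by scanning thresholds of a continuous measure such as WC against a binary MetS label; the data are simulated, not KLoSHA measurements.

    ```python
    # Hedged sketch: Youden-index cut point on simulated waist-circumference data.
    import numpy as np

    rng = np.random.default_rng(5)
    mets = rng.integers(0, 2, size=500)                                        # 1 = MetS, 0 = no MetS
    wc = np.where(mets == 1, rng.normal(90, 8, 500), rng.normal(82, 8, 500))   # waist circumference [cm]

    best_j, best_cut = -1.0, None
    for t in np.unique(wc):
        pred = wc >= t                                          # classify "at risk" above the threshold
        sens = (pred & (mets == 1)).sum() / (mets == 1).sum()
        spec = (~pred & (mets == 0)).sum() / (mets == 0).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, t
    print("optimal WC cut point: %.1f cm (Youden J = %.2f)" % (best_cut, best_j))
    ```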

  13. Deriving multiple near-optimal solutions to deterministic reservoir operation problems

    NASA Astrophysics Data System (ADS)

    Liu, Pan; Cai, Ximing; Guo, Shenglian

    2011-08-01

    Even deterministic reservoir operation problems with a single objective function may have multiple near-optimal solutions (MNOS) whose objective values are equal or sufficiently close to the optimum. MNOS are valuable for practical reservoir operation decisions because having a set of alternatives from which to choose allows reservoir operators to explore multiple options, whereas a traditional algorithm that produces a single optimum does not offer this flexibility. This paper presents three methods: the near-shortest paths (NSP) method, the genetic algorithm (GA) method, and the Markov chain Monte Carlo (MCMC) method, to explore the MNOS. These methods, all of which require a long computation time, find MNOS using different approaches. To reduce the computation time, a narrower subspace, namely a near-optimal space (NOSP, described by the maximum and minimum bounds of MNOS), is derived. By confining the MNOS search within the NOSP, the computation time of the three methods is reduced. The proposed methods are validated with a test function before they are examined with case studies of both a single reservoir (the Three Gorges Reservoir in China) and a multireservoir system (the Qing River Cascade Reservoirs in China). It is found that MNOS exist for deterministic reservoir operation problems. When comparing the three methods, the NSP method is unsuitable for large-scale problems but provides a benchmark against which solutions of small- and medium-scale problems can be compared. The GA method can produce some MNOS but is not very efficient in terms of computation time. Finally, the MCMC method performs best in terms of goodness-of-fit to the benchmark and computation time, since it yields a wide variety of MNOS based on all retained intermediate results as potential MNOS. Two case studies demonstrate that the MNOS identified in this study are useful for real-world reservoir operation, such as the identification of important operation time periods and
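
    To illustrate the notion of a near-optimal space, the sketch below runs a random walk that accepts any candidate release schedule whose objective stays within a fixed tolerance of the best known value, collecting the accepted schedules as near-optimal alternatives. The objective is a toy surrogate, not the Three Gorges or Qing River models, and the walk is a simplified stand-in for the paper's NSP/GA/MCMC methods.

    ```python
    # Hedged sketch: collecting multiple near-optimal solutions with a tolerance-
    # constrained random walk around a known optimum (toy objective).
    import numpy as np

    rng = np.random.default_rng(6)

    def objective(x):
        """Toy benefit of a 12-period release schedule (maximum 0 at x = 0.5)."""
        return -np.abs(x - 0.5).sum() - 0.1 * np.abs(np.diff(x)).sum()

    best_x = np.full(12, 0.5)
    best_f = objective(best_x)
    tol = 0.3                                   # absolute near-optimal band on the objective

    x, mnos = best_x.copy(), []
    for _ in range(20000):
        cand = np.clip(x + rng.normal(0.0, 0.02, size=12), 0.0, 1.0)
        if objective(cand) >= best_f - tol:     # stay inside the near-optimal space
            x = cand
            mnos.append(cand)
    print("near-optimal schedules retained:", len(mnos))
    ```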

  14. Identifying the microbial communities and operational conditions for optimized wastewater treatment in microbial fuel cells.

    PubMed

    Ishii, Shun'ichi; Suzuki, Shino; Norden-Krichmar, Trina M; Wu, Angela; Yamanaka, Yuko; Nealson, Kenneth H; Bretschger, Orianna

    2013-12-01

    Microbial fuel cells (MFCs) are devices that exploit microorganisms as "biocatalysts" to recover energy from organic matter in the form of electricity. MFCs have been explored as possible energy neutral wastewater treatment systems; however, fundamental knowledge is still required about how MFC-associated microbial communities are affected by different operational conditions and can be optimized for accelerated wastewater treatment rates. In this study, we explored how electricity-generating microbial biofilms were established at MFC anodes and responded to three different operational conditions during wastewater treatment: 1) MFC operation using a 750 Ω external resistor (0.3 mA current production); 2) set-potential (SP) operation with the anode electrode potentiostatically controlled to +100 mV vs SHE (4.0 mA current production); and 3) open circuit (OC) operation (zero current generation). For all reactors, primary clarifier effluent collected from a municipal wastewater plant was used as the sole carbon and microbial source. Batch operation demonstrated nearly complete organic matter consumption after a residence time of 8-12 days for the MFC condition, 4-6 days for the SP condition, and 15-20 days for the OC condition. These results indicate that higher current generation accelerates organic matter degradation during MFC wastewater treatment. The microbial community analysis was conducted for the three reactors using 16S rRNA gene sequencing. Although the inoculated wastewater was dominated by members of Epsilonproteobacteria, Gammaproteobacteria, and Bacteroidetes species, the electricity-generating biofilms in MFC and SP reactors were dominated by Deltaproteobacteria and Bacteroidetes. Within Deltaproteobacteria, phylotypes classified to family Desulfobulbaceae and Geobacteraceae increased significantly under the SP condition with higher current generation; however those phylotypes were not found in the OC reactor. These analyses suggest that species

  15. The study on optimal operation of compound heat-pump system

    NASA Astrophysics Data System (ADS)

    Shin, Kwan-Woo; Kim, Ilhyun; Kim, Yong-Tae

    2007-12-01

    A heat-pump system has the special feature of providing heating operation in the winter season and cooling operation in the summer season with a single system. It also has the merit of absorbing and making use of wastewater heat, terrestrial heat, and heat energy from the air. Because a heat-pump system can use midnight electric power, it decreases the peak power load and is very economical as a result. By using the property that an energy source drops to a low temperature as it loses heat, a high-temperature energy source is used to provide heating water and a low-temperature energy source is used to provide cooling water simultaneously in the summer season. This study assembled a heat-pump system with four air heat sources and one water heat source and implemented an optimal operation algorithm that coordinates the individual heat pumps to operate them efficiently. The system was applied to cooling and heating operation in summer-season and winter-season operation modes in a real building.

  16. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    The artificial immune system is one of the most recently introduced intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems with the characteristics of multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is first proposed. The recombination operator generates promising candidate solutions to enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. Comparisons with the classic CSA, the CSA with the recombination operator (RCSA), and the CSA with recombination and modified hypermutation operators (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
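
    A compact sketch of a clonal selection loop with a coordinate-mixing recombination step and varying hypermutation strengths, in the spirit of the RHCSA variant described above; the operators and parameters are simplified illustrations on the sphere benchmark, not the authors' algorithm.

    ```python
    # Hedged sketch: clonal selection with recombination and hypermutation.
    import numpy as np

    rng = np.random.default_rng(7)
    dim, pop_size, gens = 10, 30, 200
    f = lambda x: float((x ** 2).sum())                  # sphere benchmark (minimize)

    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    for _ in range(gens):
        fit = np.array([f(p) for p in pop])
        elites = pop[fit.argsort()[:pop_size // 3]]      # clonal selection of the best antibodies
        # combinatorial recombination: mix coordinates of two randomly chosen elite parents
        parents = elites[rng.integers(len(elites), size=(pop_size, 2))]
        mask = rng.integers(0, 2, size=(pop_size, dim)).astype(bool)
        clones = np.where(mask, parents[:, 0], parents[:, 1])
        # hypermutation: per-clone step sizes ranging from small to large (simplified)
        strength = 0.5 * (1 + np.arange(pop_size)) / pop_size
        clones = clones + rng.normal(size=clones.shape) * strength[:, None]
        pop = np.clip(clones, -5.0, 5.0)
        pop[0] = elites[0]                               # elitism: keep the best-so-far
    print("best objective value:", f(pop[0]))
    ```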

  17. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    SciTech Connect

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    2016-05-06

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.

  18. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    DOE PAGES

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    2016-05-06

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.

  19. A New Tool for Environmental and Economic Optimization of Hydropower Operations

    NASA Astrophysics Data System (ADS)

    Saha, S.; Hayse, J. W.

    2012-12-01

    As part of a project funded by the U.S. Department of Energy, researchers from Argonne, Oak Ridge, Pacific Northwest, and Sandia National Laboratories collaborated on the development of an integrated toolset to enhance hydropower operational decisions related to economic value and environmental performance. As part of this effort, we developed an analytical approach (Index of River Functionality, IRF) and an associated software tool to evaluate how well discharge regimes achieve ecosystem management goals for hydropower facilities. This approach defines site-specific environmental objectives using relationships between environmental metrics and hydropower-influenced flow characteristics (e.g., discharge or temperature), with consideration given to seasonal timing, duration, and return frequency requirements for the environmental objectives. The IRF approach evaluates the degree to which an operational regime meets each objective and produces a score representing how well that regime meets the overall set of defined objectives. When integrated with other components in the toolset that are used to plan hydropower operations based upon hydrologic forecasts and various constraints on operations, the IRF approach allows an optimal release pattern to be developed based upon tradeoffs between environmental performance and economic value. We tested the toolset prototype to generate a virtual planning operation for a hydropower facility located in the Upper Colorado River basin as a demonstration exercise. We conducted planning as if looking five months into the future using data for the recently concluded 2012 water year. The environmental objectives for this demonstration were related to spawning and nursery habitat for endangered fishes using metrics associated with maintenance of instream habitat and reconnection of the main channel with floodplain wetlands in a representative reach of the river. We also applied existing mandatory operational constraints for the

  20. Energy operator demodulating of optimal resonance components for the compound faults diagnosis of gearboxes

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Yu, Dejie; Zhang, Wenyi

    2015-11-01

    Compound fault diagnosis is a challenge in rotating machinery fault diagnosis. The vibration signals measured from gearboxes are usually complex, non-stationary, and nonlinear. When compound faults occur in a gearbox, weak fault characteristic signals are always submerged by strong ones. Therefore, it is difficult to detect a weak fault by directly applying demodulation analysis to the vibration signals of gearboxes. The key to compound fault diagnosis of gearboxes is to separate the different fault characteristic signals from the collected vibration signals. Aiming at this problem, a new method for the compound fault diagnosis of gearboxes is proposed based on energy operator demodulation of optimal resonance components. In this method, the genetic algorithm is first used to obtain the optimal decomposition parameters. Then the compound-fault vibration signals of a gearbox are subjected to resonance-based signal sparse decomposition (RSSD) to separate the fault characteristic signals of the gear and the bearing using the optimal decomposition parameters. Finally, the separated fault characteristic signals are analyzed by energy operator demodulation, and each one's instantaneous amplitude can be calculated. From the spectra of the instantaneous amplitudes of the fault characteristic signals, the faults of the gear and the bearing can be diagnosed, respectively. The performance of the proposed method is validated using simulation data and experimental vibration signals from a gearbox with compound faults.
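
    The discrete Teager-Kaiser energy operator, psi[x(n)] = x(n)^2 - x(n-1)*x(n+1), is a common choice for this kind of energy operator demodulation. The sketch below applies it to a simulated amplitude-modulated component and recovers the modulation (fault) frequency from the envelope spectrum; the signal parameters are illustrative, not gearbox data, and the RSSD separation step is omitted.

    ```python
    # Hedged sketch: Teager-Kaiser energy operator demodulation of a simulated
    # amplitude-modulated component (illustrative frequencies, not gearbox data).
    import numpy as np

    fs = 5000.0
    t = np.arange(0, 1.0, 1 / fs)
    # carrier at a mesh-like frequency, amplitude-modulated at a "fault" frequency of 13 Hz
    x = (1 + 0.6 * np.cos(2 * np.pi * 13 * t)) * np.cos(2 * np.pi * 800 * t)

    psi = x[1:-1] ** 2 - x[:-2] * x[2:]            # Teager-Kaiser energy operator
    envelope = np.sqrt(np.maximum(psi, 0.0))       # crude instantaneous-amplitude estimate

    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
    print("dominant modulation frequency: %.1f Hz" % freqs[spec.argmax()])
    ```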

  1. Heuristic optimization of a continuous flow point-of-use UV-LED disinfection reactor using computational fluid dynamics.

    PubMed

    Jenny, Richard M; Jasper, Micah N; Simmons, Otto D; Shatalov, Max; Ducoste, Joel J

    2015-10-15

    Alternative disinfection sources such as ultraviolet light (UV) are being pursued to inactivate pathogenic microorganisms such as Cryptosporidium and Giardia, while simultaneously reducing the risk of exposure to carcinogenic disinfection by-products (DBPs) in drinking water. UV-LEDs offer a UV disinfection source that contains no mercury, has the potential for long lifetimes, is robust, and offers a high degree of design flexibility. However, the increased flexibility in design options adds a substantial level of complexity when developing a UV-LED reactor, particularly with regard to reactor shape, size, spatial orientation of light, and germicidal emission wavelength. Anticipating that LEDs are the future of UV disinfection, new methods are needed for designing such reactors. In this research study, the evaluation of a new design paradigm using a point-of-use UV-LED disinfection reactor has been performed. ModeFrontier, a numerical optimization platform, was coupled with COMSOL Multi-physics, a computational fluid dynamics (CFD) software package, to generate an optimized UV-LED continuous flow reactor. Three optimality conditions were considered: 1) a single-objective analysis minimizing input supply power while achieving at least 2.0-log10 inactivation of Escherichia coli ATCC 11229; and 2) two multi-objective analyses (one of which maximized the log10 inactivation of E. coli ATCC 11229 while minimizing the supply power). All tests were completed at a flow rate of 109 mL/min and 92% UVT (measured at 254 nm). The numerical solution for the first objective was validated experimentally using biodosimetry. The optimal design predictions displayed good agreement with the experimental data and contained several non-intuitive features, particularly in the UV-LED spatial arrangement, where the lights were unevenly populated throughout the reactor. The optimal designs might not have been arrived at by experienced designers due to the increased degrees of

  2. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
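
    A minimal sketch of the dithered quantization idea described above, not the actual fpack/funpack implementation: floating-point pixels are mapped to scaled integers with a reproducible subtractive dither, and the mapping is reversed on decompression; the lossless (Rice) coding stage is omitted here.

    ```python
    import numpy as np

    def quantize(pixels, scale, seed=1):
        """Map float pixels to integers; 'scale' sets how coarse the levels are."""
        dither = np.random.default_rng(seed).random(pixels.shape)  # uniform [0, 1)
        return np.round(pixels / scale + dither).astype(np.int32)

    def restore(q, scale, seed=1):
        """Invert the mapping using the same dither sequence (seed must match)."""
        dither = np.random.default_rng(seed).random(q.shape)
        return (q - dither) * scale

    # Coarser 'scale' gives higher compression but larger quantization noise;
    # the RMS error of the round trip is roughly scale / sqrt(12).
    img = np.random.default_rng(0).normal(1000.0, 5.0, size=(256, 256)).astype(np.float32)
    scale = 5.0 / 16.0                        # e.g. one sixteenth of the noise sigma
    round_trip = restore(quantize(img, scale), scale)
    rms_error = float(np.sqrt(np.mean((img - round_trip) ** 2)))
    ```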

  3. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Astrophysics Data System (ADS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2010-09-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6–10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would be similarly quantized if the pixel values are not dithered. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.

  4. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem has previously been tackled via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the sources and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Measurements of the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, owing to the distortionless constraint used in formulating the array filters, which helps enhance audio quality and focusing performance.
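
    A minimal numpy sketch of the MVDR weight computation mentioned above. The construction of R and the steering vector a from loudspeaker-to-field-point transfer functions is an assumption made here for illustration, not the paper's exact formulation; diagonal loading is added as one common way to limit control effort.

    ```python
    import numpy as np

    def mvdr_weights(R, a, diag_load=1e-3):
        """w = R^-1 a / (a^H R^-1 a), with diagonal loading for robustness."""
        n = R.shape[0]
        Rl = R + diag_load * np.trace(R).real / n * np.eye(n)
        Ri_a = np.linalg.solve(Rl, a)
        return Ri_a / (a.conj() @ Ri_a)

    # Toy example: 8 loudspeakers, random complex transfer functions to 32 "dark"
    # field points (R) and to the focus point (a).
    rng = np.random.default_rng(0)
    G_dark = rng.standard_normal((32, 8)) + 1j * rng.standard_normal((32, 8))
    a_focus = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    R = G_dark.conj().T @ G_dark / G_dark.shape[0]
    w = mvdr_weights(R, a_focus)
    assert np.isclose(w.conj() @ a_focus, 1.0)   # distortionless at the focus point
    ```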

  5. Point-of-care detection of extracellular vesicles: Sensitivity optimization and multiple-target detection.

    PubMed

    Oliveira-Rodríguez, Myriam; Serrano-Pertierra, Esther; García, Agustín Costa; López-Martín, Soraya; Yañez-Mo, María; Cernuda-Morollón, Eva; Blanco-López, M C

    2017-01-15

    Extracellular vesicles (EVs) are membrane-bound nanovesicles released by different cellular lineages under physiological and pathological conditions. Although these vesicles have shown relevance as biomarkers for a number of diseases, their isolation and detection still have several technical drawbacks, mainly related to limited sensitivity and long analysis times. Here, we report a rapid, multiple-targeted lateral flow immunoassay (LFIA) system for the detection of EVs isolated from human plasma. A range of labels (colloidal gold, carbon black, and magnetic nanoparticles) was compared as the detection probe in the LFIA, with gold nanoparticles giving the best results. Using this platform, we demonstrated that further improvements can be achieved by incorporating additional capture lines with different antibodies. The device exhibited a limit of detection (LOD) of 3.4×10⁶ EVs/µL when anti-CD81 and anti-CD9 were selected as capture antibodies in a multiple-targeted format and anti-CD63 labeled with gold nanoparticles was used as the detection probe. This LFIA, coupled to EV isolation kits, could become a rapid and useful tool for the point-of-care detection of EVs, with a total analysis time of two hours.

  6. Optimal operational conditions for supercontinuum-based ultrahigh-resolution endoscopic OCT imaging.

    PubMed

    Yuan, Wu; Mavadia-Shukla, Jessica; Xi, Jiefeng; Liang, Wenxuan; Yu, Xiaoyun; Yu, Shaoyong; Li, Xingde

    2016-01-15

    We investigated the optimal operational conditions for utilizing a broadband supercontinuum (SC) source in a portable 800 nm spectral-domain (SD) endoscopic OCT system to enable high-resolution, high-sensitivity, and high-speed imaging in vivo. An SC source with a 3-dB bandwidth of ∼246 nm was employed to obtain an axial resolution of ∼2.7 μm (in air) and an optimal detection sensitivity of ∼-107 dB at an imaging speed of up to 35 frames/s (70 k A-scans/s). The performance of the SC-based SD-OCT endoscopy system was demonstrated by imaging guinea pig esophagus in vivo, achieving image quality comparable to that acquired with a broadband home-built Ti:sapphire laser.

  7. Optimization of wire Electrical Discharge turning operations using robust design of experiment

    NASA Astrophysics Data System (ADS)

    Mohammadi, Aminollah; Fadaei Tehrani, Alireza; Safari, Mahdi

    2011-01-01

    In the present study, a multi-response optimization method using Taguchi's robust design approach is proposed for wire electrical discharge turning (WEDT) operations. Experimentation was planned as per Taguchi's L18 orthogonal array. Each experiment was performed under different machining conditions of power, servo, voltage, pulse-off time, wire tension, wire feed speed, and rotational speed. Three responses, namely material removal rate (MRR), surface roughness, and roundness, were considered for each experiment. The machining parameters are optimized with respect to the multi-response characteristics of material removal rate, surface roughness, and roundness. The multi-response S/N (MRSN) ratio is applied to measure how far the performance characteristics deviate from their target values. Analysis of variance (ANOVA) is employed to identify the relative importance of the machining parameters for the multiple performance characteristics considered. Finally, a confirmation experiment was carried out to verify the effectiveness of the proposed method.
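
    A minimal sketch of the S/N calculations behind a multi-response S/N (MRSN) analysis of this kind: larger-the-better for MRR, smaller-the-better for roughness and roundness, combined as a weighted sum. The measurements and weights below are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def sn_larger_the_better(y):
        """Taguchi S/N for responses to maximize, e.g. MRR: -10*log10(mean(1/y^2))."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    def sn_smaller_the_better(y):
        """Taguchi S/N for responses to minimize, e.g. roughness: -10*log10(mean(y^2))."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # Replicated measurements from one hypothetical L18 trial (numbers illustrative).
    mrr, roughness, roundness = [2.1, 2.3], [1.8, 1.7], [0.012, 0.014]
    sn = np.array([sn_larger_the_better(mrr),
                   sn_smaller_the_better(roughness),
                   sn_smaller_the_better(roundness)])

    # One common way to form the MRSN: a weighted sum of per-response S/N ratios.
    # In practice each response's S/N is usually normalized across all 18 trials
    # before weighting; the weights here are assumptions.
    weights = np.array([0.4, 0.3, 0.3])
    mrsn = float(weights @ sn)
    ```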

  8. Intracellular calcium affects prestin's voltage operating point indirectly via turgor-induced membrane tension

    NASA Astrophysics Data System (ADS)

    Song, Lei; Santos-Sacchi, Joseph

    2015-12-01

    Recent identification of a calmodulin binding site within prestin's C-terminus indicates that calcium can significantly alter prestin's operating voltage range as gauged by the Boltzmann parameter Vh (Keller et al., J. Neuroscience, 2014). We reasoned that those experiments may have identified the molecular substrate for the protein's tension sensitivity. In an effort to understand how this may happen, we evaluated the effects of turgor pressure on such shifts produced by calcium. We find that the shifts are induced by calcium's ability to reduce turgor pressure during whole cell voltage clamp recording. Clamping turgor pressure to 1kPa, the cell's normal intracellular pressure, completely counters the calcium effect. Furthermore, following unrestrained shifts, collapsing the cells abolishes induced shifts. We conclude that calcium does not work by direct action on prestin's conformational state. The possibility remains that calcium interaction with prestin alters water movements within the cell, possibly via its anion transport function.

  9. Ground-based telescope pointing and tracking optimization using a neural controller.

    PubMed

    Mancini, D; Brescia, M; Schipani, P

    2003-01-01

    Neural network (NN) models have emerged as important components in applications of adaptive control theory. Their generalization capability, based on acquired knowledge, together with their execution speed and ability to correlate input stimuli, are basic attributes that make NNs an extremely powerful tool for the on-line control of complex systems. From a control-system point of view, not only accuracy and speed but also, in some cases, a high level of adaptation capability is required in order to match all working phases of the whole system during its lifetime. This is particularly relevant for a new-generation ground-based telescope control system. In fact, strong changes in system speed and instantaneous position-error tolerance are necessary, especially in the case of trajectory disturbances induced by wind shake. The classical control scheme adopted in such systems is based on the proportional-integral (PI) filter, already applied and implemented on a large number of new-generation telescopes and considered a standard in this technological environment. In this paper we introduce a new approach, the neural variable structure proportional integral (NVSPI), based on the implementation of a standard multilayer perceptron network in new-generation ground-based Alt-Az telescope control systems. Its main purpose is to improve the adaptive capability of the variable structure proportional integral model, an innovative control scheme recently introduced by the authors [Proc SPIE (1997)] based on a modified version of the classical PI control model, in terms of flexibility and accuracy of the dynamic response, also in the presence of wind noise effects. The realization of a powerful, well-tested, and validated telescope model simulation system made it possible to directly compare the performance of the two control schemes on simulated tracking trajectories, revealing extremely encouraging results in terms of NVSPI control robustness and

  10. Optimization of Design Parameters and Operating Conditions of Electrochemical Capacitors for High Energy and Power Performance

    NASA Astrophysics Data System (ADS)

    Ike, Innocent S.; Sigalas, Iakovos; Iyuke, Sunny E.

    2017-03-01

    Theoretical expressions for performance parameters of different electrochemical capacitors (ECs) have been optimized by solving them using MATLAB scripts as well as via the MATLAB R2014a optimization toolbox. The performance of the different kinds of ECs under given conditions was compared using theoretical equations and simulations of various models based on the conditions of device components, using optimal values for the coefficient associated with the battery-kind material ( K BMopt) and the constant associated with the electrolyte material ( K Eopt), as well as our symmetric electric double-layer capacitor (EDLC) experimental data. Estimation of performance parameters was possible based on values for the mass ratio of electrodes, operating potential range ratio, and specific capacitance of electrolyte. The performance of asymmetric ECs with suitable electrode mass and operating potential range ratios using aqueous or organic electrolyte at appropriate operating potential range and specific capacitance was 2.2 and 5.56 times greater, respectively, than for the symmetric EDLC and asymmetric EC using the same aqueous electrolyte, respectively. This enhancement was accompanied by reduced cell mass and volume. Also, the storable and deliverable energies of the asymmetric EC with suitable electrode mass and operating potential range ratios using the proper organic electrolyte were 12.9 times greater than those of the symmetric EDLC using aqueous electrolyte, again with reduced cell mass and volume. The storable energy, energy density, and power density of the asymmetric EDLC with suitable electrode mass and operating potential range ratios using the proper organic electrolyte were 5.56 times higher than for a similar symmetric EDLC using aqueous electrolyte, with cell mass and volume reduced by a factor of 1.77. Also, the asymmetric EDLC with the same type of electrode and suitable electrode mass ratio, working potential range ratio, and proper organic electrolyte

  11. Optimization of Design Parameters and Operating Conditions of Electrochemical Capacitors for High Energy and Power Performance

    NASA Astrophysics Data System (ADS)

    Ike, Innocent S.; Sigalas, Iakovos; Iyuke, Sunny E.

    2017-01-01

    Theoretical expressions for performance parameters of different electrochemical capacitors (ECs) have been optimized by solving them using MATLAB scripts as well as via the MATLAB R2014a optimization toolbox. The performance of the different kinds of ECs under given conditions was compared using theoretical equations and simulations of various models based on the conditions of device components, using optimal values for the coefficient associated with the battery-kind material (K BMopt) and the constant associated with the electrolyte material (K Eopt), as well as our symmetric electric double-layer capacitor (EDLC) experimental data. Estimation of performance parameters was possible based on values for the mass ratio of electrodes, operating potential range ratio, and specific capacitance of electrolyte. The performance of asymmetric ECs with suitable electrode mass and operating potential range ratios using aqueous or organic electrolyte at appropriate operating potential range and specific capacitance was 2.2 and 5.56 times greater, respectively, than for the symmetric EDLC and asymmetric EC using the same aqueous electrolyte, respectively. This enhancement was accompanied by reduced cell mass and volume. Also, the storable and deliverable energies of the asymmetric EC with suitable electrode mass and operating potential range ratios using the proper organic electrolyte were 12.9 times greater than those of the symmetric EDLC using aqueous electrolyte, again with reduced cell mass and volume. The storable energy, energy density, and power density of the asymmetric EDLC with suitable electrode mass and operating potential range ratios using the proper organic electrolyte were 5.56 times higher than for a similar symmetric EDLC using aqueous electrolyte, with cell mass and volume reduced by a factor of 1.77. Also, the asymmetric EDLC with the same type of electrode and suitable electrode mass ratio, working potential range ratio, and proper organic electrolyte

  12. Optimal operational strategies for a day-ahead electricity market in the presence of market power using multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Rodrigo, Deepal

    2007-12-01

    reinforced the selection of these algorithms. The results obtained from each of the three algorithms used in the evaluations are very comparable. Thus one could safely conclude that the results obtained are valid. Three distinct test power systems operating under different conditions were studied for evaluating the suitability of each of these algorithms. The test cases included scenarios in which the power system was unconstrained as well as constrained. Repeated simulations carried out for the same test case with varying starting points provided evidence that the algorithms and the solutions were robust. Influences of different market concentrations on the optimal economic dispatch are evidenced by the pareto-optimal-fronts obtained for each test case studied. Results obtained from a traditional linear programming (LP) based solution algorithm that is used at present by many market operators are also presented for comparison. Very high market-concentration-indices were found for each solution from the LP algorithm. This suggests the need to use a formal method for mitigating market concentration. Operating the market at industry-recommended threshold levels of market concentration for selecting an optimal operational point is presented for all test cases studied. Given that a solution-set instead of a single operating point is found from the multi-objective optimization methods, additional flexibility to select any operational point based on the preference of those operating the market clearly is an added benefit of using multi-objective optimization methods. However, in order to help the market operator, a more logical fuzzy decision criterion was tested for selecting a suitable operating point. The results show that the optimal operating point chosen using the fuzzy decision criterion provides a higher economic benefit to the market, although at a slightly increased market concentration. Since the main objective of this research was to simultaneously optimize the

  13. MagRad: A code to optimize the operation of superconducting magnets in a radiation environment

    SciTech Connect

    Yeaw, Christopher T.

    1995-01-01

    A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.

  14. Chandra X-Ray Observatory Pointing Control System Performance During Transfer Orbit and Initial On-Orbit Operations

    NASA Technical Reports Server (NTRS)

    Quast, Peter; Tung, Frank; West, Mark; Wider, John

    2000-01-01

    The Chandra X-ray Observatory (CXO, formerly AXAF) is the third of the four NASA great observatories. It was launched from Kennedy Space Center on 23 July 1999 aboard the Space Shuttle Columbia and was successfully inserted into a 330 x 72,000 km orbit by the Inertial Upper Stage (IUS). Through a series of five Integral Propulsion System burns, CXO was placed in a 10,000 x 139,000 km orbit. After initial on-orbit checkout, Chandra's first-light images were unveiled to the public on 26 August 1999. The CXO Pointing Control and Aspect Determination (PCAD) subsystem is designed to perform attitude control and determination functions in support of transfer orbit operations and the on-orbit science mission. After a brief description of the PCAD subsystem, the paper highlights the PCAD activities during the transfer orbit and initial on-orbit operations. These activities include: CXO/IUS separation, attitude and gyro bias estimation with the earth sensor and sun sensor, attitude control and disturbance torque estimation for delta-v burns, momentum build-up due to gravity gradient and solar pressure, momentum unloading with thrusters, attitude initialization with star measurements, gyro alignment calibration, maneuvering and transition to normal pointing, and PCAD pointing and stability performance.

  15. A Data Filter for Identifying Steady-State Operating Points in Engine Flight Data for Condition Monitoring Applications

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Litt, Jonathan S.

    2010-01-01

    This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
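
    A generic sketch of the steady-state detection idea described above (not NASA's implementation): compute the mean and standard deviation of the selected parameters over a sliding window and flag windows whose scatter stays below defined thresholds; the means of flagged windows are what would be archived for condition monitoring.

    ```python
    import numpy as np

    def steady_state_windows(data, window=50, max_std=None):
        """data: (n_samples, n_params). Returns a boolean flag and the window means."""
        data = np.asarray(data, dtype=float)
        if max_std is None:                   # per-parameter scatter thresholds
            max_std = 0.01 * np.mean(np.abs(data), axis=0) + 1e-12
        n = data.shape[0] - window + 1
        means = np.empty((n, data.shape[1]))
        steady = np.empty(n, dtype=bool)
        for i in range(n):
            block = data[i:i + window]
            means[i] = block.mean(axis=0)
            steady[i] = bool(np.all(block.std(axis=0) < max_std))
        return steady, means

    # Toy stream: a constant segment followed by a transient; only windows flagged
    # as steady would be archived.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 100.0, 2000)
    signal = np.where(t < 50.0, 1.0, 1.0 + 0.2 * np.sin(t)) + 0.001 * rng.standard_normal(t.size)
    flags, archived = steady_state_windows(signal[:, None], window=50)
    ```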

  16. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.

  17. An Optimization Approach to Coexistence of Bluetooth and Wi-Fi Networks Operating in ISM Environment

    NASA Astrophysics Data System (ADS)

    Klajbor, Tomasz; Rak, Jacek; Wozniak, Jozef

    The unlicensed ISM band is used by various wireless technologies, so ensuring the required efficiency and quality of operation of coexisting networks becomes essential. The paper addresses the problem of mutual interference between IEEE 802.11b transmitters (commercially known as Wi-Fi) and Bluetooth (BT) devices. An optimization approach to modeling the topology of BT scatternets is introduced, resulting in more efficient utilization of an ISM environment consisting of BT and Wi-Fi networks. To achieve this, an Integer Linear Programming formulation is proposed. Example results presented in the paper illustrate the significant benefits of the proposed modeling strategy.

  18. [Importance of displacement ventilation for operations and small surgical procedures from the infection preventive point of view].

    PubMed

    Kramer, A; Külpmann, R; Wille, F; Christiansen, B; Exner, M; Kohlmann, T; Heidecke, C D; Lippert, H; Oldhafer, K; Schilling, M; Below, H; Harnoss, J C; Assadian, O

    2010-02-01

    Surgical teams need to breathe air that is conducive to their health. An adequate exchange of air ensures oxygen supply and removes humidity, odors, toxic substances (especially narcotic gases and surgical smoke), pathogens, and particles. With regard to infection risk, DIN 1946/4 differentiates between operating theaters with the highest demand for clean air (operating room class I a), operating theaters with a high demand (operating room class I b), and rooms within the operating suite without special requirements, meaning that the microbial load in the air is close or equal to normal indoor air quality (room class II). For operating room class I a, ventilation that displaces the used air is necessary, while regular ventilation is sufficient for operating room class I b. Because of ambiguous results in previous studies, the necessity of defining a class I a for operating rooms has been questioned. Therefore, this review focuses on an analysis of the existing publications with respect to this question. The result of this analysis indicates that so far there is only one surgical procedure, the implantation of hip endoprostheses, for which a preventive effect of class I a ventilation (displacement of the used air) on SSI is documented. One recent study, reviewed critically here, showed opposite results but lacks methodological clarity. Thus, it is concluded that evidence for the requirement of operating room classes can only be derived from risk assessment (infection risk of the surgical intervention, extent of possible harm), not from epidemiological studies. Risk assessment must be based on the following criteria: size and depth of the operating field, duration of the procedure, vascular perfusion of the wound, implantation of alloplastic material, and the patient's general risk of infection. From an infection prevention point of view, no class I a "displacement ventilation" is necessary for small surgical

  19. Optimization of the genetic operators and algorithm parameters for the design of a multilayer anti-reflection coating using the genetic algorithm

    NASA Astrophysics Data System (ADS)

    Patel, Sanjaykumar J.; Kheraj, Vipul

    2015-07-01

    This paper describes a systematic investigation on the use of the genetic algorithm (GA) to accomplish ultra-low reflective multilayer coating designs for optoelectronic device applications. The algorithm is implemented using LabVIEW as a programming tool. The effects of the genetic operators, such as the type of crossover and mutation, as well as algorithm parameters, such as population size and range of search space, on the convergence of the design solution were studied. Finally, the optimal design is obtained in terms of the thickness of each layer for the multilayer AR coating using the optimized genetic operators and algorithm parameters. The program is successfully tested to design AR coatings in the NIR wavelength range to achieve average reflectivity (R) below 10⁻³ over a spectral bandwidth of 200 nm with different combinations of coating materials in the stack. The random-point crossover operator is found to exhibit a better convergence rate of the solution than single-point and double-point crossover. Periodically re-initializing the thickness value of a randomly selected layer from the stack effectively prevents the solution from becoming trapped in local minima and improves the convergence probability.
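
    A minimal Python sketch of the two GA ingredients highlighted above: random-point crossover on a layer-thickness chromosome and a mutation that re-initializes one randomly chosen layer. The thickness range and the toy quadratic fitness are placeholder assumptions; in the paper the fitness would be the band-averaged reflectivity of the stack computed from a transfer-matrix model.

    ```python
    import random

    N_LAYERS = 6
    T_MIN, T_MAX = 5.0, 300.0            # thickness search range in nm (assumed)

    def random_point_crossover(a, b):
        cut = random.randint(1, N_LAYERS - 1)          # cut position chosen at random
        return a[:cut] + b[cut:], b[:cut] + a[cut:]

    def reinit_mutation(ind, rate=0.1):
        ind = list(ind)
        if random.random() < rate:                     # re-seed one random layer
            ind[random.randrange(N_LAYERS)] = random.uniform(T_MIN, T_MAX)
        return ind

    def fitness(th):
        # Toy stand-in for the band-averaged reflectivity (lower is better).
        target = [110.0, 60.0, 180.0, 40.0, 150.0, 90.0]
        return sum((t - g) ** 2 for t, g in zip(th, target))

    pop = [[random.uniform(T_MIN, T_MAX) for _ in range(N_LAYERS)] for _ in range(40)]
    for _ in range(200):
        pop.sort(key=fitness)
        parents = pop[:20]
        children = []
        while len(children) < 20:
            c1, c2 = random_point_crossover(*random.sample(parents, 2))
            children += [reinit_mutation(c1), reinit_mutation(c2)]
        pop = parents + children[:20]
    best = min(pop, key=fitness)
    ```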

  20. Methodology for optimizing the development and operation of gas storage fields

    SciTech Connect

    Mercer, J.C.; Ammer, J.R.; Mroz, T.H.

    1995-04-01

    The Morgantown Energy Technology Center is pursuing the development of a methodology that uses geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. Several Cooperative Research and Development Agreements (CRADAs) will serve as the vehicle to implement this product. CRADAs have been signed with National Fuel Gas and Equitrans, Inc. A geologic model is currently being developed for the Equitrans CRADA. Results from the CRADA with National Fuel Gas are discussed here. The first phase of the CRADA, based on original well data, was completed last year and reported at the 1993 Natural Gas RD&D Contractors Review Meeting. Phase 2 analysis was completed based on additional core and geophysical well log data obtained during a deepening/relogging program conducted by the storage operator. Good matches, within 10 percent, of wellhead pressure were obtained using a numerical simulator to history match 2 1/2 injection withdrawal cycles.

  1. An efficient approach to cathode operational parameters optimization for microbial fuel cell using response surface methodology

    PubMed Central

    2014-01-01

    Background: In this study, the optimum operational conditions of the cathode compartment of a microbial fuel cell were determined using Response Surface Methodology (RSM) with a central composite design to maximize power density and COD removal. Methods: The interactive effects of pH, buffer concentration, and ionic strength on power density and COD removal were evaluated in a two-chamber batch-mode microbial fuel cell. Results: Power density and COD removal under the optimal conditions (pH of 6.75, buffer concentration of 0.177 M, and cathode-chamber ionic strength of 4.69 mM) improved by 17% and 5%, respectively, compared with normal conditions (pH of 7, buffer concentration of 0.1 M, and ionic strength of 2.5 mM). Conclusions: The results verify that response surface methodology can successfully determine the optimum operational conditions of the cathode chamber. PMID:24423039
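
    A minimal sketch of the response-surface step implied above: fit a second-order polynomial to central-composite-design runs and solve for the stationary point. The generated data are placeholders, not the study's measurements.

    ```python
    import numpy as np

    def quadratic_design_matrix(X):
        """Columns: 1, x_i, then x_i*x_j for j >= i (squares and interactions)."""
        n, k = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
        return np.column_stack(cols)

    def fit_and_stationary_point(X, y):
        k = X.shape[1]
        beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
        b = beta[1:1 + k]                      # linear coefficients
        B = np.zeros((k, k))                   # quadratic form of the surface
        idx = 1 + k
        for i in range(k):
            for j in range(i, k):
                B[i, j] = beta[idx] / (1.0 if i == j else 2.0)
                B[j, i] = B[i, j]
                idx += 1
        x_s = np.linalg.solve(-2.0 * B, b)     # gradient b + 2*B*x = 0
        return beta, x_s

    # Coded factors standing in for pH, buffer concentration, ionic strength.
    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(20, 3))
    y = (40 - 3 * (X[:, 0] - 0.2) ** 2 - 2 * (X[:, 1] + 0.1) ** 2
         - (X[:, 2] - 0.4) ** 2 + rng.normal(0, 0.2, 20))
    beta, x_opt = fit_and_stationary_point(X, y)
    ```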

  2. Characterizing and Optimizing Photocathode Laser Distributions for Ultra-low Emittance Electron Beam Operations

    SciTech Connect

    Zhou, F.; Bohler, D.; Ding, Y.; Gilevich, S.; Huang, Z.; Loos, H.; Ratner, D.; Vetter, S.

    2015-12-07

    The photocathode RF gun is widely used to generate high-brightness electron beams for many different applications. We found that the drive-laser distributions in such RF guns play an important role in minimizing the electron beam emittance. Characterizing the laser distributions with measurable parameters, and optimizing the beam emittance against those parameters in both the spatial and temporal directions, is highly desirable for high-brightness electron beam operation. In this paper, we report systematic measurements and simulations of the dependence of emittance on the measurable parameters representing the spatial and temporal laser distributions at the photocathode RF gun systems of the Linac Coherent Light Source. The tolerable parameter ranges for the photocathode drive-laser distributions in both directions are presented for ultra-low-emittance beam operations.

  3. Reducing the noise floor in optoelectronic oscillator by optimizing the operation of modulator

    NASA Astrophysics Data System (ADS)

    Tong, Guochuan; Jin, Tao; Chi, Hao; Zheng, Junchao; Zhu, Xiang; Lai, Tianhao; Wu, Xidong; Shi, Zhiguo

    2016-10-01

    An intensity modulator is a key component in an optoelectronic oscillator (OEO). It is necessary to investigate the effects of some of the operating parameters of the modulator on OEO phase noise. Since the OEO optimized oscillation power is related to the nonlinear effect of the modulator, an extended analysis is made by considering both the saturation effects of the modulator and the radio frequency amplifier. Experimental results show that a 5-dB improvement of phase noise performance is achieved. In addition, a theoretical and experimental study on the DC bias-drifting problem of the modulator and its induced phase noise fluctuations is proposed. It is found that the saturated operation of the amplifier is helpful in reducing the fluctuation range. The presented results can be used to guide the design of high-quality OEOs.

  4. Development of a protocol to optimize electric power consumption and life cycle environmental impacts for operation of wastewater treatment plant.

    PubMed

    Piao, Wenhua; Kim, Changwon; Cho, Sunja; Kim, Hyosoo; Kim, Minsoo; Kim, Yejin

    2016-12-01

    In wastewater treatment plants (WWTPs), the portion of operating costs related to electric power consumption is increasing. Simply reducing electric power consumption, however, can make it difficult to comply with effluent water quality requirements. In this study, a protocol was proposed to minimize environmental impacts and optimize electric power consumption while still meeting effluent water quality standards. The protocol comprises a six-phase procedure and was tested using operating data from S-WWTP to demonstrate its applicability. The 11 major operating variables were categorized into three groups using principal component analysis and K-means cluster analysis. Life cycle assessment (LCA) was conducted for each group to deduce the optimal operating conditions for each operating state. Then, employing mathematical modeling, six improvement plans to reduce electric power consumption were derived, and the electric power consumption of each suggested plan was estimated using an artificial neural network. This was followed by a second round of LCA on the plans. As a result, a set of optimized improvement plans was derived for each group that could optimize electric power consumption and life cycle environmental impact at the same time. Based on these test results, the WWTP operating management protocol presented in this study is deemed able to suggest optimal operating conditions under which power consumption is optimized with minimal life cycle environmental impact, while allowing the plant to meet water quality requirements.
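
    A minimal sketch of the grouping step described above, using scikit-learn: standardize the 11 operating variables, reduce them with PCA, and cluster the operating states into three groups with K-means. The data are random placeholders standing in for the plant's operating records.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    X = rng.normal(size=(365, 11))                 # daily records x 11 operating variables

    Z = StandardScaler().fit_transform(X)
    scores = PCA(n_components=3).fit_transform(Z)  # keep the leading components
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

    # Each cluster would then be assessed separately (LCA, power-consumption modelling)
    # to derive group-specific operating conditions.
    for g in range(3):
        print(g, int((labels == g).sum()), scores[labels == g].mean(axis=0).round(2))
    ```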

  5. Towards optimizing two-qubit operations in three-electron double quantum dots

    NASA Astrophysics Data System (ADS)

    Frees, Adam; Gamble, John King; Mehl, Sebastian; Friesen, Mark; Coppersmith, S. N.

    The successful implementation of single-qubit gates in the quantum dot hybrid qubit motivates our interest in developing a high fidelity two-qubit gate protocol. Recently, extensive work has been done to characterize the theoretical limitations and advantages in performing two-qubit operations at an operation point located in the charge transition region. Additionally, there is evidence to support that single-qubit gate fidelities improve while operating in the so-called ``far-detuned'' region, away from the charge transition. Here we explore the possibility of performing two-qubit gates in this region, considering the challenges and the benefits that may present themselves while implementing such an operational paradigm. This work was supported in part by ARO (W911NF-12-0607) (W911NF-12-R-0012), NSF (PHY-1104660), ONR (N00014-15-1-0029). The authors gratefully acknowledge support from the Sandia National Laboratories Truman Fellowship Program, which is funded by the Laboratory Directed Research and Development (LDRD) Program. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.

  6. Rulemaking Petition to lower the threshold that qualifies animal feeding operations (“AFOs”) as concentrated animal feeding operations (“CAFOs”) and thereby “point sources” under section 402 of the Clean Water Act (“CWA”)

    EPA Pesticide Factsheets

    Rulemaking Petition submitted September 20, 2015 to lower the threshold that qualifies animal feeding operations (AFOs) as concentrated animal feeding operations (CAFOs) and thereby point sources under § 402 of the Clean Water Act (CWA).

  7. [Garbage incineration plants -- planning, organisation and operation from health point of view].

    PubMed

    Thriene, B

    2004-12-01

    The Waste Disposal Regulation, which became effective on March 1, 2001, stipulates that from June 1, 2005, biodegradable residential household and commercial waste may only be deposited in landfills after thermal or mechanical-biological pre-treatment. The Regulation aims to prevent the generation of landfill gases that are detrimental to health and climate and the discharge of pollutants from landfills into the groundwater. Waste calculations for the year 2005 predict a volume of 28 million tons, while existing incineration and mechanical-biological treatment plants cover volumes of 14 and 2.5 million tons, respectively; consequently, their capacity does not meet the demand in Germany. Waste disposal plans have been prepared in the German federal state of Saxony-Anhalt since 1996, and potential sites for garbage incineration plants have been identified. Energy and waste management companies have initiated application procedures for thermal waste treatment plants with utilization of energy. Health departments and the Hygiene Institute contributed to the approval procedure by providing the required health impact assessment. We recommended selecting sites in the vicinity of large cities and conurbations and, taking into account the main wind direction, preferably in the northeast; long-distance transport should be avoided. Based on immission forecasts for territorial background pollution, additional noise and air pollution were examined for reasonableness. In addition, ensuring the structural safety of the plants and guaranteeing continuous monitoring of emission limit values for air pollutants was a prerequisite for strict observance of the 17th BImSchV (Federal Decree on the Prevention of Immissions). The paper reports on the planning, construction, and operating conditions of the combined garbage heating and power station in Magdeburg-Rothensee (600,000 t/a). Saxony-Anhalt's waste legislation requires non-recyclable waste to be disposed of at the place of its generation, if possible

  8. Determining the optimal cutoff points for waist circumference and body mass index for identification of metabolic abnormalities and metabolic syndrome in urban Thai population.

    PubMed

    Worachartcheewan, Apilak; Dansethakul, Prabhop; Nantasenamat, Chanin; Pidetcha, Phannee; Prachayasittikul, Virapong

    2012-11-01

    This study describes the prevalence of metabolic abnormalities and metabolic syndrome (MS) and the optimal waist circumference (WC) and body mass index (BMI) cutoff points for identifying them in an urban Thai population. The optimal BMI/WC cutoffs can be used to identify and evaluate metabolic abnormalities when screening individuals at risk of MS.

  9. Is there an optimal resting velopharyngeal gap in operated cleft palate patients?

    PubMed Central

    Yellinedi, Rajesh; Damalacheruvu, Mukunda Reddy

    2013-01-01

    Context: Videofluoroscopy in operated cleft palate patients. Aims: To determine the existence of an optimal resting velopharyngeal (VP) gap in operated cleft palate patients Settings and Design: A retrospective analysis of lateral view videofluoroscopy of operated cleft palate patients. Materials and Methods: A total of 117 cases of operated cleft palate underwent videofluoroscopy between 2006 and 2011. The lateral view of videofluoroscopy was utilised in the study. A retrospective analysis of the lateral view of videofluoroscopy of these 117 patients was performed to analyse the resting VP gap and its relationship to VP closure. Statistical analysis used: None. Results: Of the 117 cases, 35 had a resting gap of less than 6 mm, 34 had a resting gap between 6 and 10 mm and 48 patients had a resting gap of more than 10 mm. Conclusions: The conclusive finding was that almost all the patients with a resting gap of <6 mm (group C) achieved radiological closure of the velopharynx with speech; thus, they had the least chance of VP insufficiency (VPI). Those patients with a resting gap of >10 mm (group A) did not achieve VP closure on phonation, thus having full-blown VPI. Therefore, it can be concluded that the ideal resting VP gap is approximately 6 mm so as to get the maximal chance of VP closure and thus prevent VPI. PMID:23960311

  10. Long-term energy capture and the effects of optimizing wind-turbine operating strategies

    SciTech Connect

    Miller, A.H.; Formica, W.J.

    1981-08-01

    One of the major factors driving the evolutionary design of wind turbines is the cost of energy (COE). The COE for electricity produced by any means is based on three primary factors: capital costs plus operating and maintenance (O and M) costs divided by the number of kilowatt hours produced per year. Obviously an increase in production of energy has the positive effect of decreasing the cost of energy produced by a wind turbine. A research effort has been established to determine the possible methods of increasing energy capture without affecting the turbine design. The emphasis has been on optimizing the wind turbine operating strategy. The operating strategy embodies the startup and shutdown algorithm as well as the algorithm for determining when to yaw (rotate) the axis of the turbine more directly into the wind. Using data collected at a number of sites, the time-dependent simulation of a MOD-2 wind turbine using various, site-dependent operating strategies has provided evidence that site-specific fine tuning can produce significant increases in long-term energy capture as well as reduce the number of start-stop cycles and yawing maneuvers, which may result in reduced fatigue and subsequent maintenance.

  11. A dynamic programming model for optimal planning of aquifer storage and recovery facility operations

    NASA Astrophysics Data System (ADS)

    Uddameri, V.

    2007-01-01

    Aquifer storage and recovery (ASR) is an innovative technology with the potential to augment dwindling water resources in regions experiencing rapid growth and development. Planning and design of ASR systems require quantifying how much water should be stored and the appropriate times for storage and withdrawal within a planning period. A monthly-scale planning model has been developed in this study to derive optimal (least-cost) long-term policies for operating ASR systems; it is solved using a recursive deterministic dynamic programming approach. The outputs of the model include the annual cost of operation, the amount of water to be imported each month, and the schedule for storage and extraction. A case study modeled after a proposed ASR system for the Mustang Island and Padre Island service areas of the city of Corpus Christi is used to illustrate the utility of the developed model. The results indicate that, for the assumed baseline demands, the ASR system is to be kept operational for a period of four months, from May through August. Model sensitivity analysis indicated that increased seasonal shortages can be met using ASR at little additional cost: for the assumed cost structure, a 16% shortage increased costs by 1.6%, although the operating period of the ASR increased from 4 to 8 months. The developed dynamic programming model is a useful tool for assessing the feasibility of using ASR systems during regional-scale water resources planning endeavors.
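
    A minimal sketch of a backward dynamic-programming recursion of the kind described above (a simplified stand-in, not the study's model): the state is the stored volume, the monthly decision is how much to inject or recover, and the objective is the least-cost schedule that still meets demand. All numbers are purely illustrative.

    ```python
    import numpy as np

    MONTHS = 12
    GRID = np.arange(0, 101, 5)                # storage states, in 1000 m3 units
    demand = np.array([30, 28, 30, 35, 45, 55, 60, 58, 45, 35, 30, 28])
    price  = np.array([1.0, 1.0, 1.0, 1.1, 1.3, 1.6, 1.8, 1.8, 1.4, 1.1, 1.0, 1.0])
    c_inject, c_recover = 0.15, 0.10           # ASR handling costs per unit (assumed)

    best_cost = np.full((MONTHS + 1, GRID.size), np.inf)
    best_next = np.zeros((MONTHS, GRID.size), dtype=int)
    best_cost[MONTHS, 0] = 0.0                 # require empty storage at year end

    for m in range(MONTHS - 1, -1, -1):
        for i, s in enumerate(GRID):
            for j, s_next in enumerate(GRID):
                delta = s_next - s             # > 0 inject, < 0 recover
                imported = demand[m] + max(delta, 0) - max(-delta, 0)
                if imported < 0 or not np.isfinite(best_cost[m + 1, j]):
                    continue
                stage = (price[m] * imported
                         + c_inject * max(delta, 0) + c_recover * max(-delta, 0))
                total = stage + best_cost[m + 1, j]
                if total < best_cost[m, i]:
                    best_cost[m, i], best_next[m, i] = total, j

    # Recover the optimal schedule starting from empty storage in month 1.
    i, schedule = 0, []
    for m in range(MONTHS):
        j = best_next[m, i]
        schedule.append(int(GRID[j] - GRID[i]))   # + = inject, - = recover
        i = j
    ```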

  12. FLOPAK: FLOATING POINT PROGRAMING PACKAGE,

    DTIC Science & Technology

    FLOPAK is a Packard-Bell 250 Computer semi-automatic, floating-point programming system which may be operated simultaneously in either of two modes...250 floating-point system available which may be used in real-time control. The system was originally designed to solve a real-time communication problem....The first is a non-time-optimized mode which may be used by inexperienced coders; the second mode is a high-speed, fully time-optimized floating

  13. Point of optimal kinematic error: improvement of the instantaneous helical pivot method for locating centers of rotation.

    PubMed

    De Rosario, Helios; Page, Alvaro; Mata, Vicente

    2014-05-07

    This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated to small and medium deformations, but it was successfully reduced by the POKE estimation.
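
    A schematic rendering of the weighting idea described above (the gist of the POKE estimate, not a transcription of Woltring's exact expression):

    \[
    \hat{\mathbf{c}} = \arg\min_{\mathbf{c}} \sum_{i} \omega_i^{2}\, d^{2}\!\left(\mathbf{c},\, \ell_i\right),
    \]

    where \(\ell_i\) is the instantaneous helical axis at sample \(i\), \(d(\mathbf{c},\ell_i)\) is the perpendicular distance from the candidate center \(\mathbf{c}\) to that axis, and \(\omega_i\) is the instantaneous angular velocity. Setting all weights to one recovers the original PIHA pivot; since \(\omega_i\, d(\mathbf{c},\ell_i)\) is the speed of point \(\mathbf{c}\) due to the rotation about axis \(i\), the \(\omega_i^{2}\) weighting is what makes the estimate minimize the velocity at the center of rotation.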

  14. Study of landscape patterns of variation and optimization based on non-point source pollution control in an estuary.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui; Wu, Haiyan

    2014-10-15

    Appropriate increases in the "sink" of a landscape can reduce the risk of non-point source pollution (NPSP) to the sea at relatively lower costs and at a higher efficiency. Based on high-resolution remote sensing image data taken between 2003 and 2008, we analyzed the "source" and "sink" landscape pattern variations of nitrogen and phosphorus pollutants in the Jiulongjiang estuary region. The contribution to the sea and distribution of each pollutant in the region was calculated using the LCI and mGLCI models. The results indicated that an increased amount of pollutants was contributed to the sea, and the "source" area of the nitrogen NPSP in the study area increased by 32.75 km². We also propose a landscape pattern optimization to reduce pollution in the Jiulongjiang estuary in 2008 through the conversion of cultivated land with slopes greater than 15° and paddy fields near rivers, and an increase in mangrove areas.

  15. A Concept and Implementation of Optimized Operations of Airport Surface Traffic

    NASA Technical Reports Server (NTRS)

    Jung, Yoon C.; Hoang, Ty; Montoya, Justin; Gupta, Gautam; Malik, Waqar; Tobias, Leonard

    2010-01-01

    This paper presents a new concept of optimized surface operations at busy airports to improve the efficiency of taxi operations, as well as reduce environmental impacts. The suggested system architecture consists of the integration of two decoupled optimization algorithms. The Spot Release Planner provides sequence and timing advisories to tower controllers for releasing departure aircraft into the movement area to reduce taxi delay while achieving maximum throughput. The Runway Scheduler provides take-off sequence and arrival runway crossing sequence to the controllers to maximize the runway usage. The description of a prototype implementation of this integrated decision support tool for the airport control tower controllers is also provided. The prototype decision support tool was evaluated through a human-in-the-loop experiment, where both the Spot Release Planner and Runway Scheduler provided advisories to the Ground and Local Controllers. Initial results indicate the average number of stops made by each departure aircraft in the departure runway queue was reduced by more than half when the controllers were using the advisories, which resulted in reduced taxi times in the departure queue.

  16. Local SAR, Global SAR, and Power-Constrained Large-Flip-Angle Pulses with Optimal Control and Virtual Observation Points

    PubMed Central

    Vinding, Mads S.; Guérin, Bastien; Vosegaard, Thomas; Nielsen, Niels Chr.

    2016-01-01

    Purpose To present a constrained optimal-control (OC) framework for designing large-flip-angle parallel-transmit (pTx) pulses satisfying hardware peak-power as well as regulatory local and global specific-absorption-rate (SAR) limits. The application is 2D and 3D spatial-selective 90° and 180° pulses. Theory and Methods The OC gradient-ascent-pulse-engineering method with exact gradients and the limited-memory Broyden-Fletcher-Goldfarb-Shanno method is proposed. Local SAR is constrained by the virtual-observation-points method. Two numerical models facilitated the optimizations, a torso at 3 T and a head at 7 T, both in eight-channel pTx coils and acceleration-factors up to 4. Results The proposed approach yielded excellent flip-angle distributions. Enforcing the local-SAR constraint, as opposed to peak power alone, reduced the local SAR 7 and 5-fold with the 2D torso excitation and inversion pulse, respectively. The root-mean-square errors of the magnetization profiles increased less than 5% with the acceleration factor of 4. Conclusion A local and global SAR, and peak-power constrained OC large-flip-angle pTx pulse design was presented, and numerically validated for 2D and 3D spatial-selective 90° and 180° pulses at 3 T and 7 T. PMID:26715084

  17. How to COAAD Images. I. Optimal Source Detection and Photometry of Point Sources Using Ensembles of Images

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-02-01

    Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in a loss of sensitivity. We argue that our method provides an increase of between a few percent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method, which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star–galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
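
    A minimal numpy sketch of the coaddition rule described above: cross-correlate each image with its own PSF (a matched filter), then sum with per-image weights. The transparency-over-variance weights used here are one choice consistent with matched filtering; see the paper for the exact normalization of the resulting score image.

    ```python
    import numpy as np
    from numpy.fft import fft2, ifft2

    def matched_filter_coadd(images, psfs, flux_zps, bkg_vars):
        """images, psfs: lists of same-shape 2-D arrays; returns the coadded score image."""
        score = np.zeros_like(images[0], dtype=float)
        for img, psf, f, v in zip(images, psfs, flux_zps, bkg_vars):
            # Cross-correlation with the image's own PSF, done in the Fourier domain.
            mf = np.real(ifft2(fft2(img) * np.conj(fft2(psf, s=img.shape))))
            score += (f / v) * mf
        return score

    # Toy usage: three noise-only frames sharing a simple box PSF.
    rng = np.random.default_rng(1)
    imgs = [rng.normal(0.0, 1.0, (64, 64)) for _ in range(3)]
    psf = np.zeros((64, 64))
    psf[31:34, 31:34] = 1.0 / 9.0
    coadd = matched_filter_coadd(imgs, [psf] * 3, flux_zps=[1.0] * 3, bkg_vars=[1.0] * 3)
    ```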

  18. Optimization of quantum Hamiltonian evolution: From two projection operators to local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Patel, Apoorva; Priyadarsini, Anjani

    Given a quantum Hamiltonian and its evolution time, the corresponding unitary evolution operator can be constructed in many different ways, corresponding to different trajectories between the desired end-points and different series expansions. A choice among these possibilities can then be made to obtain the best computational complexity and control over errors. It is shown how a construction based on Grover's algorithm scales linearly in time and logarithmically in the error bound, and is exponentially superior in error complexity to the scheme based on the straightforward application of the Lie-Trotter formula. The strategy is then extended first to simulation of any Hamiltonian that is a linear combination of two projection operators, and then to any local efficiently computable Hamiltonian. The key feature is to construct an evolution in terms of the largest possible steps instead of taking small time steps. Reflection operations and Chebyshev expansions are used to efficiently control the total error on the overall evolution, without worrying about discretization errors for individual steps. We also use a digital implementation of quantum states that makes linear algebra operations rather simple to perform.

  19. Optimization of the weekly operation of a multipurpose hydroelectric development, including a pumped storage plant

    NASA Astrophysics Data System (ADS)

    Popa, R.; Popa, F.; Popa, B.; Zachia-Zlatea, D.

    2010-08-01

    An optimization model based on genetic algorithms is presented for the operation of a multipurpose hydroelectric development consisting of a pumped storage plant (PSP) with a weekly operation cycle. The lower reservoir of the PSP is supplied from upstream by a peak hydropower plant (HPP) with a large reservoir, and it in turn supplies its own HPP, which provides the required discharges downstream. Under these conditions, the optimal operation of the assembly of three reservoirs and hydropower plants becomes a difficult problem when the following restrictions are considered: the allowed filling/emptying gradients of the reservoirs, compliance with the long-term policy for the upper reservoir of the development and with the weekly cycle of the PSP upper reservoir, matching the power output/consumption to the weekly load schedule, use of the water resource at maximum overall efficiency, etc. Maximization of the net energy value (generated minus consumed) was selected as the performance function of the model, accounting for the differentiated price of electric energy over the week (working or weekend days; peak, half-peak, or base hours). The analysis time step was required to be 3 hours, resulting in a weekly horizon of 56 steps and 168 decision variables for the three HPPs of the system; the decision variables are the turbined flows at the HPPs and the number of operating hydro-units at the PSP in each time step. The numerical application considered the guiding data of the Fantanele-Tarnita-Lapustesti hydroelectric development. The results of various simulations proved the qualities of the proposed optimization model, which will allow its use within a decision-support program for such a development.

  20. Evidence-Based Recommendations for Optimizing Light in Day-to-Day Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    Whitmire, Alexandra; Leveton, Lauren; Barger, Laura; Clark, Toni; Bollweg, Laura; Ohnesorge, Kristine; Brainard, George

    2015-01-01

    NASA Behavioral Health and Performance Element (BHP) personnel have previously reported on efforts to transition evidence-based recommendations for a flexible lighting system on the International Space Station (ISS). Based on these recommendations, beginning in 2016 the ISS will replace the current fluorescent-based lights with an LED-based system to optimize visual performance, facilitate circadian alignment, promote sleep, and hasten schedule shifting. Additional efforts related to lighting countermeasures in spaceflight operations have also been underway. As an example, a recent BHP research study led by investigators at Harvard Medical School and Brigham and Women's Hospital evaluated the acceptability, feasibility, and effectiveness of blue-enriched light exposure during exercise breaks for flight controllers working the overnight shift in the Mission Control Center (MCC) at NASA Johnson Space Center. This effort, along with published laboratory studies that have demonstrated the effectiveness of appropriately timed light for promoting alertness, served as an impetus for new light options and educational protocols for flight controllers. In addition, a separate set of guidelines related to the light emitted from electronic devices was provided to the Astronaut Office this past year. These guidelines were based on an assessment led by NASA's Lighting Environment Test Facility that included measuring the spectral power distribution, irradiance, and radiance of light emitted from ISS-grade laptops and iPads, as well as Android devices. Evaluations were conducted with and without the use of off-the-shelf screen filters as well as a software application that claims to minimize the short-wavelength portion of the visible light spectrum. This presentation will focus on the transition-to-operations process related to lighting countermeasures in the MCC, as well as the evidence to support recommendations for optimal use of laptops, iPads, and Android devices during all

  1. Multi-objective optimization of water quality, pumps operation, and storage sizing of water distribution systems.

    PubMed

    Kurek, Wojciech; Ostfeld, Avi

    2013-01-30

    A multi-objective methodology utilizing the Strength Pareto Evolutionary Algorithm (SPEA2) linked to EPANET for trading off pumping costs, water quality, and tank sizing of water distribution systems is developed and demonstrated. The model integrates variable speed pumps for modeling the pump operation, two water quality objectives (one based on chlorine disinfectant concentrations and one on water age), and tank sizing costs, which are assumed to vary with location and diameter. The water distribution system is subject to extended period simulations, variable energy tariffs, Kirchhoff's laws 1 and 2 for continuity of flow and pressure, tank water level closure constraints, and storage-reliability requirements. EPANET Example 3 is employed for demonstrating the methodology on two multi-objective models, which differ in the imposed water quality objective (i.e., either with disinfectant or water age considerations). Three-fold Pareto optimal fronts are presented. Sensitivity analyses of the storage-reliability constraint and of its influence on pumping cost, water quality, and tank sizing are explored. The contribution of this study is in tailoring design (tank sizing), pump operational costs, water quality of two types, and reliability through residual storage requirements in a single multi-objective framework. The model was found to be stable in generating multi-objective three-fold Pareto fronts, while producing explainable engineering outcomes. The model can be used as a decision tool for pump operation, water quality, storage required for reliability, and tank sizing decision-making.

  2. Multi-objective optimization to support rapid air operations mission planning

    NASA Astrophysics Data System (ADS)

    Gonsalves, Paul G.; Burge, Janet E.

    2005-05-01

    Within the context of military air operations, time-sensitive targets (TSTs) are targets to which modifiers such as "emerging, perishable, high-payoff, short dwell, or highly mobile" apply. Time-critical targets (TCTs) add to the criticality of TSTs with respect to achievement of mission objectives and a limited window of opportunity for attack. The importance of TSTs/TCTs within military air operations has been met with a significant investment in advanced technologies and platforms to meet these challenges. Developments in ISR systems, manned and unmanned air platforms, precision guided munitions, and network-centric warfare have made significant strides toward ensuring timely prosecution of TSTs/TCTs. However, additional investments are needed to further decrease the targeting decision cycle. Given the operational needs for decision support systems to enable time-sensitive/time-critical targeting, we present a tool for the rapid generation and analysis of mission plan solutions to address TSTs/TCTs. Our system employs a genetic algorithm-based multi-objective optimization scheme that is well suited to the rapid generation of approximate solutions in a dynamic environment. Genetic algorithms (GAs) allow for the effective exploration of the search space for potentially novel solutions, while addressing the multiple conflicting objectives that characterize the prosecution of TSTs/TCTs (e.g., probability of target destruction, time to accomplish task, level of disruption to other mission priorities, level of risk to friendly assets, etc.).

  3. Dirac point and transconductance of top-gated graphene field-effect transistors operating at elevated temperature

    SciTech Connect

    Hopf, T.; Vassilevski, K. V. Escobedo-Cousin, E.; King, P. J.; Wright, N. G.; O'Neill, A. G.; Horsfall, A. B.; Goss, J. P.; Wells, G. H.; Hunt, M. R. C.

    2014-10-21

    Top-gated graphene field-effect transistors (GFETs) have been fabricated using bilayer epitaxial graphene grown on the Si-face of 4H-SiC substrates by thermal decomposition of silicon carbide in high vacuum. Graphene films were characterized by Raman spectroscopy, Atomic Force Microscopy, Scanning Tunnelling Microscopy, and Hall measurements to estimate graphene thickness, morphology, and charge transport properties. A 27 nm thick Al₂O₃ gate dielectric was grown by atomic layer deposition with an e-beam evaporated Al seed layer. Electrical characterization of the GFETs has been performed at operating temperatures up to 100 °C limited by deterioration of the gate dielectric performance at higher temperatures. Devices displayed stable operation with the gate oxide dielectric strength exceeding 4.5 MV/cm at 100 °C. Significant shifting of the charge neutrality point and an increase of the peak transconductance were observed in the GFETs as the operating temperature was elevated from room temperature to 100 °C.

  4. Optimizing operational water management with soil moisture data from Sentinel-1 satellites

    NASA Astrophysics Data System (ADS)

    Pezij, Michiel; Augustijn, Denie; Hendriks, Dimmie; Hulscher, Suzanne

    2016-04-01

    In the Netherlands, regional water authorities are responsible for the management and maintenance of regional water bodies. Due to socio-economic developments (e.g. agricultural intensification and on-going urbanisation) and an increase in climate variability, the pressure on these water bodies is growing. Optimization of water availability by taking into account the needs of different users, both in wet and dry periods, is crucial for sustainable development. To support timely and well-directed operational water management, accurate information on the current state of the system as well as reliable models to evaluate water management optimization measures are essential. Previous studies showed that the use of remote sensing data (for example soil moisture data) in water management offers many opportunities (e.g. Wanders et al., 2014). However, these data are not yet used in operational applications at a large scale. The Sentinel-1 satellite programme offers high spatiotemporal resolution soil moisture data (1 image per 6 days with a spatial resolution of 10 by 10 m) that are freely available. In this study, these data will be used to improve the Netherlands Hydrological Instrument (NHI). The NHI consists of coupled models for the unsaturated zone (MetaSWAP), groundwater (iMODFLOW) and surface water (Mozart and DM). The NHI is used for scenario analyses and operational water management in the Netherlands (De Lange et al., 2014). Due to the lack of soil moisture data, the unsaturated zone model is not yet thoroughly validated and its output is not used by regional water authorities for decision-making. Therefore, the newly acquired remotely sensed soil moisture data will be used to improve the skill of the MetaSWAP model and of the NHI as a whole. The research will focus, among other things, on the calibration of soil parameters by comparing model output (MetaSWAP) with the remotely sensed soil moisture data. Eventually, we want to apply data-assimilation to improve

  5. Pilot/vehicle control optimization using averaged operational mode and subsystem relative performance index sensitivities

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.; Lehtinen, B.; Riehl, J. P.

    1972-01-01

    A method is presented for designing optimal feedback controllers for systems having subsystem sensitivity constraints. Such constraints reflect the presence of subsystem performance indices which are in conflict with the performance index of the overall system. The key to the approach is the use of relative performance index sensitivity (a measure of the deviation of a performance index from its optimum value). The weighted sum of subsystem and/or operational mode relative performance index sensitivities is defined as an overall performance index. A method is developed to handle linear systems with quadratic performance indices and either full or partial state feedback. The usefulness of this method is demonstrated by applying it to the design of a stability augmentation system (SAS) for a VTOL aircraft. A desirable VTOL SAS design is one that produces good VTOL transient response both with and without active pilot control. The system designed using this method is shown to effect a satisfactory compromise solution to this problem.

  6. Optimizing operational efficiencies in early phase trials: the Pediatric Trials Network experience

    PubMed Central

    England, Amanda; Wade, Kelly; Smith, P. Brian; Berezny, Katherine; Laughon, Matthew

    2016-01-01

    Performing drug trials in pediatrics is challenging. In support of the Best Pharmaceuticals for Children Act, the Eunice Kennedy Shriver National Institute of Child Health and Human Development funded the formation of the Pediatric Trials Network (PTN) in 2010. Since its inception, the PTN has developed strategies to increase both efficiency and safety of pediatric drug trials. Through use of innovative techniques such as sparse and scavenged blood sampling as well as opportunistic study design, participation in trials has grown. The PTN has also strived to improve consistency of adverse event reporting in neonatal drug trials through the development of a standardized adverse event table. We review how the PTN is optimizing operational efficiencies in pediatric drug trials to increase the safety of drugs in children. PMID:26968616

  7. Optimal Technology Selection and Operation of Microgrids in Commercial Buildings

    SciTech Connect

    Marnay, Chris; Venkataramanan, Giri; Stadler, Michael; Siddiqui, Afzal; Firestone, Ryan; Chandran, Bala

    2007-01-15

    The deployment of small (<1-2 MW) clusters of generators, heat and electrical storage, efficiency investments, and combined heat and power (CHP) applications (particularly involving heat-activated cooling) in commercial buildings promises significant benefits but poses many technical and financial challenges, both in system choice and its operation; if successful, such systems may be precursors to widespread microgrid deployment. The presented optimization approach to choosing such systems and their operating schedules uses Berkeley Lab's Distributed Energy Resources Customer Adoption Model [DER-CAM], extended to incorporate electrical storage options. DER-CAM chooses annual energy bill minimizing systems in a fully technology-neutral manner. An illustrative example for a San Francisco hotel is reported. The chosen system includes two engines and an absorption chiller, providing an estimated 11 percent cost savings and 10 percent carbon emission reductions, under idealized circumstances.

  8. Reducing wait times through operations research: optimizing the use of surge capacity.

    PubMed

    Patrick, Jonathan; Puterman, Martin L

    2008-01-01

    Widespread public demand for improved access, political pressure for shorter wait times, a stretched workforce, an aging population and overutilized equipment and facilities challenge healthcare leaders to adopt new management approaches. This paper highlights the significant benefits that can be achieved by applying operations research (OR) methods to healthcare management. It shows how queuing theory provides managers with insights into the causes for excessive wait times and the relationship between wait times and capacity. It provides a case study of the use of several OR methods, including Markov decision processes, linear programming and simulation, to optimize the scheduling of patients with multiple priorities. The study shows that by applying this approach, wait time targets can be attained with the judicious use of surge capacity in the form of overtime. It concludes with some policy insights.

  9. Reducing Wait Times through Operations Research: Optimizing the Use of Surge Capacity.

    PubMed

    Patrick, Jonathan; Puterman, Martin L

    2008-02-01

    Widespread public demand for improved access, political pressure for shorter wait times, a stretched workforce, an aging population and overutilized equipment and facilities challenge healthcare leaders to adopt new management approaches. This paper highlights the significant benefits that can be achieved by applying operations research (OR) methods to healthcare management. It shows how queuing theory provides managers with insights into the causes for excessive wait times and the relationship between wait times and capacity. It provides a case study of the use of several OR methods, including Markov decision processes, linear programming and simulation, to optimize the scheduling of patients with multiple priorities. The study shows that by applying this approach, wait time targets can be attained with the judicious use of surge capacity in the form of overtime. It concludes with some policy insights.
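
    The queuing-theory insight mentioned above can be seen from the closed-form M/M/1 mean waiting time, W_q = rho / (mu - lambda): waits grow nonlinearly and explode as utilization approaches one. The service rate below is an illustrative number, and the paper's case study actually relies on Markov decision processes, linear programming, and simulation rather than this simple formula.

```python
# Mean waiting time in queue for an M/M/1 system: Wq = rho / (mu - lam),
# where rho = lam / mu. Illustrates why waits explode as utilization -> 1.
mu = 10.0                      # service rate: patients per day (toy value)
for lam in (5.0, 7.0, 8.0, 9.0, 9.5, 9.9):
    rho = lam / mu
    wq_days = rho / (mu - lam)
    print(f"utilization {rho:4.0%}  ->  mean queue wait {wq_days:6.2f} days")
```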

  10. Operating wavelengths optimization for a spaceborne lidar measuring atmospheric CO2.

    PubMed

    Caron, Jérôme; Durand, Yannig

    2009-10-01

    The Advanced Space Carbon and Climate Observation of Planet Earth (A-SCOPE) mission, a candidate for the next generation of European Space Agency Earth Explorer Core Missions, aims at measuring CO₂ concentration from space with an integrated path differential absorption (IPDA) lidar. We report the optimization of the lidar instrument operating wavelengths, building on two performance models developed to assess measurement random errors from the instrument, as well as knowledge errors on geophysical and spectral parameters required for the measurement processing. A promising approach to decrease sensitivity to water vapor errors by 1 order of magnitude is reported and illustrated. The presented methods are applicable to any airborne or spaceborne IPDA lidar.

  11. Estimates of Optimal Operating Conditions for Hydrogen-Oxygen Cesium-Seeded Magnetohydrodynamic Power Generator

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Nichols, L. D.

    1977-01-01

    The value of percent seed, oxygen to fuel ratio, combustion pressure, Mach number, and magnetic field strength which maximize either the electrical conductivity or power density at the entrance of an MHD power generator was obtained. The working fluid is the combustion product of H2 and O2 seeded with CsOH. The ideal theoretical segmented Faraday generator along with an empirical form found from correlating the data of many experimenters working with generators of different sizes, electrode configurations, and working fluids, are investigated. The conductivity and power densities optimize at a seed fraction of 3.5 mole percent and an oxygen to hydrogen weight ratio of 7.5. The optimum values of combustion pressure and Mach number depend on the operating magnetic field strength.

  12. Energetic optimization of a piezo-based touch-operated button for man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Sun, Hao; de Vries, Theo J. A.; de Vries, Rene; van Dalen, Harry

    2012-03-01

    This paper discusses the optimization of a touch-operated button for man-machine interfaces based on piezoelectric energy harvesting techniques. In the mechanical button, a common piezoelectric diaphragm is assembled to harvest ambient energy from the source, i.e. the operator's touch. Under a touch force load, the integrated diaphragm undergoes a bending deformation. Its mechanical strain is then converted into the required electrical energy by means of the piezoelectric effect of the diaphragm. The structural design (i) makes the piezoceramic work under static compressive stress instead of static or dynamic tensile stress, (ii) achieves a satisfactory stress level and (iii) provides the diaphragm and the button with a fatigue lifetime in excess of millions of touch operations. To improve the button's function, the effects of key properties (dimension, boundary condition and load condition) on the electrical behavior of the piezoelectric diaphragm are evaluated by electromechanical coupling analysis in ANSYS. The finite element analysis (FEA) results indicate that modifying these properties can significantly enhance the diaphragm's output. Based on the key properties' different contributions to the improvement of the diaphragm's electrical energy output, they are incorporated into the piezoelectric diaphragm's redesign or the structural design of the piezo-based button. A comparison of the original structure and the optimal result shows that the electrical energy stored in the diaphragm and the voltage output are increased by 1576% and 120%, respectively, while the volume of the piezoceramic is reduced to 33.6%. These results will be adopted to update the design of the self-powered button, thus enabling a large decrease in the energy consumption and lifetime cost of the MMI.

  13. Optimization of Preprocessing and Densification of Sorghum Stover at Full-scale Operation

    SciTech Connect

    Neal A. Yancey; Jaya Shankar Tumuluru; Craig C. Conner; Christopher T. Wright

    2011-08-01

    Transportation costs can be a prohibitive step in bringing biomass to a preprocessing location or biofuel refinery. One alternative to transporting biomass in baled or loose format to a preprocessing location is to utilize a mobile preprocessing system that can be relocated to various locations where biomass is stored, preprocess and densify the biomass, then ship it to the refinery as needed. The Idaho National Laboratory has a full-scale Process Demonstration Unit (PDU), which includes a stage 1 grinder, hammer mill, drier, pellet mill, and cooler with the associated conveyance system components. Testing at bench and pilot scale has been conducted to determine the effects of moisture and crop variety on preprocessing efficiency and product quality. The INL's PDU provides an opportunity to test the conclusions made at the bench and pilot scale on full industrial-scale systems. Each component of the PDU is operated from a central operating station where data are collected to determine power consumption rates for each step in the process. The power for each electrical motor in the system is monitored from the control station to detect problems and determine optimal conditions for system performance. The data can then be viewed to observe how changes in biomass input parameters (moisture and crop type, for example), mechanical changes (screen size, biomass drying, pellet size, grinding speed, etc.), or other variations affect the power consumption of the system. Sorghum in four-foot round bales was tested in the system using a series of six different screen sizes: 3/16 in., 1 in., 2 in., 3 in., 4 in., and 6 in. The effects on power consumption, product quality, and production rate were measured to determine optimal conditions.

  14. Long Series Multi-objectives Optimal Operation of Water And Sediment Regulation

    NASA Astrophysics Data System (ADS)

    Bai, T.; Jin, W.

    2015-12-01

    A secondary perched ("suspended") river has formed in the Inner Mongolia reaches, threatening the safety of the reach and the ecological health of the river. Research on water-sediment regulation by cascade reservoirs is therefore urgent and necessary. Against this background, multi-objective water and sediment regulation is studied in this paper. Firstly, multi-objective optimal operation models of the Longyangxia and Liujiaxia cascade reservoirs are established. Secondly, based on constraint-handling and feasible-search-space techniques, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is substantially improved to solve the model. Thirdly, four different scenarios are set. It is demonstrated that: (1) scatter diagrams of the Pareto front show the optimal solutions for power-generation maximization and sediment maximization, as well as the global equilibrium solutions between the two; (2) the potential of water-sediment regulation by the Longyangxia and Liujiaxia cascade reservoirs is analyzed; (3) with increasing water supply in the future, a conflict between water supply and water-sediment regulation emerges, and the sustainability of water and sediment regulation will be negatively affected by the decreasing transferable water in the cascade reservoirs; (4) the transfer project has little benefit for water-sediment regulation. The results have practical significance for water-sediment regulation by cascade reservoirs in the Upper Yellow River and for constructing a water and sediment control system in the whole Yellow River basin.
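
    At the core of NSGA-II is non-dominated sorting; the sketch below extracts the first Pareto front for a toy two-objective trade-off between power generation and sediment transport. The candidate scoring is purely illustrative and is not the reservoir model used in the study.

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives to be maximized)."""
    F = np.asarray(F, dtype=float)
    nondom = []
    for i in range(len(F)):
        # i is dominated if some row is >= in every objective and > in at least one
        dominated = np.any(np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1))
        if not dominated:
            nondom.append(i)
    return np.array(nondom)

# Toy trade-off: candidate release schedules scored on (hydropower, sediment flux).
rng = np.random.default_rng(3)
x = rng.random(200)                                  # share of water routed to generation
gen = x**0.7 + 0.05 * rng.normal(size=200)
sed = (1.0 - x)**0.5 + 0.05 * rng.normal(size=200)
F = np.column_stack([gen, sed])
front = pareto_front(F)
print(f"{len(front)} of {len(F)} candidate schedules are Pareto-optimal")
```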

  15. Nonlinear bioheat transfer models and multi-objective numerical optimization of the cryosurgery operations

    NASA Astrophysics Data System (ADS)

    Kudryashov, Nikolay A.; Shilnikov, Kirill E.

    2016-06-01

    Numerical computation of the three-dimensional problem of freezing-interface propagation during cryosurgery, coupled with multi-objective optimization methods, is used in order to improve the efficiency and safety of cryosurgery operations. Prostate cancer treatment and cutaneous cryosurgery are considered. The heat transfer in soft tissue during thermal exposure to low temperature is described by the Pennes bioheat model and is coupled with an enthalpy method for blurred phase change computations. The finite volume method, combined with a control volume approximation of the heat fluxes, is applied to the numerical modeling of cryosurgery on tumor tissue of fairly arbitrary shape. The flux relaxation approach is used to improve the stability of the explicit finite difference schemes. Mounting additional heating elements is studied as an approach to controlling the propagation of the cellular necrosis front. The volumes of undestroyed tumor tissue and destroyed healthy tissue are considered as the objective functions, while the locations of additional heating elements in cutaneous cryosurgery and of cryotips in prostate cancer cryotreatment are the decision variables of the multi-objective problem. A quasi-gradient method is proposed for finding segments of the Pareto front as solutions of the multi-objective optimization problem.
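
    The study couples a 3D finite-volume enthalpy formulation with multi-objective search; as a much simpler illustration of the underlying Pennes bioheat balance, the sketch below integrates a 1D explicit finite-difference version (no phase change) with a cryoprobe boundary. The tissue and blood parameters are typical literature values assumed for this example.

```python
import numpy as np

# Tissue / blood parameters (typical literature values; assumptions for this sketch)
rho, c, k = 1050.0, 3600.0, 0.5          # kg/m^3, J/(kg K), W/(m K)
rho_b, c_b, w_b = 1060.0, 3600.0, 5e-4   # blood density, heat capacity, perfusion (1/s)
T_a, q_met = 37.0, 400.0                 # arterial temperature (C), metabolic heat (W/m^3)

L, nx = 0.05, 101                        # 5 cm of tissue, 101 nodes
dx = L / (nx - 1)
dt = 0.2 * rho * c * dx**2 / k           # well under the explicit stability limit
T = np.full(nx, 37.0)
T_cryo = -140.0                          # cryoprobe tip temperature at x = 0

for _ in range(int(120.0 / dt)):         # simulate 2 minutes of freezing
    T[0], T[-1] = T_cryo, 37.0           # Dirichlet boundaries: probe and deep tissue
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    # Pennes bioheat balance: conduction + perfusion + metabolic heat
    dTdt = (k * lap + rho_b * c_b * w_b * (T_a - T[1:-1]) + q_met) / (rho * c)
    T[1:-1] += dt * dTdt

frozen = dx * np.count_nonzero(T < -40.0)   # -40 C as a lethal-temperature proxy
print(f"approximate depth of the -40 C isotherm after 2 min: {frozen * 1000:.1f} mm")
```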

  16. Optimization of an Optical Inspection System Based on the Taguchi Method for Quantitative Analysis of Point-of-Care Testing

    PubMed Central

    Yeh, Chia-Hsien; Zhao, Zi-Qi; Shen, Pi-Lan; Lin, Yu-Cheng

    2014-01-01

    This study presents an optical inspection system for detecting a commercial point-of-care testing product and a new detection model covering from qualitative to quantitative analysis. Human chorionic gonadotropin (hCG) strips (cut-off value of the hCG commercial product is 25 mIU/mL) were the detection target in our study. We used a complementary metal-oxide semiconductor (CMOS) sensor to detect the colors of the test line and control line in the specific strips and to reduce the observation errors by the naked eye. To achieve better linearity between the grayscale and the concentration, and to decrease the standard deviation (increase the signal to noise ratio, S/N), the Taguchi method was used to find the optimal parameters for the optical inspection system. The pregnancy test used the principles of the lateral flow immunoassay, and the colors of the test and control line were caused by the gold nanoparticles. Because of the sandwich immunoassay model, the color of the gold nanoparticles in the test line was darkened by increasing the hCG concentration. As the results reveal, the S/N increased from 43.48 dB to 53.38 dB, and the hCG concentration detection increased from 6.25 to 50 mIU/mL with a standard deviation of less than 10%. With the optimal parameters to decrease the detection limit and to increase the linearity determined by the Taguchi method, the optical inspection system can be applied to various commercial rapid tests for the detection of ketamine, troponin I, and fatty acid binding protein (FABP). PMID:25256108

  17. Optimization of an optical inspection system based on the Taguchi method for quantitative analysis of point-of-care testing.

    PubMed

    Yeh, Chia-Hsien; Zhao, Zi-Qi; Shen, Pi-Lan; Lin, Yu-Cheng

    2014-09-01

    This study presents an optical inspection system for detecting a commercial point-of-care testing product and a new detection model covering from qualitative to quantitative analysis. Human chorionic gonadotropin (hCG) strips (cut-off value of the hCG commercial product is 25 mIU/mL) were the detection target in our study. We used a complementary metal-oxide semiconductor (CMOS) sensor to detect the colors of the test line and control line in the specific strips and to reduce the observation errors by the naked eye. To achieve better linearity between the grayscale and the concentration, and to decrease the standard deviation (increase the signal to noise ratio, S/N), the Taguchi method was used to find the optimal parameters for the optical inspection system. The pregnancy test used the principles of the lateral flow immunoassay, and the colors of the test and control line were caused by the gold nanoparticles. Because of the sandwich immunoassay model, the color of the gold nanoparticles in the test line was darkened by increasing the hCG concentration. As the results reveal, the S/N increased from 43.48 dB to 53.38 dB, and the hCG concentration detection increased from 6.25 to 50 mIU/mL with a standard deviation of less than 10%. With the optimal parameters to decrease the detection limit and to increase the linearity determined by the Taguchi method, the optical inspection system can be applied to various commercial rapid tests for the detection of ketamine, troponin I, and fatty acid binding protein (FABP).
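
    The Taguchi analysis above maximizes a signal-to-noise ratio; the standard "larger-the-better" and "smaller-the-better" S/N definitions are shown below on hypothetical replicate grayscale readings (the replicate values are made up for illustration).

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi 'larger-the-better' signal-to-noise ratio in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    """Taguchi 'smaller-the-better' signal-to-noise ratio in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical grayscale readings for one parameter combination of the orthogonal array:
replicate_grayscale = [118.0, 121.5, 119.2, 120.8]
print(f"S/N (larger-is-better) = {sn_larger_is_better(replicate_grayscale):.2f} dB")
```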

  18. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    PubMed

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in a sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EAs) are one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective search for potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the costs of amino acid replacement with regard to their polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure
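
    Below is a sketch of a position-based crossover for permutation chromosomes, the general kind of operator the authors adapt for their most restricted genetic-code model. The encoding used here (a permutation of 20 amino-acid labels over codon blocks) and the implementation details are illustrative assumptions rather than the paper's exact operator.

```python
import random

def position_based_crossover(p1, p2, rng):
    """Position-based crossover for permutation chromosomes: keep parent-1
    genes at a random subset of positions, fill the rest in parent-2 order."""
    n = len(p1)
    keep = set(rng.sample(range(n), k=n // 2))
    child = [p1[i] if i in keep else None for i in range(n)]
    placed = {child[i] for i in keep}
    fill = (g for g in p2 if g not in placed)     # remaining genes, in parent-2 order
    return [g if g is not None else next(fill) for g in child]

rng = random.Random(4)
# Toy chromosome: a permutation of 20 amino-acid labels assigned to codon blocks.
amino_acids = list("ACDEFGHIKLMNPQRSTVWY")
parent1 = rng.sample(amino_acids, len(amino_acids))
parent2 = rng.sample(amino_acids, len(amino_acids))
child = position_based_crossover(parent1, parent2, rng)
assert sorted(child) == sorted(amino_acids)       # still a valid permutation
print("".join(child))
```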

  19. Design optimization of MR-compatible rotating anode x-ray tubes for stable operation

    SciTech Connect

    Shin, Mihye; Lillaney, Prasheel; Hinshaw, Waldo; Fahrig, Rebecca

    2013-11-15

    Purpose: Hybrid x-ray/MR systems can enhance the diagnosis and treatment of endovascular, cardiac, and neurologic disorders by using the complementary advantages of both modalities for image guidance during interventional procedures. Conventional rotating anode x-ray tubes fail near an MR imaging system, since MR fringe fields create eddy currents in the metal rotor which cause a reduction in the rotation speed of the x-ray tube motor. A new x-ray tube motor prototype has been designed and built to be operated close to a magnet. To ensure the stability and safety of the motor operation, dynamic characteristics must be analyzed to identify possible modes of mechanical failure. In this study a 3D finite element method (FEM) model was developed in order to explore possible modifications, and to optimize the motor design. The FEM provides a valuable tool that permits testing and evaluation using numerical simulation instead of building multiple prototypes. Methods: Two experimental approaches were used to measure resonance characteristics: the first obtained the angular speed curves of the x-ray tube motor employing an angle encoder; the second measured the power spectrum using a spectrum analyzer, in which the large amplitude of peaks indicates large vibrations. An estimate of the bearing stiffness is required to generate an accurate FEM model of motor operation. This stiffness depends on both the bearing geometry and adjacent structures (e.g., the number of balls, clearances, preload, etc.) in an assembly, and is therefore unknown. This parameter was set by matching the FEM results to measurements carried out with the anode attached to the motor, and verified by comparing FEM predictions and measurements with the anode removed. The validated FEM model was then used to sweep through design parameters [bearing stiffness (1×10^5–5×10^7 N/m), shaft diameter (0.372–0.625 in.), rotor diameter (2.4–2.9 in.), and total length of motor (5.66–7.36 in.)] to

  20. Design optimization of MR-compatible rotating anode x-ray tubes for stable operation

    PubMed Central

    Shin, Mihye; Lillaney, Prasheel; Hinshaw, Waldo; Fahrig, Rebecca

    2013-01-01

    Purpose: Hybrid x-ray/MR systems can enhance the diagnosis and treatment of endovascular, cardiac, and neurologic disorders by using the complementary advantages of both modalities for image guidance during interventional procedures. Conventional rotating anode x-ray tubes fail near an MR imaging system, since MR fringe fields create eddy currents in the metal rotor which cause a reduction in the rotation speed of the x-ray tube motor. A new x-ray tube motor prototype has been designed and built to be operated close to a magnet. To ensure the stability and safety of the motor operation, dynamic characteristics must be analyzed to identify possible modes of mechanical failure. In this study a 3D finite element method (FEM) model was developed in order to explore possible modifications, and to optimize the motor design. The FEM provides a valuable tool that permits testing and evaluation using numerical simulation instead of building multiple prototypes. Methods: Two experimental approaches were used to measure resonance characteristics: the first obtained the angular speed curves of the x-ray tube motor employing an angle encoder; the second measured the power spectrum using a spectrum analyzer, in which the large amplitude of peaks indicates large vibrations. An estimate of the bearing stiffness is required to generate an accurate FEM model of motor operation. This stiffness depends on both the bearing geometry and adjacent structures (e.g., the number of balls, clearances, preload, etc.) in an assembly, and is therefore unknown. This parameter was set by matching the FEM results to measurements carried out with the anode attached to the motor, and verified by comparing FEM predictions and measurements with the anode removed. The validated FEM model was then used to sweep through design parameters [bearing stiffness (1×10^5–5×10^7 N/m), shaft diameter (0.372–0.625 in.), rotor diameter (2.4–2.9 in.), and total length of motor (5.66–7.36 in.)] to increase the

  1. Sensitivity and alternative operating point studies on a high charge CW FEL injector test stand at CEBAF

    SciTech Connect

    Liu, H.; Kehne, D.; Benson, S.

    1995-12-31

    A high charge CW FEL injector test stand is being built at CEBAF based on a 500 kV DC laser gun, a 1500 MHz room-temperature buncher, and a high-gradient (~10 MV/m) CEBAF cryounit containing two 1500 MHz CEBAF SRF cavities. Space-charge-dominated beam dynamics simulations show that this injector should be an excellent high-brightness electron beam source for CW UV FELs if the nominal parameters assigned to each component of the system are experimentally achieved. Extensive sensitivity and alternative operating point studies have been conducted numerically to establish tolerances on the parameters of various injector system components. The consequences of degraded injector performance, due to failure to establish and/or maintain the nominal system design parameters, on the performance of the main accelerator and the FEL itself are discussed.

  2. Canine Sense and Sensibility: Tipping Points and Response Latency Variability as an Optimism Index in a Canine Judgement Bias Assessment

    PubMed Central

    Starling, Melissa J.; Branson, Nicholas; Cody, Denis; Starling, Timothy R.; McGreevy, Paul D.

    2014-01-01

    Recent advances in animal welfare science used judgement bias, a type of cognitive bias, as a means to objectively measure an animal's affective state. It is postulated that animals showing heightened expectation of positive outcomes may be categorised optimistic, while those showing heightened expectations of negative outcomes may be considered pessimistic. This study pioneers the use of a portable, automated apparatus to train and test the judgement bias of dogs. Dogs were trained in a discrimination task in which they learned to touch a target after a tone associated with a lactose-free milk reward and abstain from touching the target after a tone associated with water. Their judgement bias was then probed by presenting tones between those learned in the discrimination task and measuring their latency to respond by touching the target. A Cox's Proportional Hazards model was used to analyse censored response latency data. Dog and Cue both had a highly significant effect on latency and risk of touching a target. This indicates that judgement bias both exists in dogs and differs between dogs. Test number also had a significant effect, indicating that dogs were less likely to touch the target over successive tests. Detailed examination of the response latencies revealed tipping points where average latency increased by 100% or more, giving an indication of where dogs began to treat ambiguous cues as predicting more negative outcomes than positive ones. Variability scores were calculated to provide an index of optimism using average latency and standard deviation at cues after the tipping point. The use of a mathematical approach to assessing judgement bias data in animal studies offers a more detailed interpretation than traditional statistical analyses. This study provides proof of concept for the use of an automated apparatus for measuring cognitive bias in dogs. PMID:25229458

  3. GIS based location optimization for mobile produced water treatment facilities in shale gas operations

    NASA Astrophysics Data System (ADS)

    Kitwadkar, Amol Hanmant

    Over 60% of the nation's total energy is supplied by oil and natural gas together, and this demand for energy will continue to grow in the future (Radler et al. 2012). The growing demand is pushing the exploration and exploitation of onshore oil and natural gas reservoirs. Hydraulic fracturing has proven not only to create jobs and achieve economic growth, but also to exert considerable stress on natural resources such as water. As water is one of the most important factors in hydraulic fracturing, proper fluids management during the development of a field of operation is perhaps the key element in addressing these issues. Almost 30% of the water used during hydraulic fracturing comes out of the well in the form of flowback water during the first month after the well is fractured (Bai et al. 2012). Handling this large amount of water coming out of newly fractured wells is one of the major issues; after this period the volume drops off and remains constant for a long time (Bai et al. 2012), so permanent facilities can be constructed to take care of the water over the longer term. This paper illustrates the development of a GIS-based tool for optimizing the location of a mobile produced water treatment facility while development is still occurring. A methodology was developed based on a multi-criteria decision analysis (MCDA) to optimize the location of the mobile treatment facilities. The criteria for the MCDA include well density, ease of access (road proximity, accounting for truck hauls), piping minimization (if piping is used), and produced water volume. The area of study is 72 square miles east of Greeley, CO in the Wattenberg Field in northeastern Colorado that will be developed for oil and gas production starting in the year 2014. A quarterly analysis is done so that we can observe the effect of future development plans and current circumstances on the location as we move from quarter to quarter. This will help the operators to

  4. Operational optimization of irrigation scheduling for citrus trees using an ensemble based data assimilation approach

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H.; Han, X.; Martinez, F.; Jimenez, M.; Manzano, J.; Chanzy, A.; Vereecken, H.

    2013-12-01

    Data assimilation (DA) techniques, like the local ensemble transform Kalman filter (LETKF), not only offer the opportunity to update model predictions by assimilating new measurement data in real time, but also provide an improved basis for real-time (DA-based) control. This study focuses on the optimization of real-time irrigation scheduling for fields of citrus trees near Picassent (Spain). For three selected fields the irrigation was optimized with DA-based control, and for other fields irrigation was optimized on the basis of a more traditional approach where reference evapotranspiration for citrus trees was estimated using the FAO method. The performance of the two methods is compared for the year 2013. The DA-based real-time control approach is based on ensemble predictions of soil moisture profiles, using the Community Land Model (CLM). The uncertainty in the model predictions is introduced by feeding the model with weather predictions from an ensemble prediction system (EPS) and uncertain soil hydraulic parameters. The model predictions are updated daily by assimilating soil moisture data measured by capacitance probes. The measurement data are assimilated with the help of the LETKF. The irrigation need was calculated for each of the ensemble members and averaged, and logistical constraints (hydraulics, energy costs) were taken into account in the final assignment of irrigation in space and time. For the operational scheduling based on this approach, only model states, and no model parameters, were updated. Other, non-operational simulation experiments for the same period were carried out where (1) neither ensemble weather forecasts nor DA were used (open loop), (2) only ensemble weather forecasts were used, (3) only DA was used, (4) soil hydraulic parameters were also updated in the data assimilation, and (5) both soil hydraulic and plant-specific parameters were updated. The FAO-based and DA-based real-time irrigation control are compared in terms of soil moisture
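
    The study assimilates capacitance-probe soil moisture with the LETKF inside CLM; as a self-contained illustration of the general idea, the sketch below applies a stochastic (perturbed-observation) ensemble Kalman filter update to a toy three-layer soil-moisture state observed only in the top layer. The state dimensions, error magnitudes, and observation operator are assumptions for the example, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_update(ensemble, obs, obs_err_std, H):
    """Stochastic EnKF analysis step (perturbed observations).
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n_mem = ensemble.shape[0]
    Xf = ensemble - ensemble.mean(axis=0)              # forecast anomalies
    Yf = Xf @ H.T                                      # anomalies in observation space
    R = np.eye(len(obs)) * obs_err_std**2
    Pyy = Yf.T @ Yf / (n_mem - 1) + R
    Pxy = Xf.T @ Yf / (n_mem - 1)
    K = Pxy @ np.linalg.inv(Pyy)                       # Kalman gain
    perturbed = obs + rng.normal(0, obs_err_std, size=(n_mem, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

# Toy setup: 3-layer soil moisture state, a capacitance probe observes the top layer.
truth = np.array([0.30, 0.25, 0.22])
ensemble = truth + rng.normal(0, 0.05, size=(40, 3))   # prior spread ~0.05 m3/m3
H = np.array([[1.0, 0.0, 0.0]])
obs = np.array([truth[0] + rng.normal(0, 0.01)])       # probe error ~0.01 m3/m3

analysis = enkf_update(ensemble, obs, 0.01, H)
print("prior mean   :", ensemble.mean(axis=0).round(3))
print("analysis mean:", analysis.mean(axis=0).round(3))
```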

  5. SU-E-T-539: Fixed Versus Variable Optimization Points in Combined-Mode Modulated Arc Therapy Planning

    SciTech Connect

    Kainz, K; Prah, D; Ahunbay, E; Li, X

    2014-06-01

    Purpose: A novel modulated arc therapy technique, mARC, enables superposition of step-and-shoot IMRT segments upon a subset of the optimization points (OPs) of a continuous-arc delivery. We compare two approaches to mARC planning: one with the number of OPs fixed throughout optimization, and another where the planning system determines the number of OPs in the final plan, subject to an upper limit defined at the outset. Methods: Fixed-OP mARC planning was performed for representative cases using Panther v. 5.01 (Prowess, Inc.), while variable-OP mARC planning used Monaco v. 5.00 (Elekta, Inc.). All Monaco planning used an upper limit of 91 OPs; those OPs with minimal MU were removed during optimization. Plans were delivered, and delivery times recorded, on a Siemens Artiste accelerator using a flat 6MV beam with 300 MU/min rate. Dose distributions measured using ArcCheck (Sun Nuclear Corporation, Inc.) were compared with the plan calculation; the two were deemed consistent if they agreed to within 3.5% in absolute dose and 3.5 mm in distance-to-agreement among > 95% of the diodes within the direct beam. Results: Example cases included a prostate and a head-and-neck planned with a single arc and fraction doses of 1.8 and 2.0 Gy, respectively. Aside from slightly more uniform target dose for the variable-OP plans, the DVHs for the two techniques were similar. For the fixed-OP technique, the number of OPs was 38 and 39, and the delivery time was 228 and 259 seconds, respectively, for the prostate and head-and-neck cases. For the final variable-OP plans, there were 91 and 85 OPs, and the delivery time was 296 and 440 seconds, correspondingly longer than for fixed-OP. Conclusion: For mARC, both the fixed-OP and variable-OP approaches produced comparable-quality plans whose delivery was successfully verified. To keep delivery time per fraction short, a fixed-OP planning approach is preferred.

  6. Development of a fixed bed gasifier model and optimal operating conditions determination

    NASA Astrophysics Data System (ADS)

    Dahmani, Manel; Périlhon, Christelle; Marvillet, Christophe; Hajjaji, Noureddine; Houas, Ammar; Khila, Zouhour

    2017-02-01

    The main objective of this study was to develop a fixed bed gasifier model for palm waste and to identify the optimal operating conditions to produce electricity from synthesis gas. First, the gasifier was simulated using Aspen Plus™ software. Gasification is a long-used thermo-chemical process, but it remains a perfectible technology: solid biomass fuel undergoes incomplete combustion and is converted into synthesis gas through partial oxidation. The operating parameters (temperature and equivalence ratio (ER)) were thereafter varied to investigate their effect on the synthesis gas composition and to provide guidance for future research and development efforts in process design. The equivalence ratio is defined as the ratio of the amount of air actually supplied to the gasifier to the stoichiometric amount of air. Increasing the ER decreases the production of CO and H2 and increases the production of CO2 and H2O, while an increase in temperature increases the fractions of CO and H2. The results show that the optimum temperature for producing a syngas that can be used effectively for power generation is 900°C, and the optimum equivalence ratio is 0.1.
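
    Using the equivalence-ratio definition given above, a quick stoichiometric calculation shows what ER = 0.1 means in terms of air supplied per kilogram of fuel; the H/C and O/C ratios for palm waste below are assumed purely for illustration.

```python
# Equivalence ratio (ER), as defined in the abstract: actual air / stoichiometric air.
# For a biomass pseudo-formula CH_xO_y, stoichiometric O2 is 1 + x/4 - y/2 mol per mol C.
x_H, y_O = 1.5, 0.65                                   # assumed H/C and O/C molar ratios
M_fuel = 12.011 + x_H * 1.008 + y_O * 15.999           # kg/kmol of CH_xO_y
o2_per_mol = 1.0 + x_H / 4.0 - y_O / 2.0               # kmol O2 per kmol fuel
air_per_mol = o2_per_mol / 0.21                        # kmol air per kmol fuel
afr_stoich = air_per_mol * 28.96 / M_fuel              # stoichiometric kg air / kg fuel
ER = 0.10                                              # the optimum reported above
print(f"stoichiometric A/F = {afr_stoich:.2f} kg/kg; at ER = {ER}: "
      f"{ER * afr_stoich:.2f} kg air per kg fuel")
```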

  7. Process optimization of helium cryo plant operation for SST-1 superconducting magnet system

    NASA Astrophysics Data System (ADS)

    Panchal, P.; Panchal, R.; Patel, R.; Mahesuriya, G.; Sonara, D.; Srikanth G, L. N.; Garg, A.; Christian, D.; Bairagi, N.; Sharma, R.; Patel, K.; Shah, P.; Nimavat, H.; Purwar, G.; Patel, J.; Tanna, V.; Pradhan, S.

    2017-02-01

    Several plasma discharge campaigns have been carried out in the steady state superconducting tokamak (SST-1). SST-1 has a toroidal field (TF) and poloidal field (PF) superconducting magnet system (SCMS). The TF coil system is cooled to 4.5-4.8 K at 1.5-1.7 bar(a) under two-phase flow conditions using a 1.3 kW helium cryo plant. Experience revealed that the PF coils demand higher pressure heads, even at lower temperatures, in comparison to the TF coils because of their longer hydraulic path lengths. Thermal runaway is observed within the PF coils because a single common control valve serves all PF coils in a distribution system with non-uniform path lengths. Thus it has been routine practice to stop cooling of the PF path and continue only TF cooling at an SCMS inlet temperature of ~14 K. In order to achieve uniform cool-down, a different control logic is adopted to make the cryo system stable. In the adopted control logic, the SCMS is cooled down to 80 K at a constant inlet pressure of 9 bar(a). After authorization of turbine A/B, the SCMS inlet pressure is gradually controlled by the refrigeration J-T valve to achieve a stable operating window for the cryo system. This paper presents the process optimization of cryo plant operation for the SST-1 SCMS.

  8. Performance, stability and operation voltage optimization of screen-printed aqueous supercapacitors

    PubMed Central

    Lehtimäki, Suvi; Railanmaa, Anna; Keskinen, Jari; Kujala, Manu; Tuukkanen, Sampo; Lupo, Donald

    2017-01-01

    Harvesting micropower energy from the ambient environment requires an intermediate energy storage, for which printed aqueous supercapacitors are well suited due to their low cost and environmental friendliness. In this work, a systematic study of a large set of devices is used to investigate the effect of process variability and operating voltage on the performance and stability of screen printed aqueous supercapacitors. The current collectors and active layers are printed with graphite and activated carbon inks, respectively, and aqueous NaCl used as the electrolyte. The devices are characterized through galvanostatic discharge measurements for quantitative determination of capacitance and equivalent series resistance (ESR), as well as impedance spectroscopy for a detailed study of the factors contributing to ESR. The capacitances are 200–360 mF and the ESRs 7.9–12.7 Ω, depending on the layer thicknesses. The ESR is found to be dominated by the resistance of the graphite current collectors and is compatible with applications in low-power distributed electronics. The effects of different operating voltages on the capacitance, leakage and aging rate of the supercapacitors are tested, and 1.0 V found to be the optimal choice for using the devices in energy harvesting applications. PMID:28382962
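
    A sketch of the kind of galvanostatic-discharge analysis described above: capacitance from the discharge slope and ESR from the instantaneous IR drop at the load step. The synthetic curve and the ESR convention (voltage drop divided by current at the onset of discharge) are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def supercap_params(t, v, i_discharge, t_fit=(1.0, 5.0)):
    """Estimate capacitance and ESR from a constant-current discharge curve.
    C from the slope dV/dt over a linear window; ESR from the instantaneous
    IR drop at the start of discharge (one common convention)."""
    # ESR: voltage step between the rest value and the first sample under load
    esr = (v[0] - v[1]) / i_discharge
    # Capacitance: C = I / |dV/dt| over the fitting window
    mask = (t >= t_fit[0]) & (t <= t_fit[1])
    slope = np.polyfit(t[mask], v[mask], 1)[0]
    cap = i_discharge / abs(slope)
    return cap, esr

# Synthetic discharge of a 300 mF, 10 ohm-ESR device at 1 mA (toy data).
i_d, C_true, esr_true = 1e-3, 0.300, 10.0
t = np.linspace(0.0, 10.0, 501)
v = 1.0 - i_d * t / C_true - i_d * esr_true
v = np.insert(v, 0, 1.0)                     # rest voltage before the load step
t = np.insert(t, 0, -0.02)
cap, esr = supercap_params(t, v, i_d)
print(f"C = {cap * 1000:.0f} mF   ESR = {esr:.1f} ohm")
```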

  9. Optimization of a tuned vibration absorber in a multibody system by operational analysis

    NASA Astrophysics Data System (ADS)

    Infante, F.; Perfetto, S.; Mayer, D.; Herold, S.

    2016-09-01

    Mechanical vibration in a drive-train can affect the operation of the system and must be kept below structural thresholds. For this reason tuned vibration absorbers (TVAs) are usually employed; they are optimally designed for a single degree of freedom system using the Den Hartog technique. On the other hand, vibrations can be used to produce electrical energy that is exploitable locally, avoiding the issues of transferring it from stationary devices to rotating parts. Thus, the design of an integrated device for energy harvesting and vibration reduction is proposed for use in the drive-train. By investigating the dynamic torque in the system under real operation, the accuracy of a numerical model of the multibody system is evaluated. In this study, this model is initially used for the definition of the TVA. An energetic procedure is applied in order to reduce the multibody system to an equivalent single degree of freedom system for a particular natural mode, from which the design parameters of the absorber are obtained. Furthermore, the introduction of the TVA into the model is considered to evaluate the vibration reduction. Finally, an evaluation of the power generated by the piezo transducer and its feedback on the dynamics of the drive-train is performed.
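
    Once the multibody system has been reduced to an equivalent single degree of freedom, the classical Den Hartog tuning gives the absorber parameters in closed form; the sketch below applies those textbook formulas to assumed (illustrative) modal mass and frequency values, not the values from this study.

```python
import numpy as np

def den_hartog_tva(m_primary, f_primary_hz, mass_ratio=0.05):
    """Classical Den Hartog tuning of a TVA for an undamped primary system.
    Returns absorber mass, stiffness, and damping for a given mass ratio."""
    mu = mass_ratio
    m_a = mu * m_primary
    f_opt = 1.0 / (1.0 + mu)                         # optimal frequency ratio f_a / f_p
    zeta_opt = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu)**3))
    w_a = f_opt * 2.0 * np.pi * f_primary_hz
    k_a = m_a * w_a**2
    c_a = 2.0 * zeta_opt * m_a * w_a
    return m_a, k_a, c_a

# Toy numbers standing in for the equivalent SDOF of the targeted drive-train mode.
m_a, k_a, c_a = den_hartog_tva(m_primary=12.0, f_primary_hz=35.0, mass_ratio=0.05)
print(f"absorber mass {m_a:.2f} kg, stiffness {k_a:.0f} N/m, damping {c_a:.1f} N s/m")
```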

  10. Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles

    NASA Astrophysics Data System (ADS)

    Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.

    2016-03-01

    A thermoelectric generator (TEG) system demonstrator for diesel electric locomotives with the objective of reducing the mechanical load on the thermoelectric modules (TEM) is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and is the basis for the optimization of the TEG's geometry by a multi-objective genetic algorithm. The best solution has a maximum power output of approx. 2.7 kW and exceeds neither the maximum back pressure of the diesel engine nor the maximum TEM hot-side temperature. To maximize the reduction of fuel consumption, an operating strategy regarding the system power output of the TEG system is developed. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power train and lateral dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.

  11. Modeling of delamination in carbon/epoxy composite laminates under four point bending for damage detection and sensor placement optimization

    NASA Astrophysics Data System (ADS)

    Adu, Stephen Aboagye

    Laminated carbon fiber-reinforced polymer composites (CFRPs) possess very high specific strength and stiffness, which accounts for their wide use in structural applications, especially in the aerospace industry, where the trade-off between weight and strength is critical. Even though they possess a much larger strength-to-weight ratio than metals such as aluminum and lithium, damage in those metals is rather localized. CFRPs, however, generate complex damage zones at stress concentrations, with damage progressing in the form of matrix cracking, delamination, fiber fracture, or fiber/matrix de-bonding. This thesis is aimed at performing stiffness degradation analysis on composite coupons containing embedded delamination, using the four-point bend test. A Lamb wave-based approach is used as a structural health monitoring (SHM) technique for damage detection in the composite coupons. Tests were carried out on unidirectional composite coupons obtained from panels manufactured with a pre-existing defect in the form of an embedded delamination in a laminate of stacking sequence [0₆/90₄/0₆]T. The composite coupons were obtained from panels fabricated using vacuum assisted resin transfer molding (VARTM), a liquid composite molding (LCM) process. The discontinuity in the laminate structure, due to the de-bonding of the middle plies caused by the insertion of a 0.3 mm thick wax film between the middle four 90° plies, is detected using Lamb waves generated by surface-mounted piezoelectric (PZT) actuators. From the surface-mounted piezoelectric sensors, responses are obtained for both the undamaged coupon (no defect) and the damaged (delaminated) coupon. A numerical study of the embedded crack propagation in the composite coupon under four-point and three-point bending was carried out using FEM. Model validation was then carried out by comparing the numerical results with the experimental ones. Here, a surface-to-surface contact property was used to model the

  12. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; ...

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
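
    A minimal Gaussian naive Bayes classifier in the spirit of the basic Bayesian method described above, returning per-class probabilities rather than hard labels; the two features and the liquid/ice training distributions below are made-up stand-ins for the ARM radar and lidar observables, not M-PACE data.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Bayesian classifier with per-class Gaussian likelihoods,
    returning class probabilities rather than hard labels."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict_proba(self, X):
        # log p(x|c) + log p(c) for each class, then normalized posteriors
        diff2 = (X[:, None, :] - self.mu[None, :, :]) ** 2
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                          + diff2 / self.var[None, :, :]).sum(axis=2)
        log_post = log_lik + np.log(self.prior)[None, :]
        log_post -= log_post.max(axis=1, keepdims=True)
        p = np.exp(log_post)
        return p / p.sum(axis=1, keepdims=True)

# Toy training data: features stand in for (radar reflectivity dBZ, spectrum width).
rng = np.random.default_rng(6)
liquid = rng.normal([-25.0, 0.15], [5.0, 0.05], size=(300, 2))
ice = rng.normal([0.0, 0.35], [7.0, 0.10], size=(300, 2))
X = np.vstack([liquid, ice])
y = np.array([0] * 300 + [1] * 300)          # 0 = liquid, 1 = ice

clf = GaussianNaiveBayes().fit(X, y)
print(clf.predict_proba(np.array([[-20.0, 0.2], [5.0, 0.4]])).round(3))
```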

  13. Optimal Merging of Point Sources Extracted from Spitzer Space Telescope Data in Multiple Infrared Passbands Versus Simple General Source Association

    NASA Astrophysics Data System (ADS)

    Laher, R. R.; Fowler, J. W.

    2008-08-01

    For collating point-source flux measurements derived from multiple infrared passbands of Spitzer-Space-Telescope data -- e.g., channels 1-4 of the Infrared Array Camera (IRAC) and channels 1-3 of the Multiband Imaging Photometer for Spitzer (MIPS) -- it is best to use the `bandmerge' software developed at the Spitzer Science Center rather than the relatively simple method of general source association (GSA). The former method uses both source positions and positional uncertainties to form a chi-squared statistic that can be thresholded for optimal matching, while the latter method finds nearest neighbors across bands that fall within a user-specified radius of the primary source. Our assertion is supported by our study of completeness (C) vs. reliability (R) for the two methods, which involved MIPS-24/IRAC-1 matches in the SWIRE Chandra Deep Field South. Both methods can achieve C = 98%, but with R=92.7% for GSA vs. R=97.4% for bandmerge. With almost a factor of three lower in unreliability (1-R), bandmerge is the clear winner of this comparison.
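
    The chi-squared matching statistic described above can be sketched as follows: a toy two-band matcher that uses positions and per-source uncertainties with a chi-squared threshold (9.21, roughly the 99th percentile for two degrees of freedom). This is only an illustration of the idea, not the Spitzer bandmerge code.

```python
import numpy as np

def chi2_match(pos1, sig1, pos2, sig2, threshold=9.21):
    """Pair sources across two bands by minimizing a positional chi-squared
    built from both positions and their 1-sigma uncertainties."""
    matches = []
    for i, (p1, s1) in enumerate(zip(pos1, sig1)):
        d = pos2 - p1
        chi2 = (d[:, 0]**2 / (s1[0]**2 + sig2[:, 0]**2)
                + d[:, 1]**2 / (s1[1]**2 + sig2[:, 1]**2))
        j = int(np.argmin(chi2))
        if chi2[j] < threshold:
            matches.append((i, j, float(chi2[j])))
    return matches

# Toy catalogs: (RA, Dec) offsets in arcsec with per-source 1-sigma uncertainties.
rng = np.random.default_rng(7)
band1 = rng.uniform(0, 60, size=(20, 2))
sig1 = np.full((20, 2), 0.3)
band2 = band1 + rng.normal(0, 0.3, size=(20, 2))      # same sources, jittered positions
sig2 = np.full((20, 2), 0.3)
print(f"{len(chi2_match(band1, sig1, band2, sig2))} of 20 sources matched")
```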

  14. Optimization of cloud point extraction and solid phase extraction methods for speciation of arsenic in natural water using multivariate technique.

    PubMed

    Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira

    2009-09-28

    The simple and rapid pre-concentration techniques of cloud point extraction (CPE) and solid phase extraction (SPE) were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. Total iAs in the water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of the experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.

  15. Downstream process synthesis for biochemical production of butanol, ethanol, and acetone from grains: generation of optimal and near-optimal flowsheets with conventional operating units.

    PubMed

    Liu, Jiahong; Fan, L T; Seib, Paul; Friedler, Ferenc; Bertok, Botond

    2004-01-01

    Manufacturing butanol, ethanol, and acetone through grain fermentation has been attracting increasing research interest. In the production of these chemicals from fermentation, the cost of product recovery constitutes the major portion of the total production cost. Developing cost-effective flowsheets for the downstream processing is, therefore, crucial to enhancing the economic viability of this manufacturing method. The present work is concerned with the synthesis of such a process that minimizes the cost of the downstream processing. At the outset, a wide variety of processing equipment and unit operations, i.e., operating units, is selected for possible inclusion in the process. Subsequently, the exactly defined superstructure with minimal complexity, termed maximal structure, is constructed from these operating units with the rigorous and highly efficient graph-theoretic method for process synthesis based on process graphs (P-graphs). Finally, the optimal and near-optimal flowsheets in terms of cost are identified.
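
    As a rough illustration of the graph-theoretic idea behind the maximal structure, the sketch below treats operating units as nodes that consume and produce materials and keeps only those units that can be fed from the raw materials and that contribute to the desired products. This is a heavily simplified stand-in for the rigorous MSG/SSG algorithms of the P-graph framework, and the unit names and streams are hypothetical placeholders, not the flowsheet from the paper.

```python
# Highly simplified P-graph-style sketch (not the rigorous MSG/SSG algorithms):
# each operating unit is (input materials, output materials). The "maximal
# structure" keeps units that are both producible from raw materials and
# relevant to the products. Names below are hypothetical.
units = {
    "fermenter":   ({"grain_mash"}, {"broth"}),
    "beer_column": ({"broth"}, {"crude_solvents", "stillage"}),
    "decanter":    ({"crude_solvents"}, {"butanol", "aqueous_phase"}),
    "dryer":       ({"stillage"}, {"feed_byproduct"}),
    "unused_unit": ({"exotic_feed"}, {"butanol"}),  # feed not producible
}
raw_materials = {"grain_mash"}
products = {"butanol"}

# Forward pass: materials reachable from the raw materials.
producible = set(raw_materials)
changed = True
while changed:
    changed = False
    for inputs, outputs in units.values():
        if inputs <= producible and not outputs <= producible:
            producible |= outputs
            changed = True

# Backward pass: materials needed to make the products.
needed = set(products)
changed = True
while changed:
    changed = False
    for inputs, outputs in units.values():
        if outputs & needed and not inputs <= needed:
            needed |= inputs
            changed = True

maximal_structure = [
    name for name, (inputs, outputs) in units.items()
    if inputs <= producible and outputs & needed
]
print(maximal_structure)  # unused_unit and dryer are excluded
```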

  16. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).

  17. Determination of the accuracy and optimal cut-off point for ELISA test in diagnosis of human brucellosis in Iran.

    PubMed

    Hasibi, Mehrdad; Jafari, Sirus; Mortazavi, Habibollah; Asadollahi, Marjan; Esmaeeli Djavid, Gholamreza

    2013-01-01

    In endemic areas, the most challenging problem for brucellosis is finding a reliable diagnostic method. In this case-control study, we investigated the accuracy of the ELISA test for diagnosis of human brucellosis and determined the optimal cut-off value for ELISA results in Iran. The laboratory diagnosis of brucellosis was performed by blood isolation of the Brucella organism with a BACTEC 9240 system and/or detection of Brucella antibodies by the standard agglutination test (titer ≥ 1:160). Serum levels of ELISA IgG and ELISA IgM from 56 confirmed cases of brucellosis and 126 controls were compared using box plot graphs and Receiver Operating Characteristic (ROC) curves. Box plot graphs showed a high degree of dispersion for the IgG and IgM data in patients compared with all controls. We observed partial overlap for the IgM data (but not for IgG) between cases and controls. The area under the ROC curve for distinguishing between cases and controls was larger for IgG than for IgM. Based on the results of this study, the ELISA IgG test was more reliable than the ELISA IgM test for diagnosis of human brucellosis in Iran. Cut-offs of 10 IU/ml and 50 IU/ml gave the highest sensitivity (92.9%) and the highest specificity (100%) for the ELISA IgG test, respectively.
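
    A minimal sketch of how such a cut-off can be read off an ROC curve, here using Youden's J (sensitivity + specificity - 1) as one common criterion. The synthetic IgG values below are assumptions for illustration, not the study's data, and the study itself reports separate cut-offs for maximum sensitivity and maximum specificity rather than a single Youden-optimal point.

```python
# Illustrative ROC/cut-off sketch with synthetic ELISA IgG values (IU/ml).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
controls = rng.lognormal(mean=1.0, sigma=0.6, size=126)  # cluster low
cases = rng.lognormal(mean=3.5, sigma=0.7, size=56)      # cluster high

y_true = np.concatenate([np.zeros_like(controls), np.ones_like(cases)])
scores = np.concatenate([controls, cases])

fpr, tpr, thresholds = roc_curve(y_true, scores)
print(f"AUC = {roc_auc_score(y_true, scores):.3f}")

j = tpr - fpr                 # Youden's J at each candidate threshold
best = np.argmax(j)
print(f"Youden-optimal cut-off ≈ {thresholds[best]:.1f} IU/ml "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```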

  18. High-fidelity two-qubit gates via dynamical decoupling of local 1 /f noise at the optimal point

    NASA Astrophysics Data System (ADS)

    D'Arrigo, A.; Falci, G.; Paladino, E.

    2016-08-01

    We investigate the possibility of achieving high-fidelity universal two-qubit gates by supplementing optimal tuning of individual qubits with dynamical decoupling (DD) of local 1/f noise. We consider simultaneous local pulse sequences applied during the gate operation and compare the efficiencies of periodic, Carr-Purcell, and Uhrig DD with hard π pulses along two directions (πz/y pulses). We present analytical perturbative results (Magnus expansion) in the quasistatic noise approximation combined with numerical simulations for realistic 1/f noise spectra. The gate efficiency is studied as a function of the gate duration, the number n of pulses, and the high-frequency roll-off. We find that the gate error is nonmonotonic in n, decreasing as n^(-α) in the asymptotic limit, with α ≥ 2 depending on the DD sequence. In this limit πz Uhrig is the most efficient scheme for quasistatic 1/f noise, but it is highly sensitive to the soft UV cutoff. For a small number of pulses, πz control yields anti-Zeno behavior, whereas πy pulses minimize the error for finite n. For the current noise figures in superconducting qubits, two-qubit gate errors of ~10^(-6), meeting the requirements for fault-tolerant quantum computation, can be achieved. The Carr-Purcell-Meiboom-Gill sequence is the most efficient procedure, stable for 1/f noise with UV cutoff up to gigahertz.
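
    For reference, the sketch below prints the standard normalized pulse times t_j/T for the three sequence families compared above (n π pulses over a gate of duration T). These are the textbook definitions of periodic, Carr-Purcell(-Meiboom-Gill), and Uhrig timing, not code from the paper.

```python
# Normalized pulse times t_j / T for common dynamical-decoupling sequences,
# assuming n hard pi pulses applied during an interval of duration T.
import math

def periodic(n):
    """Periodic DD: equally spaced pulses at t_j = T * j / (n + 1)."""
    return [j / (n + 1) for j in range(1, n + 1)]

def cpmg(n):
    """Carr-Purcell(-Meiboom-Gill): pulses at t_j = T * (j - 1/2) / n."""
    return [(j - 0.5) / n for j in range(1, n + 1)]

def uhrig(n):
    """Uhrig DD: pulses at t_j = T * sin^2(pi * j / (2n + 2))."""
    return [math.sin(math.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]

n = 4
for name, seq in (("periodic", periodic), ("CPMG", cpmg), ("Uhrig", uhrig)):
    print(f"{name:9s}", ["%.3f" % t for t in seq(n)])
```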

  19. Live Operation Data Collection Optimization and Communication for the Domestic Nuclear Detection Office’s Rail Test Center

    SciTech Connect

    Gelston, Gariann M.

    2010-04-06

    For the Domestic Nuclear Detection Office’s Rail Test Center (DNDO’s RTC), knowledge of port operations together with flexible collection tools and techniques is essential to both the design and the implementation of technology testing in live operational settings. Richer contextual data, flexible procedures, and rapid availability of information are key to addressing the challenges of optimization, validation, and analysis when collecting data in a live operational setting. These concepts need to be integrated into technology testing designs and into the data collection, validation, and analysis processes. A modified data collection technique with a two-phase live operation test method is proposed.

  20. A Time Scheduling Model of Logistics Service Supply Chain Based on the Customer Order Decoupling Point: A Perspective from the Constant Service Operation Time

    PubMed Central

    Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics services, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, helps increase its competitiveness. The effect of the customer order decoupling point (CODP) on time scheduling performance should therefore be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used for the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or brought forward, but not infinitely advanced or infinitely delayed. Optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance brought by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative concern for cost versus service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC. PMID:24715818