Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore further advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high order non-dissipative base scheme, followed by an adaptive nonlinear filter containing shock-capturing dissipation. A useful property of the filter scheme is that the base scheme and the filter are stand-alone modules by design. The idea of designing a well-balanced filter scheme is therefore straightforward: choose a well-balanced base scheme together with a well-balanced filter (both of high order). A typical class of such schemes shown in this paper is high order central difference/predictor-corrector (PC) schemes combined with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the strengths of filter methods and well-balanced schemes: it preserves certain steady state solutions exactly; it captures small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. It thus exhibits high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
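The two-step modular structure described above can be illustrated with a toy sketch (our own illustration, not the authors' scheme): a non-dissipative central base step for Burgers' equation, followed by a filter that adds shock-capturing dissipation in flux form only where a simple gradient sensor fires. Grid size, sensor threshold and the dissipation coefficient are all hypothetical choices.

```python
import numpy as np

def filter_step_demo(nsteps=50):
    # Toy base-scheme + nonlinear-filter structure on u_t + (u^2/2)_x = 0,
    # periodic domain; parameters are illustrative only.
    N = 200; dx = 1.0 / N
    x = np.arange(N) * dx
    u = 0.5 + 0.25 * np.sin(2 * np.pi * x)
    dt = 0.4 * dx / np.abs(u).max()

    def rhs(v):
        # non-dissipative 2nd-order central discretization of -(u^2/2)_x
        f = 0.5 * v * v
        return -(np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

    for _ in range(nsteps):
        # Step 1: full time step of the base scheme (RK2 midpoint).
        u1 = u + 0.5 * dt * rhs(u)
        u = u + dt * rhs(u1)
        # Step 2: nonlinear filter -- dissipation only where the face jump
        # exceeds a smoothness threshold; flux form keeps it conservative.
        du = np.roll(u, -1) - u                      # jump at face i+1/2
        sensor = (np.abs(du) > 2.0 * dx).astype(float)
        D = 0.5 * np.abs(u).max() * sensor * du      # dissipative face flux
        u = u + dt / dx * (D - np.roll(D, 1))
    return x, u

x, u = filter_step_demo()
```

Because both the base step and the filter are written in conservation form, the cell average of u is preserved to machine precision, which is one reason the two modules can be designed independently.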
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Yulong; Shu, Chi-wang; Noelle, Sebastian
This note aims at demonstrating the advantage of moving-water well-balanced schemes over still-water well-balanced schemes for the shallow water equations. We concentrate on numerical examples with solutions near a moving-water equilibrium. For such examples, still-water well-balanced methods are not capable of capturing the small perturbations of the moving-water equilibrium and may generate significant spurious oscillations, unless an extremely refined mesh is used. On the other hand, moving-water well-balanced methods perform well in these tests. The numerical examples in this note clearly demonstrate the importance of utilizing moving-water well-balanced methods for solutions near a moving-water equilibrium.
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2016-03-01
Context. Many problems in astrophysics feature flows which are close to hydrostatic equilibrium. However, standard numerical schemes for compressible hydrodynamics may be deficient in approximating this stationary state, where the pressure gradient is nearly balanced by gravitational forces. Aims: We aim to develop a second-order well-balanced scheme for the Euler equations. The scheme is designed to mimic a discrete version of the hydrostatic balance. It can therefore resolve a discrete hydrostatic equilibrium exactly (up to machine precision) and propagate perturbations on top of this equilibrium very accurately. Methods: A local second-order hydrostatic-equilibrium-preserving pressure reconstruction is developed. Combined with a standard central discretization of the gravitational source term and numerical fluxes that resolve stationary contact discontinuities exactly, the well-balanced property is achieved. Results: The resulting well-balanced scheme is robust and simple enough to be easily implemented within any existing computer code that solves the compressible hydrodynamics equations with explicit or implicit time stepping. We demonstrate the performance of the well-balanced scheme for several astrophysically relevant applications: wave propagation in stellar atmospheres, a toy model for core-collapse supernovae, convection in carbon shell burning, and a realistic proto-neutron star.
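The mechanism behind such hydrostatic pressure reconstructions can be sketched in one dimension (a simplified illustration under our own assumptions, not the paper's full scheme): build a discrete hydrostatic column, extrapolate cell-centered pressures to faces assuming local hydrostatic variation inside each cell, and observe that the face values match from both sides so the flux difference cancels the gravity source exactly, whereas a naive central difference leaves an O(dx^2) residual.

```python
import numpy as np

g, N, dx = 1.0, 100, 1.0 / 100
rho = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, N))   # arbitrary smooth density
p = np.empty(N)
p[0] = 10.0
for i in range(N - 1):                               # discrete hydrostatic column:
    p[i + 1] = p[i] - 0.5 * (rho[i] + rho[i + 1]) * g * dx

# Hydrostatic extrapolation of p to interface i+1/2 from both neighbours.
pL = p[:-1] - 0.5 * rho[:-1] * g * dx                # from cell i
pR = p[1:] + 0.5 * rho[1:] * g * dx                  # from cell i+1
jump = np.abs(pL - pR).max()                         # ~ machine precision

# Momentum residual (pressure-flux difference + gravity source), interior cells.
res_wb = (pL[1:] - pL[:-1]) / dx + rho[1:-1] * g     # balances exactly
res_naive = (p[2:] - p[:-2]) / (2 * dx) + rho[1:-1] * g  # small but nonzero
```

The well-balanced residual vanishes to rounding error, while the naive central discretization leaves a truncation-level residual that would slowly stir a supposedly static atmosphere.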
A Well-Balanced Central-Upwind Scheme for the 2D Shallow Water Equations on Triangular Meshes
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We are interested in approximating solutions of the two-dimensional shallow water equations with a bottom topography on triangular meshes. We show that there is a certain flexibility in choosing the numerical fluxes in the design of semi-discrete Godunov-type central schemes. We take advantage of this fact to generate a new second-order, central-upwind method for the two-dimensional shallow water equations that is well-balanced. We demonstrate the accuracy of our method as well as its balance properties in a variety of examples.
Building fast well-balanced two-stage numerical schemes for a model of two-phase flows
NASA Astrophysics Data System (ADS)
Thanh, Mai Duc
2014-06-01
We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are combined into a numerical flux formed as a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to possess a desirable property: they are fast and stable. Tests show that the method remains stable up to the parameter value CFL, so any value of the parameter between zero and CFL is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give desirable accuracy. The errors and CPU times of these schemes and of a Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
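The convex-combination flux family can be sketched on a scalar conservation law (our own minimal illustration on Burgers' equation, without the two-phase model or its source terms): F = (1 - theta)*F_LF + theta*F_RI, where theta = 0 recovers Lax-Friedrichs and theta = 1/2 corresponds to the FAST1 member.

```python
import numpy as np

def step(u, dt, dx, theta):
    # One update of u_t + (u^2/2)_x = 0 with the convex-combination flux.
    f = lambda v: 0.5 * v * v
    up = np.roll(u, -1)                              # u_{i+1}
    F_LF = 0.5 * (f(u) + f(up)) - 0.5 * (dx / dt) * (up - u)
    ustar = 0.5 * (u + up) - 0.5 * (dt / dx) * (f(up) - f(u))
    F_RI = f(ustar)                                  # Richtmyer flux
    F = (1.0 - theta) * F_LF + theta * F_RI          # flux at face i+1/2
    return u - dt / dx * (F - np.roll(F, 1))

N = 100; dx = 1.0 / N
x = np.arange(N) * dx
u = np.where(x < 0.5, 1.0, 0.2)                      # Riemann-type data
dt = 0.9 * dx                                        # CFL = 0.9 w.r.t. max|u| = 1
mass0 = u.mean()
for _ in range(100):
    u = step(u, dt, dx, theta=0.5)                   # FAST1 value of theta
```

Writing the combined flux at the faces keeps the update conservative regardless of theta, which is why the whole family shares the shock-capturing property of its two ingredients.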
Paul C. Van Deusen; Linda S. Heath
2010-01-01
Weighted estimation methods for analysis of mapped plot forest inventory data are discussed. The appropriate weighting scheme can vary depending on the type of analysis and graphical display. Both statistical issues and user expectations need to be considered in these methods. A weighting scheme is proposed that balances statistical considerations and the logical...
NASA Astrophysics Data System (ADS)
Canestrelli, Alberto; Dumbser, Michael; Siviglia, Annunziato; Toro, Eleuterio F.
2010-03-01
In this paper, we study the numerical approximation of the two-dimensional morphodynamic model governed by the shallow water equations and bed-load transport, following a coupled solution strategy. The resulting system of governing equations contains non-conservative products and is solved simultaneously within each time step. The numerical solution is obtained using a new high-order accurate centered scheme of the finite volume type on unstructured meshes, which is an extension of the one-dimensional PRICE-C scheme recently proposed in Canestrelli et al. (2009) [5]. The resulting first-order accurate centered method is then extended to high order of accuracy in space via a high order WENO reconstruction technique and in time via a local continuous space-time Galerkin predictor method. The scheme is applied to the shallow water equations and the well-balanced properties of the method are investigated. Finally, we apply the new scheme to different test cases with both fixed and movable bed. An attractive feature of the proposed method is that it is particularly suitable for engineering applications, since it allows practitioners to adopt the sediment transport formula that best fits the field data.
Enhanced method of fast re-routing with load balancing in software-defined networks
NASA Astrophysics Data System (ADS)
Lemeshko, Oleksandr; Yeremenko, Oleksandra
2017-11-01
A two-level method of fast re-routing with load balancing in a software-defined network (SDN) is proposed. The novelty of the method consists, firstly, in the introduction of a two-level hierarchy for calculating the routing variables responsible for the formation of the primary and backup paths, and secondly, in ensuring a balanced load on the communication links of the network, which meets the requirements of the traffic engineering concept. The method provides implementation of link, node, path, and bandwidth protection schemes for fast re-routing in SDN. Separating the calculation of the primary (lower level) and backup (upper level) routes across two hierarchical levels, in accordance with the interaction prediction principle, made it possible to replace the initial large nonlinear optimization problem with the iterative solution of linear optimization problems of half the dimension. The analysis of the proposed method confirmed its efficiency and effectiveness in terms of obtaining optimal solutions for ensuring balanced load of communication links and implementing the required network element protection schemes for fast re-routing in SDN.
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.
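The baseline 'wrap mapped' column assignment mentioned above is simple enough to sketch (an illustration of the mapping only, not of the factorization itself): column j goes to processor j mod P, which balances column counts to within one but scatters each processor's columns across the matrix, in contrast to a contiguous block mapping.

```python
# Round-robin ("wrap") column-to-processor mapping vs. a contiguous block
# mapping, for ncols matrix columns over nprocs processors.

def wrap_map(ncols, nprocs):
    return [j % nprocs for j in range(ncols)]

def block_map(ncols, nprocs):
    size = -(-ncols // nprocs)                # ceil(ncols / nprocs)
    return [min(j // size, nprocs - 1) for j in range(ncols)]

owners = wrap_map(10, 4)
loads = [owners.count(p) for p in range(4)]   # near-equal per-processor counts
```

The wrap mapping's per-processor loads differ by at most one column, illustrating the load-balance side of the communication/load-balance tradeoff the study reports.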
Balanced Central Schemes for the Shallow Water Equations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We present a two-dimensional, well-balanced, central-upwind scheme for approximating solutions of the shallow water equations in the presence of a stationary bottom topography on triangular meshes. Our starting point is the recent central scheme of Kurganov and Petrova (KP) for approximating solutions of conservation laws on triangular meshes. In order to extend this scheme from systems of conservation laws to systems of balance laws one has to find an appropriate discretization of the source terms. We first show that for general triangulations there is no discretization of the source terms that corresponds to a well-balanced form of the KP scheme. We then derive a new variant of a central scheme that can be balanced on triangular meshes. We note in passing that it is straightforward to extend the KP scheme to general unstructured conformal meshes. This extension allows us to recover our previous well-balanced scheme on Cartesian grids. We conclude with several simulations, verifying the second-order accuracy of our scheme as well as its well-balanced properties.
Saleem, M Rehan; Ashraf, Waqas; Zia, Saqib; Ali, Ishtiaq; Qamar, Shamsul
2018-01-01
This paper is concerned with the derivation of a well-balanced kinetic scheme to approximate a shallow flow model incorporating non-flat bottom topography and horizontal temperature gradients. The considered model equations, also known as the Ripa system, are the non-homogeneous shallow water equations extended with horizontal temperature gradients over a non-uniform bottom topography. Due to the presence of the temperature gradient terms, the steady state at rest is of primary interest from the physical point of view. However, capturing this steady state is a challenging task for the applied numerical methods. The proposed well-balanced kinetic flux vector splitting (KFVS) scheme is non-oscillatory and second-order accurate. The second-order accuracy of the scheme is obtained by considering a MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. The scheme is applied to solve the model equations in one and two space dimensions. Several numerical case studies are carried out to validate the proposed numerical algorithm. The numerical results are compared with those of the staggered central NT scheme. The results are also in good agreement with recently published results in the literature, verifying the potential, efficiency, accuracy and robustness of the suggested numerical scheme.
A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.
Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani
2012-01-01
Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count, and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.
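The multi-criteria route assessment combined with ACO-style selection can be sketched as follows. This is purely illustrative: the weights, the linear form of the quality function, and the alpha/beta exponents are our hypothetical choices, not the formulas from the paper.

```python
# Hypothetical route quality combining minimum residual battery, average
# route energy, and hop count, plus a pheromone-weighted probabilistic
# next-hop choice in the usual ACO fashion.

def route_quality(residual_energies, w_min=0.5, w_avg=0.3, w_hop=0.2):
    hops = len(residual_energies)
    e_min = min(residual_energies)            # weakest node on the route
    e_avg = sum(residual_energies) / hops     # average route energy
    return w_min * e_min + w_avg * e_avg + w_hop / hops

def choice_probs(pheromone, quality, alpha=1.0, beta=2.0):
    # probability of each candidate next hop ~ tau^alpha * q^beta
    scores = [(t ** alpha) * (q ** beta) for t, q in zip(pheromone, quality)]
    total = sum(scores)
    return [s / total for s in scores]

p = choice_probs([1.0, 1.0], [0.9, 0.3])      # equal pheromone, unequal quality
```

With equal pheromone, the higher-quality candidate receives the larger selection probability, which is the mechanism that steers traffic away from energy-depleted paths.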
Well-balanced Schemes for Gravitationally Stratified Media
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2015-10-01
We present a well-balanced scheme for the Euler equations with gravitation. The scheme is capable of maintaining exactly (up to machine precision) a discrete hydrostatic equilibrium without any assumption on a thermodynamic variable such as specific entropy or temperature. The well-balanced scheme is based on a local hydrostatic pressure reconstruction. Moreover, it is computationally efficient and can be incorporated into any existing algorithm in a straightforward manner. The presented scheme improves over standard ones especially when flows close to a hydrostatic equilibrium have to be simulated. The performance of the well-balanced scheme is demonstrated on an astrophysically relevant application: a toy model for core-collapse supernovae.
A well-balanced scheme for Ten-Moment Gaussian closure equations with source term
NASA Astrophysics Data System (ADS)
Meena, Asha Kumari; Kumar, Harish
2018-02-01
In this article, we consider the Ten-Moment equations with source term, which occurs in many applications related to plasma flows. We present a well-balanced second-order finite volume scheme. The scheme is well-balanced for general equation of state, provided we can write the hydrostatic solution as a function of the space variables. This is achieved by combining hydrostatic reconstruction with contact preserving, consistent numerical flux, and appropriate source discretization. Several numerical experiments are presented to demonstrate the well-balanced property and resulting accuracy of the proposed scheme.
New coherent laser communication detection scheme based on channel-switching method.
Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren
2015-04-01
A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors, which output the in-phase (I) and quadrature-phase (Q) channel signal currents, respectively. With this method, ultrahigh-speed analog-to-digital conversion of the I or Q channel signal is not required. The phase error between the signal and local lasers is obtained by a simple analog circuit. Using the phase error signal, the signals of the I/Q channels are switched alternately. The principle of this detection scheme is presented. Moreover, the sensitivity of this scheme is compared with that of homodyne detection with an optical phase-locked loop. An experimental setup was constructed to verify the proposed detection scheme. The offline processing procedure and results are presented. This scheme can be realized with a simple structure and has potential applications in cost-effective high-speed laser communication.
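The role of the 90° hybrid can be illustrated with an idealized model (our own simplification, ignoring noise, gain and modulation): the two balanced detectors give currents proportional to cos and sin of the signal-LO phase offset, so the phase error is recoverable from the I/Q pair without digitizing either channel at full speed.

```python
import math

# Idealized I/Q outputs of a 90-degree hybrid + balanced detectors:
#   I ~ cos(dphi),  Q ~ sin(dphi),  dphi = signal-LO phase offset.

def phase_error(i_current, q_current):
    # the quantity an analog phase-error circuit would estimate
    return math.atan2(q_current, i_current)

dphi = 0.7                                   # true phase offset (radians)
I, Q = math.cos(dphi), math.sin(dphi)        # noiseless detector outputs
est = phase_error(I, Q)
```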
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic arise within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when small values are chosen. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains, the performance of the proposed approach is analyzed.
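The complex-step-derivative approximation underlying the scheme is easy to demonstrate on a scalar function (a textbook illustration, not the paper's finite-element implementation): f'(x) ≈ Im(f(x + ih))/h involves no subtraction of nearly equal numbers, so h can be taken far below the threshold at which forward differences collapse from cancellation.

```python
import cmath, math

def complex_step(f, x, h=1e-30):
    # f'(x) ~ Im(f(x + i*h)) / h : no subtractive cancellation
    return (f(x + 1j * h)).imag / h

def forward_diff(f, x, h):
    # classical forward difference, ruined by cancellation for tiny h
    return (f(x + h) - f(x)) / h

f = lambda z: cmath.exp(z) * cmath.sin(z)
x = 0.5
exact = math.exp(x) * (math.sin(x) + math.cos(x))

err_cs = abs(complex_step(f, x) - exact)         # machine-precision accurate
err_fd = abs(forward_diff(f, x, 1e-30) - exact)  # catastrophic cancellation
```

With h = 1e-30 the forward difference returns zero (f(x+h) and f(x) are bitwise equal), while the complex step still delivers the derivative to machine precision — the same effect that makes the paper's tangent matrices exact up to computer precision.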
On delay adjustment for dynamic load balancing in distributed virtual environments.
Deng, Yunhua; Lau, Rynson W H
2012-04-01
Distributed virtual environments (DVEs) have become very popular in recent years due to the rapid growth of applications such as massively multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in server load due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that the proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.
NASA Astrophysics Data System (ADS)
Želi, Velibor; Zorica, Dušan
2018-02-01
Generalization of the heat conduction equation is obtained by considering the system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of the energy balance equation and the constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using Adams-Bashforth and Grünwald-Letnikov schemes to approximate derivatives in the temporal domain and a leapfrog scheme for spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
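The Grünwald-Letnikov approximation mentioned above can be sketched in a few lines (a generic first-order GL formula for a function vanishing at the lower terminal; step size and test function are our own choices): the binomial weights are generated recursively and the fractional derivative is a weighted history sum.

```python
import math

# Grünwald-Letnikov approximation of the order-a fractional derivative
# (0 < a < 1, lower terminal 0, f(0) = 0):
#   D^a f(t) ~ h^(-a) * sum_{k=0..n} c_k * f(t - k*h),
# with c_0 = 1 and the recursion c_k = c_{k-1} * (1 - (a + 1)/k).

def gl_derivative(f, t, a, h):
    n = int(round(t / h))
    c, acc = 1.0, f(t)                      # k = 0 term
    for k in range(1, n + 1):
        c *= 1.0 - (a + 1.0) / k            # next binomial weight
        acc += c * f(t - k * h)
    return acc / h**a

a = 0.5
approx = gl_derivative(lambda t: t, 1.0, a, h=1e-3)
exact = 1.0 / math.gamma(2.0 - a)           # D^0.5 of f(t) = t, at t = 1
```

The formula is first-order accurate in h, so with h = 1e-3 the result agrees with the closed form t^(1-a)/Γ(2-a) to roughly three digits.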
Well-balanced compressible cut-cell simulation of atmospheric flow.
Klein, R; Bates, K R; Nikiforakis, N
2009-11-28
Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.
NASA Astrophysics Data System (ADS)
Chertock, Alina; Cui, Shumo; Kurganov, Alexander; Özcan, Şeyma Nur; Tadmor, Eitan
2018-04-01
We develop a second-order well-balanced central-upwind scheme for the compressible Euler equations with a gravitational source term. Here, we advocate a new paradigm based on a purely conservative reformulation of the equations using global fluxes. The proposed scheme is capable of exactly preserving steady-state solutions expressed in terms of a nonlocal equilibrium variable. A crucial step in the construction of the second-order scheme is a well-balanced piecewise linear reconstruction of the equilibrium variables combined with a well-balanced central-upwind evolution in time, which is adapted to reduce the amount of numerical viscosity when the flow is in a (near) steady-state regime. We show the performance of our newly developed central-upwind scheme and demonstrate the importance of a perfect balance between the fluxes and gravitational forces in a series of one- and two-dimensional examples.
BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinberg, Elad; Yalinewich, Almog; Sari, Re'em
2015-01-01
One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communications between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need of any global redistributions of data. As a show case, we implement our method in RICH, a two-dimensional moving mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.
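The basic ingredient can be sketched as follows (an illustration of Voronoi-based ownership with Lloyd-style generator updates, not the RICH load-balancing algorithm itself): each CPU owns the Voronoi cell of one generator point, and moving the generators adjusts the loads locally, without any global redistribution step.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((2000, 2))               # work items in the unit square
gens = rng.random((8, 2))                    # one generator point per CPU

for _ in range(5):                           # a few Lloyd-style sweeps
    # Voronoi assignment: each point belongs to its nearest generator.
    d2 = ((points[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    # Move each generator toward the centroid of the points it owns.
    for p in range(len(gens)):
        mine = points[owner == p]
        if len(mine):
            gens[p] = mine.mean(axis=0)

loads = np.bincount(owner, minlength=len(gens))   # per-CPU work counts
```

Every point has exactly one owner, so the partition is a true decomposition of the work; only the generator positions (a handful of coordinates) need to be communicated to rebalance.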
2007-12-06
high order well-balanced schemes to a class of hyperbolic systems with source terms, Boletin de la Sociedad Espanola de Matematica Aplicada, v34 (2006), pp. 69-80. 39. Y. Xu and C.-W
Balancing fast-rotating parts of hand-held machine drive
NASA Astrophysics Data System (ADS)
Korotkov, V. S.; Sicora, E. A.; Nadeina, L. V.; Yongzheng, Wang
2018-03-01
The article considers the issues related to the balancing of fast rotating parts of the hand-held machine drive including a wave transmission with intermediate rolling elements, which is constructed on the basis of the single-phase collector motor with a useful power of 1 kW and a nominal rotation frequency of 15000 rpm. The forms of balancers and their location are chosen. The method of balancing is described. The scheme for determining of residual unbalance in two correction planes is presented. Measurement results are given in tables.
Analysis of periodically excited non-linear systems by a parametric continuation technique
NASA Astrophysics Data System (ADS)
Padmanabhan, C.; Singh, R.
1995-07-01
The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems have been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations and, with some modifications, obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem under both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. This work also fills in some of the solution regimes for an impact pair that were previously missed in the literature. Finally, one main limitation associated with the proposed procedure is discussed.
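The one-term harmonic balance used for validation can be sketched for the undamped Duffing oscillator (a standard textbook reduction; the parameter values below are our own): substituting x = A cos(ωt) into x'' + x + εx³ = F cos(ωt) and dropping the third harmonic (cos³θ = (3 cos θ + cos 3θ)/4) yields the cubic amplitude equation (1 - ω²)A + (3/4)εA³ = F.

```python
import numpy as np

def hb_amplitude(eps, w, F):
    # real roots of (3/4)*eps*A^3 + (1 - w^2)*A - F = 0
    roots = np.roots([0.75 * eps, 0.0, 1.0 - w**2, -F])
    return [r.real for r in roots if abs(r.imag) < 1e-8]

eps, w, F = 0.5, 1.2, 0.3
amps = hb_amplitude(eps, w, F)
# residual of the amplitude equation at each returned root
resid = max(abs((1 - w**2) * A + 0.75 * eps * A**3 - F) for A in amps)
```

Depending on (ε, ω, F) the cubic has one or three real roots; multiple roots are the classic jump phenomenon, and following them as ω varies is exactly what the continuation scheme above automates.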
Analysis of operator splitting errors for near-limit flame simulations
NASA Astrophysics Data System (ADS)
Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.
2017-04-01
High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, can fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on near-limit combustion phenomena. Analysis shows that the errors induced by decoupling the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis of the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and is second-order accurate over a wide range of time step sizes. For the extinction and ignition processes, both the balanced splitting and the midpoint method yield accurate predictions, whereas the Strang splitting can lead to significant shifts in the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and a deviation in the equilibrium temperature are observed for the Strang splitting. By contrast, the midpoint method, which solves reaction and diffusion together, matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For both sustained and decaying oscillatory combustion of cool flames, the Strang splitting and the midpoint method successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
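The Strang splitting itself is easy to demonstrate on a toy reaction-mixing ODE (our own scalar stand-in for the PSR model, with both sub-problems solvable exactly): a half step of "mixing", a full step of "reaction", and another half step of mixing give a globally second-order method, so halving the step size cuts the splitting error by roughly a factor of four.

```python
import math

# Toy model: du/dt = r(u) + m(u), r(u) = -u^2 (reaction), m(u) = 1 - u (mixing).

def react(u, h):            # exact solution of u' = -u^2 over time h
    return u / (1.0 + h * u)

def mix(u, h):              # exact solution of u' = 1 - u over time h
    return 1.0 + (u - 1.0) * math.exp(-h)

def strang(u0, h, T):
    u = u0
    for _ in range(round(T / h)):
        u = mix(u, 0.5 * h)     # half step of mixing
        u = react(u, h)         # full step of reaction
        u = mix(u, 0.5 * h)     # half step of mixing
    return u

ref = strang(2.0, 1.0 / 200000, 1.0)     # fine-step reference solution
e1 = abs(strang(2.0, 0.02, 1.0) - ref)
e2 = abs(strang(2.0, 0.01, 1.0) - ref)
ratio = e1 / e2                          # ~4 for a second-order method
```

On this smooth, non-stiff problem Strang behaves perfectly; the paper's point is that the same decoupling of sub-steps can push a *stiff* near-limit flame across an ignition/extinction boundary, producing qualitatively wrong states rather than merely O(h²) errors.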
Analysis of operator splitting errors for near-limit flame simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Zhen; Zhou, Hua; Li, Shan
High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction–diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction ofmore » ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. 
For sustained and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
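The splitting mechanics discussed in the abstract above can be illustrated with a minimal Python sketch. This is not the paper's PSR model: the quadratic "reaction" term, the rate constant `k`, and the mixing parameters are hypothetical stand-ins chosen so that each substep has an exact solution, which isolates the splitting error itself.

```python
import math

def reaction_substep(y, h, k):
    # Exact solution of dy/dt = -k*y**2 over a step h
    # (a toy stand-in for a stiff Arrhenius reaction term).
    return y / (1.0 + k * y * h)

def mixing_substep(y, h, y_in, tau):
    # Exact solution of dy/dt = (y_in - y)/tau over a step h
    # (the linear mixing term of the model problem).
    return y_in + (y - y_in) * math.exp(-h / tau)

def strang_step(y, h, k, y_in, tau):
    # Strang splitting: half mixing, full reaction, half mixing.
    # Each substep is exact, so any error is pure splitting error.
    y = mixing_substep(y, 0.5 * h, y_in, tau)
    y = reaction_substep(y, h, k)
    y = mixing_substep(y, 0.5 * h, y_in, tau)
    return y

def integrate(y0, t_end, n, k, y_in, tau):
    h = t_end / n
    y = y0
    for _ in range(n):
        y = strang_step(y, h, k, y_in, tau)
    return y

# Halving the step size should cut the error roughly fourfold,
# reflecting the second-order accuracy of Strang splitting.
ref = integrate(1.0, 1.0, 100000, k=5.0, y_in=1.0, tau=0.5)
e1 = abs(integrate(1.0, 1.0, 20, k=5.0, y_in=1.0, tau=0.5) - ref)
e2 = abs(integrate(1.0, 1.0, 40, k=5.0, y_in=1.0, tau=0.5) - ref)
print(e1 / e2)  # should approach 4 as the step size shrinks
```

Because the two substeps do not commute, the composition is only second-order accurate; the stiff near-limit failures described in the abstract arise when the decoupling error interacts with the sensitivity of the reaction term.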
A Noise-Filtered Under-Sampling Scheme for Imbalanced Classification.
Kang, Qi; Chen, XiaoShuang; Li, SiSi; Zhou, MengChu
2017-12-01
Under-sampling is a popular data preprocessing method for dealing with class imbalance problems, with the purposes of balancing datasets to achieve a high classification rate and avoiding bias toward majority class examples. It always uses the full minority data in a training dataset. However, some noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed that incorporates a noise filter before executing resampling. To verify its efficiency, this scheme is implemented on top of four popular under-sampling methods, i.e., Undersampling + Adaboost, RUSBoost, UnderBagging, and EasyEnsemble, through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and imbalance ratio. Experimental results indicate that the proposed scheme significantly improves the original under-sampling-based methods in terms of three popular metrics for imbalanced classification, i.e., the area under the curve (AUC), F-measure, and G-mean.
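A minimal sketch of the "noise filter before resampling" idea described above, using plain Python. The ENN-style k-nearest-neighbour filter and the toy data are illustrative assumptions, not the paper's exact filter; the point is the two-stage pipeline: drop minority examples surrounded only by majority examples, then randomly under-sample the majority class.

```python
import random

def euclidean(a, b):
    # Plain Euclidean distance between two points given as tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def filter_noisy_minority(X_min, X_maj, k=3):
    # Keep a minority point only if at least one of its k nearest
    # neighbours (over the whole dataset) is also a minority point.
    # This is a simple ENN-style noise filter, used here as a stand-in.
    pool = [(p, 0) for p in X_min] + [(p, 1) for p in X_maj]
    kept = []
    for x in X_min:
        neighbours = sorted((p for p in pool if p[0] is not x),
                            key=lambda p: euclidean(x, p[0]))[:k]
        if any(label == 0 for _, label in neighbours):
            kept.append(x)
    return kept

def noise_filtered_undersample(X_min, X_maj, k=3, seed=0):
    # Stage 1: remove noisy minority examples.
    # Stage 2: randomly under-sample the majority class to match.
    X_min_f = filter_noisy_minority(X_min, X_maj, k)
    rng = random.Random(seed)
    X_maj_s = rng.sample(X_maj, min(len(X_maj), len(X_min_f)))
    return X_min_f, X_maj_s

# Hypothetical toy data: a tight minority cluster near the origin plus
# one noisy minority point buried inside the majority cluster at (5, 5).
minority = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.1)]
majority = [(5.0, 5.0), (5.2, 5.0), (4.9, 5.1),
            (5.1, 4.9), (5.0, 4.8), (4.8, 5.0)]
kept_min, sampled_maj = noise_filtered_undersample(minority, majority, k=3)
# The buried minority point is filtered out and the classes are balanced.
```

In the paper this filtering step precedes ensemble under-samplers such as RUSBoost or EasyEnsemble; the sketch above only shows the preprocessing stage.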
Valiant load-balanced robust routing under hose model for WDM mesh networks
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
In this paper, we propose a Valiant load-balanced robust routing scheme for WDM mesh networks under the model of polyhedral uncertainty (i.e., the hose model), implemented with a traffic-grooming approach. Our objective is to maximize the hose-model throughput. A mathematical formulation of Valiant load-balanced robust routing is presented, and three fast heuristic algorithms are proposed. When applying the Valiant load-balanced robust routing scheme to WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimizing hop first) is proposed. We compare the three heuristic algorithms with the VPN tree under the hose model. Finally, simulation results demonstrate that MHF with the Valiant load-balanced robust routing scheme outperforms the traditional traffic-grooming algorithm in terms of throughput for uniform/non-uniform traffic matrices under the hose model.
Implicit schemes and parallel computing in unstructured grid CFD
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1995-01-01
The development of implicit schemes for obtaining steady state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. Next, the development of explicit and implicit schemes to compute unsteady flows on unstructured grids is discussed. The issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion are then outlined. Techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers are discussed. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.
NASA Astrophysics Data System (ADS)
Gaburro, Elena; Castro, Manuel J.; Dumbser, Michael
2018-06-01
In this work, we present a novel second-order accurate well-balanced arbitrary Lagrangian-Eulerian (ALE) finite volume scheme on moving nonconforming meshes for the Euler equations of compressible gas dynamics with gravity in cylindrical coordinates. The main feature of the proposed algorithm is its capability of preserving many of the physical properties of the system exactly at the discrete level: besides being conservative for mass, momentum and total energy, any known steady equilibrium between the pressure gradient, centrifugal force, and gravity force can be maintained exactly up to machine precision. Perturbations around such equilibrium solutions are resolved with high accuracy and with minimal dissipation on moving contact discontinuities, even for very long computational times. This is achieved by a novel combination of well-balanced path-conservative finite volume schemes, which are expressly designed to deal with source terms written via non-conservative products, with ALE schemes on moving grids, which exhibit very little numerical dissipation on moving contact waves. In particular, we have formulated a new HLL-type and a novel Osher-type flux that are both able to guarantee well-balancing in a gas cloud rotating around a central object. Moreover, to maintain a high quality of the moving mesh, we have adopted a nonconforming treatment of the sliding interfaces that appear due to the differential rotation. A large set of numerical tests has been carried out in order to check the accuracy of the method close to and far from equilibrium, in both one and two space dimensions.
NASA Astrophysics Data System (ADS)
Luo, H.; Zhang, H.; Gao, J.
2016-12-01
Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary to each other, and their integration helps determine the resistivity and velocity models of the target region more reliably. Because of the difficulty of finding an empirical relationship between resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method enforcing structural consistency between the resistivity and velocity models, realized by minimizing the cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems together along with the cross-gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing seismic and MT inversions as well as cross-gradient minimization separately. This scheme avoids the complexity of combining two different systems, but it suffers from the issue of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, seismic and MT inversions are still performed separately, but the cross-gradient minimization is also constrained by the model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems and at the same time enforces the balance between data fitting and the structure-consistency constraint. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and improve the resolution of deeper resistivity structures.
We will also show results applying the new joint seismic and MT inversion scheme to southwest China, where several MT profiles are available and earthquakes are very active.
NASA Astrophysics Data System (ADS)
Eldred, Christopher; Randall, David
2017-02-01
The shallow water equations provide a useful analogue of the fully compressible Euler equations since they have similar characteristics: conservation laws, inertia-gravity and Rossby waves, and a (quasi-) balanced state. In order to obtain realistic simulation results, it is desirable that numerical models have discrete analogues of these properties. Two prototypical examples of such schemes are the 1981 Arakawa and Lamb (AL81) C-grid total energy and potential enstrophy conserving scheme, and the 2007 Salmon (S07) Z-grid total energy and potential enstrophy conserving scheme. Unfortunately, the AL81 scheme is restricted to logically square, orthogonal grids, and the S07 scheme is restricted to uniform square grids. The current work extends the AL81 scheme to arbitrary non-orthogonal polygonal grids and the S07 scheme to arbitrary orthogonal spherical polygonal grids in a manner that allows for both total energy and potential enstrophy conservation, by combining Hamiltonian methods (work done by Salmon, Gassmann, Dubos, and others) and discrete exterior calculus (Thuburn, Cotter, Dubos, Ringler, Skamarock, Klemp, and others). Detailed results of the schemes applied to standard test cases are deferred to part 2 of this series of papers.
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly among them. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods through extensive simulation studies. The two schemes are the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that, although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
Linear and nonlinear properties of numerical methods for the rotating shallow water equations
NASA Astrophysics Data System (ADS)
Eldred, Chris
The shallow water equations provide a useful analogue of the fully compressible Euler equations since they have similar conservation laws, many of the same types of waves and a similar (quasi-) balanced state. It is desirable that numerical models possess similar properties, and the prototypical example of such a scheme is the 1981 Arakawa and Lamb (AL81) staggered (C-grid) total energy and potential enstrophy conserving scheme, based on the vector-invariant form of the continuous equations. However, this scheme is restricted to a subset of logically square, orthogonal grids. The current work extends the AL81 scheme to arbitrary non-orthogonal polygonal grids by combining Hamiltonian methods (work done by Salmon, Gassmann, Dubos and others) and Discrete Exterior Calculus (Thuburn, Cotter, Dubos, Ringler, Skamarock, Klemp and others). It is also possible to obtain these properties (along with arguably superior wave dispersion properties) through the use of a collocated (Z-grid) scheme based on the vorticity-divergence form of the continuous equations. Unfortunately, existing examples of such schemes in the literature for general spherical grids either contain computational modes or do not conserve total energy and potential enstrophy. This dissertation extends an existing scheme for planar grids to spherical grids through the use of Nambu brackets (as pioneered by Rick Salmon). To compare these two schemes, the linear modes (balanced states, stationary modes and propagating modes; with and without dissipation) are examined on both uniform planar grids (square, hexagonal) and quasi-uniform spherical grids (geodesic, cubed-sphere).
In addition to evaluating the linear modes, results of the two schemes applied to a set of standard shallow water test cases and a recently developed forced-dissipative turbulence test case from John Thuburn (intended to evaluate the suitability of schemes as the basis for a climate model) on both hexagonal-pentagonal icosahedral grids and cubed-sphere grids are presented. Finally, some remarks and thoughts about the suitability of these two schemes as the basis for an atmospheric dynamical core are given.
Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.
Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong
2018-01-01
Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). The features of directional antennas and of the visual data make WVSNs more complex than conventional Wireless Sensor Networks (WSNs). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. Most of the existing literature focuses on the efficiency brought by cluster construction and generally neglects local-balance problems. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measure called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, the directional virtual backbone construction scheme is designed with the local-balance factor taken into account, and an associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given to evaluate its performance. The experimental results show that the proposed DVBDAS achieves higher performance in terms of both energy preservation and network lifetime extension than the existing methods.
Operator splitting method for simulation of dynamic flows in natural gas pipeline networks
Dyachenko, Sergey A.; Zlotnik, Anatoly; Korotkevich, Alexander O.; ...
2017-09-19
Here, we develop an operator splitting method to simulate flows of isothermal compressible natural gas over transmission pipelines. The method solves a system of nonlinear hyperbolic partial differential equations (PDEs) of hydrodynamic type for mass flow and pressure on a metric graph, where turbulent losses of momentum are modeled by phenomenological Darcy-Weisbach friction. Mass flow balance is maintained through the boundary conditions at the network nodes, where natural gas is injected into or withdrawn from the system. Gas flow through the network is controlled by compressors boosting pressure at the inlet of the adjoint pipe. Our operator splitting numerical scheme is unconditionally stable and second-order accurate in space and time. The scheme is explicit, and it is formulated to work with general networks with loops. We test the scheme over a range of regimes and network configurations, also comparing its performance with that of two other state-of-the-art implicit schemes.
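The hydrodynamic system referred to above can be sketched in a commonly used simplified form; the symbols below are generic and not necessarily the paper's notation:

```latex
% Isothermal gas flow in a single pipe with Darcy-Weisbach friction
\[
  \partial_t \rho + \partial_x (\rho u) = 0, \qquad
  \partial_t (\rho u) + \partial_x\!\left( p + \rho u^2 \right)
    = -\frac{\lambda}{2D}\,\rho u \lvert u \rvert, \qquad
  p = c_s^2 \rho ,
\]
```

where $\lambda$ is the Darcy-Weisbach friction factor, $D$ the pipe diameter, and $c_s$ the isothermal speed of sound; at each network node the boundary conditions enforce mass-flow balance between incoming pipes, outgoing pipes, and injections or withdrawals.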
High order finite volume WENO schemes for the Euler equations under gravitational fields
NASA Astrophysics Data System (ADS)
Li, Gang; Xing, Yulong
2016-07-01
Euler equations with gravitational source terms are used to model many astrophysical and atmospheric phenomena. This system admits hydrostatic balance, in which the flux produced by the pressure is exactly canceled by the gravitational source term; two commonly seen equilibria are the isothermal and polytropic hydrostatic solutions. Exact preservation of these equilibria is desirable, as many practical problems are small perturbations of such a balance. High order finite difference weighted essentially non-oscillatory (WENO) schemes have been proposed in [22], but only for the isothermal equilibrium state. In this paper, we design high order well-balanced finite volume WENO schemes, which preserve not only the isothermal equilibrium but also the polytropic hydrostatic balance state exactly, and maintain genuine high order accuracy for general solutions. The well-balanced property is obtained by a novel source term reformulation and discretization, combined with well-balanced numerical fluxes. Extensive one- and two-dimensional simulations are performed to verify the well-balanced property, high order accuracy, and good resolution for smooth and discontinuous solutions.
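The hydrostatic balance these schemes preserve can be written out explicitly; the following uses standard symbols rather than the paper's notation:

```latex
% Steady hydrostatic balance: pressure gradient cancels gravity
\[
  u = 0, \qquad \frac{dp}{dx} = -\rho\,\frac{d\phi}{dx},
\]
% with the two equilibria mentioned in the abstract:
\[
  \text{isothermal: } p = p_0\,e^{-\phi/(R T_0)},\quad \rho = \frac{p}{R T_0};
  \qquad
  \text{polytropic: } p = K\rho^{\gamma},\quad
  \frac{\gamma}{\gamma-1}\,K\rho^{\gamma-1} + \phi = \text{const}.
\]
```

The polytropic relation follows from substituting $p = K\rho^{\gamma}$ into $dp/dx = -\rho\,d\phi/dx$ and integrating; a well-balanced scheme must reproduce these profiles exactly at the discrete level.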
NASA Astrophysics Data System (ADS)
Zhu, Wenbin; Jia, Shaofeng; Lv, Aifeng
2017-10-01
The triangle method, based on the spatial relationship between remotely sensed land surface temperature (Ts) and vegetation index (VI), has been widely used for estimating the evaporative fraction (EF). In the present study, a universal triangle method is proposed by transforming the Ts-VI feature space from the regional scale to the pixel scale. The retrieval of EF is then related only to the boundary conditions at the pixel scale, regardless of the Ts-VI configuration over the spatial domain. The boundary conditions of each pixel are composed of the theoretical dry edge determined by the surface energy balance principle and the wet edge determined by the average air temperature over open water. The universal triangle method was validated using EF observations collected by the Energy Balance Bowen Ratio systems in the Southern Great Plains of the United States of America (USA). Two parameterization schemes of EF were used to demonstrate their applicability with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) products over the whole of 2004. The results of this study show that the accuracy produced by both parameterization schemes is comparable to that produced by the traditional triangle method, although the universal triangle method seems specifically suited to the parameterization scheme proposed in our previous research. The independence of the universal triangle method from the Ts-VI feature space makes it possible to conduct continuous monitoring of evapotranspiration and soil moisture, an ability the traditional triangle method does not possess.
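The EF retrieval in the triangle method is typically a linear interpolation between the dry and wet edges; a generic sketch (not necessarily either of the two parameterization schemes tested in the paper) is:

```latex
\[
  EF \;=\; EF_{\max}\,
  \frac{T_{s,\max} - T_s}{\,T_{s,\max} - T_{s,\min}\,},
\]
```

where, in the universal variant, $T_{s,\max}$ is the per-pixel theoretical dry edge obtained from the surface energy balance and $T_{s,\min}$ is the wet edge set by the air temperature over open water, so the retrieval no longer depends on the regional Ts-VI scatter.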
Rodrigues, Joel J. P. C.
2014-01-01
This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location-predictive and time-adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to their local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets toward it in a timely manner by multihop relay. Considering that the data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less transmission delay and balances energy consumption among nodes. PMID:25302327
An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.
Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing
2017-03-20
In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated into this tracking scheme. First, a decentralized tracking approach is adopted so that tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by a greedy on-line decision approach based on the defined contribution decision (CD), considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts the selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also analyze the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in the field of view (FOV) of camera nodes in WCNs. Simulation results show that the proposed tracking scheme achieves an obvious improvement in balancing energy consumption and tracking accuracy over existing methods.
Simulation of the West African Monsoon using the MIT Regional Climate Model
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Gianotti, Rebecca L.; Eltahir, Elfatih A. B.
2013-04-01
We test the performance of the MIT Regional Climate Model (MRCM) in simulating the West African Monsoon. MRCM introduces several improvements over Regional Climate Model version 3 (RegCM3), including coupling of the Integrated Biosphere Simulator (IBIS) land surface scheme, a new albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Using MRCM, we carried out a series of experiments implementing two different land surface schemes (IBIS and BATS) and three convection schemes (Grell with the Fritsch-Chappell closure, standard Emanuel, and modified Emanuel including the new convective cloud scheme). Our analysis primarily focused on comparing the precipitation characteristics, surface energy balance and large-scale circulations against various observations. We document a significant sensitivity of the West African monsoon simulation to the choice of land surface and convection schemes. In spite of several deficiencies, the simulation combining the IBIS and modified Emanuel schemes shows the best performance, reflected in a marked improvement of precipitation in terms of spatial distribution and monsoon features. In particular, the coupling of IBIS leads to representations of the surface energy balance and its partitioning that are consistent with observations. The major components of the surface energy budget (including radiation fluxes) in the IBIS simulations are therefore in better agreement with observations than those from our BATS simulation, or from previous similar studies (e.g., Steiner et al., 2009), both qualitatively and quantitatively. The IBIS simulations also reasonably reproduce the vertically stratified structure of the atmospheric circulation, with three major components: the westerly monsoon flow, the African Easterly Jet (AEJ), and the Tropical Easterly Jet (TEJ).
In addition, since the modified Emanuel scheme tends to reduce the precipitation amount, it improves the precipitation over regions suffering from a systematic wet bias.
Self-balanced real-time photonic scheme for ultrafast random number generation
NASA Astrophysics Data System (ADS)
Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang
2018-06-01
We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), balanced photo-detection is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefiting from ultrafast photo-detection, our method can efficiently eliminate the generation-rate bottleneck imposed by the electronic ADCs required in nearly all available fast physical random number generators. A proof-of-principle experiment demonstrates that, using our approach, 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally here is limited by the bandwidth of the chaotic source; the method described has the potential to attain a real-time rate of 100 Gb/s.
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
Hyperbolic reformulation of a 1D viscoelastic blood flow model and ADER finite volume schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montecinos, Gino I.; Müller, Lucas O.; Toro, Eleuterio F.
2014-06-01
The applicability of ADER finite volume methods to solve hyperbolic balance laws with stiff source terms in the context of well-balanced and non-conservative schemes is extended to solve a one-dimensional blood flow model for viscoelastic vessels, reformulated as a hyperbolic system via a relaxation time. A criterion for selecting relaxation times is found, and an empirical convergence rate assessment is carried out to support this result. The proposed methodology is validated by applying it to a network of viscoelastic vessels for which experimental and numerical results are available. The agreement between the results obtained in the present paper and those available in the literature is satisfactory. Key features of the present formulation and numerical methodologies, such as accuracy, efficiency and robustness, are fully discussed in the paper.
Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai
2015-12-01
Local coordinate coding (LCC) is a framework for approximating a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current datum; and 2) flexibility, balancing the reconstruction of the current datum against locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC to locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed approach.
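The approximation underlying LCC can be sketched as follows; this is the standard local coordinate coding bound with generic notation, not the paper's specific formulation:

```latex
\[
  f(x) \;\approx\; \sum_{v \in C} \gamma_v(x)\,
  \Bigl( f(v) + \nabla f(v)^{\top}(x - v) \Bigr),
  \qquad \sum_{v \in C} \gamma_v(x) = 1,
\]
```

where $C$ is the set of anchor points and $\gamma_v(x)$ are the coding coefficients. Locality demands that $\gamma_v(x)$ decay for anchors $v$ far from $x$, which is precisely the trade-off that the Gaussian, Student, and Laplacian weightings discussed above resolve in different ways.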
Mathematical modeling of the stress-strain state of the outlet guide vane made of various materials
NASA Astrophysics Data System (ADS)
Grinev, M. A.; Anoshkin, A. N.; Pisarev, P. V.; Zuiko, V. Yu.; Shipunov, G. S.
2016-11-01
The present work is devoted to the detailed stress-strain analysis of the composite outlet guide vane (OGV) for aircraft engines with a special focus on areas with twisted layers where the initiation of high interlaminar stresses is most expected. Various polymer composite materials and reinforcing schemes are researched. The technological scheme of laying-out of anisotropic plies and the fastening method are taken into account in the model. The numerical simulation is carried out by the finite element method (FEM) with the ANSYS Workbench software. It is shown that interlaminar shear stresses are most dangerous. It is found that balanced carbon fiber reinforced plastic (CFRP) with the [0°/±45°] reinforcing scheme allows us to provide the double strength margin under working loads for the developed OGV.
NASA Astrophysics Data System (ADS)
Zhang, Jinfang; Yan, Xiaoqing; Wang, Hongfu
2018-02-01
With the rapid development of renewable energy in Northwest China, curtailment is becoming more and more severe owing to the lack of adjustment capability and sufficient transmission capacity. Building on existing HVDC projects, exploring a hybrid transmission mode combining thermal power and renewable power is both necessary and important. This paper proposes a method for optimally combining thermal power and renewable energy on HVDC lines, based on multi-scheme comparison. Having established a mathematical model for electric power balance in time-series mode, ten different schemes are evaluated by test simulation to identify the most suitable one. Using the proposed discrimination criteria, including generation device utilization hours, renewable energy electricity proportion and curtailment level, a recommended scheme is found. The results also validate the efficiency of the method.
NASA Astrophysics Data System (ADS)
Wintermeyer, Niklas; Winters, Andrew R.; Gassner, Gregor J.; Kopriva, David A.
2017-07-01
We design an arbitrary high-order accurate nodal discontinuous Galerkin spectral element approximation for the non-linear two dimensional shallow water equations with non-constant, possibly discontinuous, bathymetry on unstructured, possibly curved, quadrilateral meshes. The scheme is derived from an equivalent flux differencing formulation of the split form of the equations. We prove that this discretization exactly preserves the local mass and momentum. Furthermore, combined with a special numerical interface flux function, the method exactly preserves the mathematical entropy, which is the total energy for the shallow water equations. By adding a specific form of interface dissipation to the baseline entropy conserving scheme we create a provably entropy stable scheme. That is, the numerical scheme discretely satisfies the second law of thermodynamics. Finally, with a particular discretization of the bathymetry source term we prove that the numerical approximation is well-balanced. We provide numerical examples that verify the theoretical findings and furthermore provide an application of the scheme for a partial break of a curved dam test problem.
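The well-balanced property proved above, exact preservation of a lake at rest, can be checked numerically on any scheme. The sketch below is our own first-order finite-volume illustration (hydrostatic reconstruction with a Rusanov flux, not the nodal DG method of the paper): the semi-discrete residual on a still lake over a bump stays at machine zero:

```python
import numpy as np

g = 9.81

def rusanov(UL, UR):
    """Rusanov (local Lax-Friedrichs) flux for 1D shallow water, U = [h, hu]."""
    def phys(U):
        h, hu = U
        u = hu / h if h > 0 else 0.0
        return np.array([hu, hu * u + 0.5 * g * h * h]), abs(u) + np.sqrt(g * h)
    FL, sL = phys(UL)
    FR, sR = phys(UR)
    s = max(sL, sR)
    return 0.5 * (FL + FR) - 0.5 * s * (UR - UL)

def residual(h, b, dx):
    """Semi-discrete residual dU/dt of a first-order hydrostatic-reconstruction
    scheme (in the style of Audusse et al.) evaluated on a lake at rest."""
    n = len(h)
    R = np.zeros((2, n))
    for i in range(n - 1):
        bmax = max(b[i], b[i + 1])
        hL = max(h[i] + b[i] - bmax, 0.0)
        hR = max(h[i + 1] + b[i + 1] - bmax, 0.0)
        F = rusanov(np.array([hL, 0.0]), np.array([hR, 0.0]))
        # source correction restores the cell-centered hydrostatic pressure
        R[:, i] -= (F + np.array([0.0, 0.5 * g * (h[i] ** 2 - hL ** 2)])) / dx
        R[:, i + 1] += (F + np.array([0.0, 0.5 * g * (h[i + 1] ** 2 - hR ** 2)])) / dx
    return R

x = np.linspace(0.0, 1.0, 51)
b = 0.5 * np.exp(-100.0 * (x - 0.5) ** 2)   # smooth bump bathymetry
h = 1.0 - b                                 # lake at rest: h + b = const
R = residual(h, b, x[1] - x[0])
print(np.abs(R[:, 1:-1]).max())             # near machine zero: well-balanced
```

A naive centered discretization of the bathymetry source term would instead leave an O(dx) residual that drives spurious flow from the rest state.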
NASA Astrophysics Data System (ADS)
Lee, Min Soo; Park, Byung Kwon; Woo, Min Ki; Park, Chang Hoon; Kim, Yong-Su; Han, Sang-Wook; Moon, Sung
2016-12-01
We developed a countermeasure against blinding attacks on low-noise detectors with a background-noise-cancellation scheme in quantum key distribution (QKD) systems. Background-noise cancellation includes self-differencing and balanced avalanche photodiode (APD) schemes and is considered a promising solution for low-noise APDs, which are critical components in high-performance QKD systems. However, its vulnerability to blinding attacks has recently been reported. In this work, we propose a countermeasure that prevents this potential security loophole from being exploited in detector blinding attacks. An experimental QKD setup is implemented and various tests are conducted to verify the feasibility and performance of the proposed method. The measurement results show that the proposed scheme successfully detects blinding-attack-based hacking attempts as they occur.
Balanced detection for self-mixing interferometry.
Li, Kun; Cavedo, Federico; Pesatori, Alessandro; Zhao, Changming; Norgia, Michele
2017-01-15
We propose a new detection scheme for self-mixing interferometry using two photodiodes for implementing a differential acquisition. The method is based on the phase opposition of the self-mixing signal measured between the two laser diode facet outputs. The subtraction of the two outputs implements a sort of balanced detection that improves the signal quality, and allows canceling of unwanted signals due to laser modulation and disturbances on laser supply and transimpedance amplifier. Experimental results demonstrate the benefits of differential acquisition in a system for both absolute distance and displacement-vibration measurement. This Letter provides guidance for the design of self-mixing interferometers using balanced detection.
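The differential acquisition described above can be illustrated numerically. In this hedged sketch (the signal levels and noise figures are invented for illustration, not taken from the Letter), a weak signal appears in phase opposition on two photodiode outputs while modulation and supply disturbances appear in common mode, so subtraction recovers the signal:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1e-3, 4000)

signal = 1e-3 * np.sin(2 * np.pi * 5e3 * t)          # weak self-mixing signal
common = 0.1 * np.sin(2 * np.pi * 50e3 * t) \
       + 0.05 * rng.standard_normal(t.size)          # laser modulation + supply noise
shot1 = 1e-5 * rng.standard_normal(t.size)           # uncorrelated detector noise
shot2 = 1e-5 * rng.standard_normal(t.size)

pd1 = common + signal + shot1    # facet output 1
pd2 = common - signal + shot2    # facet output 2: signal in phase opposition

single = pd1                     # single-photodiode measurement
balanced = 0.5 * (pd1 - pd2)     # differential (balanced) measurement

err_single = np.std(single - signal)
err_balanced = np.std(balanced - signal)
print(err_single / err_balanced)  # >> 1: common-mode disturbances cancel
```

Only the uncorrelated per-detector noise survives the subtraction, which is the signal-quality improvement the balanced scheme provides.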
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Christopher; Randall, David
The shallow water equations provide a useful analogue of the fully compressible Euler equations since they have similar characteristics: conservation laws, inertia-gravity and Rossby waves, and a (quasi-) balanced state. In order to obtain realistic simulation results, it is desirable that numerical models have discrete analogues of these properties. Two prototypical examples of such schemes are the 1981 Arakawa and Lamb (AL81) C-grid total energy and potential enstrophy conserving scheme, and the 2007 Salmon (S07) Z-grid total energy and potential enstrophy conserving scheme. Unfortunately, the AL81 scheme is restricted to logically square, orthogonal grids, and the S07 scheme is restricted to uniform square grids. The current work extends the AL81 scheme to arbitrary non-orthogonal polygonal grids and the S07 scheme to arbitrary orthogonal spherical polygonal grids in a manner that allows for both total energy and potential enstrophy conservation, by combining Hamiltonian methods (work done by Salmon, Gassmann, Dubos, and others) and discrete exterior calculus (Thuburn, Cotter, Dubos, Ringler, Skamarock, Klemp, and others). Lastly, detailed results of the schemes applied to standard test cases are deferred to part 2 of this series of papers.
Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction
NASA Technical Reports Server (NTRS)
Padovan, Joseph; Krishna, Lala; Gute, Douglas
1997-01-01
Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves the partitioning of thermal models into several substructured levels wherein an optimal balance into various associated bandwidths is achieved. The details are described in this report. Overall, the report is organized into two parts. Part 1 describes the parallel modelling methodology and associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.
James, Conrad D; Galambos, Paul C; Derzon, Mark S; Graf, Darin C; Pohl, Kenneth R; Bourdon, Chris J
2012-10-23
Systems and methods for combining dielectrophoresis, magnetic forces, and hydrodynamic forces to manipulate particles in channels formed on top of an electrode substrate are discussed. A magnet placed in contact under the electrode substrate while particles are flowing within the channel above the electrode substrate allows these three forces to be balanced when the system is in operation. An optical detection scheme using near-confocal microscopy for simultaneously detecting two wavelengths of light emitted from the flowing particles is also discussed.
Digitally balanced detection for optical tomography.
Hafiz, Rehan; Ozanyan, Krikor B
2007-10-01
Analog balanced photodetection has found extensive use for sensing a weak absorption signal buried in laser intensity noise. This paper proposes schemes for a compact, affordable, and flexible digital implementation of the already established analog balanced detection, as part of a multichannel digital tomography system. Variants of digitally balanced detection (DBD) schemes, suitable for weak signals on a largely varying background or for weakly varying envelopes of high-frequency carrier waves, are introduced analytically and elaborated in terms of algorithmic and hardware flow. The DBD algorithms are implemented on low-cost general-purpose reconfigurable hardware (a field-programmable gate array), utilizing less than half of its resources. The performance of the DBD schemes compares favorably with that of their analog counterpart: a common-mode rejection ratio of 50 dB was observed over a bandwidth of 300 kHz, limited mainly by the host digital hardware. The close relationship between the DBD outputs and those of known analog balancing circuits is discussed in principle and shown experimentally in the example case of propane gas detection.
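A common-mode rejection ratio like the 50 dB figure quoted above can be related to channel imbalance. The toy calculation below (the 0.3% gain mismatch is an assumed value, not from the paper) subtracts two nominally identical channels and reports the rejection in decibels:

```python
import numpy as np

t = np.linspace(0.0, 1e-3, 10000)
common = np.sin(2 * np.pi * 10e3 * t)   # identical signal on both inputs

gain_mismatch = 0.003                   # 0.3% channel imbalance (assumed)
ch_a = (1 + gain_mismatch) * common
ch_b = common

residual = ch_a - ch_b                  # what the subtractor outputs
cmrr_db = 20 * np.log10(np.max(np.abs(common)) / np.max(np.abs(residual)))
print(round(cmrr_db, 1))                # 20*log10(1/0.003), about 50 dB
```

This is the sense in which a 50 dB CMRR corresponds to balancing the two channels to a few tenths of a percent.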
NASA Astrophysics Data System (ADS)
Kruis, Nathanael J. F.
Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. 
This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
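Implicit conduction schemes such as ADI reduce, along each sweep direction, to tridiagonal linear systems. The sketch below (a generic 1D backward-Euler conduction step with Dirichlet ends, not Kiva's implementation) shows the Thomas-algorithm kernel such sweeps rely on:

```python
def thomas(a, b, c, d):
    """Tridiagonal solver (Thomas algorithm): the kernel of each ADI sweep."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_step(T, alpha, dx, dt):
    """One backward-Euler step of 1D heat conduction, fixed-temperature ends."""
    r = alpha * dt / dx ** 2
    n = len(T)
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    b[0] = b[-1] = 1.0                    # Dirichlet rows: keep boundary values
    return thomas(a, b, c, T[:])

T = [0.0] + [1.0] * 9 + [0.0]             # hot interior between two cold ends
for _ in range(50):
    T = implicit_heat_step(T, 1.0, 0.1, 0.01)
print(round(max(T), 3))                   # profile decays toward zero
```

Because the step is unconditionally stable, the time step is not CFL-limited, which is one reason implicit schemes like ADI balance accuracy and speed well in this setting.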
Loans for Learning. Briefing Note
ERIC Educational Resources Information Center
Cedefop - European Centre for the Development of Vocational Training, 2011
2011-01-01
A good loan scheme must balance costs with coverage. If loans are too expensive then people will not borrow. Governments are not banks, but they provide or support loans for many things, including education and training. Governments too need to get the balance right. Cedefop surveyed 35 education and training loan schemes in Europe, examining…
Well-balanced high-order solver for blood flow in networks of vessels with variable properties.
Müller, Lucas O; Toro, Eleuterio F
2013-12-01
We present a well-balanced, high-order non-linear numerical scheme for solving a hyperbolic system that models one-dimensional flow in blood vessels with variable mechanical and geometrical properties along their length. Using a suitable set of test problems with exact solution, we rigorously assess the performance of the scheme. In particular, we assess the well-balanced property and the effective order of accuracy through an empirical convergence rate study. Schemes of up to fifth order of accuracy in both space and time are implemented and assessed. The numerical methodology is then extended to realistic networks of elastic vessels and is validated against published state-of-the-art numerical solutions and experimental measurements. It is envisaged that the present scheme will constitute the building block for a closed, global model for the human circulation system involving arteries, veins, capillaries and cerebrospinal fluid. Copyright © 2013 John Wiley & Sons, Ltd.
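An empirical convergence rate assessment of the kind described above typically compares errors on successively refined grids. A minimal sketch (with hypothetical error values for a fifth-order scheme, not data from the paper):

```python
import math

def observed_order(errors, ratio=2.0):
    """Empirical convergence rates from errors on successively refined grids:
    p = log(e_coarse / e_fine) / log(refinement ratio)."""
    return [math.log(errors[k] / errors[k + 1]) / math.log(ratio)
            for k in range(len(errors) - 1)]

# hypothetical L1 errors from a grid-doubling study of a 5th-order scheme
errors = [1.2e-3, 3.9e-5, 1.25e-6, 3.95e-8]
print([round(p, 2) for p in observed_order(errors)])  # rates near 5
```

Rates approaching the design order as the grid is refined are what "schemes of up to fifth order of accuracy ... are implemented and assessed" refers to.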
Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xia, E-mail: cui_xia@iapcm.ac.cn; Yuan, Guang-wei; Shen, Zhi-jun
Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis of time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in dimensions one and two. First, in the construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for the flux-limiters on the boundary, and discrete schemes with second-order accuracy on the global spatial domain are consequently obtained. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property is shown for these schemes and furthermore for the fully discrete schemes. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence the theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. - Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second-order accurate schemes by an asymmetric approach for the boundary flux-limiter. • Show the first-order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle manufactured solutions; verify accuracy and AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher order implicit schemes.
Control of birhythmicity: A self-feedback approach
NASA Astrophysics Data System (ADS)
Biswas, Debabrata; Banerjee, Tanmoy; Kurths, Jürgen
2017-06-01
Birhythmicity occurs in many natural and artificial systems. In this paper, we propose a self-feedback scheme to control birhythmicity. To establish the efficacy and generality of the proposed control scheme, we apply it on three birhythmic oscillators from diverse fields of natural science, namely, an energy harvesting system, the p53-Mdm2 network for protein genesis (the OAK model), and a glycolysis model (modified Decroly-Goldbeter model). Using the harmonic decomposition technique and energy balance method, we derive the analytical conditions for the control of birhythmicity. A detailed numerical bifurcation analysis in the parameter space establishes that the control scheme is capable of eliminating birhythmicity and it can also induce transitions between different forms of bistability. As the proposed control scheme is quite general, it can be applied for control of several real systems, particularly in biochemical and engineering systems.
High Order Well-balanced WENO Scheme for the Gas Dynamics Equations under Gravitational Fields
2011-11-12
…there exists the hydrostatic balance where the flux produced by the pressure is canceled by the gravitational source term. Many astrophysical… approximation to W(x) to obtain an approximation to W'(x_i) = f_x(U(x_i, y_j)). See again [7, 15] for more details of finite difference WENO schemes in…
Dispersive detection of radio-frequency-dressed states
NASA Astrophysics Data System (ADS)
Jammi, Sindhu; Pyragius, Tadas; Bason, Mark G.; Florez, Hans Marin; Fernholz, Thomas
2018-04-01
We introduce a method to dispersively detect alkali-metal atoms in radio-frequency-dressed states. In particular, we use dressed detection to measure populations and population differences of atoms prepared in their clock states. Linear birefringence of the atomic medium enables atom number detection via polarization homodyning, a form of common path interferometry. In order to achieve low technical noise levels, we perform optical sideband detection after adiabatic transformation of bare states into dressed states. The balanced homodyne signal then oscillates independently of field fluctuations at twice the dressing frequency, thus allowing for robust, phase-locked detection that circumvents low-frequency noise. Using probe pulses of two optical frequencies, we can detect both clock states simultaneously and obtain population difference as well as the total atom number. The scheme also allows for difference measurements by direct subtraction of the homodyne signals at the balanced detector, which should technically enable quantum noise limited measurements with prospects for the preparation of spin squeezed states. The method extends to other Zeeman sublevels and can be employed in a range of atomic clock schemes, atom interferometers, and other experiments using dressed atoms.
NASA Astrophysics Data System (ADS)
Quezada de Luna, M.; Farthing, M.; Guermond, J. L.; Kees, C. E.; Popov, B.
2017-12-01
The Shallow Water Equations (SWEs) are popular for modeling non-dispersive incompressible water waves where the horizontal wavelength is much larger than the vertical scales. They can be derived from the incompressible Navier-Stokes equations assuming a constant vertical velocity. The SWEs are important in Geophysical Fluid Dynamics for modeling surface gravity waves in shallow regimes; e.g., in the deep ocean. Some common geophysical applications are the evolution of tsunamis, river flooding and dam breaks, storm surge simulations, atmospheric flows and others. This work is concerned with the approximation of the time-dependent Shallow Water Equations with friction using explicit time stepping and continuous finite elements. The objective is to construct a method that is at least second-order accurate in space and third or higher-order accurate in time, positivity preserving, well-balanced with respect to rest states, well-balanced with respect to steady sliding solutions on inclined planes and robust with respect to dry states. Methods fulfilling the desired goals are common within the finite volume literature. However, to the best of our knowledge, schemes with the above properties are not well developed in the context of continuous finite elements. We start this work based on a finite element method that is second-order accurate in space, positivity preserving and well-balanced with respect to rest states. We extend it by: modifying the artificial viscosity (via the entropy viscosity method) to deal with issues of loss of accuracy around local extrema, considering a singular Manning friction term handled via an explicit discretization under the usual CFL condition, considering a water height regularization that depends on the mesh size and is consistent with the polynomial approximation, reducing dispersive errors introduced by lumping the mass matrix and others. 
After presenting the details of the method we show numerical tests that demonstrate the well-balanced nature of the scheme and its convergence properties. We conclude with well-known benchmark problems including the Malpasset dam break (see the attached figure). All numerical experiments are performed and available in the Proteus toolkit, which is an open source python package for modeling continuum mechanical processes and fluid flow.
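Robustness with respect to dry states is often obtained by regularizing the velocity where the water height vanishes. The snippet below sketches one widely used desingularization formula (in the style of Kurganov and Petrova; the regularization height eps is an assumed, mesh-dependent parameter, and this is not necessarily the exact regularization used in the work above):

```python
import numpy as np

def desingularized_velocity(h, q, eps):
    """Recover u = q/h for wet cells, but stay bounded as h -> 0 (dry states):
    u = 2 h q / (h^2 + max(h, eps)^2), a Kurganov-Petrova-style formula."""
    return 2.0 * h * q / (h * h + np.maximum(h, eps) ** 2)

h = np.array([1.0, 1e-3, 1e-8, 0.0])   # water heights: wet ... nearly dry ... dry
q = np.array([0.5, 5e-4, 1e-8, 0.0])   # discharges h*u
eps = 1e-6                             # mesh-dependent regularization height (assumed)
u = desingularized_velocity(h, q, eps)
print(u)   # matches q/h where h >> eps, tends smoothly to 0 in dry cells
```

Dividing q by h directly would blow up in the nearly dry cells, which is exactly the failure mode this kind of water-height regularization prevents.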
NASA Astrophysics Data System (ADS)
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the latter.
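One of the "nothing special" ingredients named above, MUSCL reconstruction with a slope limiter, can be sketched directly. The example below (a generic minmod-limited reconstruction, not the authors' exact variant) computes limited left/right interface states on a 1D grid:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, the smaller slope otherwise."""
    return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def muscl_faces(u):
    """Second-order MUSCL reconstruction of left/right states at interior faces."""
    s = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])  # limited cell slopes
    uL = u[1:-2] + 0.5 * s[:-1]    # left state at face i+1/2
    uR = u[2:-1] - 0.5 * s[1:]     # right state at face i+1/2
    return uL, uR

u = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 0.0])
uL, uR = muscl_faces(u)
print(uL, uR)
```

In the linear middle region the reconstructed states coincide (uL = uR = 1.5), while at the extrema the limiter drops to first order, which is what suppresses spurious oscillations near discontinuities.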
Olejniczak, Małgorzata; Bast, Radovan; Saue, Trond; Pecul, Magdalena
2012-01-07
We report the implementation of nuclear magnetic resonance (NMR) shielding tensors within the four-component relativistic Kohn-Sham density functional theory including non-collinear spin magnetization and employing London atomic orbitals to ensure gauge origin independent results, together with a new and efficient scheme for assuring correct balance between the large and small components of a molecular four-component spinor in the presence of an external magnetic field (simple magnetic balance). To test our formalism we have carried out calculations of NMR shielding tensors for the HX series (X = F, Cl, Br, I, At), the Xe atom, and the Xe dimer. The advantage of simple magnetic balance scheme combined with the use of London atomic orbitals is the fast convergence of results (when compared with restricted kinetic balance) and elimination of linear dependencies in the basis set (when compared to unrestricted kinetic balance). The effect of including spin magnetization in the description of NMR shielding tensor has been found important for hydrogen atoms in heavy HX molecules, causing an increase of isotropic values of 10%, but negligible for heavy atoms.
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2016-07-01
In this work, an arbitrary order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was first used in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high order extension of the augmented Roe solver and led to a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independently of the Riemann solver used within it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This novel version of the FS solver allows the solution to be computed without requiring reconstruction of the derivatives of the fluxes, although some drawbacks become evident. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy-balanced solutions. As a result of this work, energy-balanced flux-ADER schemes are constructed that provide the exact solution for steady cases and converge to the exact solution with arbitrary order for transient cases.
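The HLL building block underlying HLL-type schemes like the one above can be written compactly for the shallow water equations. A minimal sketch (first-order, flat bed, simple wave-speed estimates; the full HLLS-ADER construction adds the source term to the Riemann problem and high-order reconstruction):

```python
import math

g = 9.81

def hll_flux(hL, uL, hR, uR):
    """HLL numerical flux for the 1D shallow water equations, U = (h, hu)."""
    def F(h, u):
        return (h * u, h * u * u + 0.5 * g * h * h)
    cL, cR = math.sqrt(g * hL), math.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)    # simple left/right wave-speed estimates
    sR = max(uL + cL, uR + cR)
    if sL >= 0:
        return F(hL, uL)          # supersonic to the right
    if sR <= 0:
        return F(hR, uR)          # supersonic to the left
    FL, FR = F(hL, uL), F(hR, uR)
    UL, UR = (hL, hL * uL), (hR, hR * uR)
    return tuple((sR * FL[k] - sL * FR[k] + sL * sR * (UR[k] - UL[k]))
                 / (sR - sL) for k in range(2))

print(hll_flux(2.0, 0.0, 1.0, 0.0))   # dam-break interface: positive mass flux
```

Consistency is easy to check: for identical left and right states the HLL flux reduces to the physical flux.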
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility testing of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
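A counter-based dynamic load balancing scheme can be sketched with a shared atomic counter: each worker repeatedly grabs the next contingency case index, so faster workers automatically process more cases. The sketch below is a generic Python-threads illustration of the idea, not the paper's HPC implementation:

```python
import threading
import time

def run_contingencies(n_cases, n_workers):
    """Counter-based dynamic load balancing: workers atomically fetch the next
    case index from a shared counter until all cases are claimed."""
    counter = {"next": 0}
    lock = threading.Lock()
    done = [0] * n_workers

    def worker(wid):
        while True:
            with lock:                  # atomic fetch-and-increment
                case = counter["next"]
                if case >= n_cases:
                    return
                counter["next"] += 1
            time.sleep(0.001 * (1 + case % 3))   # uneven per-case cost
            done[wid] += 1

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

done = run_contingencies(60, 4)
print(sum(done), done)   # all 60 cases processed, spread across workers
```

Unlike a static split of cases, no worker sits idle while another still holds a long queue, which is the property the paper's evaluation quantifies at scale.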
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system offers graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
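The baker's map iteration at the heart of these codes can be sketched in one dimension. This toy encoder/decoder is our simplification (the paper's mirrored and single-input codes are more elaborate, and no channel noise is modeled here): the map runs forward to encode and backward to decode:

```python
def bakers_encode(x, n):
    """Encode a source sample x in [0, 1) as n analog symbols by iterating the
    one-dimensional baker's (binary-shift) map; each iterate exposes one more
    'bit layer' of x, giving graded analog error protection."""
    seq = []
    for _ in range(n):
        seq.append(x)
        x = 2 * x if x < 0.5 else 2 * x - 1   # baker's map (expanding direction)
    return seq

def bakers_decode(seq):
    """Decode by running the map backwards from the last symbol, choosing the
    inverse branch indicated by each earlier symbol."""
    x = seq[-1]
    for s in reversed(seq[:-1]):
        x = 0.5 * x if s < 0.5 else 0.5 * x + 0.5
    return x

enc = bakers_encode(0.8125, 5)
print(bakers_decode(enc))   # recovers 0.8125 exactly in this noiseless case
```

With channel noise, later symbols refine the estimate while branch (fold) errors cause large distortion, which is exactly the fold-error/weak-distortion trade-off the proposed codes balance.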
A comparison study of two snow models using data from different Alpine sites
NASA Astrophysics Data System (ADS)
Piazzi, Gaia; Riboust, Philippe; Campo, Lorenzo; Cremonese, Edoardo; Gabellani, Simone; Le Moine, Nicolas; Morra di Cella, Umberto; Ribstein, Pierre; Thirel, Guillaume
2017-04-01
The hydrological balance of an Alpine catchment is strongly affected by snowpack dynamics. Meltwater supplies a significant component of the annual water budget, both in terms of soil moisture and runoff, which plays a critical role in flood generation and affects water resource management in snow-dominated basins. Several snow models have been developed with variable degrees of complexity, mainly depending on their target application and the availability of computational resources and data. According to the level of detail, snow models range from statistical snowmelt-runoff and degree-day methods using composite snow-soil or explicit snow layer(s), to physically based, energy-balance snow models consisting of detailed internal snow-process schemes. Intermediate-complexity approaches have been widely developed, resulting in simplified versions of the physical parameterization schemes with reduced snowpack layering. Nevertheless, increasing model complexity does not necessarily entail improved model simulations. This study presents a comparison between two snow models designed for hydrological purposes. The snow module developed at UPMC and IRSTEA is a mono-layer energy balance model that analytically resolves heat and phase-change equations within the snowpack. Vertical mass exchange within the snowpack is also analytically resolved. The model is intended to be used for hydrological studies but also to give a realistic estimation of the snowpack state at watershed scale (SWE and snow depth). The structure of the model allows it to be easily calibrated using snow observations. This model is further presented in EGU2017-7492. The snow module of SMASH (Snow Multidata Assimilation System for Hydrology) consists of a multi-layer snow dynamics scheme.
It is physically based on mass and energy balances and reproduces the main physical processes occurring within the snowpack: accumulation, density dynamics, melting, sublimation, radiative balance, and heat and mass exchanges. The model is driven by observed meteorological forcing data (air temperature, wind velocity, relative air humidity, precipitation and incident solar radiation) to provide an estimation of the snowpack state. In this study, no DA is used. For more details on the DA scheme, please see EGU2017-7777. Observed data supplied by meteorological stations located at three experimental Alpine sites are used: Col de Porte (1325 m, France); Torgnon (2160 m, Italy); Weissfluhjoch (2540 m, Switzerland). The performances of the two models are compared by evaluating snow mass, snow depth, albedo and surface temperature simulations, in order to better understand and pinpoint the limits and potential of the analyzed schemes and the impact of different parameterizations on model simulations.
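The degree-day (temperature-index) family mentioned above is simple enough to sketch in a few lines. The toy model below is our own illustration (the parameter values are invented, and neither model compared in the study is this simple): snowfall accumulates below a temperature threshold, and melt is proportional to the positive temperature excess:

```python
def degree_day_swe(precip_mm, temp_c, ddf=3.0, t_thresh=0.0):
    """Minimal degree-day snow model: precipitation at or below t_thresh
    accumulates as snow water equivalent (SWE); above it, melt removes
    ddf * (T - t_thresh) per day (ddf in mm/degC/day, an assumed value)."""
    swe, series = 0.0, []
    for p, t in zip(precip_mm, temp_c):
        if t <= t_thresh:
            swe += p                                     # snowfall accumulates
        else:
            swe = max(swe - ddf * (t - t_thresh), 0.0)   # temperature-index melt
        series.append(swe)
    return series

precip = [10, 5, 0, 0, 0, 0]    # daily precipitation, mm (hypothetical)
temp = [-4, -2, -1, 2, 5, 8]    # daily mean air temperature, degC
print(degree_day_swe(precip, temp))
```

Energy-balance models like the two compared in the study replace the single `ddf` coefficient with explicit radiative, turbulent and phase-change terms.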
A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays
Hughes, Alec; Hynynen, Kullervo
2016-01-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323
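The role of the regularization parameter described above, trading focusing quality against array efficiency, mirrors the generic Tikhonov least-squares trade-off. The sketch below uses a generic underdetermined system with random data (not an acoustic array model): the residual grows and the solution norm shrinks as lambda increases:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: minimize ||Ax - b||^2 + lam ||x||^2,
    solved via the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 60))   # underdetermined, ill-posed like focus rotation
b = rng.standard_normal(40)

for lam in (1e-6, 1e-2, 1.0):
    x = tikhonov_solve(A, b, lam)
    fit = np.linalg.norm(A @ x - b)   # analogue of focusing quality
    effort = np.linalg.norm(x)        # analogue of drive amplitude / efficiency
    print(f"lam={lam:g}  residual={fit:.3f}  ||x||={effort:.3f}")
```

Choosing lambda is therefore a balance: too small and the drive amplitudes (and hence array inefficiency) blow up; too large and the focus quality degrades.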
Tree Colors: Color Schemes for Tree-Structured Data.
Tennekes, Martijn; de Jonge, Edwin
2014-12-01
We present a method to map tree structures to colors from the Hue-Chroma-Luminance color model, which is known for its well-balanced perceptual properties. The Tree Colors method can be tuned with several parameters, whose effect on the resulting color schemes is discussed in detail. We provide a free and open source implementation with sensible parameter defaults. Categorical data are very common in statistical graphics, and often these categories form a classification tree. We evaluate applying Tree Colors to tree-structured data with a survey of a large group of users from a national statistical institute. Our user study suggests that Tree Colors are useful, not only for improving node-link diagrams, but also for unveiling tree structure in non-hierarchical visualizations.
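The recursive hue-subdivision idea behind such tree coloring can be sketched as follows. This toy version splits only the hue circle and ignores the chroma/luminance dimensions and the tuning parameters the paper discusses; `tree_hues` and the tuple tree encoding are hypothetical names introduced purely for illustration:

```python
def tree_hues(tree, lo=0.0, hi=360.0, out=None):
    """Assign each node a hue by recursively splitting its parent's hue
    range among siblings. tree: (name, [children]) tuples."""
    if out is None:
        out = {}
    name, children = tree
    out[name] = (lo + hi) / 2.0          # node hue = middle of its range
    if children:
        step = (hi - lo) / len(children)
        for i, child in enumerate(children):
            tree_hues(child, lo + i * step, lo + (i + 1) * step, out)
    return out

t = ("root", [("a", [("a1", []), ("a2", [])]), ("b", [])])
hues = tree_hues(t)
# siblings "a" and "b" get distinct hues; children of "a" stay inside a's range
```

Keeping each subtree inside its parent's hue range is what lets the colors reveal the hierarchy even in non-hierarchical views.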
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
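A simple static greedy assignment of the kind compared above can be sketched as a longest-processing-time heuristic: give each task, largest first, to the currently least-loaded processor. Note that the paper's greedy strategy is a fully polynomial time approximation scheme, so this is an illustrative stand-in rather than that exact algorithm:

```python
import heapq

def greedy_assign(task_costs, n_procs):
    """Longest-processing-time greedy static load balancing.

    Returns per-processor loads and a task -> processor assignment."""
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)           # least-loaded processor
        assignment[task] = p
        heapq.heappush(heap, (load + cost, p))
    loads = [0.0] * n_procs
    for task, p in assignment.items():
        loads[p] += task_costs[task]
    return loads, assignment

loads, assignment = greedy_assign([7, 5, 4, 3, 3, 2], 2)
# both processors end up with load 12, the ideal makespan for this instance
```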
Lee, Hyun-Soo; Choi, Seung Hong; Park, Sung-Hong
2017-07-01
To develop single and double acquisition methods to compensate for artifacts from eddy currents and transient oscillations in balanced steady-state free precession (bSSFP) with centric phase-encoding (PE) order for magnetization-prepared bSSFP imaging. A single and four different double acquisition methods were developed and evaluated with Bloch equation simulations, phantom/in vivo experiments, and quantitative analyses. For the single acquisition method, multiple PE groups, each of which was composed of N linearly changing PE lines, were ordered in a pseudocentric manner for optimal contrast and minimal signal fluctuations. Double acquisition methods used complex averaging of two images that had opposite artifact patterns from different acquisition orders or from different numbers of dummy scans. Simulation results showed high sensitivity of eddy-current and transient-oscillation artifacts to off-resonance frequency and PE schemes. The artifacts were reduced with the PE-grouping with N values from 3 to 8, similar to or better than the conventional pairing scheme of N = 2. The proposed double acquisition methods removed the remaining artifacts significantly. The proposed methods conserved detailed structures in magnetization transfer imaging well, compared with the conventional methods. The proposed single and double acquisition methods can be useful for artifact-free magnetization-prepared bSSFP imaging with desired contrast and minimized dummy scans. Magn Reson Med 78:254-263, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Dynamic Transfers Of Tasks Among Computers
NASA Technical Reports Server (NTRS)
Liu, Howard T.; Silvester, John A.
1989-01-01
Allocation scheme gives jobs to idle computers. Ideal resource-sharing algorithm should have following characteristics: dynamic, decentralized, and heterogeneous. Proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all above criteria. Provides method for balancing workload among hosts, resulting in improvement in response time and throughput performance of total system. Adjusts dynamically to traffic load of each station.
Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2015-01-01
An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi-span balance is used to illustrate the application of the improved approach.
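The regression setup described above, adding the temperature difference and its square as extra regressors alongside the load, can be sketched with synthetic data. All numbers below are invented for illustration and have nothing to do with the actual ARC-30K calibration:

```python
import numpy as np

# Synthetic gage output: y = a*F + b*dT + c*dT**2 + noise, dT = T - T_ref.
rng = np.random.default_rng(1)
T_ref = 22.0                              # assumed primary calibration temperature
F = rng.uniform(-100, 100, 200)           # applied load
T = rng.uniform(10, 40, 200)              # uniform balance temperature
dT = T - T_ref
y = 2.5 * F + 0.8 * dT - 0.05 * dT**2 + 0.01 * rng.standard_normal(200)

# Regressors include the load, the temperature difference, and its square.
X = np.column_stack([F, dT, dT**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef recovers approximately [2.5, 0.8, -0.05]
```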
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physiochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel Fractional step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss workload balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
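The fractional-step construction rests on the Trotter product formula: split the generator into pieces and alternate their exponentials over short time windows. A small dense-matrix sketch of that formula, assuming a toy two-state Markov generator rather than an actual lattice KMC operator:

```python
import numpy as np

def mexp(A, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small ||A||)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Toy column-stochastic generator split L = L1 + L2 (non-commuting pieces).
L1 = np.array([[-1.0, 0.0], [1.0, 0.0]])   # jump 0 -> 1 at rate 1
L2 = np.array([[0.0, 2.0], [0.0, -2.0]])   # jump 1 -> 0 at rate 2
L = L1 + L2

t, n = 1.0, 1000
dt = t / n
# Lie-Trotter splitting: (exp(dt L1) exp(dt L2))^n approximates exp(t L).
trotter = np.linalg.matrix_power(mexp(dt * L1) @ mexp(dt * L2), n)
exact = mexp(t * L)
err = np.abs(trotter - exact).max()
# first-order splitting: err shrinks roughly like O(dt)
```

In the parallel KMC setting the factors of the product correspond to sub-lattice generators evolved concurrently on different processors within each fractional-step window.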
Green's function enriched Poisson solver for electrostatics in many-particle systems
NASA Astrophysics Data System (ADS)
Sutmann, Godehard
2016-06-01
A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator-adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme, but the computational efficiency of higher order methods is found to be superior due to a faster convergence to the exact result as a function of the charge support.
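For contrast with the operator-adjusted source term of the paper, a plain second-order finite-difference Poisson solve with a Gaussian-regularized charge looks as follows. This sketch exhibits exactly the kind of discretization error the proposed method is designed to compensate; it is not the adjusted scheme itself, and the grid sizes are arbitrary:

```python
import numpy as np

# 1D model problem: -u'' = rho on [0, 1], u(0) = u(1) = 0,
# with a Gaussian-regularized point charge at x = 0.5.
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
sigma = 0.05
rho = np.exp(-((x - 0.5) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Standard [-1, 2, -1]/h^2 Laplacian on the interior nodes.
A = (np.diag(2.0 * np.ones(n - 2))
     - np.diag(np.ones(n - 3), 1)
     - np.diag(np.ones(n - 3), -1)) / h**2

u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, rho[1:-1])
# the computed potential peaks at the charge centre
```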
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of the urban weather and climate is of great importance as our cities become more populated and considering the combined effects of global warming and local land use changes which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the "improved Sobol's global variance decomposition method". The analysis showed that parameters related to road, roof and soil moisture have significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set.
The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.
Calibration and Data Analysis of the MC-130 Air Balance
NASA Technical Reports Server (NTRS)
Booth, Dennis; Ulbrich, N.
2012-01-01
Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang; Hsu, Yi-Kai
2017-03-01
A three-arm dual-balanced detection scheme is studied in an optical code division multiple access system. As MAI and beat noise are the main deleterious sources of system performance degradation, we utilize optical hard-limiters to alleviate such channel impairment. In addition, once the channel condition is improved effectively, the proposed two-dimensional error correction code can remarkably enhance the system performance. In our proposed scheme, the optimal thresholds of the optical hard-limiters and decision circuitry are fixed, and they do not change with other system parameters. Our proposed scheme can accommodate a large number of users simultaneously and is suitable for burst traffic with asynchronous transmission. Therefore, it is highly recommended as a platform for broadband optical access networks.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species
NASA Astrophysics Data System (ADS)
Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar
2018-02-01
The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
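An adaptive explicit Heun scheme of the kind found effective above can be sketched with an embedded Euler/Heun pair: the first-order Euler predictor and second-order Heun corrector differ by an amount that estimates the local error and drives the step-size controller. The controller below is a generic textbook choice, not necessarily the one used in the study:

```python
def heun_adaptive(f, t0, y0, t_end, tol=1e-6, h=0.1):
    """Adaptive explicit Heun with an embedded 1st/2nd-order error estimate.

    Returns the accepted (t, y) pairs."""
    t, y, out = t0, y0, [(t0, y0)]
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        y_euler = y + h * k1                      # 1st-order predictor
        k2 = f(t + h, y_euler)
        y_heun = y + 0.5 * h * (k1 + k2)          # 2nd-order corrector
        err = abs(y_heun - y_euler)               # local error estimate
        if err <= tol or h < 1e-12:               # accept the step
            t, y = t + h, y_heun
            out.append((t, y))
        # standard controller for a 1st-order error estimate, with safety 0.9
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-30)) ** 0.5))
    return out

# Linear reservoir dy/dt = -k*y, a typical building block of conceptual
# rainfall-runoff models; exact solution is y0 * exp(-k*t).
sol = heun_adaptive(lambda t, y: -2.0 * y, 0.0, 1.0, 5.0, tol=1e-7)
```

The step size grows automatically as the storage drains, which is the efficiency advantage over fixed-step explicit schemes noted in the abstract.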
Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.
Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E
2007-02-15
Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
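The analytical route taken for first-order schemes, solving the linear ODE system by eigendecomposition of the rate matrix, can be sketched for a reversible two-state scheme A <-> B. This is a generic linear-algebra illustration, not VisKin's MatLab/Visual Basic implementation:

```python
import numpy as np

# Reversible first-order scheme A <-> B with forward/reverse rates kf, kr.
# dc/dt = K c has the analytical solution c(t) = V exp(Lambda t) V^-1 c(0).
kf, kr = 3.0, 1.0
K = np.array([[-kf,  kr],
              [ kf, -kr]])
w, V = np.linalg.eig(K)                   # eigenvalues w, eigenvectors V
c0 = np.array([1.0, 0.0])                 # all material starts as A

def conc(t):
    a = np.linalg.solve(V, c0)            # expand c0 in the eigenbasis
    return (V * np.exp(w * t)) @ a        # V @ diag(exp(w t)) @ a

c_inf = conc(50.0)
# at equilibrium the ratio B/A equals kf/kr = 3 (detailed balance)
```

Because the columns of K sum to zero, total concentration is conserved, which is the kind of thermodynamic constraint the software checks via graph-theory cycle conditions.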
NASA Astrophysics Data System (ADS)
Luo, D.; Guan, Z.; Wang, C.; Yue, L.; Peng, L.
2017-06-01
The distribution of different parts to assembly lines is significant for companies seeking to improve production. This research investigates the problem of distribution-method optimization in the logistics system of a third-party logistics company that provides professional services to an automobile manufacturing case company in China. It examines the leveling of material distribution and the unloading platform of the automobile logistics enterprise, and proposes a logistics distribution strategy, a material classification method, and a logistics scheduling approach. Moreover, the Simio simulation technology is applied to the assembly line logistics system, which helps to find and validate an optimized distribution scheme through simulation experiments. Experimental results indicate that the proposed scheme solves the logistics balancing and material leveling problem and relieves congestion at the unloading platform more efficiently than the original method employed by the case company.
Hsu, Wen-Yang; Schmid, Alexandre
2017-08-01
Safety and energy efficiency are two major concerns for implantable neural stimulators. This paper presents a novel high-frequency switched-capacitor (HFSC) stimulation and active charge balancing scheme, which achieves high energy efficiency and well-controlled stimulation charge in the presence of large electrode impedance variations. Furthermore, the HFSC can be implemented in a compact size without any external component to simultaneously enable multichannel stimulation by deploying multiple stimulators. The theoretical analysis shows significant benefits over the constant-current and voltage-mode stimulation methods. The proposed solution was fabricated using a 0.18 μm high-voltage technology, and occupies only 0.035 mm² for a single stimulator. The measurement results show 50% peak energy efficiency and confirm the effectiveness of active charge balancing to prevent electrode dissolution.
Improving the XAJ Model on the Basis of Mass-Energy Balance
NASA Astrophysics Data System (ADS)
Fang, Yuanhao; Corbari, Chiara; Zhang, Xingnan; Mancini, Marco
2014-11-01
The Xin'anjiang (XAJ) model is a conceptual model developed by the group led by Prof. Ren-Jun Zhao, which takes the pan evaporation as one of its inputs and then computes the effective evapotranspiration (ET) of the catchment by mass balance. Such a scheme can ensure a good performance in discharge simulation but has obvious defects, one of which is that the effective ET is spatially constant over the computation unit, neglecting the spatial variation of variables that influence the effective ET; therefore the simulation of ET and SM by the XAJ model, compared with discharge, is less reliable. In this study, the XAJ model was improved to employ both energy and mass balance to compute the ET, following the energy-mass balance scheme of the FEST-EWB model.
Pyrometer with tracking balancing
NASA Astrophysics Data System (ADS)
Ponomarev, D. B.; Zakharenko, V. A.; Shkaev, A. G.
2018-04-01
Currently, one of the main metrological challenges in noncontact temperature measurement is emissivity uncertainty. This paper describes a pyrometer that diminishes the emissivity effect through a measuring scheme with tracking balancing, in which the radiation receiver acts as a null-indicator. The results of an absolute-error study of the prototype pyrometer in measuring the surface temperature of aluminum and nickel samples are presented. Absolute errors calculated from tabulated emissivity values are compared with the errors obtained from experimental measurements using the proposed method. The practical implementation of the proposed technical solution has reduced the error due to emissivity uncertainty by a factor of two.
Active tower damping and pitch balancing - design, simulation and field test
NASA Astrophysics Data System (ADS)
Duckwitz, Daniel; Shan, Martin
2014-12-01
The tower is one of the major components in wind turbines, with a contribution to the cost of energy of 8 to 12% [1]. In this overview the load situation of the tower is described in terms of sources of loads, load components and fatigue contribution. Then two load reduction control schemes are described along with simulation and field test results. Pitch Balancing is described as a method to reduce aerodynamic asymmetry and the resulting fatigue loads. Active Tower Damping reduces the tower oscillations by applying appropriate pitch angle changes. A field test was conducted on an Areva M5000 wind turbine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wintermeyer, Niklas; Winters, Andrew R., E-mail: awinters@math.uni-koeln.de; Gassner, Gregor J.
We design an arbitrary high-order accurate nodal discontinuous Galerkin spectral element approximation for the non-linear two dimensional shallow water equations with non-constant, possibly discontinuous, bathymetry on unstructured, possibly curved, quadrilateral meshes. The scheme is derived from an equivalent flux differencing formulation of the split form of the equations. We prove that this discretization exactly preserves the local mass and momentum. Furthermore, combined with a special numerical interface flux function, the method exactly preserves the mathematical entropy, which is the total energy for the shallow water equations. By adding a specific form of interface dissipation to the baseline entropy conserving scheme we create a provably entropy stable scheme. That is, the numerical scheme discretely satisfies the second law of thermodynamics. Finally, with a particular discretization of the bathymetry source term we prove that the numerical approximation is well-balanced. We provide numerical examples that verify the theoretical findings and furthermore provide an application of the scheme for a partial break of a curved dam test problem.
1997-09-30
research is multiscale, interdisciplinary and generic. The methods are applicable to an arbitrary region of the coastal and/or deep ocean and across the...dynamics. OBJECTIVES General objectives are: (I) To determine for the coastal and/or coupled deep ocean the multiscale processes which occur: i) in...Straits and the eastern basin; iii) extension and application of our balance of terms scheme (EVA) to multiscale, interdisciplinary fields with data
Liu, Xikai; Ma, Dong; Chen, Liang; Liu, Xiangdong
2018-02-08
Tuning the stiffness balance is crucial to full-band common-mode rejection for a superconducting gravity gradiometer (SGG). A reliable method to do so has been proposed and experimentally tested. In the tuning scheme, the frequency response functions of the displacement of each test mass upon common-mode accelerations were measured, thus determining a characteristic frequency for each test mass. A reduced difference in characteristic frequencies between the two test masses was utilized as the criterion for an effective tuning. Since the measurement of the characteristic frequencies does not depend on the scale factors of displacement detection, stiffness tuning can be done independently. We have tested this new method on a single-component SGG and obtained a reduction of two orders of magnitude in stiffness mismatch.
NASA Astrophysics Data System (ADS)
Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.
2017-12-01
Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. 
The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs into glacier energy and mass balance models.
Learning and tuning fuzzy logic controllers through reinforcements.
Berenji, H R; Khedkar, P
1992-01-01
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous schemes for cart-pole balancing in terms of the speed of learning and robustness to changes in the dynamic system's parameters.
Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG
NASA Astrophysics Data System (ADS)
Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu
2016-12-01
Applying path searching based on distribution network topology in setting software has a good effect, and a path searching method that includes DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies the path searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and according to the line loads traversed by the search path and the important loads integrated along the optimized path, an optimized division scheme of planned islands is formed in which each DG serves as a power source balanced against the local important loads. Finally, the COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of the automatic restoration program.
NASA Astrophysics Data System (ADS)
Nuber, André; Manukyan, Edgar; Maurer, Hansruedi
2014-05-01
Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say, 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need for separating events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method includes equilibrating the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical.
We decided to normalise the columns of the Jacobian based on their absolute column sum, but defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method proposed includes adjusting the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are on the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are quite balanced and span a much smaller range than in the case of a regular inversion grid.
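The first balancing method, column equilibration with a capped scaling factor, can be sketched roughly as follows. The normalisation target (the largest column sum) and the cap value are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def scale_jacobian_columns(J, max_factor=1e3):
    """Equilibrate the columns of a Jacobian by their absolute column
    sums, capping the scaling factors so tiny (insignificant)
    sensitivities are not over-boosted. Illustrative sketch only."""
    col_sums = np.abs(J).sum(axis=0)
    # Target: bring every column sum up to the largest one...
    factors = col_sums.max() / np.maximum(col_sums, 1e-300)
    # ...but cap the boost to avoid amplifying insignificant columns.
    factors = np.minimum(factors, max_factor)
    return J * factors, factors
```

As the abstract notes, applying such scaling is mathematically equivalent to reweighting the regularization of the individual model parameters.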
Silva, Bhagya Nathali; Khan, Murad; Han, Kijun
2018-02-25
The emergence of smart devices and smart appliances has greatly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load-balancing and appliance-scheduling algorithms. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is kept below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results reveal a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by the automated switching off mechanism.
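A minimal sketch of least-slack-time scheduling under a cumulative load cap, in the spirit of the scheme described above. The appliance model, the time-slot granularity, and the greedy fit-below-threshold rule are simplifying assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    power: float    # load drawn while running (e.g. kW)
    runtime: int    # remaining time slots needed
    deadline: int   # latest slot by which it must finish

def lst_schedule(appliances, max_load, horizon):
    """Greedy least-slack-time scheduling under a cumulative load cap:
    at each slot, run the pending appliances with the smallest slack
    that still fit below max_load. Hypothetical simplification."""
    schedule = {a.name: [] for a in appliances}
    for t in range(horizon):
        pending = [a for a in appliances if a.runtime > 0]
        # slack = time left before the deadline minus work left
        pending.sort(key=lambda a: (a.deadline - t) - a.runtime)
        load = 0.0
        for a in pending:
            if load + a.power <= max_load:
                load += a.power
                a.runtime -= 1
                schedule[a.name].append(t)
    return schedule
```

With a 1 kW cap, an appliance with a tight deadline preempts one with ample slack, which is the behavior LST is chosen for.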
Projected Hybrid Orbitals: A General QM/MM Method
2015-01-01
A projected hybrid orbital (PHO) method is described to model the covalent boundary in a hybrid quantum mechanical and molecular mechanical (QM/MM) system. The PHO approach can be used in ab initio wave function theory and in density functional theory with any basis set without introducing system-dependent parameters. In this method, a secondary basis set on the boundary atom is introduced to formulate a set of hybrid atomic orbitals. The primary basis set on the boundary atom used for the QM subsystem is projected onto the secondary basis to yield a representation that provides a good approximation to the electron-withdrawing power of the primary basis set to balance electronic interactions between QM and MM subsystems. The PHO method has been tested on a range of molecules and properties. Comparison with results obtained from QM calculations on the entire system shows that the present PHO method is a robust and balanced QM/MM scheme that preserves the structural and electronic properties of the QM region. PMID:25317748
NASA Astrophysics Data System (ADS)
Maxwell, D.; Odang, R. W.; Koesmaningati, H.
2017-08-01
Balanced occlusion is commonly used as a complete denture occlusion scheme; however, canine guidance offers a simpler process and reduces alveolar ridge resorption. Correlative research on these two occlusion schemes is required. This study was done to analyze the correlation between masticatory muscle activity and masticatory ability of the subjects with canine guidance and balanced occlusion complete dentures. Ten denture wearers participated in this cross-over clinical trial, and five subjects were randomly selected to wear balanced occlusion followed by canine guidance complete dentures and vice versa. Electromyogram (EMG) activities of superficial masseter and anterior temporal muscles were measured and masticatory ability questionnaires were collected 30 days after the subjects wore each occlusal scheme. There were significant differences between the EMG activities of masticatory muscles in subjects who were given canine guidance and balanced occlusion complete dentures (p < 0.05). Subjects rated their masticatory ability as being significantly better when using canine guidance dentures (p = 0.046). There was a significant and strong correlation (p = 0.045; r = 0.642) between the EMG activity of anterior temporal muscles and masticatory ability when the subjects wore balanced occlusion dentures and between the EMG activity of superficial masseter muscles and masticatory ability (p = 0.043; r = 0.648) when wearing canine guidance dentures. Masticatory ability is better when using canine guidance dentures. There is a significant and strong correlation between masticatory muscle activity and masticatory ability.
Analysis/forecast experiments with a multivariate statistical analysis scheme using FGGE data
NASA Technical Reports Server (NTRS)
Baker, W. E.; Bloom, S. C.; Nestler, M. S.
1985-01-01
A three-dimensional, multivariate, statistical analysis method, optimal interpolation (OI), is described for modeling meteorological data from widely dispersed sites. The model was developed to analyze FGGE data at the NASA-Goddard Laboratory of Atmospherics. The model features a multivariate surface analysis over the oceans, including maintenance of the Ekman balance and a geographically dependent correlation function. Preliminary comparisons are made between the OI model and similar schemes employed at the European Center for Medium Range Weather Forecasts and the National Meteorological Center. The OI scheme is used to provide input to a GCM, and model error correlations are calculated for forecasts of 500 mb vertical water mixing ratios and wind profiles. Comparisons are made between the predictions and measured data. The model is shown to be as accurate as a successive corrections model out to 4.5 days.
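The core analysis step of optimal interpolation can be illustrated with a toy update. This is the generic textbook OI formula, not the Goddard implementation; the matrix names (background covariance B, observation operator H, observation-error covariance R) are conventional, not taken from the paper.

```python
import numpy as np

def oi_update(xb, B, H, R, y):
    """One optimal-interpolation analysis step:
        xa = xb + K (y - H xb),  with gain K = B H^T (H B H^T + R)^-1.
    xb: background state, y: observations. Toy illustration only."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)
```

With equal background and observation error variances, the analysis lands halfway between the background and the observation, as expected.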
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Milly, P. C. D.
1997-01-01
The Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) has shown that different land surface models (LSMS) driven by the same meteorological forcing can produce markedly different surface energy and water budgets, even when certain critical aspects of the LSMs (vegetation cover, albedo, turbulent drag coefficient, and snow cover) are carefully controlled. To help explain these differences, the authors devised a monthly water balance model that successfully reproduces the annual and seasonal water balances of the different PILPS schemes. Analysis of this model leads to the identification of two quantities that characterize an LSM's formulation of soil water balance dynamics: (1) the efficiency of the soil's evaporation sink integrated over the active soil moisture range, and (2) the fraction of this range over which runoff is generated. Regardless of the LSM's complexity, the combination of these two derived parameters with rates of interception loss, potential evaporation, and precipitation provides a reasonable estimate for the LSM's simulated annual water balance. The two derived parameters shed light on how evaporation and runoff formulations interact in an LSM, and the analysis as a whole underscores the need for compatibility in these formulations.
Gu, Xiangping; Zhou, Xiaofeng; Sun, Yanjing
2018-02-28
Compressive sensing (CS)-based data gathering is a promising method to reduce energy consumption in wireless sensor networks (WSNs). Traditional CS-based data-gathering approaches require a large number of sensor nodes to participate in each CS measurement task, resulting in high energy consumption, and do not guarantee load balance. In this paper, we propose a sparse representation based on modified diffusion wavelets, which exploits the spatial correlation of sensor readings in WSNs. In particular, a novel data-gathering scheme with joint routing and CS is presented. A modified ant colony algorithm is adopted, in which next-hop node selection takes a node's residual energy and path length into consideration simultaneously. Moreover, in order to speed up the convergence rate and avoid local optima of the algorithm, an improved pheromone impact factor is put forward. More importantly, a theoretical proof is given that the equivalent sensing matrix generated can satisfy the restricted isometry property (RIP). The simulation results demonstrate that the modified diffusion wavelets sparsify the sensor signal effectively and achieve better reconstruction performance than the DFT basis. Furthermore, our data gathering with joint routing and CS can dramatically reduce the energy consumption of WSNs, balance the load, and prolong the network lifetime in comparison to state-of-the-art CS-based methods.
NASA Astrophysics Data System (ADS)
Zeng, Qiusun; Chen, Dehong; Wang, Minghuang
2017-12-01
In order to improve the fusion energy gain (Q) of a gas dynamic trap (GDT)-based fusion neutron source, a method in which the neutral beam is obliquely injected at a higher magnetic field position rather than at the mid-plane of the GDT is proposed. This method is beneficial for confining a higher density of fast ions at the turning point in the zone with a higher magnetic field, as well as for obtaining a higher mirror ratio by reducing the mid-plane field rather than increasing the mirror field. In this situation, collisional scattering loss of the denser fast ions will occur and change the confinement time, power balance and particle balance. Using an updated calculation model with high-field neutral beam injection for a GDT-based fusion neutron source conceptual design, we obtained four optimized design schemes in which Q is improved two- to three-fold compared with a conventional design scheme, while respecting the limits for avoiding plasma instabilities, especially the fire-hose instability. The distribution of fast ions can be optimized by building a proper magnetic field configuration with enough space for neutron shielding and by multi-beam neutral particle injection at different axial points.
Suboptimal schemes for atmospheric data assimilation based on the Kalman filter
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Cohn, Stephen E.
1994-01-01
This work is directed toward approximating the evolution of forecast error covariances for data assimilation. The performance of different algorithms based on simplification of the standard Kalman filter (KF) is studied. These are suboptimal schemes (SOSs) when compared to the KF, which is optimal for linear problems with known statistics. The SOSs considered here are several versions of optimal interpolation (OI), a scheme for height error variance advection, and a simplified KF in which the full height error covariance is advected. To employ a methodology for exact comparison among these schemes, a linear environment is maintained, in which a beta-plane shallow-water model linearized about a constant zonal flow is chosen for the test-bed dynamics. The results show that constructing dynamically balanced forecast error covariances rather than using conventional geostrophically balanced ones is essential for successful performance of any SOS. A posteriori initialization of SOSs to compensate for model - data imbalance sometimes results in poor performance. Instead, properly constructed dynamically balanced forecast error covariances eliminate the need for initialization. When the SOSs studied here make use of dynamically balanced forecast error covariances, the difference among their performances progresses naturally from conventional OI to the KF. In fact, the results suggest that even modest enhancements of OI, such as including an approximate dynamical equation for height error variances while leaving height error correlation structure homogeneous, go a long way toward achieving the performance of the KF, provided that dynamically balanced cross-covariances are constructed and that model errors are accounted for properly. The results indicate that such enhancements are necessary if unconventional data are to have a positive impact.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Wang, Peng
2017-01-01
In emergency management of chemical contingency spills, the efficiency of emergency rescue is strongly influenced by a reasonable assignment of the available emergency materials to the related risk sources. In this study, an emergency material scheduling model (EMSM) with time-effective and cost-effective objectives is developed to coordinate both allocation and scheduling of the emergency materials. Meanwhile, an improved genetic algorithm (IGA), which includes a revision operation for the EMSM, is proposed to identify emergency material scheduling schemes. Then, scenario analysis is used to evaluate the optimal emergency rescue scheme under different pollution conditions associated with different threat degrees, based on the analytic hierarchy process (AHP) method. The whole framework is then applied to a computational experiment based on the south-to-north water transfer project in China. The results demonstrate that the developed method not only guarantees an emergency rescue that satisfies the requirements of chemical contingency spills but also helps decision makers identify appropriate emergency material scheduling schemes that balance time-effective and cost-effective objectives.
Very high order PNPM schemes on unstructured meshes for the resistive relativistic MHD equations
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Zanotti, Olindo
2009-10-01
In this paper we propose the first better than second order accurate method in space and time for the numerical solution of the resistive relativistic magnetohydrodynamics (RRMHD) equations on unstructured meshes in multiple space dimensions. The nonlinear system under consideration is purely hyperbolic and contains a source term, the one for the evolution of the electric field, that becomes stiff for low values of the resistivity. For the spatial discretization we propose to use high order PNPM schemes as introduced in Dumbser et al. [M. Dumbser, D. Balsara, E.F. Toro, C.D. Munz, A unified framework for the construction of one-step finite volume and discontinuous Galerkin schemes, Journal of Computational Physics 227 (2008) 8209-8253] for hyperbolic conservation laws and a high order accurate unsplit time-discretization is achieved using the element-local space-time discontinuous Galerkin approach proposed in Dumbser et al. [M. Dumbser, C. Enaux, E.F. Toro, Finite volume schemes of very high order of accuracy for stiff hyperbolic balance laws, Journal of Computational Physics 227 (2008) 3971-4001] for one-dimensional balance laws with stiff source terms. The divergence-free character of the magnetic field is accounted for through the divergence cleaning procedure of Dedner et al. [A. Dedner, F. Kemm, D. Kröner, C.-D. Munz, T. Schnitzer, M. Wesenberg, Hyperbolic divergence cleaning for the MHD equations, Journal of Computational Physics 175 (2002) 645-673]. To validate our high order method we first solve some numerical test cases for which exact analytical reference solutions are known and we also show numerical convergence studies in the stiff limit of the RRMHD equations using PNPM schemes from third to fifth order of accuracy in space and time. 
We also present some applications with shock waves such as a classical shock tube problem with different values for the conductivity as well as a relativistic MHD rotor problem and the relativistic equivalent of the Orszag-Tang vortex problem. We have verified that the proposed method can handle equally well the resistive regime and the stiff limit of ideal relativistic MHD. For these reasons it provides a powerful tool for relativistic astrophysical simulations involving the appearance of magnetic reconnection.
The space-time solution element method: A new numerical approach for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Scott, James R.; Chang, Sin-Chung
1995-01-01
This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Xu, Anni; Ma, Jiamin
2018-01-01
A dynamic bandwidth allocation (DBA) scheme based on a load balance algorithm (LBA) and a wavelength recycle mechanism (WRM) for multi-wavelength upstream transmission is proposed in this paper. According to their 1 Gbps and 10 Gbps line rates, ONUs are grouped into different VPONs. To facilitate wavelength management, a resource pool is proposed to record wavelength states. To enable quantitative analysis, a mathematical model describing the metro-access network (MAN) environment is presented. For the 10G-EPON upstream, the load balance algorithm is designed to ensure fair load distribution among 10G-OLTs. For the 1G-EPON upstream, the wavelength recycle mechanism is designed to share the remaining wavelengths. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
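The load-distribution idea behind the LBA can be sketched as a greedy least-loaded assignment of ONUs to wavelengths. The ONU demand figures, the largest-first ordering, and the dictionary representation are illustrative assumptions; the paper's algorithm additionally manages the wavelength resource pool.

```python
def assign_onus(onu_demands, wavelengths):
    """Greedy load balancing: take each ONU in decreasing order of
    demand and assign it to the currently least-loaded wavelength.
    Sketch of the load-balance idea only."""
    load = {w: 0.0 for w in wavelengths}
    assignment = {}
    for onu, demand in sorted(onu_demands.items(), key=lambda kv: -kv[1]):
        w = min(load, key=load.get)   # least-loaded wavelength so far
        assignment[onu] = w
        load[w] += demand
    return assignment, load
```

Largest-first greedy placement is a standard heuristic for makespan-style balancing and keeps the per-wavelength loads close together.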
Liu, Xikai; Ma, Dong; Chen, Liang; Liu, Xiangdong
2018-01-01
Tuning the stiffness balance is crucial to full-band common-mode rejection for a superconducting gravity gradiometer (SGG). A reliable method to do so has been proposed and experimentally tested. In the tuning scheme, the frequency response functions of the displacement of individual test mass upon common-mode accelerations were measured and thus determined a characteristic frequency for each test mass. A reduced difference in characteristic frequencies between the two test masses was utilized as the criterion for an effective tuning. Since the measurement of the characteristic frequencies does not depend on the scale factors of displacement detection, stiffness tuning can be done independently. We have tested this new method on a single-component SGG and obtained a reduction of two orders of magnitude in stiffness mismatch. PMID:29419796
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems and resolving solution features of interest. Unfortunately, it causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are within 3 percent of the optimal solution, but requires only 1 percent of the computational time.
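A greedy stand-in for the heuristic remapping idea: assign each partition to the free processor that already holds most of its data, so redistribution cost stays small. The overlap-matrix data structure and the largest-first ordering are assumptions for illustration, not the paper's algorithm.

```python
def remap_partitions(overlap):
    """Greedy heuristic remapping. overlap[p][q] is the amount of
    partition p's data already resident on processor q. Partitions are
    placed largest-total first, each onto the free processor holding
    most of its data, to keep redistribution cost low. Sketch only."""
    n = len(overlap)
    free = set(range(n))
    order = sorted(range(n), key=lambda p: -sum(overlap[p]))
    assignment = {}
    for p in order:
        q = max(free, key=lambda q: overlap[p][q])
        assignment[p] = q
        free.remove(q)
    return assignment
```

A greedy pass like this runs in a tiny fraction of the time of an exact assignment solver, which matches the cost/quality trade-off reported in the abstract.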
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified, and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
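The coordinate-basis construction of the null-space transformation can be illustrated on a small linear equality constraint A x = b. This is the generic variable-reduction recipe, assuming the leading square block of A is invertible; the harmonic balance condensation and the nonlinear constraints of the actual method are not shown.

```python
import numpy as np

def coordinate_null_basis(A):
    """Coordinate-basis null-space matrix for the constraint A x = b,
    with A = [B | N] and B square and invertible:
        Z = [[-B^-1 N], [I]],  so that A @ Z = 0.
    Any feasible point can then be written x = x_p + Z y, removing the
    equality constraint from the reduced problem. Small sketch only."""
    m, n = A.shape
    B, N = A[:, :m], A[:, m:]
    Z = np.vstack([-np.linalg.solve(B, N), np.eye(n - m)])
    return Z
```

Optimizing over the reduced variable y instead of x is what shrinks the problem to bound constraints only, as described above.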
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
The numerical simulation of meso-, convective-, and microscale atmospheric flows requires the solution of the Euler or the Navier-Stokes equations. Nonhydrostatic weather prediction algorithms often solve the equations in terms of derived quantities such as Exner pressure and potential temperature (and are thus not conservative) and/or as perturbations to the hydrostatically balanced equilibrium state. This paper presents a well-balanced, conservative finite difference formulation for the Euler equations with a gravitational source term, where the governing equations are solved as conservation laws for mass, momentum, and energy. Preservation of the hydrostatic balance to machine precision by the discretized equations is essential because atmospheric phenomena are often small perturbations to this balance. The proposed algorithm uses the weighted essentially nonoscillatory and compact-reconstruction weighted essentially nonoscillatory schemes for spatial discretization that yields high-order accurate solutions for smooth flows and is essentially nonoscillatory across strong gradients; however, the well-balanced formulation may be used with other conservative finite difference methods. The performance of the algorithm is demonstrated on test problems as well as benchmark atmospheric flow problems, and the results are verified with those in the literature.
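A minimal numerical check of the hydrostatic balance that a well-balanced discretization must preserve, dp/dz = -rho g, can look like the following. This is a centered-difference toy on a given profile, not the WENO/CRWENO formulation of the paper.

```python
import numpy as np

def hydrostatic_residual(rho, p, z, g=9.81):
    """Discrete residual of hydrostatic balance dp/dz + rho*g, using
    centered differences on interior points. A well-balanced scheme
    keeps this residual at machine precision for the equilibrium
    state. Toy diagnostic only."""
    dpdz = (p[2:] - p[:-2]) / (z[2:] - z[:-2])
    return dpdz + rho[1:-1] * g
```

For a constant-density column with linearly decreasing pressure the residual vanishes identically, which is the kind of state a well-balanced scheme must hold fixed.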
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the efficiency and accuracy requirements of an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. First, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions show that the proposed white balance algorithm effectively avoids color deviation, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
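One classical building block that such a combined algorithm can draw on is gray-world white balance, sketched below. The adaptive-step iteration, demosaicing, and the specific combination used in the paper are not shown; this is only the textbook gray-world step.

```python
import numpy as np

def gray_world_balance(img):
    """Classical gray-world white balance: scale each channel so that
    the per-channel means become equal (the scene is assumed gray on
    average). img is an HxWx3 array; returns a float image in [0, 255]."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # gains equalizing the means
    return np.clip(img * gains, 0, 255)
```

A channel-statistics step like this is also the natural source for the initial parameter the proposed iterative method starts from.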
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins are having less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time consuming process. The computational methods based on the sequence similarity for allocating putative members to this family are also far elusive due to the low sequence similarity existing among the members of this family. Consequently, the machine learning methods become a viable alternative for their prediction by using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. A near perfect learning can be achieved by training the model with diverse types of input instances belonging to the different regions of the entire input space. Furthermore, the prediction performance can be improved through balancing the training set as the imbalanced data sets will tend to produce the prediction bias towards majority class and its sub-classes. This paper is aimed to achieve (i) the high generalization ability without any classification bias through the diversified and balanced training sets as well as (ii) enhanced the prediction accuracy by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we have first used the unsupervised Kmeans clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. 
Finally, a probability-based classifier fusion scheme was applied to a boosted random forest (which produced greater sensitivity) and a K-nearest-neighbour classifier (which produced greater specificity) to achieve better predictive performance than either base classifier alone. Models trained on the K-means-preprocessed training set performed far better than those trained on randomly generated training sets. The proposed method achieved a sensitivity of 90.6%, specificity of 91.4% and accuracy of 91.0% on the first test set, and a sensitivity of 92.9%, specificity of 96.2% and accuracy of 94.7% on the second blind test set. These results establish that diversifying the training set improves the performance of predictive models through superior generalization, and that balancing the training set improves prediction accuracy. For smaller data sets, unsupervised K-means-based sampling can be a more effective way to increase generalization than the usual random splitting method. Copyright © 2015 Elsevier Ltd. All rights reserved.
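The cluster-then-sample step can be sketched as follows; this is an illustrative reimplementation (a tiny K-means plus equal-per-cluster sampling), not the authors' code:

```python
import numpy as np

def kmeans_labels(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def balanced_training_indices(X, k, per_cluster, seed=0):
    """Select an equal number of training patterns from each cluster."""
    rng = np.random.default_rng(seed)
    labels = kmeans_labels(X, k, seed=seed)
    chosen = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        take = min(per_cluster, len(members))
        chosen.extend(rng.choice(members, size=take, replace=False))
    return np.sort(np.asarray(chosen))
```

Sampling equally from every cluster guarantees the training set covers all regions of the input space, which is the diversification-plus-balancing idea the abstract credits for the improved generalization.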
Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
NASA Astrophysics Data System (ADS)
Canestrelli, Alberto; Toro, Eleuterio F.
2012-10-01
Recently, the FORCE centred scheme for conservative hyperbolic multi-dimensional systems was introduced in [34] and applied to the Euler and relativistic MHD equations, solved on unstructured meshes. In this work we propose a modification of the FORCE scheme, named FORCE-Contact, that provides improved resolution of contact and shear waves. This paper presents the technique in full detail as applied to the two-dimensional homogeneous shallow water equations. The improvements due to the new method are particularly evident when an additional equation is solved for a tracer, since the modified scheme exactly resolves isolated and steady contact discontinuities. The improvement is considerable also for slowly moving contact discontinuities, for shear waves and for steady states in meandering channels. For these types of flow fields, the numerical results provided by the new FORCE-Contact scheme are comparable with, and sometimes better than, the results obtained from upwind schemes, such as Roe's scheme. In a companion paper, a similar approach to restoring the missing contact wave and preserving well-balanced properties for non-conservative one- and two-layer shallow water equations is introduced. However, the procedure is general and is in principle applicable to other multidimensional hyperbolic systems in conservative and non-conservative form, such as the Euler equations for compressible gas dynamics.
Silva, Bhagya Nathali; Khan, Murad; Han, Kijun
2018-01-01
The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism. PMID:29495346
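As a hedged sketch only (the appliance data, slot granularity and tie-breaking are my assumptions, not the paper's), a greedy least-slack-time scheduler that keeps the per-slot household load under a threshold might look like:

```python
def lst_schedule(appliances, cap, horizon):
    """Greedy LST scheduling under a household power cap.

    appliances: list of dicts with 'name', 'power' (kW), 'duration'
    (slots still needed) and 'deadline' (slot index). Returns the slots
    in which each appliance runs. Slack at time t = deadline - t - remaining.
    """
    remaining = {a["name"]: a["duration"] for a in appliances}
    slots = {a["name"]: [] for a in appliances}
    for t in range(horizon):
        pending = [a for a in appliances if remaining[a["name"]] > 0]
        # Most urgent (least slack) appliances are admitted first.
        pending.sort(key=lambda a: a["deadline"] - t - remaining[a["name"]])
        load = 0.0
        for a in pending:
            if load + a["power"] <= cap:
                load += a["power"]
                remaining[a["name"]] -= 1
                slots[a["name"]].append(t)
    return slots
```

The cap plays the role of the defined maximum threshold in the abstract: at every slot the admitted appliances never exceed it, and urgency (slack) decides which appliance is deferred.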
NASA Astrophysics Data System (ADS)
Maltese, A.; Capodici, F.; Ciraolo, G.; La Loggia, G.
2015-10-01
Temporal availability of grape actual evapotranspiration is an emerging issue, since vineyard farms are increasingly converted from rainfed to irrigated agricultural systems. This manuscript verifies the accuracy of actual evapotranspiration retrieval that couples a single-source energy balance approach with two different temporal upscaling schemes. The first scheme upscales the main input variables, namely NDVI, albedo and LST; the second upscales the energy balance output, the actual evapotranspiration. The upscaling schemes were implemented on: i) airborne remote sensing data acquired monthly during a whole irrigation season over a Sicilian vineyard; ii) low-resolution MODIS products released daily or weekly; iii) meteorological data acquired by standard gauge stations. Daily MODIS LST products (MOD11A1) were disaggregated using the DisTrad model, 8-day black- and white-sky albedo products (MCD43A) allowed modeling the total albedo, and 8-day NDVI products (MOD13Q1) were modeled using the Fisher approach. Results were validated in both time and space. The temporal validation used actual evapotranspiration measured in situ by a flux tower with the eddy covariance technique. The spatial validation involved airborne images acquired at different times from June to September 2008. The results are used to test whether upscaling the inputs or the output of the energy balance performs better.
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, automatically couples the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed codes. Results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution of simulated turbulent wake eddies.
Unveiling the Antarctic subglacial landscape.
NASA Astrophysics Data System (ADS)
Warner, Roland; Roberts, Jason
2010-05-01
Better knowledge of the subglacial landscape of Antarctica is vital to reducing uncertainties regarding prediction of the evolution of the ice sheet. These uncertainties are associated with bedrock geometry for ice sheet dynamics, including possible marine ice sheet instabilities and subglacial hydrological pathways (e.g. Wright et al., 2008). Major collaborative aerogeophysics surveys motivated by the International Polar Year (e.g. ICECAP and AGAP), and continuing large scale radar echo sounding campaigns (ICECAP and NASA Ice Bridge) are significantly improving the coverage. However, the vast size of Antarctica and logistic difficulties mean that data gaps persist, and ice thickness data remains spatially inhomogeneous. The physics governing large scale ice sheet flow enables ice thickness, and hence bedrock topography, to be inferred from knowledge of ice sheet surface topography and considerations of ice sheet mass balance, even in areas with sparse ice thickness measurements (Warner and Budd, 2000). We have developed a robust physically motivated interpolation scheme, based on these methods, and used it to generate a comprehensive map of Antarctic bedrock topography, using along-track ice thickness data assembled for the BEDMAP project (Lythe et al., 2001). This approach reduces ice thickness biases, compared to traditional inverse distance interpolation schemes which ignore the information available from considerations of ice sheet flow. In addition, the use of improved balance fluxes, calculated using a Lagrangian scheme, eliminates the grid orientation biases in ice fluxes associated with finite difference methods (Budd and Warner, 1996, Le Brocq et al., 2006). The present map was generated using a recent surface DEM (Bamber et al., 2009, Griggs and Bamber, 2009) and accumulation distribution (van de Berg et al., 2006). 
Comparing our results with recent high resolution regional surveys gives confidence that all major subglacial topographic features are revealed by this approach, and we advocate its consideration in future ice thickness data syntheses. REFERENCES Budd, W.F., and R.C. Warner, 1996. A computer scheme for rapid calculations of balance-flux distributions. Annals of Glaciology 23, 21-27. Bamber, J.L., J.L. Gomez Dans and J.A. Griggs, 2009. A new 1 km digital elevation model of the Antarctic derived from combined satellite radar and laser data. Part I: Data and methods. The Cryosphere 3 (2), 101-111. Griggs, J.A., and J.L. Bamber, 2009. A new digital elevation model of Antarctica derived from combined radar and laser altimetry data. Part II: Validation and error estimates, The Cryosphere, 3(2), 113-123. Le Brocq, A.M., A.J. Payne and M.J. Siegert, 2006. West Antarctic balance calculations: Impact of flux-routing algorithm, smoothing algorithm and topography. Computers and Geosciences 23(10): 1780-1795. Lythe, M. B., D.G. Vaughan, and the BEDMAP Consortium 2001, BEDMAP: A new ice thickness and subglacial topographic model of Antarctica, J. of Geophys. Res., 106(B6),11,335-11,351. van de Berg, W.J., M.R. van den Broeke, C.H. Reijmer, and E. van Meijgaard, 2006. Reassessment of the Antarctic surface mass balance using calibrated output of a regional atmospheric climate model, J. Geophys. Res., 111, D11104,doi:10.1029/2005JD006495. Warner, R.C., and W.F. Budd, 2000. Derivation of ice thickness and bedrock topography in data-gap regions over Antarctica, Annals of Glaciology, 31, 191-197. Wright, A.P., M.J. Siegert, A.M. Le Brocq, and D.B. Gore, 2008. High sensitivity of subglacial hydrological pathways in Antarctica to small ice-sheet changes, Geophys. Res. Lett., 35, L17504, doi:10.1029/2008GL034937.
Effect of superconducting solenoid model cores on spanwise iron magnet roll control
NASA Technical Reports Server (NTRS)
Britcher, C. P.
1985-01-01
Compared with conventional ferromagnetic fuselage cores, superconducting solenoid cores appear to offer significant reductions in the projected cost of a large wind tunnel magnetic suspension and balance system. The provision of sufficient magnetic roll torque capability has been a long-standing problem with all magnetic suspension and balance systems; and the spanwise iron magnet scheme appears to be the most powerful system available. This scheme utilizes iron cores which are installed in the wings of the model. It was anticipated that the magnetization of these cores, and hence the roll torque generated, would be affected by the powerful external magnetic field of the superconducting solenoid. A preliminary study has been made of the effect of the superconducting solenoid fuselage model core concept on the spanwise iron magnet roll torque generation schemes. Computed data for one representative configuration indicate that reductions in available roll torque occur over a range of applied magnetic field levels. These results indicate that a 30-percent increase in roll electromagnet capacity over that previously determined will be required for a representative 8-foot wind tunnel magnetic suspension and balance system design.
A spectral multi-domain technique applied to high-speed chemically reacting flows
NASA Technical Reports Server (NTRS)
Macaraeg, Michele G.; Streett, Craig L.; Hussaini, M. Yousuff
1989-01-01
The first applications of a spectral multidomain method to viscous compressible flow are presented. The method imposes a global flux balance condition at the interface so that high-order continuity of the solution is preserved. The global flux balance is imposed in terms of a spectral integral of the discrete equations across adjoining domains. Since the discretized equations interior to each domain are uncoupled from each other, and since the interface relation has a block structure, the solution scheme can be adapted to the particular requirements of each subdomain. The spectral multidomain technique presented is well suited to the multiple scales associated with chemically reacting and transitional flows in hypersonic research. A nonstaggered multidomain discretization is used for the chemically reacting flow calculation, and the first implementation of a staggered multidomain mesh is presented for accurately solving the stability equation for a viscous compressible fluid.
Model predictive control based on reduced order models applied to belt conveyor system.
Chen, Wei; Li, Xin
2016-11-01
In this paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. First, to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Second, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order and reduced-order models, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that balanced truncation can significantly reduce the model order with high accuracy, and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
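The balanced truncation step can be illustrated with a minimal square-root implementation; the system matrices in the usage below are illustrative, not the belt-conveyor model, and the dense Kronecker Lyapunov solve is only viable for small state dimensions:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 by Kronecker vectorization (small n only)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    X = np.linalg.solve(M, (-Q).flatten(order="F")).reshape(n, n, order="F")
    return 0.5 * (X + X.T)  # symmetrize against round-off

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable system (A, B, C) to order r."""
    P = lyap(A, B @ B.T)       # controllability Gramian
    Qo = lyap(A.T, C.T @ C)    # observability Gramian
    Lp = np.linalg.cholesky(P)
    Lq = np.linalg.cholesky(Qo)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)   # s = Hankel singular values
    Sih = np.diag(s[:r] ** -0.5)
    Ti = Sih @ U[:, :r].T @ Lq.T          # left projection
    T = Lp @ Vt[:r].T @ Sih               # right projection
    return Ti @ A @ T, Ti @ B, C @ T, s
```

The discarded Hankel singular values bound the model-reduction error, which is the error bound that motivates pairing the reduced model with Kalman state estimators in the paper's control scheme.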
Path length differencing and energy conservation of the S[sub N] Boltzmann/Spencer-Lewis equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippone, W.L.; Monahan, S.P.
It is shown that the S[sub N] Boltzmann/Spencer-Lewis equations conserve energy locally if and only if they satisfy particle balance and diamond differencing is used in path length. In contrast, the spatial differencing schemes have no bearing on the energy balance. Energy is conserved globally if it is conserved locally and the multigroup cross sections are energy conserving. Although the coupled electron-photon cross sections generated by CEPXS conserve particles and charge, they do not precisely conserve energy. It is demonstrated that these cross sections can be adjusted such that particles, charge, and energy are conserved. Finally, since a conventional negative flux fixup destroys energy balance when applied to path length, a modified fixup scheme that does not is presented.
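A one-cell analog of the diamond relation in path length shows why the balance closes exactly; this is a hedged 1D illustration, not the S[sub N] code itself:

```python
def diamond_step(psi_in, sigma, q, ds):
    """One diamond-differenced path-length cell.

    Cell balance: (psi_out - psi_in)/ds + sigma * psi_bar = q,
    closed with the diamond relation psi_bar = (psi_in + psi_out)/2.
    """
    psi_out = ((1.0 - 0.5 * sigma * ds) * psi_in + q * ds) \
              / (1.0 + 0.5 * sigma * ds)
    psi_bar = 0.5 * (psi_in + psi_out)
    return psi_out, psi_bar
```

Summing the cell balance over a sweep, the interior fluxes telescope: leakage plus removal equals the source to round-off, i.e. particle balance holds cell by cell, which is the property the paper ties to local energy conservation.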
Stratified flows with variable density: mathematical modelling and numerical challenges.
NASA Astrophysics Data System (ADS)
Murillo, Javier; Navas-Montilla, Adrian
2017-04-01
Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment, which cause collapse, landslides and debris flows, to suspended material in turbidity currents, where turbulence is a key process. Stratified flows also exhibit variable horizontal density: depending on the case, density varies with the volumetric concentration of the different components or species representing transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Given the variety of temporal and spatial scales, transfer of mass and energy among layers may differ strongly from one case to another. As a consequence, very high order methods of proven quality are required to provide accurate solutions. Under these complex scenarios it is necessary to verify that the numerical solution not only attains the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods, J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density, J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux-ADER schemes with application to hyperbolic conservation laws with geometric source terms, J. Comput. Phys. 317 (2016) 108-147. J. Murillo, A. Navas-Montilla, A comprehensive explanation and exercise of the source terms in hyperbolic systems using Roe type solutions. Application to the 1D-2D shallow water equations, Advances in Water Resources 98 (2016) 70-96.
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements over previous cart-pole balancing schemes in terms of the speed of learning and robustness to changes in the dynamic system's parameters.
F/A-18 and F-16 forebody vortex control, static and rotary-balance results
NASA Technical Reports Server (NTRS)
Kramer, Brian; Smith, Brooke
1994-01-01
The results from research on forebody vortex control on both the F/A-18 and the F-16 aircraft will be shown. Several methods of forebody vortex control, including mechanical and pneumatic schemes, will be discussed. The wind tunnel data includes both static and rotary balance data for forebody vortex control. Time lags between activation or deactivation of the pneumatic control and when the aircraft experiences the resultant forces are also discussed. The static (non-rotating) forces and pressures are then compared to similar configurations tested in the NASA Langley and DTRC Wind Tunnel, the NASA Ames 80'x120' Wind Tunnel, and in flight on the High Angle of Attack Research Vehicle (HARV).
NASA Astrophysics Data System (ADS)
Hirota, Osamu; Futami, Fumio
2014-10-01
Guaranteeing the security of cloud computing systems is an urgent problem. Although a security problem involves several threats, the most serious is a cyber attack against the optical fiber transmission among data centers. In such a network, an encryption scheme on Layer 1 (the physical layer) with ultimately strong security, small delay, and very high speed should be employed, because a basic optical link operates at 10 Gbit/s per wavelength. Over the past decade we have developed a quantum-noise randomized stream cipher, the so-called Yuen-2000 encryption scheme (Y-00). This cipher is a completely new type of random cipher, in which the ciphertexts seen by a legitimate receiver and by an eavesdropper are different. This is a condition for breaking the Shannon limit in the theory of cryptography. In addition, the scheme has a good balance of security, speed and cost performance. To realize such encryption, several modulation methods are candidates, such as phase modulation, intensity modulation, quadrature amplitude modulation, and so on. The Northwestern University group demonstrated a phase modulation system (α=η) in 2003. In 2005, we reported a demonstration of a 1 Gbit/s system based on an intensity modulation scheme (ISK-Y00), and gave a design method for quadrature amplitude modulation (QAM-Y00) in 2005 and 2010. An intensity modulation scheme promises a real application to secure fiber communication among current data centers. This paper presents progress in the quantum-noise randomized stream cipher based on ISK-Y00, integrating our past theoretical and experimental achievements and a recent 100 Gbit/s (10 Gbit/s × 10 wavelengths) experiment.
A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.
Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen
2018-05-12
Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Owing to the complexity of constructing the combinatorial design, it is infeasible to generate key rings with the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed "µ-partially balanced incomplete block design (µ-PBIBD)", which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD that is mapped to KPD in WSNs. Our approach has a simple construction that provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, a construction of µ-PBIBD extended from 2-D to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while retaining good key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, achieving a trade-off between network resilience and network connectivity.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make a productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
Sharkas, Kamal; Gagliardi, Laura; Truhlar, Donald G
2017-12-07
We investigate the performance of multiconfiguration pair-density functional theory (MC-PDFT) and complete active space second-order perturbation theory for computing the bond dissociation energies of the diatomic molecules FeC, NiC, FeS, NiS, FeSe, and NiSe, for which accurate experimental data have recently become available [Matthew, D. J.; Tieu, E.; Morse, M. D. J. Chem. Phys. 2017, 146, 144310-144320]. We use three correlated participating orbital (CPO) schemes (nominal, moderate, and extended) to define the active spaces, and we consider both the complete active space (CAS) and the separated-pair (SP) schemes to specify the configurations included for a given active space. We find that the moderate SP-PDFT scheme with the tPBE on-top density functional has the smallest mean unsigned error (MUE) of the methods considered. This level of theory provides a balanced treatment of the static and dynamic correlation energies for the studied systems. This is encouraging because the method is low in cost even for much more complicated systems.
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine-grained parallelism, and execution requirements that are either not predictable or too costly to predict. The main issues in mapping such a problem onto medium-scale multiprocessors are aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both shared-memory and message-passing machines, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, we study the tradeoffs between balanced workload and communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
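The granularity tradeoff can be made concrete with a toy sketch (the workload values and the cyclic assignment are my illustration, not the paper's mapping): finer aggregates spread a spatially skewed workload more evenly across processors, at the cost of more units to communicate and synchronize.

```python
def cyclic_loads(work, g, P):
    """Aggregate per-cell work into blocks of g cells, deal the blocks
    cyclically to P processors, and return each processor's total load."""
    blocks = [sum(work[i:i + g]) for i in range(0, len(work), g)]
    loads = [0.0] * P
    for j, b in enumerate(blocks):
        loads[j % P] += b
    return loads
```

With one coarse block per processor the heavy region lands on a single processor, while smaller blocks interleave heavy and light regions and balance the (unknown) workload, which is the locality-of-intensity argument in the abstract.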
Recent assimilation developments of FOAM the Met Office ocean forecast system
NASA Astrophysics Data System (ADS)
Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert
2015-04-01
FOAM is the Met Office's operational ocean forecasting system. The system comprises a range of models, from a 1/4-degree-resolution global model to 1/12-degree-resolution regional models and shelf-seas models at 7 km resolution. It is built on the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE and the NEMOVAR assimilation scheme run in 3D-VAR FGAT mode. Work is ongoing to transition to a higher-resolution 1/12-degree global ocean model and to run FOAM within coupled models. The FOAM system generally performs well. One area of concern, however, is performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which, in the extra-tropics, uses geostrophic balance to produce velocity increments that balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that persistent temperature and salinity errors are sometimes not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius, which means that observations in the extra-tropics influence the model only on short length scales. In order to maximise the information extracted from the observations and to correct large-scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer-length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients.
A related scheme which varies the correlation length scale in the shelf seas is also described.
Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users.
Buyens, Wim; van Dijk, Bas; Moonen, Marc; Wouters, Jan
2018-01-01
Although for most cochlear implant (CI) users good speech understanding is reached (at least in quiet environments), the perception and the appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants by using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants' speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants' speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of a take-home device, which preserved all the preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank test) to check the effect of "Genre." The correlations were investigated with Pearson's and Spearman's correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed which could not be explained by perceptual abilities. An effect of "Genre" was found, showing significantly smaller preferred deviation from the original balance for Golden Oldies compared to the other genres. 
The stereo music preprocessing scheme showed an improvement in music appraisal with complex music and hence might be a good tool for music listening, training, or rehabilitation for CI users. American Academy of Audiology
A faster numerical scheme for a coupled system modeling soil erosion and sediment transport
NASA Astrophysics Data System (ADS)
Le, M.-H.; Cordier, S.; Lucas, C.; Cerdan, O.
2015-02-01
Overland flow and soil erosion play an essential role in water quality and soil degradation. Such processes, involving the interactions between water flow and the bed sediment, are classically described by a well-established system coupling the shallow water equations and the Hairsine-Rose model. Numerical approximation of this coupled system requires advanced methods to preserve some important physical and mathematical properties; in particular, the steady states and the positivity of both water depth and sediment concentration. Recently, finite volume schemes based on Roe's solver have been proposed by Heng et al. (2009) and Kim et al. (2013) for one- and two-dimensional problems. In their approach, an additional and artificial restriction on the time step is required to guarantee the positivity of sediment concentration. This artificial condition can make the computation costly when dealing with very shallow flows and wet/dry fronts. The main result of this paper is a new and faster scheme for which the CFL condition of the shallow water equations alone is sufficient to preserve the positivity of sediment concentration. In addition, the numerical procedure of the erosion part can be used with any well-balanced and positivity-preserving scheme for the shallow water equations. The proposed method is tested on classical benchmarks and also on a realistic configuration.
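The positivity result hinges on the CFL condition of the shallow water part alone. As a minimal sketch of that condition, the time step is limited by the fastest gravity-wave speed |u| + sqrt(g*h) on the grid (the grid values below are illustrative, not from the paper):

```python
import numpy as np

def shallow_water_dt(h, u, dx, g=9.81, cfl=0.5):
    """Largest stable time step under the shallow-water CFL condition.

    The wave speed in each cell is |u| + sqrt(g*h); the step is limited so
    that no wave crosses more than `cfl` of a cell width per step.
    """
    speed = np.abs(u) + np.sqrt(g * np.maximum(h, 0.0))
    return cfl * dx / np.max(speed)

# Example: a dam-break-like state on a 10 m grid (illustrative values).
h = np.array([2.0, 1.5, 1.0, 0.5])   # water depths (m)
u = np.array([0.0, 1.0, 2.0, 1.0])   # velocities (m/s)
dt = shallow_water_dt(h, u, dx=10.0)
```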
Friction damping of two-dimensional motion and its application in vibration control
NASA Technical Reports Server (NTRS)
Menq, C.-H.; Chidamparam, P.; Griffin, J. H.
1991-01-01
This paper presents an approximate method for analyzing the two-dimensional friction contact problem so as to compute the dynamic response of a structure constrained by friction interfaces. The friction force at the joint is formulated based on the Coulomb model. The single-term harmonic balance scheme, together with the receptance approach of decoupling the effects of the friction force on the structure from those of the external forces, has been utilized to obtain the steady-state response. The computational efficiency and accuracy of the method are demonstrated by comparing the results with long-term time-integration solutions.
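The single-term harmonic balance idea can be illustrated on a much simpler system than the friction-contact problem above. A sketch for the undamped Duffing oscillator (parameters are illustrative; the paper's receptance formulation is not reproduced here): substituting a single harmonic x = A cos(omega t) and balancing the cos(omega t) terms reduces the ODE to an algebraic equation for the amplitude A.

```python
import numpy as np

def hb_amplitude(omega, omega0=1.0, beta=0.1, force=0.5):
    """Single-term harmonic balance for the undamped Duffing oscillator
    x'' + omega0^2 x + beta x^3 = F cos(omega t).

    Substituting x = A cos(omega t) and balancing cos(omega t) terms gives
    the cubic (3/4) beta A^3 + (omega0^2 - omega^2) A - F = 0.
    Returns the real roots, i.e. the candidate steady-state amplitudes.
    """
    coeffs = [0.75 * beta, 0.0, omega0**2 - omega**2, -force]
    return sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-8)

# Below the primary resonance there is a single steady-state amplitude.
amps = hb_amplitude(omega=0.5)
```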
Tradeoffs in the Design of Health Plan Payment Systems: Fit, Power and Balance
Geruso, Michael; McGuire, Thomas G.
2016-01-01
In many markets, including the new U.S. Marketplaces, health insurance plans are paid by risk-adjusted capitation, sometimes combined with reinsurance and other payment mechanisms. This paper proposes a framework for evaluating the de facto insurer incentives embedded in these complex payment systems. We discuss fit, power and balance, each of which addresses a distinct market failure in health insurance. We implement empirical metrics of fit, power, and balance in a study of Marketplace payment systems. Using data similar to that used to develop the Marketplace risk adjustment scheme, we quantify tradeoffs among the three classes of incentives. We show that an essential tradeoff arises between the goals of limiting costs and limiting cream skimming because risk adjustment, which is aimed at discouraging cream-skimming, weakens cost control incentives in practice. A simple reinsurance system scores better on our measures of fit, power and balance than the risk adjustment scheme in use in the Marketplaces. PMID:26922122
NASA Astrophysics Data System (ADS)
Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei
2018-05-01
Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by developing an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth and obtain quantitative information. Here, we present an imaging scheme to retrieve the depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied for endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing the luminescent light propagation from tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combined with the 3rd order simplified spherical harmonics approximation, diffusion, and radiosity equations to warrant accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and the quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, which demonstrated that it can provide a satisfactory balance between imaging accuracy and computational burden.
Li, Bai; Gong, Li-gang; Yang, Wen-lun
2014-01-01
Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms.
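As a hedged illustration of the baseline the paper improves upon, a minimal conventional ABC (not the BE-ABC variant) minimizing a simple sphere function might look like the following; all parameter choices are illustrative:

```python
import numpy as np

def abc_minimize(f, dim=2, n_food=10, limit=20, iters=200, bound=5.0, seed=0):
    """Minimal artificial bee colony (ABC) sketch: employed bees perturb
    food sources, onlookers favor good sources, scouts replace stale ones."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(-bound, bound, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k += k >= i                      # a partner source different from i
        j = rng.integers(dim)            # perturb one coordinate
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        fc = f(cand)
        if fc < fit[i]:                  # greedy selection
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):          # employed-bee phase
            try_neighbor(i)
        probs = fit.max() - fit + 1e-12  # onlookers favor better sources
        probs /= probs.sum()
        for i in rng.choice(n_food, n_food, p=probs):
            try_neighbor(i)
        stale = trials.argmax()          # scout phase: abandon stale source
        if trials[stale] > limit:
            foods[stale] = rng.uniform(-bound, bound, dim)
            fit[stale] = f(foods[stale])
            trials[stale] = 0
    best = fit.argmin()
    return foods[best], fit[best]

x_best, f_best = abc_minimize(lambda x: float(np.sum(x**2)))
```

A BES-style variant would additionally use convergence information from the iteration history to steer the perturbation magnitude, which this generic sketch omits.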
Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes
NASA Technical Reports Server (NTRS)
Wissink, Andrew; Allen, Edwin (Technical Monitor)
1998-01-01
Viscous calculations about geometrically complex bodies in which there is relative motion between component parts is one of the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first part of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multi-processors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed-memory multiprocessor computer architectures will be reported.
Design and implementation of streaming media server cluster based on FFMpeg.
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
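A dynamic load balancer driven by active feedback can be sketched as follows; the server names, load units, and cost model are illustrative assumptions, not the paper's implementation:

```python
class FeedbackBalancer:
    """Sketch of dynamic load balancing with active feedback: each server
    periodically reports its measured load, and every new streaming session
    is routed to the currently least-loaded server."""

    def __init__(self, servers):
        self.load = {s: 0.0 for s in servers}

    def report(self, server, load):
        # Active feedback: a server reports its measured load.
        self.load[server] = load

    def assign(self, cost=1.0):
        # Route a new session to the least-loaded server and account for
        # its estimated cost until the next feedback report arrives.
        server = min(self.load, key=self.load.get)
        self.load[server] += cost
        return server

lb = FeedbackBalancer(["edge-1", "edge-2", "edge-3"])
lb.report("edge-1", 0.7)
lb.report("edge-2", 0.2)
lb.report("edge-3", 0.5)
first = lb.assign(cost=0.1)   # edge-2 is least loaded
second = lb.assign(cost=0.1)  # edge-2 is now at 0.3, still least loaded
```

Bookkeeping the estimated cost between reports keeps the balancer from dog-piling one server during the feedback interval.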
NASA Astrophysics Data System (ADS)
Sun, Hung-Min; Yang, Cheng-Ta; Wu, Mu-En
In some applications, a short private exponent d is chosen to improve the decryption or signing process for the RSA public key cryptosystem. However, in a typical RSA, if the private exponent d is selected first, the public exponent e should be of the same order of magnitude as φ(N). Sun et al. devised three RSA variants using unbalanced prime factors p and q to lower the computational cost. Unfortunately, Durfee & Nguyen broke the illustrated instances of the first and third variants by finding small roots of trivariate modular polynomial equations. They also indicated that instances with unbalanced primes p and q are more insecure than instances with balanced p and q. This investigation focuses on designing a new RSA variant with balanced p and q and short exponents d and e, to improve the security of an RSA variant against Durfee & Nguyen's attack and the other existing attacks. Furthermore, the proposed variant (Scheme A) is extended to another RSA variant (Scheme B) in which p and q are balanced and a trade-off between the lengths of d and e is enabled. In addition, we provide the security analysis and feasibility analysis of the proposed schemes.
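The size relationship between d and e in standard RSA can be seen with toy numbers. This is plain RSA, not the proposed variants, and the primes are far too small to be secure; it only illustrates that fixing a short d first forces e to be of the order of φ(N):

```python
from math import gcd

# Toy RSA with balanced primes (illustration only; insecure sizes).
p, q = 1009, 1013                  # balanced: same bit length
n = p * q
phi = (p - 1) * (q - 1)

d = 17                             # pick a short private exponent first...
assert gcd(d, phi) == 1
e = pow(d, -1, phi)                # ...then e = d^{-1} mod phi(N)

# e comes out of the same order of magnitude as phi(N), as the abstract notes.
m = 123456
c = pow(m, e, n)                   # encrypt
assert pow(c, d, n) == m           # decryption with the short exponent works
```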
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Fulin; Cao, Yang; Zhang, Jun Jason
Ensuring flexible and reliable data routing is indispensable for the integration of Advanced Metering Infrastructure (AMI) networks, so we propose a security-oriented and load-balancing wireless data routing scheme. A novel utility function is designed based on the security routing scheme. Then, we model the interactive security-oriented routing strategy among meter-data concentrators or smart grid meters as a mixed-strategy network formation game. Finally, this problem results in a stable probabilistic routing scheme with the proposed distributed learning algorithm. One contribution is the study of how different types of applications affect the routing selection strategy and the strategy tendency. Another contribution is that the chosen strategy of our mixed routing can adaptively converge to a new mixed-strategy Nash equilibrium (MSNE) during the learning process in the smart grid.
NASA Astrophysics Data System (ADS)
Li, R.; Arora, V. K.
2011-06-01
Energy and carbon balance implications of representing vegetation using a composite or mosaic approach in a land surface scheme are investigated. In the composite approach the attributes of different plant functional types (PFTs) present in a grid cell are aggregated in some fashion for energy and water balance calculations. The resulting physical environmental conditions (including net radiation, soil moisture and soil temperature) are common to all PFTs and affect their ecosystem processes. In the mosaic approach energy and water balance calculations are performed separately for each PFT tile using its own vegetation attributes, so each PFT "sees" different physical environmental conditions and its carbon balance evolves somewhat differently from that in the composite approach. Simulations are performed at selected boreal, temperate and tropical locations to illustrate the differences caused by using the composite versus the mosaic approaches of representing vegetation. Differences in grid-averaged primary energy fluxes are generally less than 5% between the two approaches. Grid-averaged carbon fluxes and pool sizes can, however, differ by as much as 46%. Simulation results suggest that differences in carbon balance between the two approaches arise primarily through differences in net radiation which directly affects net primary productivity, and thus leaf area index and vegetation biomass.
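The asymmetry reported here, small differences in primary energy fluxes but larger ones in carbon quantities, reflects the fact that aggregation commutes with linear flux calculations but not with nonlinear ones. A numerical sketch with illustrative values (not from the paper):

```python
import numpy as np

frac = np.array([0.6, 0.4])        # fractional coverage of two PFTs
albedo = np.array([0.12, 0.20])    # per-PFT surface albedo (illustrative)
sw_down = 300.0                    # downward shortwave flux, W m^-2

# Composite: aggregate the attribute first, then one flux calculation.
absorbed_composite = sw_down * (1.0 - frac @ albedo)

# Mosaic: one flux calculation per PFT tile, then area-average the fluxes.
absorbed_mosaic = frac @ (sw_down * (1.0 - albedo))
# Linear in the attribute, so the two agree exactly.

# A nonlinear process does not commute with averaging: emitted longwave
# (sigma * T^4) with different per-tile temperatures exceeds the flux
# computed from the averaged temperature (Jensen's inequality, T^4 convex).
sigma = 5.67e-8
temp = np.array([290.0, 300.0])    # per-tile surface temperature, K
lw_mosaic = frac @ (sigma * temp**4)
lw_composite = sigma * (frac @ temp) ** 4
```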
Microwave implementation of two-source energy balance approach for estimating evapotranspiration
USDA-ARS?s Scientific Manuscript database
A newly developed microwave (MW) land surface temperature (LST) product is used to effectively substitute thermal infrared (TIR) based LST in the two-source energy balance approach (TSEB) for estimating ET from space. This TSEB land surface scheme, used in the Atmosphere Land Exchange Inverse (ALEXI...
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
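The balanced matricization behind the TT rank can be sketched with the standard TT-SVD construction: each step reshapes the remainder so that leading and trailing modes are split across the rows and columns of a matrix, whose SVD yields one core and the next TT rank. This is a generic sketch of the decomposition, not the paper's completion algorithms:

```python
import numpy as np

def tt_decompose(tensor, eps=1e-10):
    """TT decomposition by sequential SVD (TT-SVD sketch)."""
    dims = tensor.shape
    cores, rank = [], 1
    rest = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(rest, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # truncated TT rank
        cores.append(u[:, :r].reshape(rank, dims[k], r))
        rank = r
        rest = (s[:r, None] * vt[:r]).reshape(rank * dims[k + 1], -1)
    cores.append(rest.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
t = rng.standard_normal((4, 5, 6))
cores = tt_decompose(t)
err = np.linalg.norm(tt_reconstruct(cores) - t)   # exact for tiny eps
```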
Porru, Marcella; Özkan, Leyla
2017-05-24
This paper develops a new simulation model for crystal size distribution dynamics in industrial batch crystallization. The work is motivated by the necessity of accurate prediction models for online monitoring purposes. The proposed numerical scheme is able to handle growth, nucleation, and agglomeration kinetics by means of the population balance equation and the method of characteristics. The former offers a detailed description of the solid phase evolution, while the latter provides an accurate and efficient numerical solution. In particular, the accuracy of the prediction of the agglomeration kinetics, which cannot be ignored in industrial crystallization, has been assessed by comparing it with solutions in the literature. The efficiency of the solution has been tested on a simulation of a seeded flash cooling batch process. Since the proposed numerical scheme can accurately simulate the system behavior more than hundred times faster than the batch duration, it is suitable for online applications such as process monitoring tools based on state estimators.
NASA Astrophysics Data System (ADS)
Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin
2017-01-01
High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding time in parallel encoding scenarios and can reduce total bit rates by up to 0.8% for coding efficiency.
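Load-balanced tile partitioning along one dimension can be illustrated with the classic linear-partition dynamic program, which splits consecutive row costs into a fixed number of contiguous tiles while minimizing the maximum tile cost. This is a stand-in for the idea; the paper's ATB cost model is not reproduced:

```python
def partition_rows(costs, n_tiles):
    """Split consecutive CTU-row costs into n_tiles contiguous groups,
    minimizing the maximum group cost (linear-partition DP sketch)."""
    n = len(costs)
    prefix = [0]
    for c in costs:
        prefix.append(prefix[-1] + c)
    seg = lambda i, j: prefix[j] - prefix[i]   # cost of rows i..j-1
    INF = float("inf")
    best = [[INF] * (n + 1) for _ in range(n_tiles + 1)]
    cut = [[0] * (n + 1) for _ in range(n_tiles + 1)]
    best[0][0] = 0
    for t in range(1, n_tiles + 1):
        for j in range(1, n + 1):
            for i in range(t - 1, j):          # last tile covers rows i..j-1
                cand = max(best[t - 1][i], seg(i, j))
                if cand < best[t][j]:
                    best[t][j], cut[t][j] = cand, i
    bounds, j = [], n                          # recover tile end indices
    for t in range(n_tiles, 0, -1):
        bounds.append(j)
        j = cut[t][j]
    return best[n_tiles][n], sorted(bounds)

# Illustrative per-row encoding costs, split into 3 tiles.
load, bounds = partition_rows([3, 1, 4, 1, 5, 9, 2], n_tiles=3)
```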
Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph
NASA Astrophysics Data System (ADS)
Feng, Lv; Chunlin, Gao; Kaiyang, Ma
2017-05-01
With the rapid development of computer performance and distributed technology, P2P-based resource sharing plays an important role in the Internet. Because the number of P2P network users continues to increase, the highly dynamic character of the system makes it difficult for a node to obtain the load of other nodes. Therefore, a dynamic load-balance strategy based on hypergraphs is proposed in this article. The scheme develops from the idea of multilevel partitioning in hypergraph theory. It adopts optimized multilevel partitioning algorithms to partition the P2P network into several small areas, and assigns each area a supernode for managing and transferring the load of the nodes in that area. Where global scheduling is difficult to achieve, load balancing over a number of small areas can be ensured first; through node load balance in each small area, the whole network achieves relative load balance. The experiments indicate that the load distribution of network nodes in our scheme is markedly more compact. It effectively solves the imbalance problems in P2P networks and improves the scalability and bandwidth utilization of the system.
Fan, Zhaoyang; Hodnett, Philip A; Davarpanah, Amir H; Scanlon, Timothy G; Sheehan, John J; Varga, John; Carr, James C; Li, Debiao
2011-08-01
Objective: To develop a flow-sensitive dephasing (FSD) preparative scheme to facilitate multidirectional flow-signal suppression in 3-dimensional balanced steady-state free precession imaging and to validate the feasibility of the refined sequence for noncontrast magnetic resonance angiography (NC-MRA) of the hand. Materials and Methods: A new FSD preparative scheme was developed that combines 2 conventional FSD modules. Studies using a flow phantom (gadolinium-doped water, 15 cm/s) and the hands of 11 healthy volunteers (6 males and 5 females) were performed to compare the proposed FSD scheme with its conventional counterpart with respect to the signal suppression of multidirectional flow. In 9 of the 11 healthy subjects and 2 patients with suspected vasculitis and documented Raynaud phenomenon, respectively, 3-dimensional balanced steady-state free precession imaging coupled with the new FSD scheme was compared with spatial-resolution-matched (0.94 × 0.94 × 0.94 mm) contrast-enhanced magnetic resonance angiography (0.15 mmol/kg gadopentetate dimeglumine) in terms of overall image quality, venous contamination, motion degradation, and arterial conspicuity. Results: The proposed FSD scheme was able to suppress 2-dimensional flow signal in the flow phantom and hands and yielded significantly higher arterial conspicuity scores than the conventional scheme did on NC-MRA at the regions of common digitals and proper digitals. Compared with contrast-enhanced magnetic resonance angiography, the refined NC-MRA technique yielded comparable overall image quality and motion degradation, significantly less venous contamination, and significantly higher arterial conspicuity score at digital arteries. Conclusions: The FSD-based NC-MRA technique is improved in the depiction of multidirectional flow by applying a 2-module FSD preparation, which enhances its potential to serve as an alternative magnetic resonance angiography technique for the assessment of hand vascular abnormalities.
Finite area method for nonlinear supersonic conical flows
NASA Technical Reports Server (NTRS)
Sritharan, S. S.; Seebass, A. R.
1983-01-01
A fully conservative numerical method for the computation of steady inviscid supersonic flow about general conical bodies at incidence is described. The procedure utilizes the potential approximation and implements a body-conforming mesh generator. The conical potential is assumed to have its best linear variation inside each mesh cell; a secondary interlocking cell system is used to establish the flux balance required to conserve mass. In the supersonic regions the scheme is desymmetrized by adding artificial viscosity in conservation form. The algorithm is nearly an order of magnitude faster than present Euler methods and predicts known results accurately and qualitative features, such as nodal point lift-off, correctly. Results are compared with those of other investigators.
Finite area method for nonlinear conical flows
NASA Technical Reports Server (NTRS)
Sritharan, S. S.; Seebass, A. R.
1982-01-01
A fully conservative finite area method for the computation of steady inviscid flow about general conical bodies at incidence is described. The procedure utilizes the potential approximation and implements a body-conforming mesh generator. The conical potential is assumed to have its best linear variation inside each mesh cell, and a secondary interlocking cell system is used to establish the flux balance required to conserve mass. In the supersonic regions the scheme is desymmetrized by adding appropriate artificial viscosity in conservation form. The algorithm is nearly an order of magnitude faster than present Euler methods and predicts known results accurately and qualitative features, such as nodal point lift-off, correctly. Results are compared with those of other investigations.
Method of self-consistent evaluation of absolute emission probabilities of particles and gamma rays
NASA Astrophysics Data System (ADS)
Badikov, Sergei; Chechev, Valery
2017-09-01
Under the assumption of a well-established decay scheme, the method provides a) exact balance relationships, b) lower (compared to the traditional techniques) uncertainties of recommended absolute emission probabilities of particles and gamma rays, and c) evaluation of correlations between the recommended emission probabilities (for the same and different decay modes). Application of the method to the decay data evaluation for even curium isotopes led to paradoxical results. The multidimensional confidence regions for the probabilities of the most intensive alpha transitions constructed on the basis of the present and the ENDF/B-VII.1, JEFF-3.1, and DDEP evaluations are inconsistent, whereas the confidence intervals for the evaluated probabilities of single transitions agree with each other.
The method of fundamental solutions for computing acoustic interior transmission eigenvalues
NASA Astrophysics Data System (ADS)
Kleefeld, Andreas; Pieronek, Lukas
2018-03-01
We analyze the method of fundamental solutions (MFS) in two different versions with focus on the computation of approximate acoustic interior transmission eigenvalues in 2D for homogeneous media. Our approach is mesh- and integration-free, but suffers in general from the ill-conditioning effects of the discretized eigenoperator, which we could then successfully balance using an approved stabilization scheme. Our numerical examples cover many of the common scattering objects and prove to be very competitive in accuracy with the standard methods for PDE-related eigenvalue problems. We finally give an approximation analysis for our framework and provide error estimates, which bound interior transmission eigenvalue deviations in terms of some generalized MFS output.
Numerical method of lines for the relaxational dynamics of nematic liquid crystals.
Bhattacharjee, A K; Menon, Gautam I; Adhikari, R
2008-08-01
We propose an efficient numerical scheme, based on the method of lines, for solving the Landau-de Gennes equations describing the relaxational dynamics of nematic liquid crystals. Our method is computationally easy to implement, balancing requirements of efficiency and accuracy. We benchmark our method through the study of the following problems: the isotropic-nematic interface, growth of nematic droplets in the isotropic phase, and the kinetics of coarsening following a quench into the nematic phase. Our results, obtained through solutions of the full coarse-grained equations of motion with no approximations, provide a stringent test of the de Gennes ansatz for the isotropic-nematic interface, illustrate the anisotropic character of droplets in the nucleation regime, and validate dynamical scaling in the coarsening regime.
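The method-of-lines structure, discretize space once and then integrate the resulting ODE system in time, can be sketched on a scalar relaxational model (Allen-Cahn as a stand-in for the tensorial Landau-de Gennes equations; grid and step sizes are illustrative):

```python
import numpy as np

def relax_allen_cahn(n=64, dx=0.5, dt=0.05, steps=2000):
    """Method-of-lines sketch for scalar relaxational dynamics:
    phi_t = phi - phi^3 + laplacian(phi)  (Allen-Cahn).

    Space is discretized once with finite differences; the resulting ODE
    system is then stepped in time with explicit Euler.
    """
    x = np.arange(n) * dx
    # Start from the continuum interface profile tanh(x / sqrt(2)).
    phi = np.tanh((x - x.mean()) / np.sqrt(2.0))
    for _ in range(steps):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
        lap[0] = lap[-1] = 0.0   # pin the ends; periodic wrap would break
        phi = phi + dt * (phi - phi**3 + lap)
    return x, phi

x, phi = relax_allen_cahn()
```

The interface profile is (up to discretization error) a steady state of the relaxational dynamics, so the solution stays close to the initial tanh shape, a scalar analogue of the isotropic-nematic interface test in the abstract.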
Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities
Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun
2016-01-01
This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones. PMID:27918424
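Rotational cluster-head selection by residual energy can be sketched as follows; the device model, drain costs, and tie-breaking rule are illustrative assumptions, not the paper's protocol:

```python
def elect_cluster_head(devices):
    """Pick the device with the highest residual energy as cluster head;
    ties break on the lower device id for determinism."""
    return max(devices, key=lambda d: (d["energy"], -d["id"]))["id"]

def rotate_heads(devices, epochs, drain=5.0):
    """Rotate the CH role across epochs: the elected head spends extra
    energy each epoch, so leadership naturally migrates between devices."""
    heads = []
    for _ in range(epochs):
        ch = elect_cluster_head(devices)
        heads.append(ch)
        for d in devices:
            d["energy"] -= drain if d["id"] == ch else 1.0
    return heads

cluster = [
    {"id": 1, "energy": 20.0},
    {"id": 2, "energy": 18.0},
    {"id": 3, "energy": 16.0},
]
heads = rotate_heads(cluster, epochs=4)   # CH role migrates: 1, 2, 1, 3
```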
Nonnegative methods for bilinear discontinuous differencing of the S_N equations on quadrilaterals
Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.
2016-12-22
Historically, matrix lumping and ad hoc flux fixups have been the only methods used to eliminate or suppress negative angular flux solutions associated with the unlumped bilinear discontinuous (UBLD) finite element spatial discretization of the two-dimensional S_N equations. Though matrix lumping inhibits negative angular flux solutions of the S_N equations, it does not guarantee strictly positive solutions. In this paper, we develop and define a strictly nonnegative, nonlinear, Petrov-Galerkin finite element method that fully preserves the bilinear discontinuous spatial moments of the transport equation. Additionally, we define two ad hoc fixups that maintain particle balance and explicitly set negative nodes of the UBLD finite element solution to zero but use different auxiliary equations to fully define their respective solutions. We assess the ability to inhibit negative angular flux solutions and the accuracy of every spatial discretization that we consider using a glancing void test problem with a discontinuous solution known to stress numerical methods. Though significantly more computationally intense, the nonlinear Petrov-Galerkin scheme results in a strictly nonnegative solution and is more accurate than all the other methods considered. One fixup, based on shape preserving, results in a strictly nonnegative final solution but has increased numerical diffusion relative to the Petrov-Galerkin scheme and is less accurate than the UBLD solution. The second fixup, which preserves as many spatial moments as possible while setting negative values of the unlumped solution to zero, is less accurate than the Petrov-Galerkin scheme but more accurate than the other fixup. However, it fails to guarantee a strictly nonnegative final solution. Overall, the fully lumped bilinear discontinuous finite element solution is the least accurate method, with significantly more numerical diffusion than the Petrov-Galerkin scheme and both fixups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
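As a rough illustration of the balance-preserving zeroing idea behind such fixups, the sketch below sets negative nodal values to zero and rescales the remaining nodes to preserve the cell average. This is only schematic: the paper's two fixups instead close the system with specific auxiliary equations, which are not reproduced here, and the function name is mine.

```python
def zero_and_rebalance(nodal_values):
    """Set negative nodal values to zero, then rescale the remaining
    positive nodes so the cell average (particle balance) is preserved."""
    target = sum(nodal_values) / len(nodal_values)
    if target < 0.0:
        raise ValueError("cell average is negative; zeroing cannot preserve it")
    clipped = [max(v, 0.0) for v in nodal_values]
    s = sum(clipped) / len(clipped)
    scale = target / s if s > 0.0 else 0.0
    return [v * scale for v in clipped]
```

Note that uniform rescaling sacrifices the higher spatial moments, which is exactly the trade-off the paper's moment-preserving fixup tries to soften.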
A decentralized linear quadratic control design method for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1990-01-01
A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. 
In addition, the substructural controller synthesis (SCS) scheme is a highly adaptable method for structures with varying configuration, or varying mass and stiffness properties.
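The per-substructure LQ design and block-assembly idea can be sketched as below. This is a simplified stand-in for the procedure described above: it uses a discrete-time Riccati fixed-point iteration rather than the paper's continuous-time solves, omits model reduction and the interface coupling handled by the real SCS assembly, and all names are my own.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQ-optimal gain via fixed-point Riccati iteration:
    P <- Q + A^T P (A - B K), K = (R + B^T P B)^{-1} B^T P A."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def synthesize_block_diagonal(gains):
    """Assemble independent subcontroller gains into one global gain,
    mirroring the SCS assembly idea (interface coupling terms omitted)."""
    n = sum(K.shape[0] for K in gains)
    m = sum(K.shape[1] for K in gains)
    G = np.zeros((n, m))
    r = c = 0
    for K in gains:
        G[r:r + K.shape[0], c:c + K.shape[1]] = K
        r += K.shape[0]
        c += K.shape[1]
    return G
```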
NASA Astrophysics Data System (ADS)
Schäfer, Martina; Möller, Marco; Zwinger, Thomas; Moore, John
2016-04-01
Using a simulation set-up that couples a mass-balance model, forced by statistical climate data and downscaled to ice-cap resolution, to an ice-dynamic model, we study coupling effects for the Vestfonna ice cap, Nordaustlandet, Svalbard, by analysing the impact of different imposed coupling intervals on mass-balance and sea-level rise (SLR) projections. Based on a method for estimating the errors introduced by different coupling schemes, we find that neglecting the topographic feedback in the coupling leads to underestimations of 10-20% in SLR projections on century time-scales in our model compared to full coupling (i.e., exchange of properties at the smallest occurring time-step). Using the same method, it is also shown that parametrising the mass-balance adjustment for changes in topography using lapse rates is a computationally cost-effective and reasonably accurate alternative when applied to an ice cap like Vestfonna. We test the forcing imposed by different emission pathways (RCP 2.6, 4.5, 6.0 and 8.5). For most of them, over the period explored (2000-2100), the contribution of fast-flowing outlet glaciers to SLR decreases as they decelerate and their mass flux is reduced: as they thin and retreat from the coast, they detach from the ocean and thereby lose their major mass-drainage mechanism, i.e., calving.
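The lapse-rate parametrisation of the mass-balance adjustment can be illustrated with a toy model, assuming a constant mass-balance gradient; the gradient value and the flat-bed thickness evolution below are illustrative assumptions, not quantities from the study.

```python
def adjust_mass_balance(smb_ref, z_ref, z_now, dsmb_dz=0.005):
    """Adjust a reference surface mass balance (m w.e./yr) at elevation
    z_ref (m) to the current surface elevation z_now using an assumed
    constant mass-balance gradient dsmb_dz (m w.e./yr per m)."""
    return smb_ref + dsmb_dz * (z_now - z_ref)

def evolve(h0, years, smb_ref, z_ref, coupled=True, dsmb_dz=0.005):
    """Toy thickness evolution over a flat bed. With the lapse-rate
    adjustment (coupled=True), thinning lowers the surface and makes the
    mass balance more negative -- the topographic feedback whose neglect
    the study quantifies."""
    h = h0
    for _ in range(years):
        smb = (adjust_mass_balance(smb_ref, z_ref, h, dsmb_dz)
               if coupled else smb_ref)
        h = max(h + smb, 0.0)
    return h
```

Running the coupled and uncoupled variants side by side reproduces, in caricature, the study's point: the uncoupled run underestimates mass loss.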
A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry
ERIC Educational Resources Information Center
Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb
2013-01-01
A coding scheme is presented and used to evaluate the solutions of seventeen students working on twenty-five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…
A mimetic finite difference method for the Stokes problem with selected edge bubbles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipnikov, K; Beirao da Veiga, L
2009-01-01
A new mimetic finite difference method for the Stokes problem is proposed and analyzed. The unstable P1-P0 discretization is stabilized by adding a small number of bubble functions to selected mesh edges. A simple strategy for selecting such edges is proposed and verified with numerical experiments. Discretization schemes for the Stokes and Navier-Stokes equations must satisfy the celebrated inf-sup (or LBB) stability condition, which implies a balance between the discrete spaces for velocity and pressure. In finite elements, this balance is frequently achieved by adding bubble functions to the velocity space. The goal of this article is to show that the stabilizing edge bubble functions can be added to only a small set of mesh edges. This results in a smaller algebraic system and potentially in faster calculations. We employ the mimetic finite difference (MFD) discretization technique, which works for general polyhedral meshes and can accommodate a non-uniform distribution of stabilizing bubbles.
GTRF: a game theory approach for regulating node behavior in real-time wireless sensor networks.
Lin, Chi; Wu, Guowei; Pirozmand, Poria
2015-06-04
The selfish behaviors of nodes (or selfish nodes) cause packet loss, network congestion or even void regions in real-time wireless sensor networks, which greatly decrease the network performance. Previous methods have focused on detecting selfish nodes or avoiding selfish behavior, but little attention has been paid to regulating selfish behavior. In this paper, a Game Theory-based Real-time & Fault-tolerant (GTRF) routing protocol is proposed. GTRF is composed of two stages. In the first stage, a game theory model named VA is developed to regulate nodes' behaviors and meanwhile balance energy cost. In the second stage, a jumping transmission method is adopted, which ensures that real-time packets can be successfully delivered to the sink before a specific deadline. We prove that GTRF theoretically meets real-time requirements with low energy cost. Finally, extensive simulations are conducted to demonstrate the performance of our scheme. Simulation results show that GTRF not only balances the energy cost of the network, but also prolongs network lifetime.
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward neural network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto et al. (1983) to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal
2014-01-01
This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. PMID:24776938
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
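The multiatom blocking idea described above can be sketched as follows: blocks are stored as dense submatrices so that each per-block product maps onto an optimized dense kernel (BLAS gemm in the paper; numpy's `@` stands in here). The dict-of-blocks layout and the names are illustrative assumptions, not the paper's data structure.

```python
import numpy as np

def blocked_spmm(A_blocks, B_blocks, nblk):
    """Multiply two block-sparse matrices stored as {(i, j): dense block}.
    Blocks absent from a dict are treated as zero; each stored product
    A[i,k] @ B[k,j] is a dense gemm-style operation."""
    C = {}
    for (i, k), Ab in A_blocks.items():
        for j in range(nblk):
            Bb = B_blocks.get((k, j))
            if Bb is not None:
                C[(i, j)] = C.get((i, j), 0) + Ab @ Bb
    return C

def densify(blocks, nblk, bs):
    """Expand a block dict into a full dense matrix (for checking only)."""
    M = np.zeros((nblk * bs, nblk * bs))
    for (i, j), b in blocks.items():
        M[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs] = b
    return M
```

The paper's observed optimum (blocks of roughly 40-100 basis functions) is the point where the gemm efficiency gain outweighs the flops wasted on negligible elements inside stored blocks.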
NASA Astrophysics Data System (ADS)
Li, R.; Arora, V. K.
2012-01-01
Energy and carbon balance implications of representing vegetation using a composite or mosaic approach in a land surface scheme are investigated. In the composite approach the attributes of different plant functional types (PFTs) present in a grid cell are aggregated in some fashion for energy and water balance calculations. The resulting physical environmental conditions (including net radiation, soil moisture and soil temperature) are common to all PFTs and affect their ecosystem processes. In the mosaic approach energy and water balance calculations are performed separately for each PFT tile using its own vegetation attributes, so each PFT "sees" different physical environmental conditions and its carbon balance evolves somewhat differently from that in the composite approach. Simulations are performed at selected boreal, temperate and tropical locations to illustrate the differences caused by using the composite versus mosaic approaches of representing vegetation. These idealized simulations use 50% fractional coverage for each of the two dominant PFTs in a grid cell. Differences in simulated grid averaged primary energy fluxes at selected sites are generally less than 5% between the two approaches. Simulated grid-averaged carbon fluxes and pool sizes at these sites can, however, differ by as much as 46%. Simulation results suggest that differences in carbon balance between the two approaches arise primarily through differences in net radiation which directly affects net primary productivity, and thus leaf area index and vegetation biomass.
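The composite-versus-mosaic distinction can be illustrated with a toy grid cell of two PFTs. Because absorbed shortwave is linear in albedo, the grid-mean energy term is identical under both approaches, while any nonlinear carbon response differs; a saturating light response stands in for NPP here, and the functional form and constants are assumptions, not the land surface scheme's equations.

```python
def absorbed_sw(albedo, sw_down):
    """Absorbed shortwave radiation (a linear energy-balance term)."""
    return (1.0 - albedo) * sw_down

def npp_from_light(absorbed, half_sat=100.0):
    """Toy saturating (nonlinear) light response standing in for NPP."""
    return absorbed / (absorbed + half_sat)

def composite_grid(albedos, fractions, sw_down):
    """Composite: aggregate PFT attributes first, one shared calculation."""
    mean_albedo = sum(a * f for a, f in zip(albedos, fractions))
    a_abs = absorbed_sw(mean_albedo, sw_down)
    return a_abs, npp_from_light(a_abs)

def mosaic_grid(albedos, fractions, sw_down):
    """Mosaic: per-PFT-tile calculations, then grid averaging."""
    tiles = [absorbed_sw(a, sw_down) for a in albedos]
    a_abs = sum(f * t for f, t in zip(fractions, tiles))
    carbon = sum(f * npp_from_light(t) for f, t in zip(fractions, tiles))
    return a_abs, carbon
```

This mirrors the paper's finding in miniature: grid-averaged energy fluxes agree closely between the two approaches, while carbon quantities diverge.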
NASA Astrophysics Data System (ADS)
Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua
2014-12-01
Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when CPUs and accelerators must collaborate to fully tap the potential of heterogeneous systems. In this paper, using a tri-level hybrid and heterogeneous programming model combining MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we let the CPU and GPU collaborate for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the memory-poor GPU and the memory-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for the ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that solve realistic CFD problems with both complex configurations and high-order schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn; Deng, Xiaogang; Zhang, Lilun
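A minimal sketch of static CPU-GPU partitioning of grid blocks by GPU memory capacity, in the spirit of (but much simpler than) HOSTA's balancing scheme: the capacity units and names are illustrative, and the real scheme also weighs the relative compute speed of the two devices, which this greedy fill ignores.

```python
def split_blocks(block_cells, gpu_capacity_cells):
    """Greedily fill the memory-poor GPU with the largest grid blocks that
    fit, spilling the remainder to the memory-rich CPU.

    block_cells: {block_id: cell_count}; returns (gpu_blocks, cpu_blocks).
    """
    gpu, cpu = [], []
    used = 0
    for bid, cells in sorted(block_cells.items(), key=lambda kv: -kv[1]):
        if used + cells <= gpu_capacity_cells:
            gpu.append(bid)
            used += cells
        else:
            cpu.append(bid)
    return gpu, cpu
```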
NASA Astrophysics Data System (ADS)
Manodham, Thavisak; Loyola, Luis; Miki, Tetsuya
IEEE 802.11 wireless LANs (WLANs) have been rapidly deployed in enterprises, public areas, and households. Voice-over-IP (VoIP) and similar applications are now commonly used on mobile devices over wireless networks. Recent works have improved quality of service (QoS) by offering higher data rates to support various kinds of real-time applications. However, besides the need for higher data rates, seamless handoff and load balancing among APs are key issues that must be addressed in order to continue supporting real-time services across wireless LANs and to provide fair service to all users. In this paper, we introduce a novel access point (AP) with two transceivers that improves network efficiency by supporting seamless handoff and traffic load balancing in a wireless network. In our proposed scheme, the novel AP uses the second transceiver to scan for neighboring STAs in its transmission range and then sends the results to neighboring APs, which compare and analyze whether or not a STA should perform a handoff. Initial results from our simulations show that the novel AP module is more effective than the conventional scheme and a related work in terms of providing a handoff process with low latency and sharing traffic load with neighboring APs.
Constructing an Urban Population Model for Medical Insurance Scheme Using Microsimulation Techniques
Xiong, Linping; Zhang, Lulu; Tang, Weidong; Ma, Yuqin
2012-01-01
China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban non-working residents. In this paper, an urban population model was created for China's medical insurance scheme using microsimulation techniques. The model makes clear to policy makers the population distributions of the different groups of people who are potential urban entrants to the medical insurance scheme. The income trends of individuals and families were also obtained. These factors are essential to the challenging policy decisions involved in balancing the long-term financial sustainability of the medical insurance scheme. PMID:22481973
NASA Astrophysics Data System (ADS)
Kustas, William P.; Alfieri, Joseph G.; Anderson, Martha C.; Colaizzi, Paul D.; Prueger, John H.; Evett, Steven R.; Neale, Christopher M. U.; French, Andrew N.; Hipps, Lawrence E.; Chávez, José L.; Copeland, Karen S.; Howell, Terry A.
2012-12-01
Application and validation of many thermal remote sensing-based energy balance models involve the use of local meteorological inputs of incoming solar radiation, wind speed and air temperature as well as accurate land surface temperature (LST), vegetation cover and surface flux measurements. For operational applications at large scales, such local information is not routinely available. In addition, the uncertainty in LST estimates can be several degrees due to sensor calibration issues, atmospheric effects and spatial variations in surface emissivity. Time differencing techniques using multi-temporal thermal remote sensing observations have been developed to reduce errors associated with deriving the surface-air temperature gradient, particularly in complex landscapes. The Dual-Temperature-Difference (DTD) method addresses these issues by utilizing the Two-Source Energy Balance (TSEB) model of Norman et al. (1995) [1], and is a relatively simple scheme requiring meteorological input from standard synoptic weather station networks or mesoscale modeling. A comparison of the TSEB and DTD schemes is performed using LST and flux observations from eddy covariance (EC) flux towers and large weighing lysimeters (LYs) in irrigated cotton fields collected during BEAREX08, a large-scale field experiment conducted in the semi-arid climate of the Texas High Plains as described by Evett et al. (2012) [2]. Model output of the energy fluxes (i.e., net radiation, soil heat flux, sensible and latent heat flux) generated with DTD and TSEB using local and remote meteorological observations are compared with EC and LY observations. The DTD method is found to be significantly more robust in flux estimation compared to the TSEB using the remote meteorological observations. 
However, discrepancies between model and measured fluxes are also found to be significantly affected by the local inputs of LST and vegetation cover and the representativeness of the remote sensing observations with the local flux measurement footprint.
Zhang, Luying; Cheng, Xiaoming; Liu, Xiaoyun; Zhu, Kun; Tang, Shenglan; Bogg, Lennart; Dobberschuetz, Karin; Tolhurst, Rachel
2010-01-01
In recent years, the central government in China has been leading the re-establishment of its rural health insurance system, but local government institutions have considerable flexibility in the specific design and management of schemes. Maintaining a reasonable balance of funds is critical to ensure that the schemes are sustainable and effective in offering financial protection to members. This paper explores the financial management of the NCMS in China through a case study of the balance of funds and the factors influencing this, in six counties in two Chinese provinces. The main data source is NCMS management data from each county from 2003 to 2005, supplemented by: a household questionnaire survey, qualitative interviews and focus group discussions with all local stakeholders and policy document analysis. The study found that five out of six counties held a large fund surplus, whilst enrolees obtained only partial financial protection. However, in one county greater risk pooling for enrolees was accompanied by relatively high utilisation levels, resulting in a fund deficit. The opportunities to sustainably increase the financial protection offered to NCMS enrolees are limited by the financial pressures on local government, specific political incentives and low technical capacities at the county level and below. Our analysis suggests that in the short term, efforts should be made to improve the management of the current NCMS design, which should be supported through capacity building for NCMS offices. However, further medium-term initiatives may be required including changes to the design of the schemes. Copyright (c) 2009 John Wiley & Sons, Ltd.
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration-level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management.
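A one-variable toy version of the interval-LP setting can illustrate the return-versus-risk trade-off that REILP exposes; the model, the names, and the crude sampling-based risk proxy below are all assumptions for illustration, not the study's formulation.

```python
def interval_lp_1d(c, a_interval, b):
    """Pessimistic and optimistic optima of: max c*x s.t. a*x <= b, x >= 0,
    where the constraint coefficient a is only known to lie in an interval.
    A one-variable stand-in for the ILP/REILP watershed models."""
    a_lo, a_hi = a_interval
    return c * b / a_hi, c * b / a_lo

def violation_risk(x, a_interval, b, samples=1000):
    """Fraction of evenly sampled coefficient values for which decision x
    violates a*x <= b (a crude proxy for REILP's explicit risk term)."""
    a_lo, a_hi = a_interval
    bad = sum(1 for i in range(samples)
              if (a_lo + (a_hi - a_lo) * i / (samples - 1)) * x > b)
    return bad / samples
```

Sweeping x between the pessimistic and optimistic optima traces exactly the kind of return-versus-risk curve on which the study locates its "low risk and high return efficiency" window.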
Disseminating the unit of mass from multiple primary realisations
NASA Astrophysics Data System (ADS)
Nielsen, Lars
2016-12-01
When a new definition of the kilogram is adopted in 2018, as expected, the unit of mass will be realised by the watt balance method, the x-ray crystal density method, or perhaps other primary methods still to be developed. So far, the standard uncertainties associated with the available primary methods are at least one order of magnitude larger than the standard uncertainty associated with mass comparisons using mass comparators, so differences between primary realisations of the kilogram are easily detected, whereas many National Metrology Institutes would have to increase their calibration and measurement capabilities (CMCs) if they were traceable to a single primary realisation. This paper presents a scheme for obtaining traceability to multiple primary realisations of the kilogram using a small group of stainless steel 1 kg weights, which are allowed to change their masses over time in a way known to be realistic, and which are calibrated and stored in air. An analysis of the scheme shows that if the relative standard uncertainties of future primary realisations equal those of the present methods used to measure the Planck constant, the unit of mass can be disseminated with a standard uncertainty of less than 0.015 mg, which matches the smallest CMCs currently claimed for the calibration of 1 kg weights.
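The combination of multiple primary realisations can be sketched as a standard inverse-variance weighted mean. This is only the textbook combination step; the paper's scheme additionally models the time-dependent drift of the steel weights, which is omitted here, and the function name is mine.

```python
def combine_realisations(values_mg, u_mg):
    """Inverse-variance weighted mean of several primary realisations of a
    1 kg mass (values given as deviations from nominal, in mg), together
    with the standard uncertainty of the weighted mean."""
    weights = [1.0 / u ** 2 for u in u_mg]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values_mg)) / wsum
    return mean, (1.0 / wsum) ** 0.5
```

Because the combined uncertainty shrinks as realisations are added, traceability to several independent realisations can beat traceability to any single one, which is the premise of the dissemination scheme.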
Hybrid upwind discretization of nonlinear two-phase flow with gravity
NASA Astrophysics Data System (ADS)
Lee, S. H.; Efendiev, Y.; Tchelepi, H. A.
2015-08-01
Multiphase flow in porous media is described by coupled nonlinear mass conservation laws. For immiscible Darcy flow of multiple fluid phases, whereby capillary effects are negligible, the transport equations in the presence of viscous and buoyancy forces are highly nonlinear and hyperbolic. Numerical simulation of multiphase flow processes in heterogeneous formations requires the development of discretization and solution schemes that are able to handle the complex nonlinear dynamics, especially of the saturation evolution, in a reliable and computationally efficient manner. In reservoir simulation practice, single-point upwinding of the flux across an interface between two control volumes (cells) is performed for each fluid phase, whereby the upstream direction is based on the gradient of the phase-potential (pressure plus gravity head). This upwinding scheme, which we refer to as Phase-Potential Upwinding (PPU), is combined with implicit (backward-Euler) time discretization to obtain a Fully Implicit Method (FIM). Even though FIM suffers from numerical dispersion effects, it is widely used in practice. This is because of its unconditional stability and because it yields conservative, monotone numerical solutions. However, FIM is not unconditionally convergent. The convergence difficulties are particularly pronounced when the different immiscible fluid phases switch between co-current and counter-current states as a function of time, or (Newton) iteration. Whether the multiphase flow across an interface (between two control-volumes) is co-current, or counter-current, depends on the local balance between the viscous and buoyancy forces, and how the balance evolves in time. The sensitivity of PPU to small changes in the (local) pressure distribution exacerbates the problem. The common strategy to deal with these difficulties is to cut the timestep and try again. 
Here, we propose a Hybrid-Upwinding (HU) scheme for the phase fluxes, which is then combined with implicit time discretization to yield a fully implicit method. In the HU scheme, the phase flux is divided into two parts based on the driving force. The viscous-driven and buoyancy-driven phase fluxes are upwinded differently. Specifically, the viscous flux, which is always co-current, is upwinded based on the direction of the total-velocity. The buoyancy-driven flux across an interface is always counter-current and is upwinded such that the heavier fluid goes downward and the lighter fluid goes upward. We analyze the properties of the Implicit Hybrid Upwinding (IHU) scheme. It is shown that IHU is locally conservative and produces monotone, physically-consistent numerical solutions. The IHU solutions show numerical diffusion levels that are slightly higher than those for standard FIM (i.e., implicit PPU). The primary advantage of the IHU scheme is that the numerical overall-flux of a fluid phase remains continuous and differentiable as the flow regime changes between co-current and counter-current conditions. This is in contrast to the standard phase-potential upwinding scheme, in which the overall fractional-flow (flux) function is non-differentiable across the boundary between co-current and counter-current flows.
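A minimal sketch of the flux splitting described above, for a single two-phase interface: the viscous part is upwinded on the total velocity, while the buoyancy part is counter-current, sending the heavier phase downward. The function and argument names are illustrative only, and the real scheme operates on full mobility functions inside a fully implicit solver:

```python
def hu_water_flux(u_total, lam_w_up, lam_n_up, lam_w_dn, lam_n_dn,
                  rho_w, rho_n, g_dz):
    """Hybrid-Upwinding sketch of the water flux across one interface
    (positive = from the 'up' cell to the 'dn' cell); lam_* are phase
    mobilities, rho_* densities, g_dz the gravity head difference."""
    # Viscous part: co-current, so upwind both mobilities on the
    # direction of the total velocity.
    if u_total >= 0.0:
        lw, ln = lam_w_up, lam_n_up
    else:
        lw, ln = lam_w_dn, lam_n_dn
    f_viscous = lw / (lw + ln) * u_total

    # Buoyancy part: counter-current; the heavier fluid moves with
    # gravity, so each phase mobility is taken from the cell it leaves.
    if (rho_w - rho_n) * g_dz > 0.0:
        lw_g, ln_g = lam_w_up, lam_n_dn
    else:
        lw_g, ln_g = lam_w_dn, lam_n_up
    f_gravity = lw_g * ln_g / (lw_g + ln_g) * (rho_w - rho_n) * g_dz
    return f_viscous + f_gravity
```

Because the two parts are upwinded on different, smoothly varying quantities, the combined flux stays continuous as the interface switches between co-current and counter-current flow, which is the differentiability property claimed above.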
A design of high-precision BLDCM drive with bus voltage protection
NASA Astrophysics Data System (ADS)
Lian, Xuezheng; Wang, Haitao; Xie, Meilin; Huang, Wei; Li, Dawei; Jing, Feng
2017-11-01
In space satellite turntable applications, the design of the balance wheel is essential. To address the low acquisition precision of brushless DC motor speed measurement, and the complexity of encoder-based alternatives, this paper improves the original Hall-signal measurement method. A logic device performs six-fold frequency multiplication of the Hall signal, which is then used as speed feedback for closed-loop speed control, improving speed stability. At the same time, to prevent the back-EMF of the BLDC motor from raising the bus voltage during reversal or braking and disturbing the normal operation of other circuit modules, an analog circuit protects the bus voltage by means of energy-dissipating braking. The experimental results are consistent with the theoretical design, and the rationality and feasibility of the frequency multiplication scheme and the bus voltage protection scheme are verified.
Separation of selenium species released from Se-exposed algae
Besser, John M.; Huckins, James N.; Clark, Randal C.
1994-01-01
We have assessed a fractionation scheme for selenium species that separates Se-containing amino acids and other organoselenium compounds in aqueous samples. We investigated the retention of standard solutions of selenate (Se+6), selenite (Se+4), and selenomethionine (Se−2) by fractionation media (Sephadex A-25 ion-exchange resin, copper-treated Chelex-100 ligand-exchange resin, and activated charcoal) and by several types of membrane filters. The fractionation method successfully isolated Se from the standard solutions into appropriate fractions for radiometric quantitation of 75Se. However, some filter media retained unacceptably large amounts of selenate and selenite. Mass balance microcosms were inoculated with green algae (Chlamydomonas reinhardtii) previously exposed to inorganic 75Se, and the fractionation scheme was used to examine the release of 75Se species into water and air. The results of the microcosm exposure indicate that seasonal blooms and crashes of phytoplankton populations may produce increased concentrations of organoselenium species.
NASA Astrophysics Data System (ADS)
Lockley, Andrew
2015-04-01
Solar radiation management (SRM) geoengineering can be used to deliberately alter the Earth's radiation budget, by reflecting sunlight to space. SRM has been suggested as a response to Anthropogenic Global Warming (AGW), to partly or fully balance radiative forcing from AGW [1]. Approximately 22% of sun-like stars have Earth-like exoplanets[2]. Advanced civilisations may exist on these, and may use geoengineering for positive or negative radiative forcing. Additionally, terraforming projects [e.g. 3], may be used to expand alien habitable territory, or for resource management or military operations on non-home planets. Potential observations of alien geoengineering and terraforming may enable detection of technologically advanced alien civilisations, and may help identify widely-used and stable geoengineering technologies. This knowledge may assist the development of safe and stable geoengineering methods for Earth. The potential risks and benefits of possible alien detection of Earth-bound geoengineering schemes must be considered before deployment of terrestrial geoengineering schemes.
Bouk, Safdar Hussain; Ahmed, Syed Hassan; Park, Kyung-Joon; Eun, Yongsoon
2017-09-26
Underwater Acoustic Sensor Network (UASN) comes with intrinsic constraints because it is deployed in the aquatic environment and uses acoustic signals to communicate. Examples of those constraints are long propagation delay, very limited bandwidth, high energy cost for transmission, very high signal attenuation, and costly deployment and battery replacement. Therefore, routing schemes for UASN must take those characteristics into account to achieve energy fairness, avoid energy holes, and improve the network lifetime. The depth-based forwarding schemes in the literature use a node's depth information to forward data towards the sink. They minimize data packet duplication by employing a holding time strategy. However, to avoid void holes in the network, they use two-hop node proximity information. In this paper, we propose the Energy and Depth variance-based Opportunistic Void avoidance (EDOVE) scheme to gain energy balancing and void avoidance in the network. EDOVE considers not only the depth parameter, but also the normalized residual energy of the one-hop nodes and the normalized depth variance of the second-hop neighbors. Hence, it avoids the void regions as well as balances the network energy and increases the network lifetime. The simulation results show that EDOVE gains more than 15% packet delivery ratio, propagates 50% fewer copies of the data packet, consumes less energy, and has a longer lifetime than the state-of-the-art forwarding schemes. PMID:28954395
On a two-pass scheme without a faraday mirror for free-space relativistic quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kravtsov, K. S.; Radchenko, I. V.; Korol'kov, A. V.
2013-05-15
The stability of destructive interference, independent of the input polarization and the state of the quantum communication channel, plays a principal role in providing the security of keys communicated over the fiber optic systems used in quantum cryptography. A novel optical scheme is proposed that can be used both in relativistic quantum cryptography for communicating keys in open space and for communicating them over fiber optic lines. The scheme ensures stability of destructive interference and admits simple automatic balancing of a fiber interferometer.
An improved algorithm of mask image dodging for aerial image
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi
2011-12-01
The technology of Mask image dodging based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform, but it is deficient in local contrast; meanwhile, the ratio method works better on local contrast, but sometimes makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, on the basis of a study of both methods, proposes a balanced solution. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.
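As an illustration of the two classical methods and a balance between them, the sketch below uses a simple box blur to stand in for the Fourier-derived low-frequency background mask; the blend weight `alpha` and the kernel size are hypothetical parameters, not the paper's actual formulation:

```python
import numpy as np

def box_blur(img, k=15):
    """Crude low-pass background ("mask"), standing in for the
    Fourier-domain low-frequency component used in Mask dodging."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dodge(img, alpha=0.5, k=15):
    """Blend of the difference and ratio dodging methods."""
    bg = box_blur(img, k)
    m = bg.mean()
    diff = img - bg + m                          # difference method: uniform brightness
    ratio = img * (m / np.maximum(bg, 1e-6))     # ratio method: preserves local contrast
    return alpha * diff + (1 - alpha) * ratio    # hypothetical balanced blend
```

On an already-uniform image both methods (and any blend) leave the image unchanged, which is a quick sanity check on the background estimate.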
S66: A Well-balanced Database of Benchmark Interaction Energies Relevant to Biomolecular Structures
2011-01-01
With numerous new quantum chemistry methods being developed in recent years and the promise of even more new methods to be developed in the near future, it is clearly critical that highly accurate, well-balanced, reference data for many different atomic and molecular properties be available for the parametrization and validation of these methods. One area of research that is of particular importance in many areas of chemistry, biology, and material science is the study of noncovalent interactions. Because these interactions are often strongly influenced by correlation effects, it is necessary to use computationally expensive high-order wave function methods to describe them accurately. Here, we present a large new database of interaction energies calculated using an accurate CCSD(T)/CBS scheme. Data are presented for 66 molecular complexes, at their reference equilibrium geometries and at 8 points systematically exploring their dissociation curves; in total, the database contains 594 points: 66 at equilibrium geometries, and 528 in dissociation curves. The data set is designed to cover the most common types of noncovalent interactions in biomolecules, while keeping a balanced representation of dispersion and electrostatic contributions. The data set is therefore well suited for testing and development of methods applicable to bioorganic systems. In addition to the benchmark CCSD(T) results, we also provide decompositions of the interaction energies by means of DFT-SAPT calculations. The data set was used to test several correlated QM methods, including those parametrized specifically for noncovalent interactions. Among these, the SCS-MI-CCSD method outperforms all other tested methods, with a root-mean-square error of 0.08 kcal/mol for the S66 data set. PMID:21836824
Huang, Chao-Chi; Chiu, Yang-Hung; Wen, Chih-Yu
2014-01-01
In a vehicular sensor network (VSN), the key design issue is how to organize vehicles effectively, such that the local network topology can be stabilized quickly. In this work, each vehicle with on-board sensors can be considered as a local controller associated with a group of communication members. In order to balance the load among the nodes and govern the local topology change, a group formation scheme using localized criteria is implemented. The proposed distributed topology control method focuses on reducing the rate of group member change and avoiding the unnecessary information exchange. Two major phases are sequentially applied to choose the group members of each vehicle using hybrid angle/distance information. The operation of Phase I is based on the concept of the cone-based method, which can select the desired vehicles quickly. Afterwards, the proposed time-slot method is further applied to stabilize the network topology. Given the network structure in Phase I, a routing scheme is presented in Phase II. The network behaviors are explored through simulation and analysis in a variety of scenarios. The results show that the proposed mechanism is a scalable and effective control framework for VSNs. PMID:25350506
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overload. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
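A sketch of the kernel family in question, assuming the common definition W_n(q) ∝ sinc(πq/2)^n on compact support q ∈ [0, 2], with the dimension-dependent normalization constant omitted; varying the single exponent n is the knob used to tune interpolation quality between dense and dilute regions:

```python
import numpy as np

def sinc_kernel(q, n):
    """Un-normalized sinc kernel S_n(q) = sinc(pi*q/2)**n for q in [0, 2],
    zero outside the support; note np.sinc(x) = sin(pi*x)/(pi*x), so
    sinc(pi*q/2) is np.sinc(q/2). Larger n gives a sharper kernel."""
    q = np.asarray(q, dtype=float)
    return np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)
```

The kernel peaks at 1 for q = 0 and, at fixed q, decreases monotonically as the exponent n grows, which is how the scheme sharpens the interpolation where needed.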
A well-balanced meshless tsunami propagation and inundation model
NASA Astrophysics Data System (ADS)
Brecht, Rüdiger; Bihlo, Alexander; MacLachlan, Scott; Behrens, Jörn
2018-05-01
We present a novel meshless tsunami propagation and inundation model. We discretize the nonlinear shallow-water equations using a well-balanced scheme relying on radial basis function based finite differences. For the inundation model, radial basis functions are used to extrapolate the dry region from nearby wet points. Numerical results against standard one- and two-dimensional benchmarks are presented.
NASA Astrophysics Data System (ADS)
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first order in space and time and extended using a second-order MUSCL extension in space and Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions are exhibited on the regularizing terms to ensure the scheme's linear stability at first and second order in time and space. The other main result concerns consistency with the asymptotics reached at small and large time scales in low-Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties provide a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment of fast gravitational modes, a smooth surface-wave propagation, a propagating surface-elevation jump over a non-trivial topography, and a final experiment of slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
NASA Astrophysics Data System (ADS)
Mölg, Thomas; Cullen, Nicolas J.; Kaser, Georg
Broadband radiation schemes (parameterizations) are commonly used tools in glacier mass-balance modelling, but their performance at high altitude in the tropics has not been evaluated in detail. Here we take advantage of a high-quality 2 year record of global radiation (G) and incoming longwave radiation (L↓) measured on Kersten Glacier, Kilimanjaro, East Africa, at 5873 m a.s.l., to optimize parameterizations of G and L↓. We show that the two radiation terms can be related by an effective cloud-cover fraction neff, so G or L↓ can be modelled based on neff derived from measured L↓ or G, respectively. At neff = 1, G is reduced to 35% of clear-sky G, and L↓ increases by 45-65% (depending on altitude) relative to clear-sky L↓. Validation for a 1 year dataset of G and L↓ obtained at 4850 m on Glaciar Artesonraju, Peruvian Andes, yields a satisfactory performance of the radiation scheme. Whether this performance is acceptable for mass-balance studies of tropical glaciers is explored by applying the data from Glaciar Artesonraju to a physically based mass-balance model, which requires, among others, G and L↓ as forcing variables. Uncertainties in modelled mass balance introduced by the radiation parameterizations do not exceed those that can be caused by errors in the radiation measurements. Hence, this paper provides a tool for inclusion in spatially distributed mass-balance modelling of tropical glaciers and/or extension of radiation data when only G or L↓ is measured.
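The two quoted endpoints allow a minimal sketch of the coupling, assuming (hypothetically) a linear dependence of both fluxes on the effective cloud fraction n_eff: G falls to 35% of clear-sky G at n_eff = 1, and L↓ rises by a factor 1 + k·n_eff, with k ≈ 0.45-0.65 depending on altitude. The function names and the linear form are illustrative, not the paper's fitted parameterization:

```python
def neff_from_longwave(L_meas, L_clear, k=0.55):
    """Effective cloud fraction inferred from measured incoming longwave,
    assuming L_down = L_clear * (1 + k * n_eff); result clamped to [0, 1]."""
    n = (L_meas / L_clear - 1.0) / k
    return min(max(n, 0.0), 1.0)

def global_radiation(G_clear, n_eff):
    """Global radiation from effective cloud fraction, pinned to the
    stated overcast endpoint: G(n_eff = 1) = 0.35 * G_clear."""
    return G_clear * (1.0 - 0.65 * n_eff)
```

This illustrates the key idea above: because one cloud-cover fraction links both fluxes, G can be modelled from measured L↓, and vice versa.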
Díaz, J I; Hidalgo, A; Tello, L
2014-10-08
We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature as dynamic boundary condition to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the 1990 pioneering model by Watts and Morantine. We are taking into consideration the latent heat for the two phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory reconstruction and Runge-Kutta total variation diminishing for time integration.
Competitive region orientation code for palmprint verification and identification
NASA Astrophysics Data System (ADS)
Tang, Wenliang
2015-11-01
Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation features of the palmprint. In practice, however, the filter orientations are often not consistent with the palm lines. We thus propose a competitive region orientation-based coding method. Furthermore, an effective weighted balance scheme is proposed to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method precisely and robustly describes the orientation feature of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases show that the proposed method achieves promising performance in comparison with conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
Control of final moisture content of food products baked in continuous tunnel ovens
NASA Astrophysics Data System (ADS)
McFarlane, Ian
2006-02-01
There are well-known difficulties in making measurements of the moisture content of baked goods (such as bread, buns, biscuits, crackers and cake) during baking or at the oven exit; in this paper several sensing methods are discussed, but none of them are able to provide direct measurement with sufficient precision. An alternative is to use indirect inferential methods. Some of these methods involve dynamic modelling, with incorporation of thermal properties and using techniques familiar in computational fluid dynamics (CFD); a method of this class that has been used for the modelling of heat and mass transfer in one direction during baking is summarized, which may be extended to model transport of moisture within the product and also within the surrounding atmosphere. The concept of injecting heat during the baking process proportional to the calculated heat load on the oven has been implemented in a control scheme based on heat balance zone by zone through a continuous baking oven, taking advantage of the high latent heat of evaporation of water. Tests on biscuit production ovens are reported, with results that support a claim that the scheme gives more reproducible water distribution in the final product than conventional closed loop control of zone ambient temperatures, thus enabling water content to be held more closely within tolerance.
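The zone-by-zone heat balance can be illustrated by a steady-state load estimate that includes the latent heat of evaporation highlighted above; the function and its example values are illustrative, not the published controller:

```python
def zone_heat_demand(mass_flow_kg_s, c_p, dT, dw, h_fg=2.26e6):
    """Heat demand of one oven zone in watts: sensible heating of the
    product stream plus latent heat of the water evaporated in that zone.
    mass_flow_kg_s: product throughput (kg/s); c_p: specific heat (J/kg/K);
    dT: temperature rise in the zone (K); dw: moisture fraction removed;
    h_fg: latent heat of vaporization of water (~2.26 MJ/kg)."""
    sensible = mass_flow_kg_s * c_p * dT      # J/s to heat the product
    latent = mass_flow_kg_s * dw * h_fg       # J/s to evaporate moisture
    return sensible + latent
```

Because h_fg is so large, the latent term dominates for even modest moisture removal, which is why injecting heat in proportion to this calculated load tracks the final water content better than holding zone air temperature.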
Distributed computing for membrane-based modeling of action potential propagation.
Porras, D; Rogers, J M; Smith, W M; Pollard, A E
2000-08-01
Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
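The best-performing decomposition above assigned work by total ionic current. A generic sketch of such weight-based balancing follows, using a greedy longest-processing-time heuristic; the heuristic choice is an assumption, since the paper does not specify its algorithm:

```python
def balance_partitions(weights, n_workers):
    """Greedy data decomposition: sort elements (e.g. tissue nodes,
    weighted by their measured ionic-current cost) by decreasing weight
    and assign each to the currently lightest worker."""
    loads = [0.0] * n_workers
    parts = [[] for _ in range(n_workers)]
    for idx in sorted(range(len(weights)), key=lambda i: -weights[i]):
        w = min(range(n_workers), key=lambda k: loads[k])
        parts[w].append(idx)
        loads[w] += weights[idx]
    return parts, loads
```

Balancing the summed weights, rather than the element counts, is what lets a heterogeneous cluster reach a high fraction of its theoretical speedup.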
Design of a 3-dimensional visual illusion speed reduction marking scheme.
Liang, Guohua; Qian, Guomin; Wang, Ye; Yi, Zige; Ru, Xiaolei; Ye, Wei
2017-03-01
To determine which graphic and color combination for a 3-dimensional visual illusion speed reduction marking scheme presents the best visual stimulus, five parameters were designed. According to a Balanced Incomplete Block design and the Law of Comparative Judgment, three schemes, which produce strong stereoscopic impressions, were screened from the 25 initial design schemes of different combinations of graphics and colors. Three-dimensional experimental simulation scenes of the three screened schemes were created to evaluate four different effects according to a semantic analysis. The following conclusions were drawn: schemes with a red color are more effective than those without; the combination of red, yellow and blue produces the best visual stimulus; a larger area from the top surface and the front surface should be colored red; and a triangular prism should be painted as the graphic of the marking according to the stereoscopic impression and the coordination of graphics with the road.
ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities
NASA Astrophysics Data System (ADS)
Neggers, R.
2014-12-01
Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, are creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi-plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme.
The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Rice, Mark J.
Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of the contingency analysis are used to ensure the grid reliability, and in power market operation for the feasibility test of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a set of pre-selected contingency list, which might result in overlooking some critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. The speedup of 3964 with 4096 cores and 7877 with 10240 cores are obtained. This paper reports the performance of the load balancing scheme with a single counter and two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
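A single-counter dynamic scheduler of the kind evaluated above can be sketched as follows; the thread-based form is only illustrative, since the actual study dispatches cases across MPI processes on 10,000+ cores:

```python
import threading

class CaseCounter:
    """Single-counter dynamic scheduler: each worker atomically grabs the
    next contingency case index, so faster workers simply take more cases
    and the load balances itself without a static assignment."""
    def __init__(self, n_cases):
        self.n_cases = n_cases
        self._next = 0
        self._lock = threading.Lock()

    def next_case(self):
        with self._lock:
            if self._next >= self.n_cases:
                return None          # all cases dispatched
            i = self._next
            self._next += 1
            return i

def worker(counter, results):
    while True:
        case = counter.next_case()
        if case is None:
            return
        results.append(case)         # stand-in for running one contingency
```

The single shared counter is the serialization point; the two-counter variant studied in the paper reduces contention on it at very high core counts.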
NASA Astrophysics Data System (ADS)
Al-Chalabi, Rifat M. Khalil
1997-09-01
Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal-based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized neutron nodal balance coefficients (i.e. the restriction operator), an efficient underlying smoothing algorithm (the power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. the prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels, hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor-physics consistent homogenization technique.
The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)
Towards an Understanding of Atmospheric Balance
NASA Technical Reports Server (NTRS)
Errico, Ronald M.
2015-01-01
During a 35-year period I published 30+ peer-reviewed papers and technical reports concerning, in part or in whole, the topic of atmospheric balance. Most used normal modes, either implicitly or explicitly, as the appropriate diagnostic tool. This included examination of nonlinear balance in several different global and regional models using a variety of novel metrics, as well as development of nonlinear normal mode initialization schemes for particular global and regional models. Recent studies also included the use of adjoint models and OSSEs to answer some questions regarding balance. I will summarize what I learned through those many works, but also present what I see as remaining issues to be considered or investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh
A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code as it was scaled up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization, on 4096 cores. An improved, "stiffness-based" algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
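A cost-aware partitioning like the "stiffness-based" balancing above can be sketched generically: weight each cell (or cell group) by its estimated chemistry cost and assign it to the currently least-loaded rank. This is a standard longest-processing-time greedy heuristic, shown as an illustration of the idea, not the code's actual algorithm.

```python
import heapq

def balance_by_stiffness(costs, n_ranks):
    """Greedy cost-aware partition: take cells in order of decreasing
    estimated cost ("stiffness") and assign each to the rank with the
    smallest accumulated load so far."""
    heap = [(0.0, r) for r in range(n_ranks)]   # (load, rank) min-heap
    heapq.heapify(heap)
    assignment = {}
    for cell, cost in sorted(enumerate(costs), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(heap)
        assignment[cell] = rank
        heapq.heappush(heap, (load + cost, rank))
    return assignment
```

Compared with a cell-count-only balance, weighting by cost keeps ranks with many stiff (expensive) kinetics cells from becoming stragglers.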
How Building Systems Affect Worker Wellness
1994-03-01
spatial configuration must strike a balance between the objective needs of the organization and the more subjective human ingredient. Good building...sense, building design for thermal comfort involves a balance between the building’s orientation, its windowing scheme, the use of thermal mass, and the...stated above. An improved quality of worklife and a humanized work environment are psychological incentives that can increase productivity. Worker specific
Computations of Torque-Balanced Coaxial Rotor Flows
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Chan, William M.; Pulliam, Thomas H.
2017-01-01
Interactional aerodynamics has been studied for counter-rotating coaxial rotors in hover. The effects of torque balancing on the performance of coaxial-rotor systems have been investigated. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, and a hybrid turbulence model. Computational results for an experimental model are compared to available data. The results for a coaxial quadcopter vehicle with and without torque balancing are discussed. Understanding interactions in coaxial-rotor flows would help improve the design of next-generation autonomous drones.
Constraining the Surface Energy Balance of Snow in Complex Terrain
NASA Astrophysics Data System (ADS)
Lapo, Karl E.
Physically-based snow models form the basis of our understanding of current and future water and energy cycles, especially in mountainous terrain. These models are poorly constrained and widely diverge from each other, demonstrating a poor understanding of the surface energy balance. This research aims to improve our understanding of the surface energy balance in regions of complex terrain by improving our confidence in existing observations and improving our knowledge of remotely sensed irradiances (Chapter 1), critically analyzing the representation of boundary layer physics within land models (Chapter 2), and utilizing relatively novel observations in the diagnosis of model performance (Chapter 3). This research has improved the understanding of the literal and metaphorical boundary between the atmosphere and land surface. Solar irradiances are difficult to observe in regions of complex terrain, as observations are subject to harsh conditions not found in other environments. Quality control methods were developed to handle these unique conditions. These quality control methods facilitated an analysis of estimated solar irradiances over mountainous environments. Errors in the estimated solar irradiance are caused by misrepresenting the effect of clouds over regions of topography and regularly exceed the range of observational uncertainty (up to 80 W m-2) in all regions examined. Uncertainty in the solar irradiance estimates was especially pronounced when averaging over high-elevation basins, with monthly differences between estimates up to 80 W m-2. These findings can inform the selection of a method for estimating the solar irradiance and suggest several avenues of future research for improving existing methods. Further research probed the relationship between the land surface and atmosphere as it pertains to the stable boundary layers that commonly form over snow-covered surfaces. 
Stable conditions are difficult to represent, especially at low wind speeds, and coupled land-atmosphere models have difficulty representing these processes. We developed a new method for analyzing turbulent fluxes at the land surface that relies on using the observed surface temperature, which we called the offline turbulence method. We used this method to test a number of stability schemes as they are implemented within land models. Stability schemes can cause small biases in the simulated sensible heat flux, but these are caused by compensating errors, as no single method was able to accurately reproduce the observed distribution of the sensible heat flux. We described how these turbulence schemes perform within different turbulence regimes, particularly noting the difficulty representing turbulence during conditions with faster wind speeds and the transition between weak and strong wind turbulence regimes. Heterogeneity in the horizontal distribution of surface temperature associated with different land surface types likely explains some of the missing physics within land models and is manifested as counter-gradient fluxes in observations. The coupling of land and atmospheric models needs further attention, as we highlight processes that are missing. Expanding on the utility of the surface temperature, Ts, we demonstrated its use in snow model evaluations. Ts is the diagnostic variable of the modeled surface energy balance within physically-based models and is an ideal supplement to traditional evaluation techniques. We demonstrated how modeling decisions affect Ts, specifically testing the impact of vertical layer structure, thermal conductivity, and stability corrections in addition to the effect of uncertainty in forcing data on simulated Ts. The internal modeling decisions had minimal impacts relative to uncertainty in the forcing data. 
Uncertainty in downwelling longwave was found to have the largest impact on simulated Ts. Using Ts, we demonstrated how various errors in the forcing data can be identified, noting that uncertainty in downwelling longwave and wind are the easiest to identify due to their effect on night time minimum Ts.
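The "offline turbulence method" described above drives a turbulent-flux calculation with the observed surface temperature rather than a modeled one. A minimal sketch of the underlying bulk-aerodynamic formula is below; the measurement height, roughness length, and the neutral (no stability correction) exchange coefficient are illustrative assumptions, since real schemes add a Monin-Obukhov-style stability correction.

```python
import math

def sensible_heat_flux(ts, ta, wind, z=2.0, z0=0.001,
                       rho=1.2, cp=1005.0, k=0.4):
    """Bulk sensible heat flux H = rho * cp * C_H * U * (Ts - Ta), with a
    neutral exchange coefficient C_H from log-law theory. Positive H means
    an upward flux (surface warmer than air). z, z0 are illustrative."""
    ch = (k / math.log(z / z0)) ** 2        # neutral exchange coefficient
    return rho * cp * ch * wind * (ts - ta)
```

Driving this with observed Ts decouples the turbulence scheme from errors in the model's own surface energy balance, which is what makes the offline comparison of stability schemes possible.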
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
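The binomial memory checkpointing mentioned above rests on a classical combinatorial bound (Griewank and Walther's revolve analysis): with a fixed number of checkpoint slots and a cap on how many times any step is recomputed, the number of reversible time steps grows binomially. A one-line sketch of that bound, not the paper's two-level scheme itself:

```python
from math import comb

def max_reversible_steps(snapshots, repeats):
    """Binomial checkpointing bound: with `snapshots` checkpoint slots and
    at most `repeats` forward recomputations of any step, up to
    C(snapshots + repeats, snapshots) time steps can be reversed."""
    return comb(snapshots + repeats, snapshots)
```

The rapid growth of this bound is why a modest number of in-memory checkpoints, combined with a disk level for the longest ranges, suffices for very long time integrations.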
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement, where the non-dimensional wall distance y+ near the sphere is used as a criterion for mesh refinement. The result showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
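The refine-then-rebalance cycle above can be sketched schematically: any cube whose refinement indicator exceeds a threshold is split into eight children (one octree level), and cubes are then redistributed so each rank holds a near-equal count. This is an illustrative sketch only; the cube representation `(level, origin)` and the round-robin redistribution are assumptions, not the BCM framework's actual data structures.

```python
def refine_and_balance(cubes, criterion, threshold, n_ranks):
    """One AMR pass over BCM-style cubes: split flagged cubes into 8
    children, then assign cubes to ranks round-robin for load balance.
    A cube is (level, (x, y, z)) with unit root cube; `criterion` maps a
    cube to a scalar indicator (e.g. a y+ estimate)."""
    refined = []
    for cube in cubes:
        if criterion(cube) > threshold:
            level, origin = cube
            half = 0.5 ** (level + 1)       # child cube edge length
            refined += [(level + 1, (origin[0] + i * half,
                                     origin[1] + j * half,
                                     origin[2] + k * half))
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)]
        else:
            refined.append(cube)
    owner = {c: i % n_ranks for i, c in enumerate(refined)}
    return refined, owner
```

Because every cube carries the same number of interior grid points, balancing cube counts balances work, which is the property the BCM exploits.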
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
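Random polling, one of the two load balancing techniques named above, has a simple core: an idle worker picks a random victim and steals part of its remaining work. The sequential simulation below illustrates that core together with the simpler of the two termination criteria (a global task count); it is a pedagogical sketch, not the distributed message-passing implementation evaluated in the work.

```python
import random
from collections import deque

def random_polling_sim(task_lists, seed=0):
    """Sequential simulation of random-polling load balancing: each idle
    worker polls a random victim and steals half of its remaining tasks;
    the run terminates when the global task count reaches zero."""
    rng = random.Random(seed)
    queues = [deque(tasks) for tasks in task_lists]
    done = []
    remaining = sum(len(q) for q in queues)
    while remaining:
        for q in queues:
            if q:                           # busy worker: do one task
                done.append(q.popleft())
                remaining -= 1
            else:                           # idle worker: poll a victim
                victim = queues[rng.randrange(len(queues))]
                for _ in range(len(victim) // 2):
                    q.append(victim.pop())
    return done
```

In the real distributed setting the global count is itself expensive to maintain, which is why the token-passing alternative is also evaluated.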
NASA Astrophysics Data System (ADS)
Minotti, Luca; Savaré, Giuseppe
2018-02-01
We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
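The modified incremental minimization step described above can be written compactly. This is a schematic transcription from the abstract: the symbol μ and the quadratic form of the viscous correction δ are illustrative choices mentioned as an example, not fixed by the text.

```latex
x^{n}_{\tau} \in \operatorname*{argmin}_{x \in X}
  \Big( \mathcal{E}(t^{n}, x) + \mathsf{d}(x^{n-1}_{\tau}, x)
        + \delta(x^{n-1}_{\tau}, x) \Big),
\qquad \text{e.g.}\quad \delta(u, v) = \mu \, \mathsf{d}(u, v)^{2}.
```

Because δ grows faster than d for large jumps, minimizers prefer small steps, which is exactly the localized stability condition the correction is meant to induce.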
NASA Astrophysics Data System (ADS)
Parsons, Earl Ryan
In this dissertation I investigated a multi-channel and multi-bit rate all-optical clock recovery device. This device, a birefringent Fabry-Perot resonator, had previously been demonstrated to simultaneously recover the clock signal from 10 wavelength channels operating at 10 Gb/s and one channel at 40 Gb/s. Similar to clock signals recovered from a conventional Fabry-Perot resonator, the clock signal from the birefringent resonator suffers from a bit pattern effect. I investigated this bit pattern effect for birefringent resonators numerically and experimentally and found that the bit pattern effect is less prominent than for clock signals from a conventional Fabry-Perot resonator. I also demonstrated photonic balancing which is an all-optical alternative to electrical balanced detection for phase shift keyed signals. An RZ-DPSK data signal was demodulated using a delay interferometer. The two logically opposite outputs from the delay interferometer then counter-propagated in a saturated SOA. This process created a differential signal which used all the signal power present in two consecutive symbols. I showed that this scheme could provide an optical alternative to electrical balanced detection by reducing the required OSNR by 3 dB. I also show how this method can provide amplitude regeneration to a signal after modulation format conversion. In this case an RZ-DPSK signal was converted to an amplitude modulation signal by the delay interferometer. The resulting amplitude modulated signal is degraded by both the amplitude noise and the phase noise of the original signal. The two logically opposite outputs from the delay interferometer again counter-propagated in a saturated SOA. Through limiting amplification and noise modulation this scheme provided amplitude regeneration and improved the Q-factor of the demodulated signal by 3.5 dB. Finally I investigated how SPM provided by the SOA can provide a method to reduce the in-band noise of a communication signal. 
The marks, which represented data, experienced a spectral shift due to SPM while the spaces, which consisted of noise, did not. A bandpass filter placed after the SOA then selected the signal and filtered out what was originally in-band noise. The receiver sensitivity was improved by 3 dB.
A new flux conserving Newton's method scheme for the two-dimensional, steady Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Scott, James R.; Chang, Sin-Chung
1993-01-01
A new numerical method is developed for the solution of the two-dimensional, steady Navier-Stokes equations. The method that is presented differs in significant ways from the established numerical methods for solving the Navier-Stokes equations. The major differences are described. First, the focus of the present method is on satisfying flux conservation in an integral formulation, rather than on simulating conservation laws in their differential form. Second, the present approach provides a unified treatment of the dependent variables and their unknown derivatives. All are treated as unknowns together to be solved for through simulating local and global flux conservation. Third, fluxes are balanced at cell interfaces without the use of interpolation or flux limiters. Fourth, flux conservation is achieved through the use of discrete regions known as conservation elements and solution elements. These elements are not the same as the standard control volumes used in the finite volume method. Fifth, the discrete approximation obtained on each solution element is a functional solution of both the integral and differential form of the Navier-Stokes equations. Finally, the method that is presented is a highly localized approach in which the coupling to nearby cells is only in one direction for each spatial coordinate, and involves only the immediately adjacent cells. A general third-order formulation for the steady, compressible Navier-Stokes equations is presented, and then a Newton's method scheme is developed for the solution of incompressible, low Reynolds number channel flow. It is shown that the Jacobian matrix is nearly block diagonal if the nonlinear system of discrete equations is arranged appropriately and a proper pivoting strategy is used. Numerical results are presented for Reynolds numbers of 100, 1000, and 2000. 
Finally, it is shown that the present scheme can resolve the developing channel flow boundary layer using as few as six to ten cells per channel width, depending on the Reynolds number.
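The outer Newton iteration that the scheme above builds on is generic: linearize the discrete residual, solve the linear system, update, repeat. A minimal sketch with a finite-difference Jacobian follows; it illustrates the solver pattern only, not the flux-conserving discretization or the block-diagonal Jacobian structure of the paper.

```python
import numpy as np

def newton(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton iteration for a nonlinear system F(x) = 0 with a
    column-wise finite-difference Jacobian approximation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(x), len(x)))
        for j in range(len(x)):             # perturb one unknown at a time
            e = np.zeros_like(x)
            e[j] = h
            J[:, j] = (F(x + e) - f) / h
        x = x - np.linalg.solve(J, f)       # Newton update
    return x

# Example system: intersect the circle x^2 + y^2 = 2 with the line x = y.
sol = newton(lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]),
             [2.0, 0.5])
```

In practice the paper exploits the near block-diagonal Jacobian and a pivoting strategy to make this linear solve cheap; the dense `np.linalg.solve` here is only for clarity.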
Schaefer, C; Jansen, A P J
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general, the stochastic kinetic Monte Carlo results do not obey mass conservation, so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections, based on a stoichiometry matrix and a least-squares problem reduced to a non-singular set of linear equations, that is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
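The least-squares mass-balance correction has a standard closed form: find the rates closest (in the 2-norm) to the stochastic estimates that exactly satisfy the linear conservation constraints. The sketch below uses a generic constraint matrix A built from the stoichiometry and assumes its rows are linearly independent; it illustrates the projection idea, not the paper's specific reduction.

```python
import numpy as np

def mass_balance_correction(A, r):
    """Return the rate vector r* minimizing ||r* - r||_2 subject to the
    conservation constraints A r* = 0, via Lagrange multipliers:
    r* = r - A^T (A A^T)^{-1} A r. Assumes rows of A are independent."""
    lam = np.linalg.solve(A @ A.T, A @ r)   # Lagrange multipliers
    return r - A.T @ lam
```

Geometrically this is an orthogonal projection of the noisy rates onto the null space of A, so the correction is as small as the constraints allow.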
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.
2016-12-01
A computationally efficient, semi-distributed hydrologic modeling framework is developed to simulate water balance at a catchment scale. The Soil Moisture and Runoff simulation Toolkit (SMART) is based upon the delineation of contiguous and topologically connected Hydrologic Response Units (HRUs). In SMART, HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are distributed cross sections or equivalent cross sections (ECS) delineated in first order sub-basins. ECSs are formulated by aggregating topographic and physiographic properties of part or all of the first order sub-basins to further reduce computational time in SMART. Previous investigations using SMART have shown that temporal dynamics of soil moisture are well captured at the HRU level using the ECS delineation approach. However, spatial variability of soil moisture within a given HRU is ignored. Here, we examined a number of disaggregation schemes for soil moisture distribution in each HRU. The disaggregation schemes are based either on topographic indices or on a covariance matrix obtained from distributed soil moisture simulations. To assess the performance of the disaggregation schemes, soil moisture simulations from an integrated land surface-groundwater model, ParFlow.CLM, in the Baldry sub-catchment, Australia, are used. ParFlow is a variably saturated sub-surface flow model that is coupled to the Common Land Model (CLM). Our results illustrate that the statistical disaggregation scheme performs better than the methods based on topographic data in approximating soil moisture distribution at a 60 m scale. Moreover, the statistical disaggregation scheme maintains the temporal correlation of simulated daily soil moisture while preserving the mean sub-basin soil moisture. Future work is focused on assessing the performance of this scheme in catchments with various topographic and climate settings.
Randomization in cancer clinical trials: permutation test and development of a computer program.
Ohashi, Y
1990-01-01
When analyzing cancer clinical trial data where the treatment allocation is done using dynamic balancing methods such as the minimization method for balancing the distribution of important prognostic factors in each arm, conservativeness occurs if such a randomization scheme is ignored and a simple unstratified analysis is carried out. In this paper, the above conservativeness is demonstrated by computer simulation, and the development of a computer program that carries out permutation tests of the log-rank statistics for clinical trial data where the allocation is done by the minimization method or a stratified permuted block design is introduced. We are planning to use this program in practice to supplement a usual stratified analysis and model-based methods such as the Cox regression. The most serious problem in cancer clinical trials in Japan is how to carry out quality control and data management in trials that are initiated and conducted by researchers without support from pharmaceutical companies. In the final section of this paper, one international collaborative work for developing international guidelines on data management in clinical trials of bladder cancer is briefly introduced, and the differences between the system adopted in US/European statistical centers and the Japanese system are described. PMID:2269216
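The logic of a re-randomization (permutation) test is to compare the observed test statistic against its distribution under repeated random re-allocation of subjects. The sketch below uses a simple difference in group means as the statistic for clarity; the paper's program re-runs the minimization allocation and uses the log-rank statistic, so this is a simplified stand-in, not the described software.

```python
import random

def permutation_pvalue(x, y, n_perm=2000, seed=1):
    """Two-group permutation test: p-value for the absolute difference in
    means under random re-allocation of the pooled observations."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                 # one random re-allocation
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)        # add-one to avoid p = 0
```

Replacing the unrestricted shuffle with re-runs of the actual minimization scheme is what removes the conservativeness the paper demonstrates.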
Management Review: Progress and Challenges at the Defense Logistics Agency.
1986-04-01
with safety and worklife problems (warehousing schemes, replacement or improvement of equipment, loading dock shelters, and employee orientation systems... balances. Accuracy of DCASR Contingent The contingent liability record is one of the more important records Liability Records maintained by DCASRs because...needed for making management decisions and for certifying to the accuracy of ULO balances. Problems in Data Reported to Based on our interviews with
Numerical shockwave anomalies in presence of hydraulic jumps in the SWE with variable bed elevation.
NASA Astrophysics Data System (ADS)
Navas-Montilla, Adrian; Murillo, Javier
2017-04-01
When solving the shallow water equations, appropriate numerical solvers must allow energy-dissipative solutions in the presence of steady and unsteady hydraulic jumps. Hydraulic jumps are present in surface flows and may produce significant morphological changes. Unfortunately, it has been documented that some numerical anomalies may appear. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump, produced by a non-linearity of the Hugoniot locus connecting the states at both sides of the jump. Therefore, this problem remains unresolved in the context of Godunov's schemes applied to shallow flows. This issue is usually ignored as it does not affect the solution in steady cases. However, it produces undesirable spurious oscillations in transient cases that can lead to misleading conclusions when moving to realistic scenarios. Using spike-reducing techniques based on the construction of interpolated fluxes, it is possible to define numerical methods including discontinuous topography that reduce the presence of the aforementioned numerical anomalies. References: T. W. Roberts, The behavior of flux difference splitting schemes near slowly moving shock waves, J. Comput. Phys., 90 (1990) 141-160. Y. Stiriba, R. Donat, A numerical study of postshock oscillations in slowly moving shock waves, Comput. Math. with Appl., 46 (2003) 719-739. E. Johnsen, S. K. Lele, Numerical errors generated in simulations of slowly moving shocks, Center for Turbulence Research, Annual Research Briefs, (2008) 1-12. D. W. Zaide, P. L. Roe, Flux functions for reducing numerical shockwave anomalies. ICCFD7, Big Island, Hawaii, (2012) 9-13. D. W. Zaide, Numerical Shockwave Anomalies, PhD thesis, Aerospace Engineering and Scientific Computing, University of Michigan, 2012. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. 
Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux-ADER schemes with application to hyperbolic conservation laws with geometric source terms, J. Comput. Phys. 317 (2016) 108-147. J. Murillo and A. Navas-Montilla, A comprehensive explanation and exercise of the source terms in hyperbolic systems using Roe type solutions. Application to the 1D-2D shallow water equations, Advances in Water Resources 98 (2016) 70-96.
NASA Astrophysics Data System (ADS)
Jung, Sang-Min; Won, Yong-Yuk; Han, Sang-Kook
2013-12-01
A novel technique for reducing the optical beating interference (OBI) noise in an optical OFDMA-PON uplink is presented. OFDMA is a multiple-access/multiplexing scheme that can provide multiplexing of user data streams onto the downlink sub-channels and uplink multiple access by dividing OFDM subcarriers into sub-channels. The main issue of high-speed, single-wavelength upstream OFDMA-PON arises from OBI noise: because the sub-channels are allocated dynamically to multiple access users over the same nominal wavelength, optical beating interference is generated among the upstream signals. In this paper, we propose a novel scheme using self-homodyne balanced detection in the optical line terminal (OLT) to reduce the OBI noise generated in the uplink transmission of an OFDMA-PON system. When multiple OFDMA sub-channels over the same nominal wavelength are received at the same time in the proposed architecture, OBI noise can be removed using balanced detection. Using discrete multitone (DMT) modulation to generate real-valued OFDM signals, the proposed technique is verified through experimental demonstration.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
NASA Astrophysics Data System (ADS)
Benfenati, A.; La Camera, A.; Carbillet, M.
2016-02-01
Aims: High-dynamic range images of astrophysical objects are difficult to restore because very bright point-wise sources are surrounded by faint and smooth structures. We propose a method that enables the restoration of such images by taking these sources into account and, at the same time, improving the contrast in the final image. Moreover, the proposed approach can help to detect the positions of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise seeks the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter is employed to preserve certain characteristics in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimate of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since estimating this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' positions are exactly known, this scheme provides very satisfactory results. When the positions are known only inexactly, it can in addition give useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
Multiple target sound quality balance for hybrid electric powertrain noise
NASA Astrophysics Data System (ADS)
Mosquera-Sánchez, J. A.; Sarrazin, M.; Janssens, K.; de Oliveira, L. P. R.; Desmet, W.
2018-01-01
The integration of the electric motor into the powertrain in hybrid electric vehicles (HEVs) presents acoustic stimuli that elicit new perceptions. The large number of spectral components, as well as the wider bandwidth of this sort of noise, pose new challenges to current noise, vibration and harshness (NVH) approaches. This paper presents a framework for enhancing the sound quality (SQ) of the hybrid electric powertrain noise perceived inside the passenger compartment. Compared with current active sound quality control (ASQC) schemes, where the SQ improvement is just an effect of the control actions, the proposed technique features an optimization stage, which enables the NVH specialist to actively implement the amplitude balance of the tones that best fits the auditory expectations. Since Loudness, Roughness, Sharpness and Tonality are the most relevant SQ metrics for interior HEV noise, they are used as performance metrics in the concurrent optimization analysis, which, eventually, drives the control design method. Thus, multichannel active sound profiling systems that feature cross-channel compensation schemes are guided by the multi-objective optimization stage, by means of optimal sets of amplitude gain factors that can be implemented at each single sensor location, while minimizing cross-channel effects that can either degrade the original SQ condition, or even hinder the implementation of independent SQ targets. The proposed framework is verified experimentally, with realistic stationary hybrid electric powertrain noise, showing SQ enhancement for multiple locations within a scaled vehicle mock-up. The results show total success rates in excess of 90%, which indicate that the proposed method is promising, not only for the improvement of the SQ of HEV noise, but also for a variety of periodic disturbances with similar features.
Design of coherent receiver optical front end for unamplified applications.
Zhang, Bo; Malouin, Christian; Schmidt, Theodore J
2012-01-30
Advanced modulation schemes together with coherent detection and digital signal processing have enabled the next generation of high-bandwidth optical communication systems. One of the key advantages of coherent detection is its superior receiver sensitivity compared to direct detection receivers, due to the gain provided by the local oscillator (LO). In unamplified applications, such as metro and edge networks, the ultimate receiver sensitivity is dictated by the amount of shot noise, thermal noise, and the residual beating of the local oscillator with relative intensity noise (LO-RIN). We show that the best sensitivity is achieved when the thermal noise is balanced with the residual LO-RIN beat noise, which results in an optimum LO power. The impact of thermal noise from the transimpedance amplifier (TIA), the RIN from the LO, and the common mode rejection ratio (CMRR) of a balanced photodiode are individually analyzed via analytical models and compared to numerical simulations. The analytical model results match well with those of the numerical simulations, providing a simplified method to quantify the impact of receiver design tradeoffs. For a practical 100 Gb/s integrated coherent receiver with 7% FEC overhead, we show that an optimum receiver sensitivity of -33 dBm can be achieved at the GFEC cliff of 8.55E-5 if the LO power is optimized at 11 dBm. We also discuss a potential method to monitor the imperfections of a balanced and integrated coherent receiver.
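The optimum LO power can be illustrated with a toy noise budget. The sketch below uses hypothetical coefficients (`p_sig`, `a`, `b`, `c` are illustrative placeholders, not the paper's values): signal and shot noise grow linearly with LO power, thermal noise is constant, and the residual LO-RIN beat grows quadratically, so the SNR peaks exactly where thermal noise equals the LO-RIN beat noise.

```python
import numpy as np

# Toy coherent-receiver noise budget (illustrative coefficients, in
# arbitrary units -- not the paper's values):
#   signal power    ~ p_sig * p_lo      (coherent LO gain)
#   shot noise      ~ a * p_lo
#   thermal noise   ~ b                 (TIA, independent of LO)
#   LO-RIN beating  ~ c * p_lo**2       (residual after finite CMRR)
def snr(p_lo, p_sig=1e-6, a=1e-3, b=2e-2, c=5e-4):
    return (p_sig * p_lo) / (a * p_lo + b + c * p_lo**2)

p_lo = np.linspace(0.1, 50.0, 20001)      # LO power sweep (mW)
p_opt_numeric = p_lo[np.argmax(snr(p_lo))]
p_opt_analytic = np.sqrt(2e-2 / 5e-4)     # b = c*p**2  ->  sqrt(b/c)

print(p_opt_numeric, p_opt_analytic)      # numeric and analytic optima agree
```

Differentiating the SNR expression shows the maximum occurs where b = c * p_lo**2, i.e. where the constant thermal noise equals the quadratically growing LO-RIN beat noise, matching the balance condition described in the abstract.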
Inferring Cirrus Size Distributions Through Satellite Remote Sensing and Microphysical Databases
NASA Technical Reports Server (NTRS)
Mitchell, David; D'Entremont, Robert P.; Lawson, R. Paul
2010-01-01
Since cirrus clouds have a substantial influence on the global energy balance that depends on their microphysical properties, climate models should strive to realistically characterize the cirrus ice particle size distribution (PSD), at least in a climatological sense. To date, the airborne in situ measurements of the cirrus PSD have contained large uncertainties due to errors in measuring small ice crystals (D < 60 μm). This paper presents a method to remotely estimate the concentration of the small ice crystals relative to the larger ones using the 11- and 12-μm channels aboard several satellites. By understanding the underlying physics producing the emissivity difference between these channels, this emissivity difference can be used to infer the relative concentration of small ice crystals. This is facilitated by enlisting temperature-dependent characterizations of the PSD (i.e., PSD schemes) based on in situ measurements. An average cirrus emissivity relationship between 12 and 11 μm is developed here using the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument and is used to retrieve the PSD based on six different PSD schemes. The PSDs from the measurement-based PSD schemes are compared with corresponding retrieved PSDs to evaluate differences in small ice crystal concentrations. The retrieved PSDs generally had lower concentrations of small ice particles, with total number concentration independent of temperature. In addition, the temperature dependence of the PSD effective diameter De and fall speed Vf for these retrieved PSD schemes exhibited less variability relative to the unmodified PSD schemes. The reduced variability in the retrieved De and Vf was attributed to the lower concentrations of small ice crystals in the retrieved PSD.
Efficient numerical schemes for viscoplastic avalanches. Part 2: The 2D case
NASA Astrophysics Data System (ADS)
Fernández-Nieto, Enrique D.; Gallardo, José M.; Vigneaux, Paul
2018-01-01
This paper deals with the numerical resolution of a shallow water viscoplastic flow model. Viscoplastic materials are characterized by the existence of a yield stress: below a certain critical threshold in the imposed stress, there is no deformation and the material behaves like a rigid solid, but when that yield value is exceeded, the material flows like a fluid. In the context of avalanches, it means that after going down a slope, the material can stop and its free surface has a non-trivial shape, as opposed to the case of water (Newtonian fluid). The model involves variational inequalities associated with the yield threshold: finite volume schemes are used together with duality methods (namely Augmented Lagrangian and Bermúdez-Moreno) to discretize the problem. To be able to accurately simulate the stopping behavior of the avalanche, new schemes need to be designed, involving the classical notion of well-balancing. In the present context, it needs to be extended to take into account the viscoplastic nature of the material as well as general bottoms with wet/dry fronts which are encountered in geophysical geometries. Here we derive such schemes in 2D as the follow up of the companion paper treating the 1D case. Numerical tests include in particular a generalized 2D benchmark for Bingham codes (the Bingham-Couette flow with two non-zero boundary conditions on the velocity) and a simulation of the avalanche path of Taconnaz in Chamonix-Mont-Blanc to show the usability of these schemes on real topographies from digital elevation models (DEM).
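Well-balancing for shallow water flows over topography is commonly achieved by hydrostatic reconstruction. The sketch below (first-order Rusanov flux, forward Euler, periodic grid, illustrative parameters; it is a generic shallow water scheme, not the paper's duality-method scheme for viscoplastic flows) shows how such a scheme preserves a lake-at-rest steady state to rounding error, which is the "stopped avalanche" analogue the paper extends.

```python
import numpy as np

g = 9.81

def rusanov(hL, huL, hR, huR):
    """Rusanov (local Lax-Friedrichs) flux for 1D shallow water."""
    uL = huL / hL if hL > 1e-12 else 0.0
    uR = huR / hR if hR > 1e-12 else 0.0
    fL = np.array([huL, huL * uL + 0.5 * g * hL**2])
    fR = np.array([huR, huR * uR + 0.5 * g * hR**2])
    a = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
    return 0.5 * (fL + fR) - 0.5 * a * np.array([hR - hL, huR - huL])

def step(h, hu, b, dx, dt):
    """First-order hydrostatic-reconstruction step (Audusse et al.),
    periodic boundaries; exactly well-balanced for the lake at rest."""
    hn, hun = h.copy(), hu.copy()
    n = len(h)
    for i in range(n):
        j = (i + 1) % n
        bstar = max(b[i], b[j])                # interface bottom level
        hm = max(0.0, h[i] + b[i] - bstar)     # reconstructed left depth
        hp = max(0.0, h[j] + b[j] - bstar)     # reconstructed right depth
        um = hu[i] / h[i] if h[i] > 1e-12 else 0.0
        up = hu[j] / h[j] if h[j] > 1e-12 else 0.0
        F = rusanov(hm, hm * um, hp, hp * up)
        hn[i] -= dt / dx * F[0]
        hn[j] += dt / dx * F[0]
        # flux plus well-balanced topography source correction
        hun[i] -= dt / dx * (F[1] + 0.5 * g * (h[i]**2 - hm**2))
        hun[j] += dt / dx * (F[1] + 0.5 * g * (h[j]**2 - hp**2))
    return hn, hun

# lake at rest over a bump: flat surface h + b = 1, zero discharge
n = 100
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
b = 0.4 * np.exp(-100.0 * (x - 0.5)**2)
h = 1.0 - b
hu = np.zeros(n)
for _ in range(50):
    h, hu = step(h, hu, b, dx, dt=0.002)
print(np.max(np.abs(h + b - 1.0)), np.max(np.abs(hu)))  # both ~ round-off
```

For a resting state with flat free surface, the interface pressure fluxes and the topography source correction cancel cell by cell, so the steady state survives time stepping; the viscoplastic case in the paper additionally requires balancing the yield-stress terms.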
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search, and compact data representations suitable for handling large scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million GIST, BoW, and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second best among the tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
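A minimal sketch of the hypersphere encoding and the spherical Hamming distance follows. The toy trainer below only balances each bit by setting the radius to the median distance to random pivot centers; the paper's iterative optimization, which also enforces independence between hash functions, is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_spherical_hash(X, n_bits):
    """Toy trainer: random pivots as hypersphere centers, radius set to
    the median distance so each bit is on for ~half the data (balance).
    The paper's iterative pivot adjustment and the independence
    criterion between hash functions are omitted here."""
    centers = X[rng.choice(len(X), n_bits, replace=False)]
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    radii = np.median(d, axis=0)
    return centers, radii

def encode(X, centers, radii):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return (d <= radii).astype(np.uint8)    # bit = point inside sphere

def spherical_hamming(b1, b2):
    """Spherical Hamming distance: differing bits / common '1' bits."""
    common = np.count_nonzero(b1 & b2)
    diff = np.count_nonzero(b1 != b2)
    return diff / common if common else np.inf

X = rng.normal(size=(1000, 32))
centers, radii = train_spherical_hash(X, 64)
codes = encode(X, centers, radii)
print(codes.mean())                         # 0.5: balanced partitions
print(spherical_hamming(codes[0], codes[1]))
```

Normalizing the Hamming count by the number of common set bits is what makes the distance "spherical": shared 1-bits mean two points fall inside the same bounded regions, which is stronger evidence of proximity than shared 0-bits.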
Shortwave radiation parameterization scheme for subgrid topography
NASA Astrophysics Data System (ADS)
Helbig, N.; LöWe, H.
2012-02-01
Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
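The flux composition the parameterization performs can be sketched as follows; all functional forms and numbers here are hypothetical placeholders rather than the paper's fitted relations.

```python
# Illustrative composition of the parameterized subgrid shortwave fluxes
# (shading, limited sky view, terrain reflection). Coefficients and
# functional forms are hypothetical placeholders, not the paper's fits.

def shortwave_balance(S_direct, S_diffuse, f_shade, f_sky, albedo):
    """Domain-averaged shortwave at the surface over subgrid terrain.

    S_direct  : direct-beam flux on a horizontal plane (W/m^2)
    S_diffuse : diffuse sky flux (W/m^2)
    f_shade   : fraction of the domain shaded by terrain (0..1)
    f_sky     : mean sky view factor (0..1)
    albedo    : surface albedo (0..1)
    """
    direct = (1.0 - f_shade) * S_direct       # shading loss
    diffuse = f_sky * S_diffuse               # limited sky view
    # single isotropic terrain reflection: the approximation the paper
    # validates against multiple anisotropic reflections
    terrain = (1.0 - f_sky) * albedo * (direct + diffuse)
    return direct + diffuse + terrain

print(shortwave_balance(800.0, 100.0, 0.15, 0.9, 0.6))  # ~816.2 W/m^2
```

In the paper each of these three contributions is parameterized as a function of sky view factor, slope, sun elevation, and albedo; this sketch only shows how they combine into a domain-averaged balance.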
A 10 bit 200 MS/s pipeline ADC using loading-balanced architecture in 0.18 μm CMOS
NASA Astrophysics Data System (ADS)
Wang, Linfeng; Meng, Qiao; Zhi, Hao; Li, Fei
2017-07-01
A new loading-balanced architecture for high speed and low power consumption pipeline analog-to-digital converters (ADC) is presented in this paper. The proposed ADC uses SHA-less, op-amp and capacitor-sharing techniques and a capacitor-scaling scheme to reduce the die area and power consumption. A new capacitor-sharing scheme is proposed to cancel the extra reset phase of the feedback capacitors. The non-standard inter-stage gain increases the feedback factor of the first stage and makes it equal to that of the second stage, by which the load capacitor of the op-amp shared by the first and second stages is balanced. As for the fourth stage, the capacitor and op-amp no longer scale down. From the system’s point of view, all load capacitors of the shared OTAs are balanced by employing a loading-balanced architecture. The die area and power consumption are optimized maximally. The ADC is implemented in a 0.18 μm 1P6M CMOS technology, and occupies a die area of 1.2 × 1.2 mm². The measurement results show a 55.58 dB signal-to-noise-and-distortion ratio (SNDR) and 62.97 dB spurious-free dynamic range (SFDR) with a 25 MHz input operating at a 200 MS/s sampling rate. The proposed ADC consumes 115 mW at 200 MS/s from a 1.8 V supply.
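From the reported SNDR, the effective resolution and a Walden-style energy figure of merit follow from the standard formulas (this back-of-the-envelope check is ours, not from the paper):

```python
# ENOB and Walden figure-of-merit from the reported measurements
# (standard textbook formulas; input values taken from the abstract).
sndr_db = 55.58          # measured SNDR at 200 MS/s, 25 MHz input
power_w = 115e-3         # 115 mW from a 1.8 V supply
fs = 200e6               # sampling rate, S/s

enob = (sndr_db - 1.76) / 6.02        # effective number of bits
fom = power_w / (2**enob * fs)        # Walden FoM, J per conversion-step

print(round(enob, 2))                 # ~8.94 effective bits (10 bit ADC)
print(round(fom * 1e12, 2))           # ~1.17 pJ per conversion-step
```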
Optimal powering schemes for legged robotics
NASA Astrophysics Data System (ADS)
Muench, Paul; Bednarz, David; Czerniak, Gregory P.; Cheok, Ka C.
2010-04-01
Legged Robots have tremendous mobility, but they can also be very inefficient. These inefficiencies can be due to suboptimal control schemes, among other things. If your goal is to get from point A to point B in the least amount of time, your control scheme will be different from if your goal is to get there using the least amount of energy. In this paper, we seek a balance between these extremes by looking at both efficiency and speed. We model a walking robot as a rimless wheel, and, using Pontryagin's Maximum Principle (PMP), we find an "on-off" control for the model, and describe the switching curve between these control extremes.
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue
This paper proposes a chaotic secure video remote communication scheme that can operate on real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received with the given IP address and port in an Android smartphone. It should be noted that this is the first time chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of hardware implementation of secure video communication are successfully solved, reaching a balance amongst sufficient security level, real-time processing of massive video data, and utilization of available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile facilities in the future.
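For flavor, chaos-based stream encryption of frame bytes can be as simple as XORing with a logistic-map keystream. This is a didactic sketch only, NOT the paper's scheme and not cryptographically secure:

```python
# Didactic logistic-map XOR stream cipher -- NOT the paper's scheme and
# not secure; it only illustrates the idea of chaos-based encryption.

def keystream(n, x0=0.678, r=3.99):
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)             # logistic map iteration
        out.append(int(x * 256) & 0xFF)   # quantize state to a byte
    return bytes(out)

def crypt(data, x0=0.678):
    ks = keystream(len(data), x0)
    return bytes(a ^ b for a, b in zip(data, ks))

frame = b"\xff\xd8 fake JPEG frame payload \xff\xd9"  # hypothetical frame
cipher = crypt(frame)
print(crypt(cipher) == frame)   # True: XOR keystream is its own inverse
```

The initial condition `x0` plays the role of the shared key; because XOR with the same keystream is self-inverse, the receiver decrypts by running the identical map.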
A low-cost, tunable laser lock without laser frequency modulation
NASA Astrophysics Data System (ADS)
Shea, Margaret E.; Baker, Paul M.; Gauthier, Daniel J.
2015-05-01
Many experiments in optical physics require laser frequency stabilization. This can be achieved by locking to an atomic reference using saturated absorption spectroscopy. Often, the laser frequency is modulated and phase sensitive detection used. This method, while well-proven and robust, relies on expensive components, can introduce an undesirable frequency modulation into the laser, and is not easily frequency tuned. Here, we report a simple locking scheme similar to those implemented previously. We modulate the atomic resonances in a saturated absorption setup with an AC magnetic field created by a single solenoid. The same coil applies a DC field that allows tuning of the lock point. We use an auto-balanced detector to make our scheme more robust against laser power fluctuations and stray magnetic fields. The coil, its driver, and the detector are home-built with simple, cheap components. Our technique is low-cost, simple to setup, tunable, introduces no laser frequency modulation, and only requires one laser. We gratefully acknowledge the financial support of the NSF through Grant # PHY-1206040.
Galerkin methods for Boltzmann-Poisson transport with reflection conditions on rough boundaries
NASA Astrophysics Data System (ADS)
Morales Escalante, José A.; Gamba, Irene M.
2018-06-01
We consider in this paper the mathematical and numerical modeling of reflective boundary conditions (BC) associated to Boltzmann-Poisson systems, including diffusive reflection in addition to specularity, in the context of electron transport in semiconductor device modeling at nano scales, and their implementation in Discontinuous Galerkin (DG) schemes. We study these BC on the physical boundaries of the device and develop a numerical approximation to model an insulating boundary condition, or equivalently, a pointwise zero flux mathematical condition for the electron transport equation. Such a condition balances the incident and reflective momentum flux at the microscopic level, pointwise at the boundary, in the case of a more general mixed reflection with momentum-dependent specularity probability p(k⃗). We compare the computational prediction of physical observables given by the numerical implementation of these different reflection conditions in our DG scheme for BP models, and observe that the diffusive condition influences the kinetic moments over the whole domain in position space.
Cooperative Position Aware Mobility Pattern of AUVs for Avoiding Void Zones in Underwater WSNs.
Javaid, Nadeem; Ejaz, Mudassir; Abdul, Wadood; Alamri, Atif; Almogren, Ahmad; Niaz, Iftikhar Azim; Guizani, Nadra
2017-03-13
In this paper, we propose two schemes: position-aware mobility pattern (PAMP) and cooperative PAMP (Co-PAMP). The first is an optimization scheme that avoids void hole occurrence and minimizes the uncertainty in the position estimation of gliders. The second is a cooperative routing scheme that reduces the packet drop ratio by using relay cooperation. Both techniques use gliders that stay at sojourn positions for a predefined time; at each sojourn position, self-confidence (s-confidence) and neighbor-confidence (n-confidence) regions are estimated for balanced energy consumption. The transmission power of a glider is adjusted according to these confidence regions. Simulation results show that our proposed schemes outperform the compared existing one in terms of packet delivery ratio, void zones and energy consumption.
Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation
Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal
2017-01-01
This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods. PMID:28425946
2002-06-01
Achievement of Internal Customer Objectives A Graduate Management Project Submitted to The Residency Committee In Candidacy for the Degree of Masters in...internal customer relations, the GPRMC has incorporated use of a Balanced Scorecard within its management scheme. The scorecard serves as a strategy map...headquarters. The goal, "Provide Policy Management, Advocacy and Problem Solving", addresses the relationship between the headquarters and its internal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
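The constant-number idea can be illustrated with a minimal unweighted variant (Smith-Matsoukas style, constant coagulation kernel): after each coagulation event, a randomly chosen particle is duplicated so the sample size stays fixed. This is a simplified sketch, not the paper's weighted GPU algorithm:

```python
import random

random.seed(1)

def coagulation_event(volumes):
    """One event of an unweighted constant-number scheme (constant
    kernel, so all pairs are equally likely): merge a random pair, then
    duplicate a randomly chosen particle so the sample size N is fixed."""
    n = len(volumes)
    i, j = random.sample(range(n), 2)
    volumes[i] += volumes[j]                    # coagulation: i absorbs j
    volumes[j] = volumes[random.randrange(n)]   # refill slot j by copying
    return volumes

v = [1.0] * 1000
for _ in range(500):
    v = coagulation_event(v)
print(len(v), sum(v) / len(v))   # N fixed at 1000; mean volume has grown
```

Keeping N constant keeps the statistical resolution of the sample fixed as the population coagulates; the paper's weighted transformations achieve the same effect while also accommodating nucleation via low-weight merging.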
Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Wang, Peng
2018-04-13
Aiming to minimize the damage caused by river chemical spills, efficient emergency material allocation is critical for actual emergency rescue decision-making with a quick response. In this study, an emergency material allocation framework based on a time-varying supply-demand constraint is developed to allocate emergency material, minimize the emergency response time, and satisfy the dynamic emergency material requirements in post-accident phases of river chemical spills. First, the theoretically critical emergency response time is obtained for the emergency material allocation system to select a series of appropriate emergency material warehouses as potential supportive centers. Then, an enumeration method is applied to identify the practically critical emergency response time and the optimum emergency material allocation and replenishment scheme. Finally, the developed framework is applied to a computational experiment based on the South-to-North Water Transfer Project in China. The results illustrate that the proposed methodology is a simple and flexible tool for appropriately allocating emergency material to satisfy time-dynamic demands during emergency decision-making. Therefore, decision-makers can identify an appropriate emergency material allocation scheme balancing time-effectiveness and cost-effectiveness under different emergency pollution conditions.
Study of travelling wave solutions for some special-type nonlinear evolution equations
NASA Astrophysics Data System (ADS)
Song, Junquan; Hu, Lan; Shen, Shoufeng; Ma, Wen-Xiu
2018-07-01
The tanh-function expansion method has been improved and used to construct travelling wave solutions of the form U = \sum_{j=0}^{n} a_j \tanh^j ξ for some special-type nonlinear evolution equations, which have a variety of physical applications. The positive integer n can be determined by balancing the highest order linear term with the nonlinear term in the evolution equations. We improve the tanh-function expansion method with n = 0 by introducing a new transform U = -W'(ξ)/W^2. A nonlinear wave equation with source terms, and mKdV-type equations, are considered in order to show the effectiveness of the improved scheme. We also propose the tanh-function expansion method in implicit function form, and apply it to a Harry Dym-type equation as an example.
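The balancing step can be made concrete: substituting U ~ tanh^n ξ, a term U^m differentiated k times has leading tanh power m·n + k, so n follows from equating the leading powers of the highest-order linear term and the nonlinear term. A small helper (our illustration, with KdV and mKdV as worked examples):

```python
from fractions import Fraction

def balance_n(linear, nonlinear):
    """Each term is (m, k): U**m differentiated k times. Substituting
    U ~ tanh(xi)**n, its leading tanh power is m*n + k; solve
    m1*n + k1 = m2*n + k2 for the balancing exponent n."""
    (m1, k1), (m2, k2) = linear, nonlinear
    return Fraction(k1 - k2, m2 - m1)

# KdV  u_t + 6 u u_x + u_xxx = 0:  u_xxx is (1, 3), u u_x is (2, 1)
print(balance_n((1, 3), (2, 1)))   # 2
# mKdV u_t + 6 u^2 u_x + u_xxx = 0: nonlinear term u^2 u_x is (3, 1)
print(balance_n((1, 3), (3, 1)))   # 1
```

A non-positive or fractional result signals that the plain power-series ansatz fails, which is exactly the n = 0 situation the paper's new transform U = -W'(ξ)/W^2 is designed to handle.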
Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids
NASA Astrophysics Data System (ADS)
Ma, Xinrong; Duan, Zhijian
2018-04-01
High-order Discontinuous Galerkin finite element methods (DGFEM) are known to be effective for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand considerable computational resources. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme is used to improve the computational efficiency of DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. To keep the load balanced across processors, a domain decomposition method is employed. Numerical experiments were performed for inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicate that our parallel algorithm achieves significant speedup and efficiency, making it suitable for computing complex flows.
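Load balancing for domain decomposition can be as simple as a longest-processing-time greedy assignment of grid blocks to processors. The sketch below is our illustration of that generic idea, not the paper's partitioner:

```python
import heapq

def lpt_partition(costs, n_procs):
    """Longest-processing-time greedy: hand each block (heaviest first)
    to the currently least-loaded processor."""
    heap = [(0.0, p) for p in range(n_procs)]      # (load, processor id)
    parts = [[] for _ in range(n_procs)]
    for idx in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, p = heapq.heappop(heap)
        parts[p].append(idx)
        heapq.heappush(heap, (load + costs[idx], p))
    return parts

costs = [5.0, 3.0, 8.0, 2.0, 7.0, 4.0, 6.0, 1.0]   # per-block work units
parts = lpt_partition(costs, 4)
loads = [sum(costs[i] for i in p) for p in parts]
print(sorted(loads))   # [9.0, 9.0, 9.0, 9.0] -- perfectly balanced here
```

Production mesh partitioners additionally minimize the inter-processor communication surface, not just the per-processor work; LPT only captures the load-balance half of the problem.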
NASA Astrophysics Data System (ADS)
Chen, Yong-fei; Gao, Hong-xia; Wu, Zi-ling; Kang, Hui
2018-01-01
Compressed sensing (CS) has achieved great success in removing a single type of noise. However, it cannot efficiently restore images contaminated with mixed noise. This paper introduces nonlocal similarity and cosparsity, inspired by compressed sensing, to overcome the difficulties in mixed noise removal, in which nonlocal similarity explores the signal sparsity from similar patches, and cosparsity assumes that the signal is sparse after a possibly redundant transform. Meanwhile, an adaptive scheme is designed to keep the balance between mixed noise removal and detail preservation based on local variance. Finally, IRLSM and RACoSaMP are adopted to solve the objective function. Experimental results demonstrate that the proposed method is superior to conventional CS methods, like K-SVD, and to the state-of-the-art nonlocally centralized sparse representation (NCSR) method, in terms of both visual results and quantitative measures.
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
Zhang, Liping; Zhu, Shaohui; Tang, Shanyu
2017-03-01
Telecare medicine information systems (TMIS) provide flexible and convenient e-health care. However, the medical records transmitted in TMIS are exposed to unsecured public networks, so TMIS are more vulnerable to various types of security threats and attacks. To provide privacy protection for TMIS, a secure and efficient authenticated key agreement scheme is urgently needed to protect the sensitive medical data. Recently, Mishra et al. proposed a biometrics-based authenticated key agreement scheme for TMIS using hash functions and nonces; they claimed that their scheme could eliminate the security weaknesses of Yan et al.'s scheme and provide dynamic identity protection and user anonymity. In this paper, however, we demonstrate that Mishra et al.'s scheme suffers from replay and man-in-the-middle attacks and fails to provide perfect forward secrecy. To overcome the weaknesses of Mishra et al.'s scheme, we then propose a three-factor authenticated key agreement scheme that enables the patient to enjoy remote healthcare services via TMIS with privacy protection. Chaotic map-based cryptography is employed in the proposed scheme to achieve a delicate balance of security and performance. Security analysis demonstrates that the proposed scheme resists various attacks and provides several attractive security properties. Performance evaluation shows that the proposed scheme increases efficiency in comparison with other related schemes.
NASA Astrophysics Data System (ADS)
Najarbashi, G.; Mirzaei, S.
2016-03-01
Multi-mode entangled coherent states are important resources for linear optics quantum computation and teleportation. Here we introduce the generalized balanced N-mode coherent states, which can be recast in the multi-qudit case. The necessary and sufficient condition for bi-separability of such balanced N-mode coherent states is found. We particularly focus on pure and mixed multi-qubit- and multi-qutrit-like states and examine the degree of bipartite as well as tripartite entanglement using the concurrence measure. Unlike the N-qubit case, it is shown that there are qutrit states violating the monogamy inequality. Using parity and displacement operators and beam splitters, we propose a scheme for generating balanced N-mode entangled coherent states with an even number of terms in the superposition.
A national medical register: balancing public transparency and professional privacy.
Healy, Judith M; Maffi, Costanza L; Dugdale, Paul
2008-02-18
The first aim of a medical registration scheme should be to protect patients. Medical registration boards currently offer variable information to the public on doctors' registration status. Current reform proposals for a national registration scheme should include free public access to professional profiles of registered medical practitioners. Practitioner profiles should include: practitioner's full name and practice address; type of qualifications; year first registered, and duration and type of registration; any conditions on registration and practice; any disciplinary action taken; and participation in continuing professional education.
A Simple Encryption Algorithm for Quantum Color Image
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya
2017-06-01
In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, with 8 qubits per channel value. Then, each of these 24 qubits is transformed from a basis state into a balanced superposition state by employing controlled rotation gates. At this point, the R, G, B gray-scale values of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. Experimental results on a classical computer show that the proposed encryption scheme offers better security.
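The per-qubit balancing step can be illustrated with a tiny amplitude calculation (plain Python, not the paper's NEQR circuit; the rotation angle π/2 is the choice that maps either basis state to a balanced superposition):

```python
import math

def ry(theta, amp0, amp1):
    """Apply a single-qubit rotation about the y-axis to the amplitudes
    (amp0, amp1) of the state amp0|0> + amp1|1>."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c * amp0 - s * amp1, s * amp0 + c * amp1)

# One bit of a pixel's 8-bit channel value starts in a basis state
# (|0> or |1>); a rotation by pi/2 maps either one into a balanced
# superposition, so a measurement yields 0 or 1 with equal probability.
for amps in ((1.0, 0.0), (0.0, 1.0)):
    a0, a1 = ry(math.pi / 2, *amps)
    print(round(a0 * a0, 3), round(a1 * a1, 3))  # 0.5 0.5 for both inputs
```

With all 24 qubits of a pixel rotated this way, every one of the 2^24 channel-value combinations is equally likely on measurement, which is why the measured image looks like uniform white noise.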
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of the preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved.
For the convection-diffusion equations for all state functions in the integrated models we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of the chemical substance concentrations and possess the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and by Integration Projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, pp. 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing the monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
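The integrating-factor construction described above can be sketched for a single species (a hypothetical scalar dc/dt = P - D·c with the production and destruction terms frozen over the step; the abstract applies the technique inside local adjoint problems for full reaction sets):

```python
import math

def integrating_factor_step(c, P, D, dt):
    """One discrete-analytical step for dc/dt = P - D*c, a hypothetical
    scalar stand-in for one species after the production/destruction
    operator split.  Multiplying by the integrating factor exp(D*t) and
    integrating exactly (with P, D frozen over the step) gives an update
    that preserves positivity for any dt, with no Jacobian to assemble
    or invert."""
    e = math.exp(-D * dt)
    return c * e + (P / D) * (1.0 - e)

# A stiff step (D*dt = 100): the analytical update stays positive, while
# an explicit Euler step overshoots far below zero.
c = 1.0
print(integrating_factor_step(c, P=0.0, D=100.0, dt=1.0))  # tiny but positive
print(c + 1.0 * (0.0 - 100.0 * c))                         # Euler: -99.0
```

The update is exact for frozen coefficients and relaxes to the steady state P/D for large dt, which is the positivity and monotonicity property the abstract emphasizes.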
NASA Astrophysics Data System (ADS)
Fernández-Nieto, E. D.; Garres-Díaz, J.; Mangeney, A.; Narbona-Reina, G.
2018-03-01
We present here numerical modelling of granular flows with the μ(I) rheology in confined channels. The contribution is twofold: (i) a model to approximate the Navier-Stokes equations with the μ(I) rheology through an asymptotic analysis; under the hypothesis of a one-dimensional flow, this model takes into account sidewall friction; (ii) a multilayer discretization following Fernández-Nieto et al. (2016) [20]. In this new numerical scheme, we propose an appropriate treatment of the rheological terms through a hydrostatic reconstruction which allows the scheme to be well-balanced and therefore to deal with dry areas. Based on academic tests, we first evaluate the influence of the width of the channel on the normal profiles of the downslope velocity, thanks to the multilayer approach that is intrinsically able to describe changes from Bagnold to S-shaped (and vice versa) velocity profiles. We also check the well-balanced property of the proposed numerical scheme. We show that approximating sidewall friction using single-layer models may lead to strong errors. Secondly, we compare the numerical results with experimental data on granular collapses. We show that the proposed scheme allows us to qualitatively reproduce the deposit in the case of a rigid bed (i.e. dry area) and that the error made by replacing the dry area by a small layer of material may be large if this layer is not thin enough. The proposed model is also able to reproduce the time evolution of the free surface and of the flow/no-flow interface. In addition, it reproduces the effect of erosion for granular flows over initially static material lying on the bed. This is possible when using a variable friction coefficient μ(I) but not with a constant friction coefficient.
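The hydrostatic-reconstruction idea behind such well-balanced schemes can be sketched on the simplest relevant case: a first-order 1D shallow-water scheme with a Rusanov flux (an Audusse-type reconstruction, not the paper's multilayer μ(I) model), which preserves the lake-at-rest state exactly:

```python
import math

g = 9.81  # gravity

def flux(hl, ql, hr, qr):
    # Rusanov (local Lax-Friedrichs) numerical flux for 1D shallow water.
    def phys(h, q):
        v = q / h if h > 0 else 0.0
        return q, q * v + 0.5 * g * h * h
    def smax(h, q):
        v = q / h if h > 0 else 0.0
        return abs(v) + math.sqrt(g * h)
    fl, fr = phys(hl, ql), phys(hr, qr)
    a = max(smax(hl, ql), smax(hr, qr))
    return (0.5 * (fl[0] + fr[0]) - 0.5 * a * (hr - hl),
            0.5 * (fl[1] + fr[1]) - 0.5 * a * (qr - ql))

def step(h, q, z, dx, dt):
    # One first-order step with hydrostatic reconstruction of the interface
    # depths; copy (outflow) ghost cells close the domain.
    H, Q, Z = [h[0]] + h + [h[-1]], [q[0]] + q + [q[-1]], [z[0]] + z + [z[-1]]
    dH, dQ = [0.0] * len(H), [0.0] * len(H)
    for i in range(len(H) - 1):
        zs = max(Z[i], Z[i + 1])                 # interface bottom level
        hl = max(0.0, H[i] + Z[i] - zs)          # reconstructed left depth
        hr = max(0.0, H[i + 1] + Z[i + 1] - zs)  # reconstructed right depth
        ql = hl * (Q[i] / H[i]) if H[i] > 0 else 0.0
        qr = hr * (Q[i + 1] / H[i + 1]) if H[i + 1] > 0 else 0.0
        fh, fq = flux(hl, ql, hr, qr)
        # the pressure corrections below are what make the scheme well-balanced
        dH[i] -= fh
        dQ[i] -= fq + 0.5 * g * (H[i] * H[i] - hl * hl)
        dH[i + 1] += fh
        dQ[i + 1] += fq + 0.5 * g * (H[i + 1] * H[i + 1] - hr * hr)
    r = dt / dx
    return ([h[k] + r * dH[k + 1] for k in range(len(h))],
            [q[k] + r * dQ[k + 1] for k in range(len(q))])

# Lake at rest over a smooth bump: h + z = 1, q = 0 should be preserved.
n, dx = 50, 0.1
z = [0.2 * math.exp(-((k * dx - 2.5) ** 2)) for k in range(n)]
h = [1.0 - zk for zk in z]
q = [0.0] * n
for _ in range(100):
    h, q = step(h, q, z, dx, dt=0.01)
print(max(abs(hk + zk - 1.0) for hk, zk in zip(h, z)))  # ~ round-off level
print(max(abs(qk) for qk in q))                         # ~ round-off level
```

Because the reconstructed depths on both sides of each interface coincide whenever the free surface is flat, the flux and pressure-correction terms cancel exactly and no spurious flow is generated over the bump.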
Parameter regionalization of a monthly water balance model for the conterminous United States
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2016-01-01
A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
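The goodness-of-fit measure quoted above is easy to state in code (illustrative Python with made-up runoff values, not data from the study):

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the model error sum
    of squares to the spread of the observations about their mean.
    1.0 is a perfect fit; 0.0 is no better than predicting the mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

# Hypothetical monthly runoff (mm) at one streamgage, not study data.
obs = [10.0, 30.0, 25.0, 5.0, 12.0]
sim = [12.0, 28.0, 24.0, 7.0, 10.0]
print(round(nse(sim, obs), 3))  # 0.962
```

A median value of 0.76 over 1575 streamgages, as reported above, therefore means the calibrated model typically explains a large share of the observed runoff variability.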
Zhang, Liping; Zhu, Shaohui
2015-05-01
To protect the transmission of sensitive medical data, a secure and efficient authenticated key agreement scheme should be deployed when a healthcare delivery session is established via Telecare Medicine Information Systems (TMIS) over an unsecured public network. Recently, Islam and Khan proposed an authenticated key agreement scheme using elliptic curve cryptography for TMIS. They claimed that their proposed scheme is provably secure against various attacks in the random oracle model and enjoys some good properties such as user anonymity. In this paper, however, we point out that any legal but malicious patient can reveal other users' identities. Consequently, their scheme suffers from server spoofing attacks and off-line password guessing attacks. Moreover, if the malicious patient registers at the same time as other users, she can further launch impersonation, man-in-the-middle, modification, replay, and strong replay attacks successfully. To eliminate these weaknesses, we propose an improved ECC-based authenticated key agreement scheme. Security analysis demonstrates that the proposed scheme can resist various attacks and enables the patient to enjoy remote healthcare services with privacy protection. Performance evaluation shows that the proposed scheme achieves a desired balance between security and performance in comparison with other related schemes.
Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang
2016-02-19
Collaborative strategies for mobile sensor nodes ensure the efficiency and robustness of data processing while limiting the required communication bandwidth. To solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting the weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of the energy load among the mobile sensor nodes in each round and make the best use of battery energy. A detailed theoretical analysis and experimental results reveal that the proposed protocol is an energy-efficient collaborative strategy in which the sensor nodes can communicate with a fusion center and achieve high power gain.
Cooperative Position Aware Mobility Pattern of AUVs for Avoiding Void Zones in Underwater WSNs
Javaid, Nadeem; Ejaz, Mudassir; Abdul, Wadood; Alamri, Atif; Almogren, Ahmad; Niaz, Iftikhar Azim; Guizani, Nadra
2017-01-01
In this paper, we propose two schemes: a position-aware mobility pattern (PAMP) and a cooperative PAMP (Co-PAMP). The first is an optimization scheme that avoids void hole occurrence and minimizes the uncertainty in the position estimation of gliders. The second is a cooperative routing scheme that reduces the packet drop ratio by using relay cooperation. Both techniques use gliders that stay at sojourn positions for a predefined time; at each sojourn position, self-confidence (s-confidence) and neighbor-confidence (n-confidence) regions are estimated for balanced energy consumption. The transmission power of a glider is adjusted according to these confidence regions. Simulation results show that our proposed schemes outperform the existing scheme compared against in terms of packet delivery ratio, void zones and energy consumption. PMID:28335377
Design of two wheel self balancing car
NASA Astrophysics Data System (ADS)
He, Chun-hong; Ren, Bin
2018-02-01
This paper proposes a design scheme for a two-wheel self-balancing car; an integrated MPU6050 gyroscope and accelerometer constitutes the car's position detection device. The system uses a 32-bit STMicroelectronics MCU as the control core, which handles sensor-signal processing, the filtering algorithm, motion control, and human-computer interaction. Once assembly and debugging of the whole system were completed, the car could balance independently without intervention. When a suitable amount of disturbance is introduced, the car adjusts quickly and recovers its steady state. Through a Bluetooth remote-control module, the car can perform forward, backward, turn-left, and other basic actions.
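The gyroscope/accelerometer fusion such MPU6050-based balancers rely on is commonly a complementary filter; a minimal sketch (generic, with an assumed mixing constant of 0.98, not the paper's exact filter):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one tilt estimate, as
    is commonly done on MPU6050-based balancing robots (a generic sketch
    with an assumed mixing constant alpha, not the paper's exact filter).
    The integrated gyro rate tracks fast motion; the accelerometer-derived
    angle removes the gyro's slow drift."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# True tilt 5 deg, gyro biased by +1 deg/s: the fused estimate settles near
# 5 deg instead of drifting without bound as pure gyro integration would.
angle = 0.0
for _ in range(2000):
    angle = complementary_filter(angle, gyro_rate=1.0, accel_angle=5.0, dt=0.01)
print(round(angle, 2))  # ~5.49 (small steady offset from the gyro bias)
```

The filtered angle would then feed a balance controller (typically PID) that drives the wheel motors.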
NASA Astrophysics Data System (ADS)
Hamdi, R.; Schayes, G.
2005-07-01
Martilli's urban parameterization scheme is improved and implemented in a mesoscale model in order to take into account the typical effects of a real city on the air temperature near the ground and on the surface exchange fluxes. The mesoscale model is run on a single column using atmospheric data and radiation recorded above roof level as forcing. Here, the authors validate Martilli's urban boundary layer scheme using measurements from two mid-latitude European cities: Basel, Switzerland and Marseilles, France. For Basel, the model performance is evaluated with observations of canyon temperature, surface radiation, and energy balance fluxes obtained during the Basel urban boundary layer experiment (BUBBLE). The results show that the urban parameterization scheme is able to reproduce the generation of the Urban Heat Island (UHI) effect over the urban area and represents correctly most of the behavior of the fluxes typical of the city center of Basel, including the large heat uptake by the urban fabric and the positive sensible heat flux at night. For Marseilles, the model performance is evaluated with observations of surface temperature, canyon temperature, surface radiation, and energy balance fluxes collected during the field experiments to constrain models of atmospheric pollution and transport of emissions (ESCOMPTE) and its urban boundary layer (UBL) campaign. At both urban sites, vegetation cover is less than 20%; therefore, particular attention was directed to the ability of Martilli's urban boundary layer scheme to reproduce the observations for the Marseilles city center, where the urban parameters and the synoptic forcing are totally different from Basel. Evaluation of the model with wall, road, and roof surface temperatures gave good results. The model correctly simulates the net radiation, canyon temperature, and the partitioning between the turbulent and storage heat fluxes.
NASA Astrophysics Data System (ADS)
Hamdi, R.; Schayes, G.
2007-08-01
Martilli's urban parameterization scheme is improved and implemented in a mesoscale model in order to take into account the typical effects of a real city on the air temperature near the ground and on the surface exchange fluxes. The mesoscale model is run on a single column using atmospheric data and radiation recorded above roof level as forcing. Here, the authors validate Martilli's urban boundary layer scheme using measurements from two mid-latitude European cities: Basel, Switzerland and Marseilles, France. For Basel, the model performance is evaluated with observations of canyon temperature, surface radiation, and energy balance fluxes obtained during the Basel urban boundary layer experiment (BUBBLE). The results show that the urban parameterization scheme represents correctly most of the behavior of the fluxes typical of the city center of Basel, including the large heat uptake by the urban fabric and the positive sensible heat flux at night. For Marseilles, the model performance is evaluated with observations of surface temperature, canyon temperature, surface radiation, and energy balance fluxes collected during the field experiments to constrain models of atmospheric pollution and transport of emissions (ESCOMPTE) and its urban boundary layer (UBL) campaign. At both urban sites, vegetation cover is less than 20%, therefore, particular attention was directed to the ability of Martilli's urban boundary layer scheme to reproduce the observations for the Marseilles city center, where the urban parameters and the synoptic forcing are totally different from Basel. Evaluation of the model with wall, road, and roof surface temperatures gave good results. The model correctly simulates the net radiation, canyon temperature, and the partitioning between the turbulent and storage heat fluxes.
NASA Astrophysics Data System (ADS)
Ersoy, Mehmet; Lakkis, Omar; Townsend, Philip
2016-04-01
The flow of water in rivers and oceans can, under general assumptions, be efficiently modelled using Saint-Venant's shallow water equations (SWE). The SWE form a hyperbolic system of conservation laws (HSCL) which can be derived from the incompressible Navier-Stokes equations. A common difficulty in the numerical simulation of HSCLs is the conservation of physical entropy. Work by Audusse, Bristeau, Perthame (2000) and Perthame, Simeoni (2001) proposed numerical SWE solvers known as kinetic schemes (KSs), which can be shown to have desirable entropy-consistent properties and are thus called well-balanced schemes. A KS is derived from kinetic equations that can be integrated to recover the SWE. In flood risk assessment models the SWE must be coupled with other equations describing interacting meteorological and hydrogeological phenomena such as rain and groundwater flows. The SWE must therefore be appropriately modified to accommodate source and sink terms, so the original kinetic schemes are no longer valid. While modifications of the SWE in this direction have been proposed recently, e.g., Delestre (2010), we depart from the extant literature by proposing a novel model that is "entropy-consistent" and naturally extends the SWE by respecting its kinetic formulation connections. This allows us to derive a system of partial differential equations modelling the flow of a one-dimensional river with both a precipitation term and a groundwater flow model to account for potential infiltration and recharge. We exhibit numerical simulations of the corresponding kinetic schemes. These simulations can be applied both to real-world flood prediction and to the tackling of wider issues on how climate and societal change are affecting flood risk.
Adaptive Aggregation Routing to Reduce Delay for Multi-Layer Wireless Sensor Networks.
Li, Xujing; Liu, Anfeng; Xie, Mande; Xiong, Neal N; Zeng, Zhiwen; Cai, Zhiping
2018-04-16
The quality of service (QoS) regarding delay, lifetime and reliability is the key to the application of wireless sensor networks (WSNs). Data aggregation is a method to effectively reduce the data transmission volume and improve the lifetime of a network. In previous studies, a common strategy required that data wait in a queue: when the length of the queue is greater than or equal to the predetermined aggregation threshold (Nt) or the waiting time is equal to the aggregation timer (Tt), data are forwarded, at the expense of an increase in the delay. The primary contributions of the proposed Adaptive Aggregation Routing (AAR) scheme are the following: (a) the senders select the forwarding node dynamically according to the length of the data queue, which effectively reduces the delay. In the AAR scheme, the senders send data to the nodes with a long data queue. The advantages are that, first, the nodes with a long data queue need only a small amount of additional data to perform aggregation; therefore, the transmitted data can be fully utilized to make these nodes aggregate. Second, this scheme balances the aggregating and data-sending load; thus, the lifetime increases. (b) An improved AAR scheme is proposed to improve the QoS. The aggregation deadline (Tt) and the aggregation threshold (Nt) are dynamically changed in the network. In WSNs, nodes far from the sink have residual energy because these nodes transmit less data than the other nodes. In the improved AAR scheme, the nodes far from the sink are given small values of Tt and Nt to reduce delay, and the nodes near the sink are set to large values of Tt and Nt to reduce energy consumption. Thus, the end-to-end delay is reduced, a longer lifetime is achieved, and the residual energy is fully used. Simulation results demonstrate that, compared with the previous scheme, the performance of the AAR scheme is improved: it reduces the delay by 14.91%, improves the lifetime by 30.91%, and increases energy efficiency by 76.40%.
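The threshold logic the scheme adapts can be sketched as follows (the linear hop-distance interpolation and the numeric bounds are illustrative assumptions; the abstract only specifies the trend of small Nt/Tt far from the sink and large values near it):

```python
def should_forward(queue_len, waited, N_t, T_t):
    # Threshold-based aggregation trigger: forward the aggregated packet
    # once the queue reaches N_t or the wait reaches the timer T_t.
    return queue_len >= N_t or waited >= T_t

def adaptive_thresholds(hops_to_sink, max_hops, N_max=8, T_max=4.0):
    """Distance-dependent thresholds in the spirit of the improved AAR
    scheme: far nodes (which have residual energy) get small N_t and T_t
    to cut delay, while near-sink nodes get large values to save energy.
    The linear interpolation and the bounds N_max, T_max are illustrative
    assumptions; the paper specifies only the trend."""
    closeness = 1.0 - hops_to_sink / max_hops  # 1 near the sink, 0 far away
    N_t = max(1, round(1 + closeness * (N_max - 1)))
    T_t = 0.5 + closeness * (T_max - 0.5)
    return N_t, T_t

print(adaptive_thresholds(hops_to_sink=9, max_hops=10))  # far: small N_t, T_t
print(adaptive_thresholds(hops_to_sink=1, max_hops=10))  # near: large N_t, T_t
```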
Design of horizontal-axis wind turbine using blade element momentum method
NASA Astrophysics Data System (ADS)
Bobonea, Andreea; Pricop, Mihai Victor
2013-10-01
The study of mathematical models applied to wind turbine design, principally for electrical energy generation, has become significant in recent years due to the increasing use of renewable energy sources with low environmental impact. This paper presents an alternative mathematical scheme for wind turbine design based on Blade Element Momentum (BEM) theory. The results of the BEM method depend greatly on the precision of the lift and drag coefficients. The BEM method assumes that the blade can be analyzed as a number of independent elements in the spanwise direction. The induced velocity at each element is determined by performing the momentum balance for a control volume containing the blade element. The aerodynamic forces on the element are calculated using the lift and drag coefficients from empirical two-dimensional wind tunnel test data at the geometric angle of attack (AOA) of the blade element relative to the local flow velocity.
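The per-element momentum balance reduces to a fixed-point iteration for the axial and tangential induction factors; a compact sketch (with an assumed thin-airfoil lift curve Cl = 2πα and a constant Cd standing in for tabulated wind-tunnel polars, and no tip-loss correction):

```python
import math

def bem_element(r, B, chord, twist, V, omega, iters=200):
    """Fixed-point iteration for the axial (a) and tangential (ap) induction
    factors of one blade element, via the momentum balance over the annular
    control volume.  Illustrative only: a thin-airfoil lift curve
    Cl = 2*pi*alpha and a constant Cd = 0.01 stand in for tabulated
    wind-tunnel polars, and no tip-loss correction is applied."""
    sigma = B * chord / (2.0 * math.pi * r)  # local solidity
    a, ap = 0.0, 0.0
    for _ in range(iters):
        # inflow angle from the velocity triangle at the element
        phi = math.atan2((1.0 - a) * V, (1.0 + ap) * omega * r)
        alpha = phi - twist                  # geometric angle of attack
        cl, cd = 2.0 * math.pi * alpha, 0.01
        cn = cl * math.cos(phi) + cd * math.sin(phi)  # normal force coeff.
        ct = cl * math.sin(phi) - cd * math.cos(phi)  # tangential coeff.
        a = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
    return a, ap, phi

# A mid-span element of a hypothetical 3-bladed rotor (all values invented).
a, ap, phi = bem_element(r=20.0, B=3, chord=1.5, twist=math.radians(5.0),
                         V=8.0, omega=2.0)
print(round(a, 3), round(ap, 4), round(math.degrees(phi), 2))
```

In a real design loop, the converged induction factors at each element give the local thrust and torque contributions, which are integrated along the span.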
Molloy, Kevin; Shehu, Amarda
2013-01-01
Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail, but at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically credible conformational paths connecting two functionally relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have recently been proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate to balance coverage of conformational space against progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of up to 214 amino acids with multiple known functionally relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting significantly different structural states. A detailed analysis of the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown to be effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.
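The tree-growth strategy with a goal bias can be illustrated on a 2D toy space (an RRT-style sketch; the actual method samples conformations via molecular fragment replacement and biases over a progress coordinate, not points in the plane):

```python
import math
import random

def grow_tree(start, goal, step=0.5, goal_bias=0.2, max_nodes=5000, tol=0.3):
    """Minimal RRT-style tree growth in a 2D toy space, illustrating the
    goal-bias idea: with probability goal_bias the tree is steered toward
    the goal region, otherwise toward a uniform random sample (which
    maintains coverage of the space)."""
    random.seed(1)  # deterministic demo
    nodes = [start]
    for _ in range(max_nodes):
        if random.random() < goal_bias:
            target = goal
        else:
            target = (random.uniform(-5, 5), random.uniform(-5, 5))
        near = min(nodes, key=lambda n: math.dist(n, target))  # nearest node
        d = math.dist(near, target)
        if d == 0.0:
            continue
        # extend the tree one step from the nearest node toward the target
        new = (near[0] + step * (target[0] - near[0]) / d,
               near[1] + step * (target[1] - near[1]) / d)
        nodes.append(new)
        if math.dist(new, goal) < tol:  # reached the goal region
            return nodes, True
    return nodes, False

nodes, reached = grow_tree(start=(-4.0, -4.0), goal=(4.0, 4.0))
print(reached, len(nodes))
```

Tracing parent pointers from the goal-region node back to the root would yield the path; the soft global bias studied in the paper plays the role of `goal_bias` here, trading coverage against progress.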
Sun, Hui; Zhou, Shenggao; Moore, David K; Cheng, Li-Tien; Li, Bo
2016-05-01
We design and implement numerical methods for the incompressible Stokes solvent flow and solute-solvent interface motion for nonpolar molecules in aqueous solvent. The balance of viscous force, surface tension, and van der Waals type dispersive force leads to a traction boundary condition on the solute-solvent interface. To allow the change of solute volume, we design special numerical boundary conditions on the boundary of a computational domain through a consistency condition. We use a finite difference ghost fluid scheme to discretize the Stokes equation with such boundary conditions. The method is tested to have a second-order accuracy. We combine this ghost fluid method with the level-set method to simulate the motion of the solute-solvent interface that is governed by the solvent fluid velocity. Numerical examples show that our method can predict accurately the blow up time for a test example of curvature flow and reproduce the polymodal (e.g., dry and wet) states of hydration of some simple model molecular systems.
URINE SOURCE SEPARATION AND TREATMENT: NUTRIENT RECOVERY USING LOW-COST MATERIALS
Successful completion of this P3 Project will achieve the following expected outputs: identification of low-cost materials that can effectively recover ammonium, phosphate, and potassium from urine; material balance calculations for different urine separation and treatment scheme...
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.
Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
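The self-consistency condition described above can be illustrated with a toy fixed-point iteration. In the sketch below the single-neuron simulation is replaced by a placeholder saturating map (an assumption for illustration only); what mirrors the scheme is the loop structure, in which the output spectrum is fed back as network input noise until input and output agree:

```python
import numpy as np

def self_consistent_spectrum(gain, s_ext, n_freq=64, n_iter=200, tol=1e-10):
    """Toy fixed-point iteration for a self-consistent power spectrum.

    Each neuron's output spectrum feeds back as part of the network noise:
        S_in  = s_ext + gain * S_out
        S_out = G(S_in)
    where G is here a simple saturating map standing in for the
    single-neuron simulation used in the actual scheme.
    """
    s = np.ones(n_freq)                      # initial guess for S_out(f)
    for _ in range(n_iter):
        s_in = s_ext + gain * s              # recurrent input spectrum
        s_new = s_in / (1.0 + s_in)          # placeholder single-neuron map
        if np.max(np.abs(s_new - s)) < tol:  # input/output spectra agree
            return s_new
        s = s_new
    return s
```

In the heterogeneous version of the scheme, one such loop would run per neuron class, with the classes coupled through the shared network noise.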
Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang
2016-01-01
Collaborative strategies for mobile sensor nodes ensure the efficiency and the robustness of data processing, while limiting the required communication bandwidth. In order to solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of energy load among the mobile sensor nodes in each round, and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy efficient collaborative strategy such that the sensor nodes can communicate with a fusion center and produce high power gain. PMID:26907285
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast-developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
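Static process distribution of the kind described above can be sketched with the classic longest-processing-time (LPT) greedy rule: sort tasks by estimated cost and always hand the next one to the least-loaded worker. This is a generic illustration of static load balancing, not the authors' specific scheduler:

```python
import heapq

def balance_tasks(task_costs, n_workers):
    """Greedy longest-processing-time (LPT) assignment: a simple static
    load-balancing sketch for heterogeneous reconstruction workloads."""
    loads = [(0.0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(loads)
    assignment = {w: [] for w in range(n_workers)}
    for cost in sorted(task_costs, reverse=True):  # largest task first
        load, w = heapq.heappop(loads)             # least-loaded worker
        assignment[w].append(cost)
        heapq.heappush(loads, (load + cost, w))
    return assignment
```

A dynamic task-queue scheme replaces the up-front sort with workers pulling tasks as they finish, which copes better with the varying machine performances the abstract mentions.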
ARM - Midlatitude Continental Convective Clouds
Jensen, Mike; Bartholomew, Mary Jane; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-19
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital for improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have been traditionally used by modelers for evaluating and improving parameterization schemes.
ARM - Midlatitude Continental Convective Clouds (comstock-hvps)
Jensen, Mike; Comstock, Jennifer; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-06
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital for improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have been traditionally used by modelers for evaluating and improving parameterization schemes.
Relaxation approximations to second-order traffic flow models by high-resolution schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-03-10
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
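A minimal sketch of the relaxation idea, using the classical Jin-Xin system as a stand-in for the traffic models (the paper uses fifth-order WENO reconstructions; a first-order upwind update keeps the structure visible here). Exactly as the abstract notes, only the flux function and the relaxation speed a are needed:

```python
import numpy as np

def jin_xin_step(u, v, flux, a, dx, dt):
    """One first-order step of the Jin-Xin relaxation scheme for
    u_t + f(u)_x = 0 in the stiff (relaxed) limit.

    The relaxation system  u_t + v_x = 0,  v_t + a*u_x = (f(u) - v)/eps
    diagonalizes into w+- = v +- c*u moving at speeds +-c (c = sqrt(a)),
    each of which is upwinded; the stiff source is handled by projecting
    v onto the equilibrium f(u) (the eps -> 0 limit). Periodic BCs.
    """
    c = np.sqrt(a)
    wp = v + c * u                                      # right-moving wave
    wm = v - c * u                                      # left-moving wave
    wp_new = wp - c * dt / dx * (wp - np.roll(wp, 1))   # backward difference
    wm_new = wm + c * dt / dx * (np.roll(wm, -1) - wm)  # forward difference
    u_new = (wp_new - wm_new) / (2.0 * c)
    v_new = flux(u_new)                                 # relax to equilibrium
    return u_new, v_new
```

With linear advection f(u) = u and a = 1, the subcharacteristic condition holds with equality and the update reduces to exact upwinding at CFL number 1.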
Hardware-assisted software clock synchronization for homogeneous distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.
1990-01-01
A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
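The flavor of such software synchronization can be sketched with the classic fault-tolerant midpoint rule: discard the f highest and f lowest remote clock readings, then take the midpoint of the survivors. This is a textbook primitive for tolerating up to f faulty clocks, not necessarily the exact correction function of the proposed scheme:

```python
def fault_tolerant_midpoint(readings, f):
    """Fault-tolerant averaging sketch: drop the f largest and f smallest
    clock readings, then return the midpoint of the remaining range.
    Tolerates up to f arbitrarily faulty clocks among the readings."""
    s = sorted(readings)
    trimmed = s[f:len(s) - f]           # survivors after trimming extremes
    return (trimmed[0] + trimmed[-1]) / 2.0
```

Each node would apply this to the clock values it collects in a round and slew its local clock toward the result.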
Using fuzzy models in machining control system and assessment of sustainability
NASA Astrophysics Data System (ADS)
Grinek, A. V.; Boychuk, I. P.; Dantsevich, I. M.
2018-03-01
A fuzzy model describing the complex relationship between the optimum machining velocity and the temperature-strength state in the cutting zone is proposed. The fuzzy-logic inference determines a processing speed that keeps the temperature in the cutting zone and the cutting force within limits that ensure the quality of the surface layer. A scheme for stabilizing the temperature-strength state in the cutting zone using a nonlinear fuzzy PD controller is proposed. The stability of the nonlinear system is estimated with the help of a grapho-analytical realization of the method of harmonic balance and by modeling in MATLAB.
Microwave photonic link with improved phase noise using a balanced detection scheme
NASA Astrophysics Data System (ADS)
Hu, Jingjing; Gu, Yiying; Tan, Wengang; Zhu, Wenwu; Wang, Linghua; Zhao, Mingshan
2016-07-01
A microwave photonic link (MPL) with improved phase noise performance using a dual-output Mach-Zehnder modulator (DP-MZM) and balanced detection is proposed and experimentally demonstrated. The fundamental concept of the approach is based on the two complementary outputs of the DP-MZM and the destructive combination of the photocurrents in a balanced photodetector (BPD). Theoretical analysis is performed to numerically evaluate the additive phase noise performance and shows good agreement with the experiment. Experimental results are presented for 4 GHz, 8 GHz and 12 GHz transmission links, and an 11 dB improvement in phase noise performance at 10 MHz offset is achieved compared to the conventional intensity-modulation and direct-detection (IMDD) MPL.
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Pelivanov, Ivan; Buma, Takashi; Xia, Jinjun; Wei, Chen-Wei; O'Donnell, Matthew
2014-01-01
Laser ultrasonic (LU) inspection represents an attractive, non-contact method to evaluate composite materials. Current non-contact systems, however, have relatively low sensitivity compared to contact piezoelectric detection. They are also difficult to adjust, very expensive, and strongly influenced by environmental noise. Here, we demonstrate that most of these drawbacks can be eliminated by combining a new generation of compact, inexpensive fiber lasers with new developments in fiber telecommunication optics and an optimally designed balanced probe scheme. In particular, a new type of balanced fiber-optic Sagnac interferometer is presented as part of an all-optical LU pump-probe system for non-destructive testing and evaluation of aircraft composites. The performance of the LU system is demonstrated on a composite sample with known defects. Wide-band ultrasound probe signals are generated directly at the sample surface with a pulsed fiber laser delivering nanosecond laser pulses at a repetition rate of up to 76 kHz with a pulse energy of 0.6 mJ. A balanced fiber-optic Sagnac interferometer is employed to detect pressure signals at the same point on the composite surface. A- and B-scans obtained with the Sagnac interferometer are compared to those made with a contact wide-band polyvinylidene fluoride transducer. PMID:24737921
Exergetic analysis of autonomous power complex for drilling rig
NASA Astrophysics Data System (ADS)
Lebedev, V. A.; Karabuta, V. S.
2017-10-01
The article considers the issue of increasing the energy efficiency of the power equipment of a drilling rig. At present, diverse types of power plants are used in power supply systems. When designing and choosing a power plant, one of the main criteria is its energy efficiency, the main indicator traditionally being the effective efficiency factor calculated by the method of thermal balances. The article suggests instead using the exergy method to determine energy efficiency, which allows the degree of thermodynamic perfection of the system to be assessed both in relative terms (the exergetic efficiency factor) and in absolute terms, illustrated here for a gas turbine plant. An exergetic analysis of a gas turbine plant operating in a simple scheme was carried out using the program WaterSteamPro. Exergy losses in the equipment elements are calculated.
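The relative and absolute estimates mentioned above reduce to two short formulas: the specific flow exergy relative to the dead state, e = (h - h0) - T0(s - s0), and the exergetic efficiency as useful exergy out over exergy supplied. A minimal sketch (the state values in the test are illustrative, not taken from the article):

```python
def flow_exergy(h, h0, s, s0, t0=288.15):
    """Specific flow exergy (kJ/kg) relative to the dead state (h0, s0)
    at ambient temperature t0 (K):  e = (h - h0) - T0 * (s - s0)."""
    return (h - h0) - t0 * (s - s0)

def exergetic_efficiency(exergy_out, exergy_in):
    """Relative measure of thermodynamic perfection: useful exergy
    delivered divided by exergy supplied to the component."""
    return exergy_out / exergy_in
```

Summing (exergy_in - exergy_out) over the plant's components gives the exergy-loss breakdown the article computes for the gas turbine equipment.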
NASA Astrophysics Data System (ADS)
Lee, Sang Jun
Autonomous structural health monitoring (SHM) systems using active sensing devices have been studied extensively to diagnose the current state of aerospace, civil infrastructure and mechanical systems in near real-time and aim to eventually reduce life-cycle costs by replacing current schedule-based maintenance with condition-based maintenance. This research develops four schemes for SHM applications: (1) a simple and reliable PZT transducer self-sensing scheme; (2) a smart PZT self-diagnosis scheme; (3) an instantaneous reciprocity-based PZT diagnosis scheme; and (4) an effective PZT transducer tuning scheme. First, this research develops a PZT transducer self-sensing scheme, which is a necessary condition to accomplish a PZT transducer self-diagnosis. The main advantages of the proposed self-sensing approach are its simplicity and adaptability. The necessary hardware is only an additional self-sensing circuit which includes a minimum of electric components. With this circuit, the self-sensing parameters can be calibrated instantaneously in the presence of changing operational and environmental conditions of the system. In particular, this self-sensing scheme focuses on estimating the mechanical response in the time domain for the subsequent applications of PZT transducer self-diagnosis and tuning with guided wave propagation. The most significant challenge of this self-sensing comes from the fact that the magnitude of the mechanical response is generally several orders of magnitude smaller than that of the input signal. The proposed self-sensing scheme fully takes advantage of the fact that any user-defined input signal can be applied to a host structure and the input waveform is known. The performance of the proposed self-sensing scheme is demonstrated by theoretical analysis, numerical simulations and various experiments. Second, this research proposes a smart PZT transducer self-diagnosis scheme based on the developed self-sensing scheme.
Conventionally, the capacitance change of the PZT wafer is monitored to identify the abnormal PZT condition because the capacitance of the PZT wafer is linearly proportional to its size and also related to the bonding condition. However, temperature variation is another primary factor that affects the PZT capacitance. To ensure the reliable transducer self-diagnosis, two different self-diagnosis features are proposed to differentiate two main PZT wafer defects, i.e., PZT debonding and PZT cracking, from temperature variations and structural damages. The PZT debonding is identified using two indices based on time reversal process (TRP) without any baseline data. Also, the PZT cracking is identified by monitoring the change of the generated Lamb wave power ratio index with respect to the driving frequency. The uniqueness of this self-diagnosis scheme is that the self-diagnosis features can differentiate the PZT defects from environmental variations and structural damages. Therefore, it is expected to minimize false-alarms which are induced by operational or environmental variations as well as structural damages. The applicability of the proposed self-diagnosis scheme is verified by theoretical analysis, numerical simulations, and experimental tests. Third, a new methodology of guided wave-based PZT transducer diagnosis is developed to identify PZT transducer defects without using prior baseline data. This methodology can be applied when a number of same-size PZT transducers are attached to a target structure to form a sensor network. The advantage of the proposed technique is that abnormal PZT transducers among intact PZT transducers can be detected even when the system being monitored is subjected to varying operational and environmental conditions or changing structural conditions. To achieve this goal, the proposed diagnosis technique utilizes the linear reciprocity of guided wave propagation between a pair of surface-bonded PZT transducers. 
Finally, a PZT transducer tuning scheme is developed for selective Lamb wave excitation and sensing. This is useful for structural damage detection based on Lamb wave propagation because choosing the proper transducer size and the corresponding input frequency is crucial for selective Lamb wave excitation and sensing. The circular PZT response model is derived, and the energy balance is included for a better prediction of the PZT responses because the existing PZT response models do not consider any energy balance between Lamb wave modes. In addition, two calibration methods are also suggested in order to model the PZT responses more accurately by considering a bonding layer effect. (Abstract shortened by UMI.)
Warscher, M; Strasser, U; Kraller, G; Marke, T; Franz, H; Kunstmann, H
2013-05-01
Runoff generation in Alpine regions is typically affected by snow processes. Snow accumulation, storage, redistribution, and ablation control the availability of water. In this study, several robust parameterizations describing snow processes in Alpine environments were implemented in a fully distributed, physically based hydrological model. Snow cover development is simulated using methods of increasing complexity, from a simple temperature index approach, to an energy balance scheme, to additionally accounting for gravitational and wind-driven lateral snow redistribution. The test site for the study is the Berchtesgaden National Park (Bavarian Alps, Germany), which is characterized by extreme topography and climate conditions. The performance of the model system in reproducing snow cover dynamics and resulting discharge generation is analyzed and validated via measurements of snow water equivalent and snow depth, satellite-based remote sensing data, and runoff gauge data. Model efficiency (the Nash-Sutcliffe coefficient) for simulated runoff increases from 0.57 to 0.68 in a high Alpine headwater catchment and from 0.62 to 0.64 in total with increasing snow model complexity. In particular, the results show that the introduction of the energy balance scheme reproduces daily fluctuations in the snowmelt rates that trace down to the channel stream. These daily cycles measured in snowmelt and resulting runoff rates could not be reproduced by using the temperature index approach. In addition, accounting for lateral snow transport changes the seasonal distribution of modeled snowmelt amounts, which leads to a higher accuracy in modeling runoff characteristics.
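The simplest member of the model hierarchy above, the temperature-index (degree-day) approach, fits in a few lines; the energy-balance scheme that replaces it in the study resolves the daily melt cycles this version cannot. Parameter values below are illustrative, not calibrated:

```python
def simulate_swe(temps, precip, swe0=0.0, ddf=3.0, t_snow=0.0):
    """Track snow water equivalent (mm) with a degree-day ablation term.

    temps  : daily mean air temperature (deg C)
    precip : daily precipitation (mm); treated as snow when t <= t_snow
             (rain-on-snow is ignored in this sketch)
    ddf    : degree-day factor (mm / deg C / day)
    """
    swe = swe0
    series = []
    for t, p in zip(temps, precip):
        if t <= t_snow:
            swe += p                                  # accumulate as snow
        melt = min(swe, ddf * max(t - t_snow, 0.0))   # degree-day ablation
        swe -= melt
        series.append(swe)
    return series
```

An energy-balance scheme would replace the `ddf * max(t - t_snow, 0)` term with the net of radiative, turbulent, and ground heat fluxes, which is what introduces the sub-daily melt variability the abstract reports.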
Hirsch, Philipp Emanuel; Schillinger, Sebastian; Weigt, Hannes; Burkhardt-Holm, Patricia
2014-01-01
Water level fluctuations in lakes lead to shoreline displacement. The seasonality of flooding or beaching of the littoral area affects nutrient cycling, redox gradients in sediments, and life cycles of aquatic organisms. Despite the ecological importance of water level fluctuations, we still lack a method that assesses water levels in the context of hydropower operations. Water levels in reservoirs are influenced by the operator of a hydropower plant, who discharges water through the turbines or stores water in the reservoir, in a fashion that maximizes profit. This rationale governs the seasonal operation scheme and hence determines the water levels within the boundaries of the reservoir's water balance. For progress towards a sustainable development of hydropower, the benefits of this form of electricity generation have to be weighed against the possible detrimental effects of the anthropogenic water level fluctuations. We developed a hydro-economic model that combines an economic optimization function with hydrological estimators of the water balance of a reservoir. Applying this model allowed us to accurately predict water level fluctuations in a reservoir. The hydro-economic model also allowed for scenario calculation of how water levels change with climate change scenarios and with a change in operating scheme of the reservoir (increase in turbine capacity). Further model development will enable the consideration of a variety of additional parameters, such as water withdrawal for irrigation, drinking water supply, or altered energy policies. This advances our ability to sustainably manage water resources that must meet both economic and environmental demands.
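The profit-driven operation described above can be caricatured with a simple threshold policy: release at turbine capacity when the electricity price is high, store otherwise, always respecting the storage balance V(t+1) = V(t) + inflow - release. A real hydro-economic model would solve an optimization problem; this sketch only illustrates how the economics couple to the reservoir water balance (all numbers hypothetical):

```python
def simulate_operation(inflows, prices, v0, v_max, r_max, price_thresh):
    """Threshold operating policy for a reservoir.

    Returns (storage volumes, releases, total revenue). Release is r_max
    when price >= price_thresh, else zero; releases are capped by the
    available water and storage is capped at v_max (excess spills).
    """
    v, volumes, releases, revenue = v0, [], [], 0.0
    for q, p in zip(inflows, prices):
        r = r_max if p >= price_thresh else 0.0
        r = min(r, v + q)                # cannot release more than stored
        v = min(v + q - r, v_max)        # storage balance, spill at v_max
        volumes.append(v)
        releases.append(r)
        revenue += p * r                 # revenue proportional to release
    return volumes, releases, revenue
```

The resulting `volumes` series is exactly the water-level trajectory whose ecological consequences (shoreline displacement, littoral flooding and beaching) the study assesses.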
Reintroducing radiometric surface temperature into the Penman-Monteith formulation
NASA Astrophysics Data System (ADS)
Mallick, Kaniska; Boegh, Eva; Trebs, Ivonne; Alfieri, Joseph G.; Kustas, William P.; Prueger, John H.; Niyogi, Dev; Das, Narendra; Drewry, Darren T.; Hoffmann, Lucien; Jarvis, Andrew J.
2015-08-01
Here we demonstrate a novel method to physically integrate radiometric surface temperature (TR) into the Penman-Monteith (PM) formulation for estimating the terrestrial sensible and latent heat fluxes (H and λE) in the framework of a modified Surface Temperature Initiated Closure (STIC). It combines TR data with standard energy balance closure models to derive a hybrid scheme that does not require parameterization of the surface (or stomatal) and aerodynamic conductances (gS and gB). STIC is formed by the simultaneous solution of four state equations, and it uses TR as an additional data source for retrieving the "near surface" moisture availability (M) and the Priestley-Taylor coefficient (α). The performance of STIC is tested using high-temporal-resolution TR observations collected from different international surface energy flux experiments in conjunction with corresponding net radiation (RN), ground heat flux (G), air temperature (TA), and relative humidity (RH) measurements. A comparison of the STIC outputs with the eddy covariance measurements of λE and H revealed RMSDs of 7-16% and 40-74% in half-hourly λE and H estimates. These statistics were 5-13% and 10-44% in daily λE and H. The errors and uncertainties in both surface fluxes are comparable to the models that typically use land surface parameterizations for determining the unobserved components (gS and gB) of surface energy balance models. However, the scheme is simpler, is capable of generating spatially explicit surface energy fluxes, and is independent of submodels for boundary layer development. This article was corrected on 27 AUG 2015.
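For reference, the standard PM combination equation that STIC builds on, with the explicit conductances g_a and g_s that STIC avoids parameterizing, is easy to write down: λE = (Δ(Rn - G) + ρ c_p D g_a) / (Δ + γ(1 + g_a/g_s)). The default constants below are rough illustrative values, not those used in the study:

```python
def penman_monteith_le(rn, g, vpd, ga, gs,
                       rho=1.2, cp=1005.0, gamma=0.67, delta=1.45):
    """Classic Penman-Monteith latent heat flux (W m-2).

    rn, g        : net radiation and ground heat flux (W m-2)
    vpd          : vapour pressure deficit (hPa)
    ga, gs       : aerodynamic and surface conductances (m s-1)
    rho, cp      : air density (kg m-3) and specific heat (J kg-1 K-1)
    gamma, delta : psychrometric constant and slope of the saturation
                   vapour pressure curve (both hPa K-1)
    """
    num = delta * (rn - g) + rho * cp * vpd * ga
    den = delta + gamma * (1.0 + ga / gs)
    return num / den
```

STIC's contribution is to close this equation with TR-derived state equations instead of land-surface parameterizations of ga and gs.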
Aerostructural Level Set Topology Optimization for a Common Research Model Wing
NASA Technical Reports Server (NTRS)
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2014-01-01
The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.
Assimilation of snow covered area information into hydrologic and land-surface models
Clark, M.P.; Slater, A.G.; Barrett, A.P.; Hay, L.E.; McCabe, G.J.; Rajagopalan, B.; Leavesley, G.H.
2006-01-01
This paper describes a data assimilation method that uses observations of snow covered area (SCA) to update hydrologic model states in a mountainous catchment in Colorado. The assimilation method uses SCA information as part of an ensemble Kalman filter to alter the sub-basin distribution of snow as well as the basin water balance. This method permits an optimal combination of model simulations and observations, as well as propagation of information across model states. Sensitivity experiments are conducted with a fairly simple snowpack/water-balance model to evaluate effects of the data assimilation scheme on simulations of streamflow. The assimilation of SCA information results in minor improvements in the accuracy of streamflow simulations near the end of the snowmelt season. The small effect from SCA assimilation is initially surprising. It can be explained both because a substantial portion of snow melts before any bare ground is exposed, and because the transition from 100% to 0% snow coverage occurs fairly quickly. Both of these factors are basin-dependent. Satellite SCA information is expected to be most useful in basins where snow cover is ephemeral. The data assimilation strategy presented in this study improved the accuracy of the streamflow simulation, indicating that SCA is a useful source of independent information that can be used as part of an integrated data assimilation strategy. © 2005 Elsevier Ltd. All rights reserved.
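The ensemble Kalman filter update at the heart of the assimilation scheme can be sketched for a scalar observation. SCA would enter through the observation operator; the toy below uses a trivial operator and synthetic numbers, and is a generic perturbed-observation EnKF rather than the authors' implementation:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """Perturbed-observation EnKF update for a scalar observation.

    ensemble     : (n_state, n_members) array of model states
                   (e.g. SWE per sub-basin)
    obs          : scalar observation (e.g. snow covered area)
    obs_operator : maps one state vector to a predicted observation
    """
    n = ensemble.shape[1]
    hx = np.array([obs_operator(ensemble[:, i]) for i in range(n)])
    x_mean = ensemble.mean(axis=1, keepdims=True)
    hx_mean = hx.mean()
    p_xh = (ensemble - x_mean) @ (hx - hx_mean) / (n - 1)  # state-obs cov
    p_hh = np.var(hx, ddof=1) + obs_var                    # innovation var
    k = p_xh / p_hh                                        # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n) # perturbed obs
    return ensemble + np.outer(k, perturbed - hx)
```

Because the gain is built from the cross-covariance p_xh, a single SCA observation updates every correlated state, which is the "propagation of information across model states" the abstract describes.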
Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method
2016-01-01
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys.2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
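Step one of the two-step scheme, the inexpensive Metropolis draw on the intrinsic pKa, follows directly from the Henderson-Hasselbalch free energy ΔG = kT ln(10)(pH - pKa) for protonation (kT = 1 below); only if this flip is accepted would the costly neMD switching trajectory be attempted. A sketch under those assumptions, not the authors' code:

```python
import math
import random

def attempt_protonation(is_protonated, ph, pka, rng=random):
    """Metropolis MC move on the intrinsic pKa (step one of the scheme).

    Free-energy change for protonation: dG = ln(10) * (pH - pKa) in kT
    units, with the opposite sign for deprotonation. Detailed balance
    gives the Henderson-Hasselbalch protonated fraction at equilibrium:
    f = 1 / (1 + 10**(pH - pKa)).
    """
    sign = -1.0 if is_protonated else 1.0          # deprotonate vs protonate
    dg = sign * math.log(10.0) * (ph - pka)
    if dg <= 0.0 or rng.random() < math.exp(-dg):  # Metropolis criterion
        return not is_protonated                   # accepted: flip state
    return is_protonated                           # rejected: keep state
```

Running this chain alone (no neMD step) reproduces the titration curve of an isolated site, which is a useful sanity check before coupling in the switching trajectories.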
Modeling the Surface Energy Balance of the Core of an Old Mediterranean City: Marseille.
NASA Astrophysics Data System (ADS)
Lemonsu, A.; Grimmond, C. S. B.; Masson, V.
2004-02-01
The Town Energy Balance (TEB) model, which parameterizes the local-scale energy and water exchanges between urban surfaces and the atmosphere by treating the urban area as a series of urban canyons, coupled to the Interactions between Soil, Biosphere, and Atmosphere (ISBA) scheme, was run in offline mode for Marseille, France. TEB's performance is evaluated with observations of surface temperatures and surface energy balance fluxes collected during the field experiments to constrain models of atmospheric pollution and transport of emissions (ESCOMPTE) urban boundary layer (UBL) campaign. Particular attention was directed to the influence of different surface databases, used for input parameters, on model predictions. Comparison of simulated canyon temperatures with observations resulted in improvements to TEB parameterizations by increasing the ventilation. Evaluation of the model with wall, road, and roof surface temperatures gave good results. The model succeeds in simulating a sensible heat flux larger than heat storage, as observed. A sensitivity comparison using generic dense city parameters, derived from the Coordination of Information on the Environment (CORINE) land cover database, and those from a surface database developed specifically for the Marseille city center shows the importance of correctly documenting the urban surface. Overall, the TEB scheme is shown to be fairly robust, consistent with results from previous studies.
NASA Astrophysics Data System (ADS)
Huang, Yong; Ryou, Jae-Hyun; Dupuis, Russell D.; Zuo, Daniel; Kesler, Benjamin; Chuang, Shun-Lien; Hu, Hefei; Kim, Kyou-Hyun; Ting Lu, Yen; Hsieh, K. C.; Zuo, Jian-Min
2011-07-01
We propose and demonstrate strain-balanced InAs/GaSb type-II superlattices (T2SLs) grown on InAs substrates employing GaAs-like interfacial (IF) layers by metalorganic chemical vapor deposition (MOCVD) for effective strain management, simplified growth scheme, improved materials crystalline quality, and reduced substrate absorption. The in-plane compressive strain from the GaSb layers in the T2SLs on the InAs was completely balanced by the GaAs-like IF layers formed by controlled precursor carry-over and anion exchange effects, avoiding the use of complicated IF layers and precursor switching schemes that were used for the MOCVD growth of T2SLs on GaSb. An infrared (IR) p-i-n photodiode structure with 320-period InAs/GaSb T2SLs on InAs was grown and the fabricated devices show improved performance characteristics with a peak responsivity of ~1.9 A/W and a detectivity of ~6.78 × 10⁹ Jones at 8 μm at 78 K. In addition, the InAs buffer layer and substrate show a lower IR absorption coefficient than GaSb substrates in most of the mid- and long-IR spectral range.
Evaporation rate of nucleating clusters.
Zapadinsky, Evgeni
2011-11-21
The Becker-Döring kinetic scheme is the most frequently used approach to vapor-liquid nucleation. In the present study it has been extended so that master equations for all cluster configurations are taken into consideration. In the Becker-Döring kinetic scheme the nucleation rate is calculated through comparison of the balanced steady state and unbalanced steady state solutions of the set of kinetic equations. It is usually assumed that the balanced steady state produces the equilibrium cluster distribution, and that the evaporation rates are identical in the balanced and unbalanced steady state cases. In the present study we have shown that the evaporation rates are not identical in the equilibrium and unbalanced steady state cases. The evaporation rate depends on the number of clusters at the limit of the cluster definition. We have shown that the ratio of the number of n-clusters at the limit of the cluster definition to the total number of n-clusters differs between the equilibrium and unbalanced steady state cases. This causes a difference in evaporation rates for these cases and results in a correction factor to the nucleation rate. According to a rough estimate, it is of the order of 10⁻¹ and can be lower if the carrier gas effectively equilibrates the clusters. The developed approach allows one to refine the correction factor with Monte Carlo and molecular dynamics simulations.
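For context, the classical Becker-Döring steady-state rate is J = [Σ_n 1/(f_n N_n^eq)]⁻¹, with attachment rates f_n and equilibrium cluster concentrations N_n^eq. A toy sketch with an assumed classical-nucleation-theory work of formation and made-up parameter values (not this study's), to which a correction factor of the kind derived here would apply:

```python
import math

def bd_nucleation_rate(S, theta=10.0, N1=1.0e20, f1=1.0e6, n_max=200):
    """Steady-state Becker-Doring nucleation rate with toy CNT inputs.

    dG_n / kT = theta * n**(2/3) - n * ln(S) is the classical work of
    cluster formation (theta: dimensionless surface term, S: saturation
    ratio); f_n ~ f1 * n**(2/3) is the monomer attachment rate.
    """
    total = 0.0
    for n in range(1, n_max):
        dG = theta * n ** (2.0 / 3.0) - n * math.log(S)
        Neq = N1 * math.exp(-dG)        # equilibrium n-cluster concentration
        fn = f1 * n ** (2.0 / 3.0)      # attachment rate to an n-cluster
        total += 1.0 / (fn * Neq)
    return 1.0 / total
```

The sum is dominated by clusters near the top of the free-energy barrier, and raising the saturation ratio lowers the barrier and raises the rate.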
NASA Astrophysics Data System (ADS)
Roy, A.; Royer, A.; Montpetit, B.; Bartlett, P. A.; Langlois, A.
2012-12-01
Snow grain size is a key parameter for modeling microwave snow emission properties and the surface energy balance because of its influence on the snow albedo, thermal conductivity and diffusivity. A model of the specific surface area (SSA) of snow was implemented in the one-layer snow model in the Canadian LAnd Surface Scheme (CLASS) version 3.4. This offline multilayer model (CLASS-SSA) simulates the decrease of SSA based on snow age, snow temperature and the temperature gradient under dry snow conditions, whereas it considers the liquid water content for wet snow metamorphism. We compare the model with ground-based measurements from several sites (alpine, Arctic and sub-Arctic) with different types of snow. The model provides simulated SSA in good agreement with measurements, with an overall point-to-point comparison RMSE of 8.1 m² kg⁻¹, and an RMSE of 4.9 m² kg⁻¹ for the snowpack average SSA. The model, however, is limited under wet conditions due to the single-layer nature of the CLASS model, leading to a single liquid water content value for the whole snowpack. The SSA simulations are of great interest for satellite passive microwave brightness temperature assimilations, snow mass balance retrievals and surface energy balance calculations with associated climate feedbacks.
Velasco-Tapia, Fernando
2014-01-01
Magmatic processes have usually been identified and evaluated using qualitative or semiquantitative geochemical or isotopic tools based on a restricted number of variables. However, a more complete and quantitative view could be reached applying multivariate analysis, mass balance techniques, and statistical tests. As an example, in this work a statistical and quantitative scheme is applied to analyze the geochemical features of the Sierra de las Cruces (SC) volcanic range (Mexican Volcanic Belt). In this locality, the volcanic activity (3.7 to 0.5 Ma) was dominantly dacitic, but the presence of spheroidal andesitic enclaves and/or diverse disequilibrium features in the majority of lavas confirms the operation of magma mixing/mingling. New discriminant-function-based multidimensional diagrams were used to discriminate the tectonic setting. Statistical tests of discordancy and significance were applied to evaluate the influence of the subducting Cocos plate, which seems to be rather negligible for the SC magmas in relation to several major and trace elements. A cluster analysis following Ward's linkage rule was carried out to classify the SC volcanic rocks into geochemical groups. Finally, two mass-balance schemes were applied for the quantitative evaluation of the proportion of the end-member components (dacitic and andesitic magmas) in the comingled lavas (binary mixtures).
Processing lunar soils for oxygen and other materials
NASA Technical Reports Server (NTRS)
Knudsen, Christian W.; Gibson, Michael A.
1992-01-01
Two types of lunar materials are excellent candidates for lunar oxygen production: ilmenite and silicates such as anorthite. Both are lunar surface minable, occurring in soils, breccias, and basalts. Because silicates are considerably more abundant than ilmenite, they may be preferred as source materials. Depending on the processing method chosen for oxygen production and the feedstock material, various useful metals and bulk materials can be produced as byproducts. Available processing techniques include hydrogen reduction of ilmenite and electrochemical and chemical reductions of silicates. Processes in these categories are generally in preliminary development stages and need significant research and development support to carry them to practical deployment, particularly as a lunar-based operation. The goal of beginning lunar processing operations by 2010 requires that planning and research and development emphasize the simplest processing schemes. However, more complex schemes that now appear to present difficult technical challenges may offer more valuable metal byproducts later. While they require more time and effort to perfect, the more complex or difficult schemes may provide important processing and product improvements with which to extend and elaborate the initial lunar processing facilities. A balanced R&D program should take this into account. The following topics are discussed: (1) ilmenite--semi-continuous process; (2) ilmenite--continuous fluid-bed reduction; (3) utilization of spent ilmenite to produce bulk materials; (4) silicates--electrochemical reduction; and (5) silicates--chemical reduction.
Experiments with a three-dimensional statistical objective analysis scheme using FGGE data
NASA Technical Reports Server (NTRS)
Baker, Wayman E.; Bloom, Stephen C.; Woollen, John S.; Nestler, Mark S.; Brin, Eugenia
1987-01-01
A three-dimensional (3D), multivariate, statistical objective analysis scheme (referred to as optimum interpolation or OI) has been developed for use in numerical weather prediction studies with the FGGE data. Some novel aspects of the present scheme include: (1) a multivariate surface analysis over the oceans, which employs an Ekman balance instead of the usual geostrophic relationship, to model the pressure-wind error cross correlations, and (2) the capability to use an error correlation function which is geographically dependent. A series of 4-day data assimilation experiments are conducted to examine the importance of some of the key features of the OI in terms of their effects on forecast skill, as well as to compare the forecast skill using the OI with that utilizing a successive correction method (SCM) of analysis developed earlier. For the three cases examined, the forecast skill is found to be rather insensitive to varying the error correlation function geographically. However, significant differences are noted between forecasts from a two-dimensional (2D) version of the OI and those from the 3D OI, with the 3D OI forecasts exhibiting better forecast skill. The 3D OI forecasts are also more accurate than those from the SCM initial conditions. The 3D OI with the multivariate oceanic surface analysis was found to produce forecasts which were slightly more accurate, on the average, than a univariate version.
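The statistical (OI) analysis step can be illustrated with the standard multivariate update, x_a = x_b + BHᵀ(HBHᵀ + R)⁻¹(y − Hx_b). A minimal NumPy sketch in generic textbook notation, not the FGGE-era operational configuration:

```python
import numpy as np

def oi_analysis(xb, B, H, y, R):
    """One multivariate optimum-interpolation (OI) update.

    xb: background state (n,); B: background error covariance (n, n);
    H: observation operator (m, n); y: observations (m,);
    R: observation error covariance (m, m).
    """
    # Gain matrix weighting observations against the background.
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)
```

For a scalar state observed directly with equal background and observation error variances, the analysis falls exactly halfway between background and observation.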
Some results on numerical methods for hyperbolic conservation laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang Huanan.
1989-01-01
This dissertation contains some results on the numerical solutions of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions; otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolutions of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time-dependent problems.
A Comprehensive Study of Data Collection Schemes Using Mobile Sinks in Wireless Sensor Networks
Khan, Abdul Waheed; Abdullah, Abdul Hanan; Anisi, Mohammad Hossein; Bangash, Javed Iqbal
2014-01-01
Recently sink mobility has been exploited in numerous schemes to prolong the lifetime of wireless sensor networks (WSNs). Contrary to traditional WSNs where sensory data from the sensor field is ultimately sent to a static sink, mobile sink-based approaches alleviate energy-hole issues, thereby facilitating balanced energy consumption among nodes. In mobility scenarios, nodes need to keep track of the latest location of mobile sinks for data delivery. However, frequent propagation of sink topological updates undermines the energy conservation goal and therefore should be controlled. Furthermore, controlled propagation of sinks' topological updates affects the performance of routing strategies, thereby increasing data delivery latency and reducing packet delivery ratios. This paper presents a taxonomy of various data collection/dissemination schemes that exploit sink mobility. Based on how sink mobility is exploited in the sensor field, we classify existing schemes into three classes, namely path constrained, path unconstrained, and controlled sink mobility-based schemes. We also organize existing schemes based on their primary goals and provide a comparative study to aid readers in selecting the appropriate scheme in accordance with their particular intended applications and network dynamics. Finally, we conclude our discussion with the identification of some unresolved issues in pursuit of data delivery to a mobile sink. PMID:24504107
Autonomous distributed self-organization for mobile wireless sensor networks.
Wen, Chih-Yu; Tang, Hung-Kai
2009-01-01
This paper presents an adaptive combined-metrics-based clustering scheme for mobile wireless sensor networks, which manages the mobile sensors by utilizing the hierarchical network structure and allocates network resources efficiently. A local criterion is used to help mobile sensors form a new cluster or join a current cluster. The messages transmitted during hierarchical clustering are applied to choose distributed gateways such that communication for adjacent clusters and distributed topology control can be achieved. In order to balance the load among clusters and govern the topology change, a cluster reformation scheme using localized criteria is implemented. The proposed scheme is simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithm provides efficient network topology management and achieves high scalability in mobile sensor networks.
Lin, Chung-Ho; Lerch, Robert N.; Thurman, E. Michael; Garrett, Harold E.; George, Milon F.
2002-01-01
Balance (isoxaflutole, IXF) belongs to a new family of herbicides referred to as isoxazoles. IXF has a very short soil half-life (<24 h), degrading to a biologically active diketonitrile (DKN) metabolite that is more polar and considerably more stable. Further degradation of the DKN metabolite produces a nonbiologically active benzoic acid (BA) metabolite. Analytical methods using solid phase extraction followed by high-performance liquid chromatography−UV (HPLC-UV) or high-performance liquid chromatography−mass spectrometry (HPLC-MS) were developed for the analysis of IXF and its metabolites in distilled deionized water and ground water samples. To successfully detect and quantify the BA metabolite by HPLC-UV from ground water samples, a sequential elution scheme was necessary. Using HPLC-UV, the mean recoveries from sequential elution of the parent and its two metabolites from fortified ground water samples ranged from 68.6 to 101.4%. For HPLC-MS, solid phase extraction of ground water samples was performed using a polystyrene divinylbenzene polymer resin. The mean HPLC-MS recoveries of the three compounds from ground water samples spiked at 0.05−2 μg/L ranged from 100.9 to 110.3%. The limits of quantitation for HPLC-UV are approximately 150 ng/L for IXF, 100 ng/L for DKN, and 250 ng/L for BA. The limit of quantitation by HPLC-MS is 50 ng/L for each compound. The methods developed in this work can be applied to determine the transport and fate of Balance in the environment.
Lin, Chung-Ho; Lerch, Robert N; Thurman, E Michael; Garrett, Harold E; George, Milon F
2002-10-09
Balance (isoxaflutole, IXF) belongs to a new family of herbicides referred to as isoxazoles. IXF has a very short soil half-life (<24 h), degrading to a biologically active diketonitrile (DKN) metabolite that is more polar and considerably more stable. Further degradation of the DKN metabolite produces a nonbiologically active benzoic acid (BA) metabolite. Analytical methods using solid phase extraction followed by high-performance liquid chromatography-UV (HPLC-UV) or high-performance liquid chromatography-mass spectrometry (HPLC-MS) were developed for the analysis of IXF and its metabolites in distilled deionized water and ground water samples. To successfully detect and quantify the BA metabolite by HPLC-UV from ground water samples, a sequential elution scheme was necessary. Using HPLC-UV, the mean recoveries from sequential elution of the parent and its two metabolites from fortified ground water samples ranged from 68.6 to 101.4%. For HPLC-MS, solid phase extraction of ground water samples was performed using a polystyrene divinylbenzene polymer resin. The mean HPLC-MS recoveries of the three compounds from ground water samples spiked at 0.05-2 microg/L ranged from 100.9 to 110.3%. The limits of quantitation for HPLC-UV are approximately 150 ng/L for IXF, 100 ng/L for DKN, and 250 ng/L for BA. The limit of quantitation by HPLC-MS is 50 ng/L for each compound. The methods developed in this work can be applied to determine the transport and fate of Balance in the environment.
Reduced Stress Tensor and Dissipation and the Transport of Lamb Vector
NASA Technical Reports Server (NTRS)
Wu, Jie-Zhi; Zhou, Ye; Wu, Jian-Ming
1996-01-01
We develop a methodology to ensure that the stress tensor, regardless of its number of independent components, can be reduced to an exactly equivalent one which has the same number of independent components as the surface force. It is applicable to the momentum balance if the shear viscosity is constant. A direct application of this method to the energy balance also leads to a reduction of the dissipation rate of kinetic energy. Following this procedure, significant savings in analysis and computation may be achieved. For turbulent flows, this strategy immediately implies that a given Reynolds stress model can always be replaced by a reduced one before putting it into computation. Furthermore, we show how the modeling of the Reynolds stress tensor can be reduced to that of the mean turbulent Lamb vector alone, which is much simpler. As a first step of this alternative modeling development, we derive the governing equations for the Lamb vector and its square. These equations form a basis for new second-order closure schemes that, we believe, compare favorably with the traditional Reynolds stress transport equation.
Parameter regionalization of a monthly water balance model for the conterminous United States
NASA Astrophysics Data System (ADS)
Bock, A. R.; Hay, L. E.; McCabe, G. J.; Markstrom, S. L.; Atkinson, R. D.
2015-09-01
A parameter regionalization scheme to transfer parameter values and model uncertainty information from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe Efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
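The Nash-Sutcliffe efficiency used to score the calibrations is straightforward to compute: NSE = 1 − Σ(oᵢ − sᵢ)² / Σ(oᵢ − ō)². A minimal sketch (the function name is illustrative):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed series.

    NSE = 1 is a perfect fit; NSE = 0 means the simulation explains no
    more variance than simply predicting the observed mean.
    """
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

A median NSE of 0.76 across 1575 streamgages therefore indicates that most calibrated regions capture a large share of the observed monthly runoff variance.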
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies are tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
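The imbalance described above can be quantified as parallel efficiency: mean rank load divided by maximum rank load. A minimal 1-D sketch comparing a naive equal-cell block partition against a work-balanced contiguous partition; the cell weights and partitioners are illustrative, not the NGA code's:

```python
def parallel_efficiency(loads):
    """Fraction of compute actually used: mean rank load / max rank load."""
    return sum(loads) / (len(loads) * max(loads))

def block_partition(weights, nranks):
    """Equal-cell-count blocks of a 1-D structured grid (no balancing)."""
    n = len(weights)
    bounds = [round(i * n / nranks) for i in range(nranks + 1)]
    return [sum(weights[bounds[i]:bounds[i + 1]]) for i in range(nranks)]

def balanced_partition(weights, nranks):
    """Greedy contiguous split targeting equal work per rank."""
    target = sum(weights) / nranks
    loads, acc = [], 0.0
    for i, w in enumerate(weights):
        acc += w
        remaining = len(weights) - i - 1
        if (acc >= target and len(loads) < nranks - 1
                and remaining >= nranks - 1 - len(loads)):
            loads.append(acc)
            acc = 0.0
    loads.append(acc)
    return loads
```

With interface cells weighted ten times heavier than bulk cells, the block partition leaves most ranks idle while the interface-heavy ranks finish, whereas the work-balanced split recovers nearly full efficiency.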
NASA Astrophysics Data System (ADS)
Wu, Guocan; Zheng, Xiaogu; Dan, Bo
2016-04-01
Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate the soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experimental results illustrate that adaptive forecast error inflation reduces the analysis error, and that the proper inflation layer can be selected based on the -2 log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual at a small cost in assimilation accuracy. The scheme can potentially be applied to the assimilation of remote sensing data.
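Forecast error inflation is commonly implemented as a multiplicative rescaling of ensemble anomalies about the mean. A minimal sketch assuming a generic ensemble formulation (the study estimates the inflation factor adaptively from a -2 log-likelihood of the innovations, which is not reproduced here):

```python
import numpy as np

def inflate_ensemble(X, lam):
    """Multiplicative inflation of ensemble anomalies.

    X: (n_members, n_state) ensemble. Anomalies are scaled by lam, so the
    sample covariance is scaled by lam**2 while the mean is unchanged.
    """
    xbar = X.mean(axis=0)
    return xbar + lam * (X - xbar)
```

Inflating the spread this way counteracts the systematic underestimation of forecast error variance that otherwise makes the filter overconfident in its background.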
NASA Astrophysics Data System (ADS)
Magda, Danièle; de Sainte Marie, Christine; Plantureux, Sylvain; Agreil, Cyril; Amiaud, Bernard; Mestelan, Philippe; Mihout, Sarah
2015-11-01
Current agri-environmental schemes for reconciling agricultural production with biodiversity conservation are proving ineffective Europe-wide, increasing interest in results-based schemes (RBSs). We describe here the French "Flowering Meadows" competition, rewarding the "best agroecological balance" in semi-natural grasslands managed by livestock farmers. This competition, which was entered by about a thousand farmers in 50 regional nature parks between 2007 and 2014, explicitly promotes a new style of agri-environmental scheme focusing on an ability to reach the desired outcome rather than adherence to prescriptive management rules. Building on our experience in the design and monitoring of the competition, we argue that the cornerstone of successful RBSs is a collective learning process in which the reconciliation of agriculture and environment is reconsidered in terms of synergistic relationships between agricultural and ecological functioning. We present the interactive, iterative process by which we defined an original method for assessing species-rich grasslands in agroecological terms. This approach was based on the integration of new criteria, such as flexibility, feeding value, and consistency of use, into the assessment of forage production performance and the consideration of biodiversity conservation through its functional role within the grassland ecosystem, rather than simply noting the presence or abundance of species. We describe the adaptation of this methodology on the basis of competition feedback, to bring about a significant shift in the conventional working methods of agronomists and conservationists (including researchers). The potential and efficacy of RBSs for promoting ecologically sound livestock systems are discussed in the concluding remarks in relation to the ecological intensification debate.
Vectorized schemes for conical potential flow using the artificial density method
NASA Technical Reports Server (NTRS)
Bradley, P. F.; Dwoyer, D. L.; South, J. C., Jr.; Keen, J. M.
1984-01-01
A method is developed to determine solutions to the full-potential equation for steady supersonic conical flow using the artificial density method. Various update schemes used generally for transonic potential solutions are investigated. The schemes are compared for speed and robustness. All versions of the computer code have been vectorized and are currently running on the CYBER-203 computer. The update schemes are vectorized, where possible, either fully (explicit schemes) or partially (implicit schemes). Since each version of the code differs only by the update scheme and elements other than the update scheme are completely vectorizable, comparisons of computational effort and convergence rate among schemes are a measure of the specific scheme's performance. Results are presented for circular and elliptical cones at angle of attack for subcritical and supercritical crossflows.
Experiments in balance with a 2D one-legged hopping machine
NASA Astrophysics Data System (ADS)
Raibert, M. H.; Brown, H. B., Jr.
1984-03-01
The ability to balance is important to the mobility obtained by legged creatures found in nature, and may someday lead to versatile legged vehicles. In order to study the role of balance in legged locomotion and to develop appropriate control strategies, a 2D hopping machine was constructed for experimentation. The machine has one leg on which it hops and runs, making balance a prime consideration. Control of the machine's locomotion was decomposed into three separate parts: a vertical height control part, a horizontal velocity part, and an angular attitude control part. Experiments showed that the three part control scheme, while very simple to implement, was powerful enough to permit the machine to hop in place, to run at a desired rate, to translate from place to place, and to leap over obstacles. Results from modeling and computer simulation of a similar one-legged device are described by Raibert (1983).
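The three-part decomposition can be sketched as three independent control laws in the spirit of Raibert's machine: fixed thrust for hopping height, foot placement relative to the neutral point for forward speed, and a hip servo for body attitude. The gains and thrust magnitude below are illustrative placeholders, not the machine's actual values:

```python
def raibert_control(xdot, xdot_des, pitch, pitch_rate, stance_T,
                    k_xdot=0.05, k_p=1.0, k_v=0.1, thrust0=0.02):
    """Three-part hopping controller (illustrative gains).

    Returns (thrust, x_foot, hip_torque):
      1) vertical height: a fixed thrust injected during stance,
      2) forward speed: foot placed at the neutral point xdot*T/2 plus a
         correction proportional to the speed error,
      3) attitude: PD hip torque servoing body pitch during stance.
    """
    thrust = thrust0                                           # height control
    x_foot = xdot * stance_T / 2 + k_xdot * (xdot - xdot_des)  # speed control
    hip_torque = -k_p * pitch - k_v * pitch_rate               # attitude control
    return thrust, x_foot, hip_torque
```

At the desired speed with a level body, the foot lands exactly at the neutral point and the hip torque vanishes, so the machine simply hops in place in its moving frame.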
2013-01-01
Background Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories at great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other.
Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158
Seven Measures of the Ways That Deciders Frame Their Career Decisions.
ERIC Educational Resources Information Center
Cochran, Larry
1983-01-01
Illustrates seven different measures of the ways people structure a career decision. Given sets of occupational alternatives and considerations, the career grid is a decisional balance sheet that indicates the way each occupation is judged on each consideration. It can be used to correct faulty decision schemes. (JAC)
LHCb trigger streams optimization
NASA Astrophysics Data System (ADS)
Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.
2017-10-01
The LHCb experiment stores around 10¹¹ collision events per year. A typical physics analysis deals with a final sample of up to 10⁷ events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams, in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to data to be recorded in 2017.
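The size-balancing half of the stream-composition problem can be sketched with a longest-processing-time greedy heuristic: assign each line, largest first, to the currently smallest stream. The line names and sizes are illustrative, and the actual optimization must also group lines read by the same analyses into the same stream:

```python
import heapq

def balance_streams(line_sizes, n_streams):
    """Greedy LPT assignment of preselection lines to output streams.

    line_sizes: {line_name: size}. Returns a sorted list of
    (stream_load, stream_index, member_lines) tuples.
    """
    # Min-heap keyed on current stream load (index breaks ties).
    heap = [(0, i, []) for i in range(n_streams)]
    heapq.heapify(heap)
    for name, size in sorted(line_sizes.items(), key=lambda kv: -kv[1]):
        load, i, members = heapq.heappop(heap)
        members.append(name)
        heapq.heappush(heap, (load + size, i, members))
    return sorted(heap)
```

Balancing the stream sizes this way bounds the largest stream a sequential-access job must scan, which is one component of the overhead the paper's method reduces.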
Flexible Residential Smart Grid Simulation Framework
NASA Astrophysics Data System (ADS)
Xiang, Wang
Different scheduling and coordination algorithms controlling household appliances' operations can potentially lead to energy consumption reduction and/or load balancing in conjunction with different electricity pricing methods used in smart grid programs. In order to easily implement different algorithms and evaluate their efficiency against other ideas, a flexible simulation framework is desirable in both research and business fields. However, such a platform is currently lacking or underdeveloped. In this thesis, we provide a simulation framework to focus on demand side residential energy consumption coordination in response to different pricing methods. This simulation framework, equipped with an appliance consumption library using realistic values, aims to closely represent the average usage of different types of appliances. The simulation results of traditional usage yield close matching values compared to surveyed real life consumption records. Several sample coordination algorithms, pricing schemes, and communication scenarios are also implemented to illustrate the use of the simulation framework.
Self-referenced single-shot THz detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, Brandon K.; Ofori-Okai, Benjamin K.; Chen, Zhijiang
2017-06-29
We demonstrate a self-referencing method to reduce noise in a single-shot terahertz detection scheme. By splitting a single terahertz pulse and using a reflective echelon, both the signal and reference terahertz time-domain waveforms were measured using one laser pulse. Simultaneous acquisition of these waveforms significantly reduces noise originating from shot-to-shot fluctuations. Here, we show that correlation-function-based referencing, which is not limited to polarization-dependent measurements, can achieve a noise floor that is comparable to state-of-the-art polarization-gated balanced detection. Lastly, we extract the DC conductivity of a 30 nm free-standing gold film using a single THz pulse. The measured value of σ₀ = 1.3 ± 0.4 × 10⁷ S m⁻¹ is in good agreement with the value measured by four-point probe, indicating the viability of this method for measuring dynamical changes and small signals.
Jeon, Gwanggil; Dubois, Eric
2013-01-01
This paper adapts the least-squares luma-chroma demultiplexing (LSLCD) demosaicking method to noisy Bayer color filter array (CFA) images. A model is presented for the noise in white-balanced gamma-corrected CFA images. A method to estimate the noise level in each of the red, green, and blue color channels is then developed. Based on the estimated noise parameters, one of a finite set of configurations adapted to a particular level of noise is selected to demosaic the noisy data. The noise-adaptive demosaicking scheme is called LSLCD with noise estimation (LSLCD-NE). Experimental results demonstrate state-of-the-art performance over a wide range of noise levels, with low computational complexity. Many results with several algorithms, noise levels, and images are presented on our companion web site along with software to allow reproduction of our results.
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. In particular, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus do not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, indicating that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter
2016-01-01
Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence, many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems are not well understood. In particular, a better knowledge of the processes governing their environmental behavior is needed. Although many data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate that removal processes are highly variable in time and space, and this has to be considered in future studies. The high-precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. Copyright © 2015. Published by Elsevier B.V.
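The core of the mass-balance idea above can be sketched in a few lines. The variable names and figures are ours, not the paper's: the load entering a river stretch, corrected for tributary inputs, is compared with the load leaving it, and the deficit is attributed to in-stream removal:

```python
# Illustrative mass balance over one river stretch (Lagrangian sampling
# follows the same water parcel, so loads are directly comparable).
def removal_fraction(load_upstream, load_downstream, load_tributaries=0.0):
    """Fraction of the incoming pollutant load removed within the stretch."""
    load_in = load_upstream + load_tributaries
    return (load_in - load_downstream) / load_in

# Hypothetical example: 10 g/day enters, 2 g/day joins via a tributary,
# 9 g/day leaves the stretch -> 25% removed (photooxidation, degradation,
# volatilization, etc.).
f = removal_fraction(10.0, 9.0, 2.0)
```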
A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena
NASA Technical Reports Server (NTRS)
Zingg, David W.
1996-01-01
This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
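The "grid points per wavelength" figure of merit discussed above follows from a scheme's modified wavenumber k*. As a sketch (our own illustration, using the standard second- and fourth-order central differences rather than any particular scheme from the review):

```python
import math

# Modified wavenumbers of central differences on a uniform grid:
#   2nd order: k*h = sin(kh)
#   4th order: k*h = (8*sin(kh) - sin(2*kh)) / 6
# where h is the grid spacing and kh = 2*pi / (points per wavelength).
def phase_error(ppw, order):
    """Relative phase-speed error at a given points-per-wavelength."""
    kh = 2 * math.pi / ppw
    if order == 2:
        kstar_h = math.sin(kh)
    elif order == 4:
        kstar_h = (8 * math.sin(kh) - math.sin(2 * kh)) / 6
    else:
        raise ValueError("only orders 2 and 4 sketched here")
    return abs(kstar_h - kh) / kh

# At 20 points per wavelength the 4th-order scheme is far more accurate,
# which is why high-order schemes need fewer points for long propagation.
e2 = phase_error(20, 2)
e4 = phase_error(20, 4)
```

Accumulated phase error over many wavelengths of propagation scales with these per-wavelength errors, which motivates the grid-density recommendations in the review.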
Balance and its Clinical Assessment in Older Adults – A Review
Nnodim, Joseph O.; Yung, Raymond L.
2016-01-01
Background Human beings rely on multiple systems to maintain their balance as they perform their activities of daily living. These systems may be undermined functionally by both disease and the normal aging process. Balance impairment is associated with increased fall risk. Purpose This paper examines the dynamic formulation of balance as activity and reviews the biological mechanisms for its control. A “minimal-technology” scheme for its clinical evaluation in the ambulatory care setting is proposed. Methods The PubMed, Scopus and CINAHL databases were searched for relevant articles using the following terms in combination with balance: aging, impairment, control mechanisms, clinical assessment. Only articles which describe test procedures, their psychometrics and rely exclusively on equipment found in a regular physician office were reviewed. Results Human bipedal stance and gait are inherently low in stability. Accordingly, an elaborate sensory apparatus comprising visual, vestibular and proprioceptive elements, constantly monitors the position and movement of the body in its environment and sends signals to the central nervous system. The sensory inputs are processed and motor commands are generated. In response to efferent signals, the musculoskeletal system moves the body as is necessary to maintain or regain balance. The combination of senescent decline in organ function and the higher prevalence of diseases of the balance control systems in older adults predisposes this population subset to balance impairment. Older adults with balance impairment are likely to present with “dizziness”. The history should concentrate on the first experience, with an attempt made to categorize it as a Drachman type. Since the symptomatology is often vague, several of the recommended physical tests are provocative maneuvers aimed at reproducing the patient’s complaint. 
Well-validated questionnaires are available for evaluating the impact of "dizziness" on various domains of patients' lives, including their fear of falling. Aspects of a good history and physical examination not otherwise addressed to balance function, such as medication review and cognitive assessment, also yield information that contributes to a better understanding of the patient's complaint. Ordinal scales, which are aggregates of functional performance tests, enable detailed quantitative assessments of balance activity. Conclusion The integrity of balance function is essential for the effective performance of activities of daily living. Its deterioration with aging and disease places older adults at increased risk of falls and dependency. Balance can be effectively evaluated in the ambulatory care setting, using a combination of scalar questionnaires, dedicated history-taking and physical tests that do not require sophisticated instrumentation. PMID:26942231
NASA Technical Reports Server (NTRS)
Diak, George R.; Stewart, Tod R.
1989-01-01
A method is presented for evaluating the fluxes of sensible and latent heating at the land surface, using satellite-measured surface temperature changes in a composite surface layer-mixed layer representation of the planetary boundary layer. The basic prognostic model is tested by comparison with synoptic station information at sites where surface evaporation climatology is well known. The remote sensing version of the model, using satellite-measured surface temperature changes, is then used to quantify the sharp spatial gradient in surface heating/evaporation across the central United States. An error analysis indicates that perhaps five levels of evaporation are recognizable by these methods and that the chief cause of error is the interaction of errors in the measurement of surface temperature change with errors in the assignment of surface roughness character. Finally, two new potential methods for remote sensing of the land-surface energy balance are suggested which will rely on space-borne instrumentation planned for the 1990s.
Parallelization Issues and Particle-In-Cell Codes.
NASA Astrophysics Data System (ADS)
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses, with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load balance with respect to particles. However, our results demonstrate that it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid points within the same cache line by reordering the grid indexing. This alignment produces a 25% savings in cache hits for a 4-by-4 cache.
A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
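The grid/particle interaction that drives the partitioning trade-offs discussed above is charge deposition. A minimal 1D cloud-in-cell version (a generic sketch, not the KSR1 implementation) looks like this:

```python
# Minimal 1D cloud-in-cell (linear-weighting) charge deposition on a
# periodic grid: each particle's charge is split between its two
# neighboring grid points.  In a parallel code, whether `grid` is
# replicated per processor or partitioned determines the communication
# and memory costs analyzed in the thesis above.
def deposit_charge(positions, charge, n_cells, dx):
    grid = [0.0] * n_cells
    for x in positions:
        j = int(x / dx) % n_cells        # left grid point of the cell
        w = x / dx - int(x / dx)         # linear weight toward the right
        grid[j] += charge * (1.0 - w)
        grid[(j + 1) % n_cells] += charge * w
    return grid

# Two unit-charge particles at x = 0.5 and x = 1.25 on a 4-cell grid:
rho = deposit_charge([0.5, 1.25], charge=1.0, n_cells=4, dx=1.0)
```

Total charge is conserved by construction, which is the invariant a partitioned-grid scheme must preserve across processor boundaries.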
NASA Astrophysics Data System (ADS)
Guzinski, R.; Anderson, M. C.; Kustas, W. P.; Nieto, H.; Sandholt, I.
2013-02-01
The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time differential temperature measurement as input, the approach reduces model sensitivity to errors in absolute temperature retrieval. The original formulation of the DTD required an early morning LST observation (approximately 1 h after sunrise) when surface fluxes are minimal, limiting application to data provided by geostationary satellites at sub-hourly temporal resolution. The DTD model has been applied primarily during the active growth phase of agricultural crops and rangeland vegetation grasses, and has not been rigorously evaluated during senescence or in forested ecosystems. In this paper we present modifications to the DTD model that enable applications using thermal observation from polar orbiting satellites, such as Terra and Aqua, with day and night overpass times over the area of interest. This allows the application of the DTD model in high latitude regions where large viewing angles preclude the use of geostationary satellites, and also exploits the higher spatial resolution provided by polar orbiting satellites. A method for estimating nocturnal surface fluxes and a scheme for estimating the fraction of green vegetation are developed and evaluated. Modification for green vegetation fraction leads to significantly improved estimation of the heat fluxes from the vegetation canopy during senescence and in forests. Land-cover based modifications to the Priestley-Taylor scheme, used to estimate transpiration fluxes, are explored based on prior findings for conifer forests. 
When the modified DTD model is run with LST measurements acquired with the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Terra and Aqua satellites, generally satisfactory agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the Danish Hydrological ObsErvatory (HOBE) in western Denmark, indicating realistic patterns based on land use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beverly Law; David Turner; Warren Cohen
2008-05-22
The goal is to quantify and explain the carbon (C) budget for Oregon and N. California. The research compares "bottom-up" and "top-down" methods, and develops prototype analytical systems for regional analysis of the carbon balance that are potentially applicable to other continental regions, and that can be used to explore climate, disturbance and land-use effects on the carbon cycle. Objectives are: 1) Improve, test and apply a bottom-up approach that synthesizes a spatially nested hierarchy of observations (multispectral remote sensing, inventories, flux and extensive sites), and the Biome-BGC model to quantify the C balance across the region; 2) Improve, test and apply a top-down approach for regional and global C flux modeling that uses a model-data fusion scheme (MODIS products, AmeriFlux, atmospheric CO2 concentration network), and a boundary layer model to estimate net ecosystem production (NEP) across the region and partition it among GPP, R(a) and R(h); 3) Provide critical understanding of the controls on regional C balance (how NEP and carbon stocks are influenced by disturbance from fire and management, land use, and interannual climate variation). The key science questions are: "What are the magnitudes and distributions of C sources and sinks on seasonal to decadal time scales, and what processes are controlling their dynamics? What are regional spatial and temporal variations of C sources and sinks? What are the errors and uncertainties in the data products and results (i.e., in situ observations, remote sensing, models)?"
Insured without moral hazard in the health care reform of China.
Wong, Chack-Kie; Cheung, Chau-Kiu; Tang, Kwong-Leung
2012-01-01
Public insurance possibly increases the use of health care because of the insured person's interest in maximizing benefits without incurring out-of-pocket costs. A newly reformed public insurance scheme in China that builds on personal responsibility is thus likely to provide insurance without causing moral hazard. This possibility is the focus of this study, which surveyed 303 employees in a large city in China. The results show that the coverage and use of the public insurance scheme did not show a significant positive effect on the average employee's frequency of physician consultation. In contrast, the employee who endorsed public responsibility for health care visited physicians more frequently in response to some insurance factors. On balance, public insurance did not tempt the average employee to consult physicians frequently, presumably due to personal responsibility requirements in the insurance scheme.
Computer Science Techniques Applied to Parallel Atomistic Simulation
NASA Astrophysics Data System (ADS)
Nakano, Aiichiro
1998-03-01
Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.
DNS, Enstrophy Balance, and the Dissipation Equation in a Separated Turbulent Channel Flow
NASA Technical Reports Server (NTRS)
Balakumar, Ponnampalam; Rubinstein, Robert; Rumsey, Christopher L.
2013-01-01
The turbulent flows through a plane channel and a channel with a constriction (2-D hill) are numerically simulated using DNS and RANS calculations. The Navier-Stokes equations in the DNS are solved using higher order kinetic-energy-preserving central schemes and a fifth order accurate upwind-biased WENO scheme for the space discretization. RANS calculations are performed using the NASA code CFL3D with the k-omega SST two-equation model and a full Reynolds stress model. Using DNS, the magnitudes of the different terms that appear in the enstrophy equation are evaluated. The results show that the dissipation and the diffusion terms reach large values at the wall. All the vortex stretching terms have similar magnitudes within the buffer region. Beyond that, the triple correlation among the vorticity and strain rate fluctuations becomes the important kinematic term in the enstrophy equation. This term is balanced by the viscous dissipation. In the separated flow, the triple correlation term and the viscous dissipation term peak locally and balance each other near the separated shear layer region. These findings concur with the analysis of Tennekes and Lumley, confirming that the energy transfer terms associated with the small-scale dissipation and the fluctuations of the vortex stretching essentially cancel each other, leaving an equation for the dissipation that is governed by the large-scale motion.
Velasco-Tapia, Fernando
2014-01-01
Magmatic processes have usually been identified and evaluated using qualitative or semiquantitative geochemical or isotopic tools based on a restricted number of variables. However, a more complete and quantitative view can be reached by applying multivariate analysis, mass balance techniques, and statistical tests. As an example, in this work a statistical and quantitative scheme is applied to analyze the geochemical features of the Sierra de las Cruces (SC) volcanic range (Mexican Volcanic Belt). In this locality, the volcanic activity (3.7 to 0.5 Ma) was dominantly dacitic, but the presence of spheroidal andesitic enclaves and/or diverse disequilibrium features in the majority of lavas confirms the operation of magma mixing/mingling. New discriminant-function-based multidimensional diagrams were used to discriminate the tectonic setting. Statistical tests of discordancy and significance were applied to evaluate the influence of the subducting Cocos plate, which seems to be rather negligible for the SC magmas in relation to several major and trace elements. A cluster analysis following Ward's linkage rule was carried out to classify the SC volcanic rocks into geochemical groups. Finally, two mass-balance schemes were applied for the quantitative evaluation of the proportion of the end-member components (dacitic and andesitic magmas) in the comingled lavas (binary mixtures). PMID:24737994
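The binary mass-balance idea in the abstract above reduces to a one-parameter least-squares fit. As a sketch (our own illustration with synthetic compositions, not the paper's scheme), the mixing proportion x that best explains a comingled lava under C_mix = x*C_dacite + (1-x)*C_andesite is:

```python
# Least-squares estimate of the binary mixing proportion x from
# several element concentrations:
#   x = sum((C_mix - C_b)*(C_a - C_b)) / sum((C_a - C_b)^2)
def mixing_proportion(c_mix, c_a, c_b):
    num = sum((m - b) * (a - b) for m, a, b in zip(c_mix, c_a, c_b))
    den = sum((a - b) ** 2 for a, b in zip(c_a, c_b))
    return num / den

# Synthetic example (hypothetical wt% values for three elements):
c_a = [65.0, 4.0, 2.0]          # dacite end member
c_b = [58.0, 6.5, 4.5]          # andesite end member
c_mix = [0.6 * a + 0.4 * b for a, b in zip(c_a, c_b)]
x = mixing_proportion(c_mix, c_a, c_b)   # recovers 0.6 for this 60/40 mix
```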
Stratospheric water vapor in the NCAR CCM2
NASA Technical Reports Server (NTRS)
Mote, Philip W.; Holton, James R.
1992-01-01
Results are presented of the water vapor distribution in a 3D GCM with good vertical resolution, a state-of-the-art transport scheme, and a realistic water vapor source in the middle atmosphere. In addition to water vapor, the model transported methane and an idealized clock tracer, which provides transport times to and within the middle atmosphere. The water vapor and methane distributions are compared with Nimbus 7 SAMS and LIMS data and with in situ measurements. It is argued that the hygropause in the model is maintained not by 'freeze-drying' at the tops of tropical cumulonimbus, but by a balance between two sources and one sink. Since the southern winter dehydration is unrealistically intense, this balance most likely does not resemble the balance in the real atmosphere.
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks
Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968
NASA Astrophysics Data System (ADS)
Patel, Jitendra Kumar; Natarajan, Ganesh
2017-12-01
We discuss the development and assessment of a robust numerical algorithm for simulating multiphase flows with complex interfaces and high density ratios on arbitrary polygonal meshes. The algorithm combines the volume-of-fluid method with an incremental projection approach for incompressible multiphase flows in a novel hybrid staggered/non-staggered framework. The key principles that characterise the algorithm are the consistent treatment of discrete mass and momentum transport and the similar discretisation of force terms appearing in the momentum equation. The former is achieved by invoking identical schemes for convective transport of volume fraction and momentum in the respective discrete equations while the latter is realised by representing the gravity and surface tension terms as gradients of suitable scalars which are then discretised in identical fashion resulting in a balanced formulation. The hybrid staggered/non-staggered framework employed herein solves for the scalar normal momentum at the cell faces, while the volume fraction is computed at the cell centroids. This is shown to naturally lead to similar terms for pressure and its correction in the momentum and pressure correction equations respectively, which are again treated discretely in a similar manner. We show that spurious currents that corrupt the solution may arise both from an unbalanced formulation where forces (gravity and surface tension) are discretised in dissimilar manner and from an inconsistent approach where different schemes are used to convect the mass and momentum, with the latter prominent in flows which are convection-dominant with high density ratios. Interestingly, the inconsistent approach is shown to perform as well as the consistent approach even for high density ratio flows in some cases while it exhibits anomalous behaviour for other scenarios, even at low density ratios. 
Using a plethora of test problems of increasing complexity, we conclusively demonstrate that the consistent transport and balanced force treatment results in a numerically stable solution procedure and physically consistent results. The algorithm proposed in this study qualifies as a robust approach to simulate multiphase flows with high density ratios on unstructured meshes and may be realised in existing flow solvers with relative ease.
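The "consistent transport" principle described above can be shown in one dimension. This is our own minimal construction, not the paper's full polygonal-mesh algorithm: the upwind mass flux used to advect density (i.e., the volume fraction) is reused to advect momentum, so a density jump carried by a uniform velocity leaves the velocity exactly uniform even at a 1000:1 density ratio:

```python
# 1D periodic first-order upwind advection (u > 0 assumed) with
# consistent mass/momentum transport: f_mom reuses the mass flux f_rho.
def advect_consistent(rho, u, dt, dx, steps):
    n = len(rho)
    mom = [r * v for r, v in zip(rho, u)]
    for _ in range(steps):
        f_rho = [u[i] * rho[i] for i in range(n)]              # mass flux
        new_rho = [rho[i] - dt / dx * (f_rho[i] - f_rho[i - 1])
                   for i in range(n)]
        # momentum advected with the SAME mass flux times upwind velocity
        f_mom = [f_rho[i] * u[i] for i in range(n)]
        new_mom = [mom[i] - dt / dx * (f_mom[i] - f_mom[i - 1])
                   for i in range(n)]
        rho, mom = new_rho, new_mom
        u = [m / r for m, r in zip(mom, rho)]
    return rho, u

rho0 = [1.0] * 8 + [1000.0] * 8      # high-density-ratio interface
u0 = [1.0] * 16
rho, u = advect_consistent(rho0, u0, dt=0.05, dx=1.0, steps=10)
```

Had the momentum flux been discretized with a different scheme than the mass flux, dividing momentum by density would generate spurious velocities at the interface, which is the failure mode the paper's consistent formulation avoids.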
Active Inference, Epistemic Value, and Vicarious Trial and Error
ERIC Educational Resources Information Center
Pezzulo, Giovanni; Cartoni, Emilio; Rigoli, Francesco; io-Lopez, Léo; Friston, Karl
2016-01-01
Balancing habitual and deliberate forms of choice entails a comparison of their respective merits--the former being faster but inflexible, and the latter slower but more versatile. Here, we show that arbitration between these two forms of control can be derived from first principles within an Active Inference scheme. We illustrate our arguments…
Judges' Agreement and Disagreement Patterns When Encoding Verbal Protocols.
ERIC Educational Resources Information Center
Schael, Jocelyne; Dionne, Jean-Paul
The basis of agreement or disagreement among judges/evaluators when applying a coding scheme to concurrent verbal protocols was studied. The sample included 20 university graduates, from varied backgrounds; 10 subjects had and 10 subjects did not have experience in protocol analysis. The total sample was divided into four balanced groups according…
Asia in the European Classroom: The CDCC's Teachers Bursaries Scheme.
ERIC Educational Resources Information Center
Bahree, Patricia
Asia now claims more than half of the world's population and economically presents a challenge to the former western domination of the world's markets. With these changes, education for international understanding is essential. How can the classroom become the site for effective and balanced instruction about Asia? This document presents numerous…
Heritability construction for provenance and family selection
Fan H. Kung; Calvin F. Bey
1977-01-01
Concepts and procedures for heritability estimations through the variance components and the unified F-statistics approach are described. The variance components approach is illustrated by five possible family selection schemes within a diallel mating test, while the unified F-statistics approach is demonstrated by a geographic variation study. In a balanced design, the...
USDA-ARS?s Scientific Manuscript database
Thermal-infrared remote sensing of land surface temperature provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition. A thermal-based scheme, called the Two-Source Energy Balance (TSEB) model, solves for the soil/substrate and canopy temp...
Modeling of power control schemes in induction cooking devices
NASA Astrophysics Data System (ADS)
Beato, Alessio; Conti, Massimo; Turchetti, Claudio; Orcioni, Simone
2005-06-01
In recent years, with remarkable advancements in power semiconductor devices and electronic control systems, it has become possible to apply the induction heating technique for domestic use. In order to achieve the supply power required by these devices, high-frequency resonant inverters are used: the force-commutated, half-bridge series resonant converter is well suited for induction cooking since it offers an appropriate balance between complexity and performance. Power control is a key issue in attaining efficient and reliable products. This paper describes and compares four power control schemes applied to the half-bridge series resonant inverter. Pulse frequency modulation is the most common control scheme: according to this strategy, the output power is regulated by varying the switching frequency of the inverter circuit. The other methods considered, originally developed for industrial induction heating applications, are pulse amplitude modulation, asymmetrical duty cycle and pulse density modulation, which are based respectively on variation of the amplitude of the input supply voltage, on variation of the duty cycle of the switching signals and on variation of the number of switching pulses. Each description is accompanied by a detailed mathematical analysis; an analytical model, built to simulate the circuit topology, is implemented in the Matlab environment in order to obtain the steady-state values and waveforms of currents and voltages. For the purposes of this study, switches and all reactive components are modelled as ideal and the "heating-coil/pan" system is represented by an equivalent circuit made up of a series-connected resistance and inductance.
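As a rough illustration of the pulse-frequency-modulation principle described above (not the paper's Matlab model): a series R-L-C load draws maximum power at resonance, and raising the switching frequency above resonance reduces the delivered power. The component values below are arbitrary assumptions, and only the first harmonic of the excitation is considered:

```python
import math

def output_power(f_sw, v_rms=230.0, r=5.0, l=60e-6, c=0.5e-6):
    """First-harmonic estimate of power delivered to a series R-L-C load."""
    w = 2 * math.pi * f_sw
    z = math.sqrt(r ** 2 + (w * l - 1.0 / (w * c)) ** 2)  # load impedance
    return v_rms ** 2 * r / z ** 2

# Resonant frequency of the assumed L-C pair.
f0 = 1.0 / (2 * math.pi * math.sqrt(60e-6 * 0.5e-6))

# Above resonance the inverter operates inductively; raising f_sw lowers power.
p_res = output_power(f0)
p_hi = output_power(1.5 * f0)
```

This is only a steady-state sketch; the paper's analysis additionally covers switching waveforms and the other three modulation strategies.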
NASA Astrophysics Data System (ADS)
Chen, Duan; Chen, Qiuwen; Li, Ruonan; Blanckaert, Koen; Cai, Desuo
2014-06-01
Ecologically-friendly reservoir operation procedures aim to conserve key ecosystem properties in rivers, while minimizing the sacrifice of socioeconomic interests. This study took the Jinping cascaded reservoirs as a case study. An optimization model was developed to explore a balance between the ecological flow requirement (EFR) of a target fish species (Schizothorax chongi) in the dewatered natural channel section, and annual power production. The EFR for the channel was determined by the Tennant method and a fish habitat model, respectively. The optimization model was solved by using an adaptive real-coded genetic algorithm. Several operation scenarios corresponding to the ecological flow series were evaluated using the optimization model. Through comparisons, an optimal operational scheme, which combines relatively low power production loss with a preferred ecological flow regime in the dewatered channel, is proposed for the cascaded reservoirs. Under the recommended scheme, the discharge into the Dahewan river reach in the dry season ranges from 36 to 50 m3/s. This will enable at least 50% of the target fish habitats in the channel to be conserved, at a cost of only 2.5% annual power production loss. The study demonstrates that the use of EFRs is an efficient approach to the optimization of reservoir operation in an ecologically friendly way. Similar modeling, for other important fish species and ecosystem functions, supplemented by field validation of results, is needed in order to secure the long-term conservation of the affected river ecosystem.
NASA Astrophysics Data System (ADS)
Engwirda, Darren; Kelley, Maxwell; Marshall, John
2017-08-01
Discretisation of the horizontal pressure gradient force in layered ocean models is a challenging task, with non-trivial interactions between the thermodynamics of the fluid and the geometry of the layers often leading to numerical difficulties. We present two new finite-volume schemes for the pressure gradient operator designed to address these issues. In each case, the horizontal acceleration is computed as an integration of the contact pressure force that acts along the perimeter of an associated momentum control-volume. A pair of new schemes are developed by exploring different control-volume geometries. Non-linearities in the underlying equation-of-state definitions and thermodynamic profiles are treated using a high-order accurate numerical integration framework, designed to preserve hydrostatic balance in a non-linear manner. Numerical experiments show that the new methods achieve high levels of consistency, maintaining hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer geometries, non-linear equations-of-state and non-uniform vertical stratification profiles. These results suggest that the new pressure gradient formulations may be appropriate for general circulation models that employ hybrid vertical coordinates and/or terrain-following representations.
A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks.
Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek; You, Ilsun
2017-11-29
As the key element, sensor networks are widely investigated by the Internet of Things (IoT) community. When massive numbers of devices are well connected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower network survivability in belt-type situations. However, most existing positioning solutions focus only on algorithm accuracy and do not consider any security aspects. In this paper, we propose a comprehensive scheme for node localization protection, which aims to improve energy efficiency, reliability and accuracy. To handle unbalanced resource consumption, a node deployment mechanism is presented to satisfy the energy-balancing strategy in resource-constrained scenarios. According to cooperative localization theory and network connection properties, the parameter estimation model is established. To achieve reliable estimations and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. To further improve the algorithm, the node positioning accuracy is enhanced using the steepest descent method. The experimental simulations illustrate that the performance of the new scheme meets the stated targets. The results also demonstrate that it improves belt-type sensor networks' survivability, in terms of anti-interference, network energy saving, etc.
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, while retaining the property of the MA method that the numerical errors remain within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
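For orientation, here is a minimal sketch of the conventional TE step that such schemes start from (the MTE/MA optimisation via the Remez algorithm is not reproduced): Taylor-matched coefficients of a 2M-th-order explicit staggered-grid first-derivative operator, and its wavenumber (dispersion) error. The function names are illustrative, not from the paper:

```python
import numpy as np

def te_staggered_coeffs(M):
    """Taylor-series staggered-grid coefficients c_1..c_M.

    The operator approximates f'(x) by
        (1/h) * sum_m c_m * (f(x + (m-1/2)h) - f(x - (m-1/2)h)),
    so Taylor matching requires sum_m c_m * a_m^(2j-1) = 0.5 if j == 1 else 0,
    with a_m = m - 1/2.
    """
    a = np.arange(1, M + 1) - 0.5
    A = np.vstack([a ** (2 * j - 1) for j in range(1, M + 1)])
    b = np.zeros(M)
    b[0] = 0.5
    return np.linalg.solve(A, b)

def dispersion_error(c, kh):
    """FD wavenumber response minus the exact response kh."""
    m = np.arange(1, len(c) + 1)
    return 2 * np.sum(c * np.sin((m - 0.5) * kh)) - kh

c = te_staggered_coeffs(2)  # classic 4th-order pair: 9/8, -1/24
```

The TE coefficients are very accurate at small kh but degrade at large kh, which is exactly the regime the minimax-style optimisation targets.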
Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei
2016-01-29
In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal route choice of the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) in dynamic clustering for data exchange is proposed, and a selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering the dynamic clustering and mobility of MSs, which can effectively balance the total energy consumption during the activities. Considering the different resources available to the member nodes and the sink node, the session key between a cluster head and the MS is established by a modified ECC encryption with Diffie-Hellman key exchange (ECDH) algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. Analysis of the security of data storage, data transfer and the mechanism of dynamic key management shows that the proposed scheme helps improve the resilience of the network's key management system while satisfying higher connectivity and storage efficiency.
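To illustrate only the session-key idea, here is a toy finite-field Diffie-Hellman exchange between a cluster head and a mobile sink. The paper's scheme uses a modified ECC variant (ECDH); the small group and hashing step below are assumptions for demonstration, not a secure or faithful implementation:

```python
import hashlib
import secrets

# Toy multiplicative group (Mersenne prime 2**61 - 1); far too small for
# real security, used here only so the arithmetic is easy to follow.
P = 2305843009213693951
G = 3

def dh_keypair():
    """Return (private exponent, public value G^priv mod P)."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = dh_keypair()  # cluster head
b_priv, b_pub = dh_keypair()  # mobile sink

# Each side combines its private key with the other's public value;
# both arrive at the same shared secret.
k_a = pow(b_pub, a_priv, P)
k_b = pow(a_pub, b_priv, P)

# Derive a fixed-length session key from the shared secret.
session_key = hashlib.sha256(str(k_a).encode()).digest()
```

A real deployment would use a vetted elliptic curve and an authenticated key-derivation step; this sketch only shows why both parties end up with the same key.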
Hard versus soft dynamics for adsorption-desorption kinetics: Exact results in one-dimension.
Manzi, S J; Huespe, V J; Belardinelli, R E; Pereyra, V D
2009-11-01
The adsorption-desorption kinetics is discussed in the framework of the kinetic lattice-gas model. The master equation formalism is introduced to describe the evolution of the system, where the transition probabilities are written as an expansion over the occupation configurations of all neighboring sites. Since the detailed balance principle determines only half of the coefficients that arise from the expansion, it is necessary to introduce, ad hoc, a dynamic scheme to obtain the rest. Three schemes of so-called hard dynamics, in which the single-site transition probability cannot be factored into a part that depends only on the interaction energy and a part that depends only on the field energy, and five schemes of so-called soft dynamics, in which this factorization is possible, were introduced for this purpose. It is observed that for the hard dynamic schemes, the equilibrium and nonequilibrium observables, such as adsorption isotherms, sticking coefficients, and thermal desorption spectra, behave in a normal, physically sustainable manner. For the soft dynamics schemes, by contrast, with the exception of transition state theory, the equilibrium and nonequilibrium observables exhibit several problems, some of which can be regarded as abnormal behavior.
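Two standard transition-rate choices make the detailed-balance constraint concrete. These generic Metropolis and Glauber rates are illustrative assumptions, not the eight specific schemes analysed in the paper; detailed balance requires w(ΔE)/w(−ΔE) = exp(−βΔE):

```python
import math

def metropolis(dE, beta=1.0):
    """Metropolis rate: accept downhill moves always, uphill with exp(-beta*dE)."""
    return min(1.0, math.exp(-beta * dE))

def glauber(dE, beta=1.0):
    """Glauber (heat-bath) rate."""
    return 1.0 / (1.0 + math.exp(beta * dE))

def satisfies_detailed_balance(w, dE, beta=1.0, tol=1e-12):
    # detailed balance: w(+dE)/w(-dE) must equal exp(-beta*dE)
    return abs(w(dE, beta) / w(-dE, beta) - math.exp(-beta * dE)) < tol
```

Both rates satisfy the constraint for any ΔE, which is precisely why detailed balance alone cannot select between them: the remaining freedom is what the hard/soft dynamic schemes fill in.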
NASA Astrophysics Data System (ADS)
Braghiere, Renato; Quaife, Tristan; Black, Emily
2016-04-01
Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation; however, it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive for large-scale studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme by including a simple function of Sun zenith angle, the so-called "structure factor", which does not require an explicit description and understanding of the complex phenomena arising from heterogeneous vegetation architecture, and which guarantees simulations of the radiative balance consistent with 3D representations. In order to evaluate the ability of the proposed parameterisation to accurately represent the radiative balance of more complex 3D schemes, the modified two-stream approximation with the "structure factor" parameterisation was compared against state-of-the-art 3D radiative transfer schemes, following a set of virtual scenarios described in the RAMI4PILPS experiment.
These experiments evaluate the radiative balance of several models under perfectly controlled conditions in order to eliminate uncertainties arising from incomplete or erroneous knowledge of the structural, spectral and illumination-related canopy characteristics typical of model comparisons with in-situ observations. The structure factor parameters were obtained for each canopy structure through inversion against the direct and diffuse fractions of absorbed photosynthetically active radiation (fAPAR), and albedo PAR. Overall, the modified two-stream approximation showed consistently good agreement with the RAMI4PILPS reference values under direct and diffuse illumination conditions. It is an efficient and accurate tool to derive PAR absorptance and reflectance for scenarios with different canopy densities, leaf densities and soil background albedos, with particular attention to brighter backgrounds, e.g., snow. The major difficulty in applying it to the real world is acquiring the parameterisation parameters from in-situ observations. Deriving parameters from Digital Hemispherical Photographs (DHP) is highly promising at forest-stand scales. DHP provide a permanent record and are a valuable information source for the position, size, density, and distribution of canopy gaps. The modified two-stream approximation parameters were derived from gap probability data extracted from DHP obtained in a woody savannah in California, USA. Values of fAPAR and albedo PAR were evaluated against a tree-based vegetation canopy model, MAESPA, which used airborne LiDAR data to define individual-tree locations and extract structural information such as tree height and crown diameter. The parameterisation improved the performance of the two-stream approximation, enabling it to achieve results comparable to complex 3D model calculations under observed conditions.
Willis, Catherine; Rubin, Jacob
1987-01-01
A moving boundary problem which arises during transport with precipitation-dissolution reactions is solved by three different numerical methods. Two of these methods (one explicit and one implicit) are based on an integral formulation of mass balance and lead to an approximation of a weak solution. These methods are compared to a front-tracking scheme. Although the two approaches are conceptually different, the numerical solutions showed good agreement. As the ratio of dispersion to convection decreases, the methods based on the integral formulation become computationally more efficient. Specific reactions were modeled to examine the dependence of the system on the physical and chemical parameters. Although the water flow rate does not explicitly appear in the equation for the velocity of the moving boundary, the speed of the boundary depends more on the flux rate than on the dispersion coefficient. The discontinuity in the gradient of the solute concentration profile at the boundary increases with convection and with the initial concentration of the mineral. Our implicit method is extended to allow participation of the solutes in complexation reactions as well as the precipitation-dissolution reaction. This extension is easily made and does not change the basic method.
A multigrid LU-SSOR scheme for approximate Newton iteration applied to the Euler equations
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jameson, Antony
1986-01-01
A new efficient relaxation scheme in conjunction with a multigrid method is developed for the Euler equations. The LU-SSOR scheme is based on a central difference scheme and does not need flux splitting for Newton iteration. Application to transonic flow shows that the new method surpasses the performance of the LU implicit scheme.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton's methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
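A minimal sketch of the Richardson iteration underlying an LMG-style linear solver, in its generic damped form on a small symmetric positive-definite system (the paper's multigrid-preconditioned version is not reproduced; the matrix below is an arbitrary example):

```python
import numpy as np

def richardson(A, b, omega, tol=1e-10, max_iter=10_000):
    """Damped Richardson iteration x_{k+1} = x_k + omega * (b - A x_k).

    Converges for SPD A whenever 0 < omega < 2 / lambda_max(A).
    """
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x          # residual
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * r      # relaxation step
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson(A, b, omega=0.3)
```

In practice the raw iteration is slow, which is why the paper wraps it in multigrid preconditioning or replaces it with GMRES.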
Well balancing of the SWE schemes for moving-water steady flows
NASA Astrophysics Data System (ADS)
Caleffi, Valerio; Valiani, Alessandro
2017-08-01
In this work, the exact reproduction of a moving-water steady flow via the numerical solution of the one-dimensional shallow water equations is studied. A new scheme based on a modified version of the HLLEM approximate Riemann solver (Dumbser and Balsara (2016) [18]) that exactly preserves the total head and the discharge in the simulation of smooth steady flows and that correctly dissipates mechanical energy in the presence of hydraulic jumps is presented. This model is compared with a selected set of schemes from the literature, including models that exactly preserve quiescent flows and models that exactly preserve moving-water steady flows. The comparison highlights the strengths and weaknesses of the different approaches. In particular, the results show that the increase in accuracy in the steady state reproduction is counterbalanced by a reduced robustness and numerical efficiency of the models. Some solutions to reduce these drawbacks, at the cost of increased algorithm complexity, are presented.
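The invariants a moving-water well-balanced scheme must preserve, constant discharge q and constant total head E = q²/(2h²) + g(h + b), can be sketched by recovering the steady depth profile with Newton's method. The bed heights and flow data below are illustrative assumptions, not a test case from the paper:

```python
G = 9.81  # gravitational acceleration (m/s^2)

def steady_depth(b, q, E, h0=1.0, tol=1e-12):
    """Depth h satisfying q^2/(2 h^2) + G*(h + b) = E via Newton's method.

    Starting from a subcritical guess h0 selects the subcritical branch.
    """
    h = h0
    for _ in range(100):
        f = q * q / (2 * h * h) + G * (h + b) - E
        df = -q * q / (h ** 3) + G
        step = f / df
        h -= step
        if abs(step) < tol:
            break
    return h

# A steady flow with unit discharge and head fixed by h = 2 over flat bed:
q = 1.0
E = q * q / (2 * 2.0 ** 2) + G * 2.0
h_flat = steady_depth(0.0, q, E, h0=2.0)   # recovers h = 2 on the flat bed
h_bump = steady_depth(0.2, q, E, h0=2.0)   # shallower depth over a 0.2 m bump
```

A scheme is moving-water well balanced precisely when its discrete solution keeps q and E constant on such profiles to machine precision, rather than merely preserving the quiescent (lake-at-rest) state.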
NASA Astrophysics Data System (ADS)
Quaife, T. L.; Davenport, I. J.; Lines, E.; Styles, J.; Lewis, P.; Gurney, R. J.
2012-12-01
Satellite observations offer a spatially and temporally synoptic data source for constraining models of land surface processes, but exploitation of these data for such purposes has been largely ad-hoc to date. In part this is because traditional land surface models, and hence most land surface data assimilation schemes, have tended to focus on a specific component of the land surface problem; typically either surface fluxes of water and energy or biogeochemical cycles such as carbon and nitrogen. Furthermore the assimilation of satellite data into such models tends to be restricted to a single wavelength domain, for example passive microwave, thermal or optical, depending on the problem at hand. The next generation of land surface schemes, such as the Joint UK Land Environment Simulator (JULES) and the US Community Land Model (CLM) represent a broader range of processes but at the expense of increasing overall model complexity and in some cases reducing the level of detail in specific processes to accommodate this. Typically, the level of physical detail used to represent the interaction of electromagnetic radiation with the surface is not sufficient to enable prediction of intrinsic satellite observations (reflectance, brightness temperature and so on) and consequently these are not assimilated directly into the models. A seemingly attractive alternative is to assimilate high-level products derived from satellite observations but these are often only superficially related to the corresponding variables in land surface models due to conflicting assumptions between the two. This poster describes the water and energy balance modeling components of a project funded by the European Space Agency to develop a data assimilation scheme for the land surface and observation operators to translate between models and the intrinsic observations acquired by satellite missions. 
The rationale behind the design of the underlying process model is to represent the physics of the water and energy balance in as parsimonious a manner as possible, using a force-restore approach, while describing the physics of electromagnetic radiation scattering at the surface sufficiently well that it is possible to assimilate the intrinsic observations made by remote sensing instruments. In this manner the initial configuration of the resulting scheme will be able to make optimal use of available satellite observations at arbitrary wavelengths and geometries. Model complexity can then be built up from this point whilst ensuring consistency with satellite observations.
Optimised cross-layer synchronisation schemes for wireless sensor networks
NASA Astrophysics Data System (ADS)
Nasri, Nejah; Ben Fradj, Awatef; Kachouri, Abdennaceur
2017-07-01
This paper addresses synchronisation between sensor nodes. In the context of wireless sensor networks, it is necessary to take into account the energy cost induced by synchronisation, which can represent the majority of the energy consumed. A recognised difficulty in communication is designing a fine-grained synchronisation protocol that is sufficiently robust to the intermittent energy available to the sensors. Hence, this paper addresses performance and energy saving, in particular the optimisation of the synchronisation protocol using a cross-layer design method involving synchronisation between layers. Our approach consists in balancing the energy consumption between the sensors and choosing the cluster head with the highest residual energy, in order to guarantee the reliability, integrity and continuity of communication (i.e. maximising the network lifetime).
Upscaling and Downscaling of Land Surface Fluxes with Surface Temperature
NASA Astrophysics Data System (ADS)
Kustas, W. P.; Anderson, M. C.; Hain, C.; Albertson, J. D.; Gao, F.; Yang, Y.
2015-12-01
Land surface temperature (LST) is a key surface boundary condition that is significantly correlated with surface flux partitioning between latent and sensible heat. The spatial and temporal variation in LST is driven by radiation, wind, vegetation cover and roughness as well as soil moisture status in the surface and root zone. Data from airborne and satellite-based platforms provide LST from ~10 km to sub-meter resolutions. A land surface scheme called the Two-Source Energy Balance (TSEB) model has been incorporated into a multi-scale regional modeling system ALEXI (Atmosphere Land Exchange Inverse) and a disaggregation scheme (DisALEXI) using higher-resolution LST. Results with this modeling system indicate that it can be applied over heterogeneous land surfaces and estimate reliable surface fluxes with minimal in situ information. Consequently, this modeling system allows for scaling energy fluxes from subfield to regional scales in regions with little ground data. In addition, the TSEB scheme has been incorporated into a large eddy simulation (LES) model for investigating dynamic interactions between variations in the land surface state, reflected in the spatial pattern of LST, and the lower-atmospheric air properties affecting energy exchange. An overview of research results on scaling of fluxes and interactions with the lower atmosphere from the subfield level to regional scales using the TSEB, ALEXI/DisALEXI and LES-TSEB approaches will be presented. Some unresolved issues in the use of LST at different spatial resolutions for estimating surface energy balance and upscaling fluxes, particularly evapotranspiration, will be discussed.
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method, respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA is aimed at providing a metacomputing platform for large-scale distributed computations, by hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG, and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnection speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
A Resource Management Tool for Implementing Strategic Direction in an Academic Department
ERIC Educational Resources Information Center
Ringwood, John V.; Devitt, Frank; Doherty, Sean; Farrell, Ronan; Lawlor, Bob; McLoone, Sean C.; McLoone, Seamus F.; Rogers, Alan; Villing, Rudi; Ward, Tomas
2005-01-01
This paper reports on a load balancing system for an academic department, which can be used as an implementation mechanism for strategic planning. In essence, it consists of weighting each activity within the department and performing workload allocation based on this transparent scheme. The experience to date has been very positive, in terms of…
A Proposal for Kelly Criterion-Based Lossy Network Compression
2016-03-01
How Teachers Can Assess Kindergarten Children's Motor Performance in Hong Kong.
ERIC Educational Resources Information Center
Lam, Mei Yung; Ip, Man Hing; Lui, Ping Keung; Koong, May Kay
2003-01-01
This project developed a motor performance instrument for a motor performance award scheme for kindergartners to be used by the Hong Kong Childhealth Foundation. Findings indicated that boys outperformed girls on agility and throwing. Girls performed better than boys on static balance. A marked improvement in agility was noted at age 5.5 years,…
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2017-01-01
In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection: the first three are piecewise linear, and the last two piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells and can be considered a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the ongoing "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number of degrees of freedom K per cell, in spite of the different piecewise polynomial degrees, share the same sets of eigenvalues and thus have the same stability and accuracy. Moreover, these schemes are accurate to order 2K-1, which is higher than the expected order of K.
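As a rough, self-contained illustration of the piecewise-linear MUSCL idea named above (a generic minmod-limited second-order upwind update for linear advection, not Van Leer's exact formulation), a sketch might look like:

```python
def minmod(a, b):
    """Slope limiter: zero at extrema, otherwise the smaller slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_advect(u, cfl):
    """One MUSCL-type step for u_t + u_x = 0 (unit speed, periodic grid).
    Piecewise-linear reconstruction with minmod slopes; the upwind
    interface value at each face is the limited left state."""
    n = len(u)
    # limited slope in each cell
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # left (upwind) state at face i+1/2
    uL = [u[i] + 0.5 * (1.0 - cfl) * s[i] for i in range(n)]
    # conservative flux-difference update
    return [u[i] - cfl * (uL[i] - uL[i - 1]) for i in range(n)]
```

Because the update is written in flux-difference form, the scheme is conservative: the cell sum is preserved exactly on a periodic grid.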
Begum, S; Achary, P Ganga Raju
2015-01-01
Quantitative structure-activity relationship (QSAR) models were built for the prediction of the inhibition (pIC50, i.e. the negative logarithm of the 50% effective concentration) of MAP kinase-interacting protein kinase (MNK1) by 43 potent inhibitors. The pIC50 values were modelled with five random splits, with the molecular structures represented by the simplified molecular input line entry system (SMILES). QSAR model building was performed by Monte Carlo optimisation using three methods: the classic scheme, balance of correlations, and balance of correlations with ideal slopes. The robustness of these models was checked by parameters such as rm(2), r(*)m(2), [Formula: see text] and a randomisation technique. The best QSAR model based on single optimal descriptors was applied to study in vitro structure-activity relationships of 6-(4-(2-(piperidin-1-yl) ethoxy) phenyl)-3-(pyridin-4-yl) pyrazolo [1,5-a] pyrimidine derivatives as a screening tool for the development of novel potent MNK1 inhibitors. The effects of alkyl groups, -OH, -NO2, F, Cl, Br, I, etc. on the IC50 values towards the inhibition of MNK1 were also reported.
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions, which can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
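The generic EDA loop that such algorithms build on can be sketched as follows. This minimal Gaussian version omits the niching, Cauchy sampling and adaptive local search described above, and all parameter values are illustrative:

```python
import random

def eda_minimize(f, dim, pop=60, top=15, iters=120, seed=1):
    """Minimal estimation-of-distribution algorithm: fit an axis-aligned
    Gaussian to the best individuals, then resample the rest of the
    population from it (the elite is carried over)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        xs.sort(key=f)                      # best individuals first
        elite = xs[:top]
        # per-dimension mean and standard deviation of the elite
        mu = [sum(x[d] for x in elite) / top for d in range(dim)]
        sd = [max(1e-12, (sum((x[d] - mu[d]) ** 2 for x in elite) / top) ** 0.5)
              for d in range(dim)]
        # new population: elite plus Gaussian offspring
        xs = elite + [[rng.gauss(mu[d], sd[d]) for d in range(dim)]
                      for _ in range(pop - top)]
    return min(xs, key=f)
```

On a simple unimodal objective such as the sphere function, the sampling distribution contracts around the optimum within a few dozen generations.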
Projection methods for incompressible flow problems with WENO finite difference schemes
NASA Astrophysics Data System (ADS)
de Frutos, Javier; John, Volker; Novo, Julia
2016-03-01
Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes to the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, also become obvious.
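For reference, the fifth-order WENO reconstruction underlying such schemes can be sketched with the classic Jiang-Shu weights; this is a generic illustration of the stencil, not the paper's projection method:

```python
def weno5_left(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO reconstruction of the left-biased value at the
    face x_{i+1/2} from five cell values, using the classic Jiang-Shu
    smoothness indicators and linear weights (1/10, 6/10, 3/10)."""
    # smoothness indicators of the three candidate stencils
    b0 = 13.0/12.0*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13.0/12.0*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13.0/12.0*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # third-order candidate reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # nonlinear weights: large smoothness indicator -> small weight
    a0 = 0.1 / (eps + b0)**2
    a1 = 0.6 / (eps + b1)**2
    a2 = 0.3 / (eps + b2)**2
    s = a0 + a1 + a2
    return (a0*p0 + a1*p1 + a2*p2) / s
```

Smooth data recover the fifth-order linear combination; near a discontinuity the weight of the oscillatory stencil collapses, which is what suppresses spurious oscillations.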
NASA Astrophysics Data System (ADS)
Boone, Aaron; Samuelsson, Patrick; Gollvik, Stefan; Napoly, Adrien; Jarlan, Lionel; Brun, Eric; Decharme, Bertrand
2017-02-01
Land surface models (LSMs) are pushing towards improved realism owing to an increasing number of observations at the local scale, constantly improving satellite data sets and the associated methodologies to best exploit such data, improved computing resources, and in response to the user community. As a part of the trend in LSM development, there have been ongoing efforts to improve the representation of land surface processes in the interactions between the soil-biosphere-atmosphere (ISBA) LSM within the EXternalized SURFace (SURFEX) model platform. The force-restore approach in ISBA has been replaced in recent years by multi-layer explicit physically based options for sub-surface heat transfer, soil hydrological processes, and the composite snowpack. The representation of vegetation processes in SURFEX has also become much more sophisticated in recent years, including photosynthesis, respiration and biochemical processes. It became clear that the conceptual limits of the composite soil-vegetation scheme within ISBA had been reached, and there was a need to explicitly separate the canopy vegetation from the soil surface. In response to this issue, a collaboration began in 2008 between the high-resolution limited area model (HIRLAM) consortium and Météo-France with the intention to develop an explicit representation of the vegetation in ISBA under the SURFEX platform. A new parameterization, called the ISBA multi-energy balance (MEB), has been developed in order to address these issues. ISBA-MEB consists of a fully implicit numerical coupling between a multi-layer physically based snowpack model, a variable-layer soil scheme, an explicit litter layer, a bulk vegetation scheme, and the atmosphere. It also includes a feature that permits a coupling transition of the snowpack from the canopy air to the free atmosphere. It shares many of the routines and physics parameterizations with the standard version of ISBA.
This paper is the first of two parts; in part one, the ISBA-MEB model equations, numerical schemes, and theoretical background are presented. In part two (Napoly et al., 2016), which is a separate companion paper, a local scale evaluation of the new scheme is presented along with a detailed description of the new forest litter scheme.
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with those of a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage for the one-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
26 CFR 1.167(b)-2 - Declining balance method.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Declining balance method. 1.167(b)-2 Section 1... Declining balance method. (a) Application of method. Under the declining balance method a uniform rate is... declining balance rate may be determined without resort to formula. Such rate determined under section 167(b...
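A minimal numerical sketch of the declining balance method described in the regulation above: a uniform rate is applied each year to the remaining (adjusted) basis. The salvage-value and rate-limit rules of section 167(b) are omitted, and the figures in the usage note are illustrative only:

```python
def declining_balance(basis, rate, years):
    """Yearly depreciation deductions under the declining balance
    method: each year's deduction is the uniform rate times the basis
    remaining after all prior deductions."""
    schedule = []
    remaining = basis
    for _ in range(years):
        deduction = remaining * rate
        schedule.append(deduction)
        remaining -= deduction
    return schedule
```

For example, a 10,000 basis with a 5-year life at the 200%-declining-balance rate (2/5 = 0.4) yields deductions of 4,000, 2,400, 1,440, ... in successive years.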
26 CFR 1.167(b)-2 - Declining balance method.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 2 2011-04-01 2011-04-01 false Declining balance method. 1.167(b)-2 Section 1... Declining balance method. (a) Application of method. Under the declining balance method a uniform rate is... declining balance rate may be determined without resort to formula. Such rate determined under section 167(b...
Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide
2017-04-01
Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for the time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods, and two modified time-advancing symplectic methods, with all symplectic coefficients positive, are then constructed. The spatial differential operators are approximated by nearly analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
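The simplest explicit symplectic partitioned scheme, Stoermer-Verlet, illustrates the structure-preserving time advance discussed above: energy stays bounded over long integrations instead of drifting. The MSNAD construction itself differs, so this is only a conceptual sketch:

```python
def verlet(q, p, force, dt, steps):
    """Stoermer-Verlet (kick-drift-kick), the basic explicit symplectic
    partitioned Runge-Kutta method for a separable Hamiltonian
    H = p^2/2 + V(q), with force(q) = -dV/dq."""
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)   # half kick
        q = q + dt * p                # drift
        p = p + 0.5 * dt * force(q)   # half kick
    return q, p
```

For the harmonic oscillator (force = -q, exact energy 0.5 from q=1, p=0), the computed energy oscillates within O(dt^2) of the true value for arbitrarily many steps, which is the hallmark of a symplectic integrator.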
Hypermatrix scheme for finite element systems on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Voigt, S. J.
1975-01-01
A study is made of the adaptation of the hypermatrix (block matrix) scheme for solving large systems of finite element equations to the CDC STAR-100 computer. Discussion is focused on the organization of the hypermatrix computation using Cholesky decomposition and the mode of storage of the different submatrices to take advantage of the STAR pipeline (streaming) capability. Consideration is also given to the associated data handling problems and the means of balancing the I/O and CPU times in the solution process. Numerical examples are presented showing the anticipated gain in CPU speed over the CDC 6600 to be obtained by using the proposed algorithms on the STAR computer.
Optimal feedback control of turbulent channel flow
NASA Technical Reports Server (NTRS)
Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz
1993-01-01
Feedback control equations were developed and tested for computing wall-normal control velocities to control turbulent flow in a channel, with the objective of reducing drag. The technique used is the minimization of a 'cost functional' which is constructed to represent some balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional some time shortly in the future, based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.
An Automatic Networking and Routing Algorithm for Mesh Network in PLC System
NASA Astrophysics Data System (ADS)
Liu, Xiaosheng; Liu, Hao; Liu, Jiasheng; Xu, Dianguo
2017-05-01
Power line communication (PLC) is considered to be one of the best communication technologies for the smart grid. However, the topology of the low-voltage distribution network is complex, and the power line channel is time-varying and strongly attenuating, which makes power line communication unreliable. In this paper, an automatic networking and routing algorithm is introduced which can adapt to a "blind state" topology. The results of simulation and testing show that the scheme is feasible, the routing overhead is small, and the load-balance performance is good, so that the network can be established and maintained quickly and effectively. The scheme is of great significance for improving the reliability of PLC.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
Voltage Drop Compensation Method for Active Matrix Organic Light Emitting Diode Displays
NASA Astrophysics Data System (ADS)
Choi, Sang-moo; Ryu, Do-hyung; Kim, Keum-nam; Choi, Jae-beom; Kim, Byung-hee; Berkeley, Brian
2011-03-01
In this paper, conventional voltage-drop compensation methods are reviewed and a novel design and driving scheme, the advanced power de-coupled (aPDC) driving method, is proposed to effectively compensate the IR voltage drop of active matrix organic light emitting diode (AMOLED) displays. The aPDC driving scheme can be applied to general AMOLED pixel circuits with only minor modification, or without requiring any modification of the pixel circuit. A 14-in. AMOLED panel with the aPDC driving scheme was fabricated. Long-range uniformity (LRU) of the 14-in. panel improved from 43% without the aPDC driving scheme to over 87% at the same brightness with it, and the layout complexity of the panel with the new design scheme is less than that of the panel with the conventional design.
An adaptive coupling strategy for joint inversions that use petrophysical information as constraints
NASA Astrophysics Data System (ADS)
Heincke, Björn; Jegen, Marion; Moorkamp, Max; Hobbs, Richard W.; Chen, Jin
2017-01-01
Joint inversion strategies for geophysical data have become increasingly popular as they allow for the efficient combination of complementary information from different data sets. The algorithm used for the joint inversion needs to be flexible in its description of the subsurface so as to be able to handle the diverse nature of the data. Hence, joint inversion schemes are needed that 1) adequately balance data from the different methods, 2) have stable convergence behavior, 3) consider the different resolving power of the methods used, and 4) link the parameter models in a way that is suited to a wide range of applications. Here, we combine active-source seismic P-wave tomography, gravity and magnetotelluric (MT) data in a petrophysical joint inversion that accounts for these issues. Data from the different methods are inverted separately but are linked through constraints accounting for parameter relationships. An advantage of performing the inversions separately is that no relative weighting between the data sets is required. To avoid perturbing the convergence behavior of the inversions by the coupling, the strengths of the constraints are readjusted at each iteration. The criterion we use to control the adaption of the coupling strengths is based on variations in the objective functions of the individual inversions from one iteration to the next. Adaption of the coupling strengths also makes the joint inversion scheme applicable to subsurface conditions where the assumed relationships are not valid everywhere, because the individual inversions decouple if it is not possible to reach adequately low data misfits under those assumptions. In addition, the coupling constraints depend on the relative resolutions of the methods, which leads to an improved convergence behavior of the joint inversion.
Another benefit of the proposed scheme is that structural information can easily be incorporated in the petrophysical joint inversion (no additional terms are added to the objective functions) by using mutually controlled structural weights for the smoothing constraints. We test our scheme using data generated from a synthetic 2-D sub-basalt model. We observe that the adaption of the coupling strengths makes the convergence of the inversions very robust (data misfits of all methods are close to the target misfits) and that the final results are always close to the true models, independent of the parameter choices. Finally, the scheme is applied to real data sets from the Faroe-Shetland Basin to image a basaltic sequence and underlying structures. The presence of a borehole and a 3-D reflection seismic survey in this region allows direct comparison and, hence, evaluation of the quality of the joint inversion results. The results from the joint inversion are more consistent with results from other studies than those from the corresponding individual inversions, and the shape of the basaltic sequence is better resolved. However, due to the limited resolution of the individual methods used, it was not possible to resolve structures underneath the basalt in detail, indicating that additional geophysical information (e.g. CSEM, reflection onsets) needs to be included.
NASA Astrophysics Data System (ADS)
Schroeder, Sascha Thorsten; Costa, Ana; Obé, Elisabeth
In recent years, fuel cell based micro-combined heat and power (mCHP) has received increasing attention due to its potential contribution to European energy policy goals, i.e., sustainability, competitiveness and security of supply. Besides technical advances, the regulatory framework and ownership structures are of crucial importance in order to achieve greater diffusion of the technology in residential applications. This paper analyses the interplay of policy and ownership structures for the future deployment of mCHP, considering three country cases: Denmark, France and Portugal. Firstly, the implications of different kinds of support schemes for investment risk and the diffusion of a technology are explained conceptually. Secondly, ownership arrangements are addressed. Then, a cross-country comparison of present support schemes for mCHP and competing technologies discusses the national implementation of European legislation in Denmark, France and Portugal. Finally, the resulting implications of ownership arrangements for the choice of support scheme are explained. From a conceptual point of view, investment support, feed-in tariffs and price premiums are the most appropriate schemes for fuel cell mCHP; this can be used for improved analysis of operational strategies. The interaction of this plethora of elements necessitates careful balancing from both a private- and a socio-economic point of view.
Mediator- and co-catalyst-free direct Z-scheme composites of Bi2WO6-Cu3P for solar-water splitting.
Rauf, Ali; Ma, Ming; Kim, Sungsoon; Sher Shah, Md Selim Arif; Chung, Chan-Hwa; Park, Jong Hyeok; Yoo, Pil J
2018-02-08
Exploring new single, active photocatalysts for solar-water splitting is highly desirable to expedite current research on solar-chemical energy conversion. In particular, Z-scheme-based composites (ZBCs) have attracted extensive attention due to their unique charge transfer pathway, broader redox range, and stronger redox power compared to conventional heterostructures. In the present report, we have for the first time explored Cu3P, a new, single photocatalyst for solar-water splitting applications. Moreover, a novel ZBC system composed of Bi2WO6-Cu3P was designed employing a simple method of ball-milling complexation. The synthesized materials were examined and further investigated through various microscopic, spectroscopic, and surface area characterization methods, which have confirmed the successful hybridization between Bi2WO6 and Cu3P and the formation of a ZBC system that shows the ideal position of energy levels for solar-water splitting. Notably, the ZBC composed of Bi2WO6-Cu3P is a mediator- and co-catalyst-free photocatalyst system. The improved photocatalytic efficiency obtained with this system compared to other ZBC systems assisted by mediators and co-catalysts establishes the critical importance of interfacial solid-solid contact and the well-balanced position of energy levels for solar-water splitting. The promising solar-water splitting under optimum composition conditions highlighted the relationship between effective charge separation and composition.
Pitching effect on transonic wing stall of a blended flying wing with low aspect ratio
NASA Astrophysics Data System (ADS)
Tao, Yang; Zhao, Zhongliang; Wu, Junqiang; Fan, Zhaolin; Zhang, Yi
2018-05-01
Numerical simulation of the pitching effect on transonic wing stall of a blended flying wing with low aspect ratio was performed using improved delayed detached eddy simulation (IDDES). To capture the discontinuity caused by the shock wave, a second-order upwind scheme with Roe's flux-difference splitting is introduced into the inviscid flux. The artificial dissipation is also turned off in the region where the upwind scheme is applied. To reveal the pitching effect, the implicit approximate-factorization method with sub-iterations and second-order temporal accuracy is employed for the time integration of the unsteady Navier-Stokes equations, which are solved by a finite volume method in Arbitrary Lagrange-Euler (ALE) form. The leading edge vortex (LEV) development and LEV circulation of pitch-up wings at a free-stream Mach number M = 0.9 and a Reynolds number Re = 9.6 × 10^6 are studied. The Q-criterion is used to identify the LEV structure within the shear layer. The results show that a shock wave/vortex interaction is responsible for the vortex breakdown which eventually causes the wing stall. The balance of the vortex strength and axial flow, and the shock strength, is examined to explain the sensitivity of the breakdown location. Pitching motion has a great influence on the shock wave and on shock wave/vortex interactions, which can significantly affect the vortex breakdown behavior and wing stall onset of a low aspect ratio blended flying wing.
NASA Astrophysics Data System (ADS)
Lundberg, A.; Gustafsson, D.
2009-04-01
Modeling of forest snow processes is complicated, and especially problematic seems to be the separation of the precipitation phase in climates where a large part of the precipitation falls at temperatures near zero degrees Celsius. When the precipitation is classified as snow, the tree crowns can carry an order of magnitude more canopy storage than when the precipitation is classified as rain, and snow in the trees also alters the albedo of the forest while rain does not. Many different schemes for precipitation phase separation are used by various snow models. Some models use just one air temperature threshold (TR/S) below which all precipitation is assumed to be snow and above which all precipitation is classified as rain. A more common approach for forest snow models is to use two temperature thresholds. The snow fraction (SF) is then set to one below the snow threshold (TS) and to zero above the rain threshold (TR), and SF is assumed to decrease linearly between these two thresholds. More sophisticated schemes also exist, but there seems to be a lack of agreement on how the precipitation phase separation should be performed. The aim of this study is to use a hydrological model including canopy snow processes to illustrate the sensitivity of a) the simulated maximum snow pack storage, b) the interception evaporation loss and c) snow melt runoff to different formulations of the precipitation phase separation; in other words, to investigate whether the choice of precipitation phase separation has an impact on the simulated wintertime water balance. Simulations are made for sites in different climates, for both open fields and forest sites, in different regions of Sweden from north to south. In general, precipitation phase separation methods that classified snowfall at higher temperatures resulted in a larger proportion of the precipitation being lost by interception evaporation as a result of the increased interception capacity.
However, the maximum snow accumulation also increased in some cases due to the overall increased snowfall, depending on canopy density and the precipitation and temperature regimes. The results show that the choice of precipitation phase separation method can have a significant impact on the simulated wintertime water balance, especially in forested regions.
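The two-threshold phase separation described above reduces to a short function. The threshold values used here are illustrative defaults, not the calibrated values of any particular model:

```python
def snow_fraction(t_air, t_snow=-1.0, t_rain=1.0):
    """Two-threshold precipitation phase separation: all snow at or
    below t_snow, all rain at or above t_rain, and a linear decrease
    of the snow fraction in between (temperatures in degrees C)."""
    if t_air <= t_snow:
        return 1.0
    if t_air >= t_rain:
        return 0.0
    return (t_rain - t_air) / (t_rain - t_snow)
```

The single-threshold scheme mentioned in the abstract is the special case t_snow = t_rain (up to the tie-breaking convention at the threshold itself).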
NASA Astrophysics Data System (ADS)
Kumar, Vivek; Raghurama Rao, S. V.
2008-04-01
Non-standard finite difference methods (NSFDM) introduced by Mickens [ Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced a NSFDM in conservative form. This method captures the shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal of Numerical Analysis 35 (6) (1998) 2250-2271] in which the accurate NSFDM is used as the basic scheme and localized relaxation NSFDM is used as the supporting scheme which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications in Pure and Applied Mathematics 48 (1995) 235-276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. 
The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
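The importance of conservation form emphasized above can be illustrated with a generic one-step conservative scheme for the inviscid Burgers equation, here with a Lax-Friedrichs numerical flux (this is a textbook illustration, not Mickens' non-standard scheme):

```python
def lax_friedrichs_burgers(u, dt_dx):
    """One conservative Lax-Friedrichs step for u_t + (u^2/2)_x = 0 on
    a periodic grid, with dt_dx = dt/dx. Writing the update as a
    difference of numerical fluxes is what places shocks at the
    correct locations."""
    n = len(u)
    f = [0.5 * v * v for v in u]          # physical flux u^2/2

    def flux(i):
        # Lax-Friedrichs numerical flux at face i+1/2
        j = (i + 1) % n
        return 0.5 * (f[i] + f[j]) - 0.5 / dt_dx * (u[j] - u[i])

    return [u[i] - dt_dx * (flux(i) - flux(i - 1)) for i in range(n)]
```

Because the update telescopes, the total of u over a periodic domain is conserved exactly, the discrete counterpart of the conservation property discussed in the abstract.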
Time integration algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Slack, David C.; Whitaker, D. L.; Walters, Robert W.
1994-01-01
Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher-order accurate solutions using several mesh sizes; higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
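For reference, a four-stage Runge-Kutta time integration in its textbook scalar form (CFD codes typically use low-storage variants applied to the residual of the spatial discretization; this sketch shows only the classical RK4 stage structure):

```python
def rk4_step(u, rhs, dt):
    """One classic four-stage Runge-Kutta step for du/dt = rhs(u)."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

The method is fourth-order accurate: for the linear test problem du/dt = -u it reproduces the exponential decay to within O(dt^4) global error.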
Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.
2011-01-01
We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626
A dynamic replication management strategy in distributed GIS
NASA Astrophysics Data System (ADS)
Pan, Shaoming; Xiong, Lian; Xu, Zhengquan; Chong, Yanwen; Meng, Qingxiang
2018-03-01
Replication is one of the effective solutions for meeting service response-time requirements: data are prepared in advance to avoid the delay of reading them from disk. This paper presents a new method for creating copies that considers the selection of the replica set, the number of copies for each replica, and the placement strategy of all copies. First, the popularities of all data are computed, considering both the historical access records and the timeliness of those records. Then, the replica set is selected based on recent popularities. Also, an enhanced Q-value scheme is proposed to assign the number of copies for each replica. Finally, a copy placement strategy is designed to meet the requirement of load balance. In addition, we present several experiments that compare the proposed method with other replication management strategies. The results show that the proposed model performs better than the other algorithms in all respects. Moreover, experiments with different parameters also demonstrate the effectiveness and adaptability of the proposed algorithm.
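The abstract does not give the paper's exact popularity formula; a minimal sketch of a recency-weighted popularity score, using exponential decay of historical access records (a common way to encode "timeliness"), might look like this. The half-life value and timestamps below are illustrative assumptions.

```python
import math, time

def popularity(access_times, now=None, half_life=86400.0):
    # Recency-weighted access count: each access record contributes
    # 0.5 ** (age / half_life), so stale accesses fade out over time.
    now = time.time() if now is None else now
    lam = math.log(2) / half_life
    return sum(math.exp(-lam * (now - t)) for t in access_times if t <= now)

# usage: two data items; frequent-but-stale loses to recent-but-few
now = 1_000_000.0
hot = popularity([now - 3600, now - 7200], now=now)    # two recent hits
cold = popularity([now - 10 * 86400] * 5, now=now)     # five stale hits
replicate_first = hot > cold                           # hot item wins
```

Items ranked by such a score form the candidate replica set; the copy count per replica would then be assigned by a separate rule such as the Q-value scheme mentioned in the abstract.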
A stabilized element-based finite volume method for poroelastic problems
NASA Astrophysics Data System (ADS)
Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo
2018-07-01
The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used on three-dimensional unstructured grids composed of elements of different types. One of the difficulties in solving these equations is the numerical pressure instability that can arise under undrained conditions. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that also accounts for the pressure-gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as realistic three-dimensional cases, are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.
Vectorization of a particle code used in the simulation of rarefied hypersonic flow
NASA Technical Reports Server (NTRS)
Baganoff, D.
1990-01-01
A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized to the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
Dynamic Stability and Gravitational Balancing of Multiple Extended Bodies
NASA Technical Reports Server (NTRS)
Quadrelli, Marco
2008-01-01
Feasibility of a non-invasive compensation scheme was analyzed for precise positioning of a massive extended body in free fall using gravitational forces exerted by surrounding source masses in close proximity. The N-body problem of classical mechanics is a paradigm used to gain insight into the physics of the equivalent N-body problem subject to control forces. The analysis addressed how a number of control masses move around the proof mass so that the proof mass position can be accurately and remotely compensated when exogenous disturbances are acting on it, while its sensitivity to gravitational waves remains unaffected. Past methods to correct the dynamics of the proof mass have considered active electrostatic or capacitive methods, but the possibility of stray capacitances on the surfaces of the proof mass has prompted the investigation of other alternatives, such as the method presented in this paper. While more rigorous analyses of the problem should be carried out, the data show that, by means of a combined feedback and feed-forward control approach, the control masses succeeded in driving the proof mass along the specified trajectory, which implies that the proof mass can, in principle, be balanced via gravitational forces only while external perturbations are acting on it. This concept involves the dynamic stability of a group of massive objects interacting gravitationally under active control, and can apply to drag-free control of spacecraft during missions, to successor space-borne gravitational-wave sensors, or to any application requiring flying objects to be precisely controlled in position and attitude relative to another body via gravitational interactions only.
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
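The schemes being evaluated all share the bulk-aerodynamic form of the surface flux equations. As a point of reference, a minimal sketch of the sensible heat flux in that form is given below; the transfer coefficient and state values are illustrative, not taken from any specific WRF or GCM scheme (in real schemes C_H varies with stability and roughness length).

```python
# Bulk-aerodynamic sensible heat flux:
#   H = rho * cp * C_H * U * (T_s - T_a)
# rho: air density (kg/m^3), cp: heat capacity of air (J/kg/K),
# C_H: dimensionless transfer coefficient, U: wind speed (m/s),
# T_s, T_a: surface and air temperature (K).
def sensible_heat_flux(T_s, T_a, U, C_H=1.2e-3, rho=1.2, cp=1004.0):
    """Returns H in W/m^2, positive upward when the surface is warmer."""
    return rho * cp * C_H * U * (T_s - T_a)

# daytime unstable case: surface 5 K warmer than the air, 5 m/s wind
H = sensible_heat_flux(T_s=305.0, T_a=300.0, U=5.0)  # ~36 W/m^2
```

Stability dependence enters real schemes through C_H (e.g. Monin-Obukhov similarity), which is precisely where the evaluated parameterizations differ.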
ERIC Educational Resources Information Center
Gasskov, Vladimir
For the countries of Eastern Europe and the former Soviet Union, financing schemes of vocational education and training (VET) of other industrialized countries are possible prototypes. These "Partner Countries of the European Training Foundation (ETF)" should focus on the balance of responsibilities between central and local bodies and…
Random numbers from vacuum fluctuations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543
2016-07-25
We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
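The abstract does not specify the LFSR polynomial used in the hardware extractor, so the sketch below is a toy stand-in: a maximal-length Fibonacci LFSR over GF(2) whose keystream is XORed with the raw (possibly biased) digitized bits. The taps and seed are illustrative assumptions; a real extractor's parameters and proofs are hardware-specific.

```python
def lfsr_stream(seed, taps, n):
    # Fibonacci LFSR over GF(2); 'taps' are 1-based register positions.
    # taps=(7, 6) corresponds to x^7 + x^6 + 1, a primitive polynomial,
    # so the output is a maximal-length sequence of period 127.
    state, width = seed, max(taps)
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        yield state & 1
        state = (state >> 1) | (bit << (width - 1))

def extract(raw_bits, seed=0b1011011, taps=(7, 6)):
    # XOR the raw bits with the LFSR keystream -- a simplified stand-in
    # for LFSR-based whitening of a digitized homodyne signal.
    ks = lfsr_stream(seed, taps, len(raw_bits))
    return [r ^ k for r, k in zip(raw_bits, ks)]

biased = [1] * 32            # maximally biased input stream
whitened = extract(biased)   # inherits the LFSR's balanced structure
```

Note that XOR-whitening alone does not add entropy; in the actual device the entropy comes from the vacuum-fluctuation measurement, and the LFSR stage removes bias and short-range correlations at line rate.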
Information for forest process models: a review of NRS-FIA vegetation measurements
Charles D. Canham; William H. McWilliams
2012-01-01
The Forest Inventory and Analysis Program of the Northern Research Station (NRS-FIA) has re-designed Phase 3 measurements and increased the sampling intensity following a study to balance costs, utility, and sample size. The sampling scheme consists of estimating percent canopy cover for six vegetation growth habits on 24-foot-radius subplots, in four height classes and as an...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, Matthew R
2014-01-01
The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
Discrete conservation properties for shallow water flows using mixed mimetic spectral elements
NASA Astrophysics Data System (ADS)
Lee, D.; Palha, A.; Gerritsma, M.
2018-03-01
A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in one dimension. These are used to construct tensor product solution spaces which satisfy the generalized Stokes theorem, as well as the annihilation of the gradient operator by the curl and the curl by the divergence. This allows for the exact conservation of first-order moments (mass, vorticity), as well as higher moments (energy, potential enstrophy), subject to the truncation error of the time stepping scheme. The continuity equation is solved in the strong form, such that mass conservation holds pointwise, while the momentum equation is solved in the weak form such that vorticity is globally conserved. While mass, vorticity and energy conservation hold for any quadrature rule, potential enstrophy conservation is dependent on exact spatial integration. The method possesses a weak form statement of geostrophic balance due to the compatible nature of the solution spaces and arbitrarily high order spatial error convergence.
Mori, Masanobu; Nakano, Koji; Sasaki, Masaya; Shinozaki, Haruka; Suzuki, Shiho; Okawara, Chitose; Miró, Manuel; Itabashi, Hideyuki
2016-02-01
A dynamic flow-through microcolumn extraction system based on extractant re-circulation is herein proposed as a novel analytical approach for simplification of bioaccessibility tests of trace elements in sediments. On-line metal leaching is undertaken in the format of all injection (AI) analysis, which is a sequel of flow injection analysis, but involving extraction under steady-state conditions. The minimum circulation times and flow rates required to determine the maximum bioaccessible pools of target metals (viz., Cu, Zn, Cd, and Pb) from lake and river sediment samples were estimated using Tessier's sequential extraction scheme and an acid single extraction test. The on-line AI method was successfully validated by mass balance studies of CRM and real sediment samples. Tessier's test in on-line AI format was shown to require only one third of the extraction time (6 h versus more than 17 h for the conventional method), with better analytical precision (<9.2% against >15% for the conventional method) and a significant decrease in blank readouts as compared with the manual batch counterpart.
A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems
Ma, Xingpo; Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda
2018-01-01
In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems. PMID:29439442
Mass-corrections for the conservative coupling of flow and transport on collocated meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich
2016-01-15
Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite-elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions to obtain local or even strong mass-conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes or higher order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.
NASA Astrophysics Data System (ADS)
Won, Yong-Yuk; Jung, Sang-Min; Han, Sang-Kook
2014-08-01
A new technique, which reduces optical beat interference (OBI) noise in orthogonal frequency division multiple access-passive optical network (OFDMA-PON) links, is proposed. A self-homodyne balanced detection, which uses a single laser for the optical line terminal (OLT) as well as for the optical network unit (ONU), reduces OBI noise and also improves the signal-to-noise ratio (SNR) of the discrete multi-tone (DMT) signal. The proposed scheme is verified by transmitting a quadrature phase shift keying (QPSK)-modulated DMT signal over a 20-km single mode fiber. The optical signal-to-noise ratio (OSNR) required for a BER of 10^-5 is reduced by 2 dB with balanced detection compared with a single channel, due to the cancellation of OBI noise in conjunction with the local laser.
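The common-mode rejection at the heart of balanced detection can be sketched numerically: the two photodiode currents are |E_s + E_lo|²/2 and |E_s - E_lo|²/2, so subtracting them cancels the direct-detection terms (which carry OBI-like intensity noise) and keeps only the signal-LO beat. The field amplitudes and noise levels below are illustrative, not values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
E_lo = 1.0                                        # local oscillator field
E_s = 0.05 * np.exp(2j * np.pi * rng.random(n))   # weak signal field
obi = 0.3 * rng.random(n)                         # common-mode intensity noise

# two outputs of a 50:50 coupler, each hit by the same common-mode noise
i_plus = 0.5 * np.abs(E_s + E_lo) ** 2 + obi
i_minus = 0.5 * np.abs(E_s - E_lo) ** 2 + obi

# balanced output: direct-detection and common-mode terms cancel,
# leaving only the signal-LO beat 2*Re(E_s * conj(E_lo))
balanced = i_plus - i_minus
```

The cancellation is exact in this idealized model; in practice the achievable OBI suppression is limited by splitter imbalance and detector mismatch.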
Diagnostic analysis of two-dimensional monthly average ozone balance with Chapman chemistry
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Jackman, Charles H.; Kaye, Jack A.
1986-01-01
Chapman chemistry has been used in a two-dimensional model to simulate ozone balance phenomenology. The similarity between regions of ozone production and loss calculated using Chapman chemistry and those computed using LIMS and SAMS data with a photochemical equilibrium model indicate that such simplified chemistry is useful in studying gross features in stratospheric ozone balance. Net ozone production or loss rates are brought about by departures from the photochemical equilibrium (PCE) condition. If transport drives ozone above its PCE condition, then photochemical loss dominates production. If transport drives ozone below its PCE condition, then photochemical production dominates loss. Gross features of ozone loss/production (L/P) inferred for the real atmosphere from data are also simulated using only eddy diffusion. This indicates that one must be careful in assigning a transport scheme for a two-dimensional model that mimics only behavior of the observed ozone L/P.
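The production/loss diagnostic described above can be illustrated with a toy balance model: take a constant production P and a quadratic loss L = k·X² (stand-ins for the actual Chapman rates, which these are not), so the photochemical-equilibrium (PCE) state is X_eq = sqrt(P/k). Transport that pushes X above X_eq makes the net chemistry P - L negative (loss dominates), and vice versa.

```python
import math

P, k = 4.0, 1.0          # illustrative production and loss-rate constants
X_eq = math.sqrt(P / k)  # photochemical-equilibrium value (here 2.0)

def net_chemistry(X):
    # net chemical tendency: production minus quadratic loss
    return P - k * X ** 2

above = net_chemistry(X_eq + 0.5)  # transport raised X above PCE -> net loss
below = net_chemistry(X_eq - 0.5)  # transport lowered X below PCE -> net production
```

This sign structure is exactly the diagnostic used in the abstract: regions where transport supplies ozone sit below the P = L curve on the loss side, and depleted regions sit on the production side.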
A balanced water layer concept for subglacial hydrology in large scale ice sheet models
NASA Astrophysics Data System (ADS)
Goeller, S.; Thoma, M.; Grosfeld, K.; Miller, H.
2012-12-01
There is currently no doubt about the existence of a widespread hydrological network under the Antarctic ice sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain inspired by the Gamburtsev Mountains, Antarctica. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the coupling between water flux and basal sliding, and reveal the predominance of ice loss through the resulting ice streams over the stabilizing influence of less hydrologically active areas. We point out that established balance flux schemes quantify these effects only partially, as they lack the ability to store subglacial water.
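A balance-flux computation of the kind contrasted above can be sketched in one dimension: meltwater is routed down the hydraulic-potential gradient, accumulating cell by cell, and cells at a local potential minimum trap the water (incipient subglacial lakes, the storage that pure balance-flux schemes lack). The potentials and melt rates below are synthetic.

```python
import numpy as np

def balance_flux(potential, melt):
    # Route basal melt down the hydraulic potential, visiting cells from
    # highest to lowest potential so each cell's upstream flux is complete
    # before it is passed on.
    n = len(potential)
    flux = melt.astype(float)
    for i in sorted(range(n), key=lambda j: -potential[j]):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        lo = min(nbrs, key=lambda j: potential[j])
        if potential[lo] < potential[i]:   # downhill neighbour exists
            flux[lo] += flux[i]
        # else: local minimum -> water is stored here (incipient lake)
    return flux

phi = np.array([5.0, 4.0, 3.0, 2.0, 3.5])  # hydraulic potential
melt = np.ones(5)                           # uniform melt supply
q = balance_flux(phi, melt)                 # throughflow per cell
```

In this example all five units of melt drain into the potential minimum at cell 3, illustrating how lakes emerge where the routed flux has nowhere lower to go.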
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
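The separation of convective and pressure terms can be written down compactly. The sketch below follows the standard AUSM formulas (polynomial Mach-number and pressure splittings in the subsonic range, pure upwinding in the supersonic range) for a single 1D Euler interface; it is a minimal illustration, not the paper's full scheme.

```python
import numpy as np

GAMMA = 1.4

# subsonic (|M| <= 1) polynomial splittings, supersonic upwinding otherwise
def m_plus(M):  return 0.25 * (M + 1) ** 2 if abs(M) <= 1 else 0.5 * (M + abs(M))
def m_minus(M): return -0.25 * (M - 1) ** 2 if abs(M) <= 1 else 0.5 * (M - abs(M))
def p_plus(M):  return 0.25 * (M + 1) ** 2 * (2 - M) if abs(M) <= 1 else 0.5 * (1 + np.sign(M))
def p_minus(M): return 0.25 * (M - 1) ** 2 * (2 + M) if abs(M) <= 1 else 0.5 * (1 - np.sign(M))

def ausm_flux(rho_L, u_L, p_L, rho_R, u_R, p_R):
    a_L = np.sqrt(GAMMA * p_L / rho_L); a_R = np.sqrt(GAMMA * p_R / rho_R)
    M_L, M_R = u_L / a_L, u_R / a_R
    # interface Mach number and pressure: the AUSM separation of the
    # convective and pressure contributions
    M_half = m_plus(M_L) + m_minus(M_R)
    p_half = p_plus(M_L) * p_L + p_minus(M_R) * p_R
    # convective quantities rho*a*(1, u, H), upwinded by the sign of M_half
    H_L = a_L ** 2 / (GAMMA - 1) + 0.5 * u_L ** 2
    H_R = a_R ** 2 / (GAMMA - 1) + 0.5 * u_R ** 2
    phi_L = rho_L * a_L * np.array([1.0, u_L, H_L])
    phi_R = rho_R * a_R * np.array([1.0, u_R, H_R])
    F = M_half * (phi_L if M_half >= 0 else phi_R)
    F[1] += p_half   # pressure acts only on the momentum component
    return F

# sanity check: a uniform resting state produces the exact flux (0, p, 0)
F0 = ausm_flux(1.0, 0.0, 1.0, 1.0, 0.0, 1.0)
```

For a uniform supersonic state the splittings reduce to pure upwinding, so the scheme also reproduces the exact flux there, which is one reason for its robustness on the model problems mentioned above.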
Numerical experiments with a symmetric high-resolution shock-capturing scheme
NASA Technical Reports Server (NTRS)
Yee, H. C.
1986-01-01
Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method. It was found that the latter is just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.
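The TVD mechanism these schemes share can be illustrated with a minimal example. The sketch below is not Yee's symmetric TVD scheme itself; it is a minmod-limited Lax-Wendroff scheme for linear advection, which shows the essential ingredients: a second-order flux whose anti-diffusive part is limited so that no new extrema are created.

```python
import numpy as np

def minmod(a, b):
    # classic TVD limiter: zero at extrema, otherwise the smaller slope
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c, dt, dx):
    # limited Lax-Wendroff for u_t + c u_x = 0 (c > 0), periodic grid:
    # upwind flux plus a limited second-order (anti-diffusive) correction
    nu = c * dt / dx
    du = np.roll(u, -1) - u                 # forward differences at i+1/2
    du_lim = minmod(du, np.roll(du, 1))     # limit against the i-1/2 slope
    flux = c * u + 0.5 * c * (1 - nu) * du_lim   # flux at i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0, 1, 100, endpoint=False)
u = np.where((x >= 0.3) & (x < 0.6), 1.0, 0.0)   # square wave
for _ in range(100):
    u = tvd_step(u, 1.0, 0.005, 0.01)             # CFL = 0.5
```

Because the limited scheme satisfies Harten's TVD conditions at this CFL number, the advected square wave stays within its initial bounds, with no Lax-Wendroff-style over- and undershoots.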
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce a HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores.
Results of three empirical experiments are presented and discussed: (1) a load-balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces wall-clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improving MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
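The abstract does not spell out the HBase row-key layout, but the general idea behind such table schemes is to make related rows lexicographically adjacent so that a MapReduce scan touches one contiguous key range. A hypothetical sketch (the field names and widths are assumptions, not the paper's actual design):

```python
def row_key(project, subject, session, scan):
    # Hierarchical, fixed-width key: zero-padding makes lexicographic
    # order agree with numeric order, so all rows of one project/subject
    # sort together and a cohort-level scan hits a contiguous range.
    return f"{project}-{subject:06d}-{session:03d}-{scan}"

key = row_key("VUIIS", 12, 3, "T1w")   # "VUIIS-000012-003-T1w"
```

Without the zero-padding, subject 10 would sort before subject 2 and cohort scans would fragment, which is the kind of pitfall a deliberate table scheme avoids.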
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet validated when considering variety of multi-level analysis in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by MapReduce paradigm. We introduce a HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. 
Three empirical experiments results are presented and discussed: (1) load balancer wall-time improvement of 1.5-fold compared with a framework with built-in data allocation strategy, (2) a summary statistic model is empirically verified on grid framework and is compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces 8-fold of wall clock time and 14-fold of resource time, and (3) the proposed HBase table scheme improves MapReduce computation with 7 fold reduction of wall time compare with a naïve scheme when datasets are relative small. The source code and interfaces have been made publicly available.
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design
Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-01-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet validated when considering variety of multi-level analysis in medical imaging. Our target design criteria are (1) improving the framework’s performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by MapReduce paradigm. We introduce a HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. 
Three empirical experiment results are presented and discussed: (1) the load balancer achieves a 1.5-fold wall-time improvement compared with a framework with a built-in data allocation strategy; (2) the summary statistic model is empirically verified on the grid framework and, compared with a cluster deployed with a standard Sun Grid Engine (SGE), reduces wall-clock time 8-fold and resource time 14-fold; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668
NASA Astrophysics Data System (ADS)
Xu, Feinan; Wang, Weizhen; Wang, Jiemin; Xu, Ziwei; Qi, Yuan; Wu, Yueru
2017-08-01
The determination of area-averaged evapotranspiration (ET) at the satellite pixel/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of remote-sensing-based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and a high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation for the data of the flux matrix, including 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs), were carefully performed. Secondly, the representativeness of each EC site was quantitatively evaluated; footprint analysis was also performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple linear regression. Then, the area-averaged sensible heat fluxes obtained from the EC flux matrix were validated against the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the formerly used and rather simple approaches, such as the arithmetic average and area-weighted methods, the present scheme not only rests on a much better database, but also has a solid grounding in physics and mathematics for the integration of area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models.
Furthermore, this work will be extended to the water balance study of the whole Heihe River basin.
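The regression-based aggregation step can be sketched as follows. This is a simplified illustration with invented numbers (the cover types, fractions and fluxes are hypothetical), not the HiWATER processing chain itself: each EC site observes a footprint-weighted mixture of land-cover types, a least-squares fit recovers one representative flux per cover type, and the area average follows from the area-wide cover fractions.

```python
import numpy as np

def fit_cover_fluxes(cover_fractions, site_fluxes):
    """Least-squares fit of per-cover-type fluxes from site observations.

    cover_fractions : (n_sites, n_types) footprint-weighted fractions (rows sum to 1)
    site_fluxes     : (n_sites,) observed fluxes at the EC sites
    """
    coeffs, *_ = np.linalg.lstsq(cover_fractions, site_fluxes, rcond=None)
    return coeffs

def area_average_flux(coeffs, area_fractions):
    """Aggregate to the area mean using area-wide cover fractions."""
    return float(np.dot(coeffs, area_fractions))

# Three hypothetical cover types (e.g. maize, vegetable, village), five sites.
F = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7],
              [0.5, 0.4, 0.1],
              [0.3, 0.3, 0.4]])
true = np.array([300.0, 200.0, 100.0])   # per-cover-type fluxes (W m^-2), invented
obs = F @ true                           # noise-free synthetic site observations
c = fit_cover_fluxes(F, obs)
et = area_average_flux(c, np.array([0.6, 0.3, 0.1]))
```

With noise-free synthetic data the fit recovers the per-type fluxes exactly; with real EC data the residuals would carry the measurement uncertainty quantified in the quality-control step.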
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid, and play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing the relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed by the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
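The MUSCL-type second-order reconstruction mentioned above can be sketched generically (this is the textbook minmod-limited reconstruction, not the authors' KFVS code):

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_interface_states(u):
    """Left/right states at the interfaces from cell averages u.

    Each cell gets a limited linear profile; uL[k]/uR[k] are the states on
    the two sides of interface k+1/2.
    """
    du = np.diff(u)
    slope = np.zeros_like(u)
    slope[1:-1] = minmod(du[:-1], du[1:])   # limited slope per interior cell
    uL = u[:-1] + 0.5 * slope[:-1]          # state left of interface
    uR = u[1:] - 0.5 * slope[1:]            # state right of interface
    return uL, uR
```

On smooth monotone data the limited slopes reproduce the exact linear profile; near extrema the limiter returns zero slope, which is what prevents spurious oscillations.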
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at the nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
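The piecewise-cubic, two-point interpolation underlying CIP-type one-step methods can be illustrated with the standard Hermite form, which matches the function value and derivative at both end nodes of a cell (a generic sketch, not the full CIP advection solver):

```python
def hermite_cubic(f0, f1, d0, d1, h, x):
    """Cubic on [0, h] matching f and f' at both ends; evaluate at x in [0, h].

    f0, f1 : values at the left/right nodes
    d0, d1 : derivatives at the left/right nodes
    """
    t = x / h
    # Standard Hermite basis polynomials.
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*f0 + h10*h*d0 + h01*f1 + h11*h*d1
```

Because the interpolant uses only the two end nodes of a cell, its accuracy is uniform across the cell, which is the property the abstract credits to one-step methods; for example, it reproduces any cubic polynomial exactly.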
Density measurement in air with saturable absorbing seed gas
NASA Technical Reports Server (NTRS)
Baganoff, D.
1982-01-01
Approaches which have the potential to make density measurements in a compressible flow, where one or more laser beams are used as probes, were investigated. Saturation in sulfur hexafluoride and iodine, and a crossed-beam technique in which one beam acts as a saturating beam while the other, at low intensity, acts as a probe beam, are considered. It is shown that a balance between an increase in fluorescence intensity with increasing pressure from line broadening and the normal decrease in intensity with increasing pressure from quenching can be used to develop a linear relation between fluorescence intensity and number density, leading to a new density measurement scheme. The method is used to obtain a density image of the cross section of an iodine-seeded underexpanded supersonic jet of nitrogen, by illuminating the cross section with a sheet of laser light.
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient-based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting Genetic Algorithm-II (NSGA-II), after the parameter uncertainties of the hydrological model had been quantified into the probability distribution of runoff as the input of the CCP model, and the chance constraints had been converted to their corresponding deterministic versions. The proposed model was applied to identify the Pareto-optimal water allocation schemes in the Lake Dianchi watershed, China. The Pareto front reflects the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e. q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought intensity scenario corresponds to less available water; both lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework can help obtain Pareto-optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
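As a hedged illustration of the equity objective, the standard (unweighted) Gini coefficient over per-user water allocations can be computed as follows; the paper's exact definition may differ, for instance by weighting users by demand:

```python
import numpy as np

def gini(x):
    """Gini coefficient of allocations x: 0 = perfectly equal, near 1 = unequal."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    # Standard formula over sorted values:
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1) / n
```

For example, four users receiving equal shares give G = 0, while one user receiving everything among four gives G = 0.75, which is the kind of inequity the αG objective penalizes.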
Wu, Shaobo; Chou, Wusheng; Niu, Jianwei; Guizani, Mohsen
2018-03-18
Wireless sensor networks (WSNs) increasingly involve mobile elements as they are widely deployed in industry. Exploiting the mobility present in WSNs for data collection can effectively improve network performance. However, when the sink (i.e., data collector) path is fixed and the movement is uncontrollable, existing schemes fail to guarantee delay requirements while achieving high energy efficiency. This paper proposes a delay-aware energy-efficient routing algorithm for WSNs with a path-fixed mobile sink, named DERM, which can strike a desirable balance between delivery latency and energy conservation. We characterize the objective of DERM as realizing energy-optimal anycast to time-varying destination regions, and introduce a location-based forwarding technique tailored to this problem. To reduce the control overhead, a lightweight sink location calibration method is devised, which cooperates with a rough estimate based on the mobility pattern to determine the sink location. We also design a fault-tolerant mechanism called track routing to tackle location errors and ensure reliable and on-time data delivery. We comprehensively evaluate DERM by comparing it with two canonical routing schemes and a baseline solution presented in this work. Extensive evaluation results demonstrate that DERM can provide considerable energy savings while meeting the delay constraint and maintaining a high delivery ratio.
Dynamic coupling of subsurface and seepage flows solved within a regularized partition formulation
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Erhel, J.
2017-11-01
Hillslope response to precipitation is characterized by sharp transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Locally, the transition between these two regimes is triggered by soil saturation. Here we develop an integrative approach to simultaneously solve the subsurface flow, locate the potential fully saturated areas and deduce the generated saturation-excess overland flow. This approach combines the different dynamics and transitions in a single partition formulation using discontinuous functions. We propose to regularize the system of partial differential equations and to use classic spatial and temporal discretization schemes. We illustrate our methodology on the 1D hillslope-storage Boussinesq equations (Troch et al., 2003). We first validate the numerical scheme on previous numerical experiments without saturation-excess overland flow. Then we apply our model to a test case with dynamic transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Our results show that the discretization respects mass balance both locally and globally, and converges when the mesh or time step is refined. Moreover, the regularization parameter can be taken small enough to ensure accuracy without suffering from numerical artefacts. Applied to several hundred realistic hillslope cases taken from the western side of France (Brittany), the developed method appears to be robust and efficient.
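The regularized-partition idea can be sketched with a smoothed Heaviside switch of width eps between the subsurface and saturation-excess regimes; the variable names and the linear ramp are illustrative, not taken from the paper:

```python
import numpy as np

def regularized_step(s, s_max, eps=1e-3):
    """Smooth replacement for the Heaviside switch at saturation.

    ~0 while storage s is below capacity s_max, ramping to ~1 over a
    band of width eps as the soil column saturates.
    """
    return np.clip((s - (s_max - eps)) / eps, 0.0, 1.0)

def partition_recharge(recharge, s, s_max, eps=1e-3):
    """Split incoming recharge between storage change and overland flow."""
    w = regularized_step(s, s_max, eps)
    overland = w * recharge            # saturation-excess overland flow
    subsurface = (1.0 - w) * recharge  # goes into subsurface storage
    return subsurface, overland
```

Because the switch is continuous, standard time integrators can step through the regime transition, and taking eps small recovers the sharp partition in the limit.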
Stratospheric ozone changes under solar geoengineering: implications for UV exposure and air quality
NASA Astrophysics Data System (ADS)
Nowack, Peer Johannes; Abraham, Nathan Luke; Braesicke, Peter; Pyle, John Adrian
2016-03-01
Various forms of geoengineering have been proposed to counter anthropogenic climate change. Methods which aim to modify the Earth's energy balance by reducing insolation are often subsumed under the term solar radiation management (SRM). Here, we present results of a standard SRM modelling experiment in which the incoming solar irradiance is reduced to offset the global mean warming induced by a quadrupling of atmospheric carbon dioxide. For the first time in an atmosphere-ocean coupled climate model, we include atmospheric composition feedbacks for this experiment. While the SRM scheme considered here could offset greenhouse gas induced global mean surface warming, it leads to important changes in atmospheric composition. We find large stratospheric ozone increases that induce significant reductions in surface UV-B irradiance, which would have implications for vitamin D production. In addition, the higher stratospheric ozone levels lead to decreased ozone photolysis in the troposphere. In combination with lower atmospheric specific humidity under SRM, this results in overall surface ozone concentration increases in the idealized G1 experiment. Both UV-B and surface ozone changes are important for human health. We therefore highlight that both stratospheric and tropospheric ozone changes must be considered in the assessment of any SRM scheme, due to their important roles in regulating UV exposure and air quality.
Key on demand (KoD) for software-defined optical networks secured by quantum key distribution (QKD).
Cao, Yuan; Zhao, Yongli; Colman-Meixner, Carlos; Yu, Xiaosong; Zhang, Jie
2017-10-30
Software-defined optical networking (SDON) will become the next-generation optical network architecture. However, the optical layer and control layer of SDON are vulnerable to cyberattacks. While data encryption is an effective method to minimize the negative effects of cyberattacks, secure key interchange is its major challenge, which can be addressed by the quantum key distribution (QKD) technique. Hence, in this paper we discuss the integration of QKD with WDM optical networks to secure the SDON architecture by introducing a novel key on demand (KoD) scheme which is enabled by a novel routing, wavelength and key assignment (RWKA) algorithm. The QKD over SDON with KoD model follows two steps to provide security: (i) construction of quantum key pools (QKPs) for securing the control channels (CChs) and data channels (DChs); (ii) the KoD scheme uses the RWKA algorithm to allocate and update secret keys for different security requirements. To test our model, we define a security probability index which measures the security gain in CChs and DChs. Simulation results indicate that the security performance of CChs and DChs can be enhanced by provisioning sufficient secret keys in QKPs and performing key updating with potential cyberattacks in mind. Also, KoD is beneficial for achieving a positive balance between security requirements and key resource usage.
Deep and wide gaps by super Earths in low-viscosity discs
NASA Astrophysics Data System (ADS)
Ginzburg, Sivan; Sari, Re'em
2018-06-01
Planets can open cavities (gaps) in the protoplanetary gaseous discs in which they are born by exerting gravitational torques. Viscosity counters these torques and limits the depletion of the gaps. We present a simple one-dimensional scheme to calculate the gas density profile inside gaps by balancing the gravitational and viscous torques. By generalizing the results of Goodman & Rafikov (2001), our scheme properly accounts for the propagation of angular momentum by density waves. This method allows us to easily study low-viscosity discs, which are challenging for full hydrodynamical simulations. We complement our numerical integration by analytical equations for the gap's steady-state depth and width as a function of the planet's to star's mass ratio μ, the gas disc's aspect ratio h, and its Shakura & Sunyaev viscosity parameter α. Specifically, we focus on low-mass planets (μ < μth ≡ h3) and identify a new low-viscosity regime, α < h(μ/μth)5, in which the classical analytical scaling relations are invalid. Equivalently, this low-viscosity regime applies to every gap that is depleted by more than a factor of (μth/μ)3 relative to the unperturbed density. We show that such gaps are significantly deeper and wider than previously thought, and consequently take a longer time to reach equilibrium.
26 CFR 1.167(b)-2 - Declining balance method.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Declining balance method. 1.167(b)-2 Section 1... Declining balance method. (a) Application of method. Under the declining balance method a uniform rate is.... While salvage is not taken into account in determining the annual allowances under this method, in no...
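The declining balance method excerpted above can be illustrated with a short worked sketch; the asset figures are invented, and the regulation's salvage handling and rate limits are omitted:

```python
def declining_balance(basis, rate, years):
    """Yearly depreciation allowances under a fixed declining-balance rate.

    A uniform rate is applied each year to the remaining (undepreciated)
    basis, so the annual allowance falls over time; salvage value is not
    deducted from the basis first.
    """
    allowances = []
    remaining = basis
    for _ in range(years):
        allowance = remaining * rate
        allowances.append(allowance)
        remaining -= allowance
    return allowances, remaining

# 200% declining balance on a $10,000 asset with a 10-year life: rate = 2/10.
allow, rem = declining_balance(10_000.0, 0.2, 3)
```

Here the first three allowances are $2,000, $1,600 and $1,280, leaving $5,120 of undepreciated basis, which shows the characteristic front-loading of the method.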
Multigrid method for the equilibrium equations of elasticity using a compact scheme
NASA Technical Reports Server (NTRS)
Taasan, S.
1986-01-01
A compact difference scheme is derived for treating the equilibrium equations of elasticity. The scheme is inconsistent and unstable. A multigrid method which takes into account these properties is described. The solution of the discrete equations, up to the level of discretization errors, is obtained by this method in just two multigrid cycles.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
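A correctly structured leave-one-out evaluation, in the spirit of the paper's recommendations, can be sketched as follows: every data-dependent step (here, fitting class centroids for a toy nearest-centroid classifier) is redone inside the loop with the held-out case excluded. The pitfall warned about is performing such steps once on the full dataset before the loop, which leaks the test case into training and biases the estimate optimistically. The classifier and data below are illustrative, not from the paper.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Toy classifier: assign x to the class with the nearest centroid."""
    labels = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in labels])
    return labels[np.argmin(np.linalg.norm(centroids - x, axis=1))]

def leave_one_out_accuracy(X, y):
    """LOO estimate: the held-out case never influences the fitted centroids."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i   # exclude case i from training
        pred = nearest_centroid_predict(X[mask], y[mask], X[i])
        hits += int(pred == y[i])
    return hits / len(X)
```

The same discipline applies to any feature selection or parameter tuning: it must sit inside the loop, or the reported performance level will be biased.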
Zhang, Yi-Bo; Ou, Qing-Dong; Li, Yan-Qing; Chen, Jing-De; Zhao, Xin-Dong; Wei, Jian; Xie, Zhong-Zhi; Tang, Jian-Xin
2017-07-10
It is challenging to realize high-performance transparent organic light-emitting diodes (OLEDs) with symmetrical light emission to both sides. Herein, an efficient transparent OLED with highly balanced white emission to both sides is demonstrated by integrating quasi-periodic nanostructures into the organic emitter and the metal-dielectric composite top electrode, which simultaneously suppress waveguide and surface plasmon losses. The power efficiency and external quantum efficiency are raised to 83.5 lm W -1 and 38.8%, respectively, along with a bi-directional luminance ratio of 1.26. The proposed scheme provides a facile route toward extending the application scope of transparent OLEDs for future transparent displays and lighting.
Embedded WENO: A design strategy to improve existing WENO schemes
NASA Astrophysics Data System (ADS)
van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.
2017-02-01
Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation; conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be genuine improvements over their standard counterparts by several numerical examples. All the embedded methods presented incur no added computational effort compared with their standard counterparts.
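For reference, the classical fifth-order WENO-JS reconstruction of Jiang and Shu [1], on which the embedded variants build, can be sketched as follows (the standard left-biased reconstruction at the interface i+1/2; eps is the usual small regularization parameter):

```python
import numpy as np

def weno5_reconstruct(f, eps=1e-6):
    """f: five values f[i-2..i+2]; returns the reconstruction at x_{i+1/2}."""
    fm2, fm1, f0, fp1, fp2 = f
    # Three third-order candidate reconstructions.
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
    p1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
    p2 = (2*f0 + 5*fp1 - fp2) / 6.0
    # Jiang-Shu smoothness indicators.
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    g = np.array([0.1, 0.6, 0.3])              # linear (optimal) weights
    a = g / (eps + np.array([b0, b1, b2]))**2  # nonlinear weights, unnormalized
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

On smooth data the nonlinear weights approach the linear ones and fifth-order accuracy is obtained; near a discontinuity the weights collapse onto the smooth substencil, which is precisely the behaviour the embedded construction refines.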
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) Runge-Kutta (RK) time-stepping schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. A 65-fold increase in numerical stability over the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solvers rather than the standard iterative methods currently used.
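The implicit-explicit splitting idea can be sketched with a minimal first-order IMEX Euler step on a model problem (illustrative only; the paper uses 6-stage additive RK schemes on the Burgers'/advection equations): the stiff linear decay is treated implicitly and the forcing explicitly, so the step remains stable far beyond the explicit limit.

```python
def imex_euler(u0, k, f, dt, n_steps):
    """Solve u' = -k*u + f(t): explicit in f, implicit in the stiff -k*u.

    Backward Euler on the stiff term gives the update
    u_{n+1} = (u_n + dt*f(t_n)) / (1 + dt*k), stable for any dt > 0.
    """
    u, t = u0, 0.0
    for _ in range(n_steps):
        u = (u + dt * f(t)) / (1.0 + dt * k)
        t += dt
    return u
```

With k = 1000 and dt = 0.1 the explicit stability limit (dt < 2/k) is exceeded 50-fold, yet the IMEX step decays monotonically; with a constant forcing c it converges to the correct steady state c/k.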
MeMoVolc report on classification and dynamics of volcanic explosive eruptions
NASA Astrophysics Data System (ADS)
Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.
2016-11-01
Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, has highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
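The implicit midpoint (IM) rule discussed above can be sketched on the simplest divergence-free planar field dx/dt = (-y, x); for this linear field IM reduces to a Cayley transform and preserves the circular invariant x² + y² to solver tolerance. The fixed-point iteration below is one generic way to solve the implicit stage (a sketch, not the paper's adaptive field-line integrator):

```python
import numpy as np

def field(z):
    """Divergence-free rotation field: dx/dt = -y, dy/dt = x."""
    x, y = z
    return np.array([-y, x])

def implicit_midpoint_step(z, dt, iters=50):
    """One IM step: z_new = z + dt * field((z + z_new) / 2)."""
    z_new = z + dt * field(z)              # explicit predictor
    for _ in range(iters):                 # fixed-point solve of the implicit stage
        z_new = z + dt * field(0.5 * (z + z_new))
    return z_new

z = np.array([1.0, 0.0])
for _ in range(100):
    z = implicit_midpoint_step(z, 0.1)
```

After 100 steps the trajectory has swept roughly 10 radians around the origin while the radius remains 1 to solver accuracy, illustrating the invariant-preserving behaviour that makes IM attractive for field-line integration.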
West Antarctic Balance Fluxes: Impact of Smoothing, Algorithm and Topography.
NASA Astrophysics Data System (ADS)
Le Brocq, A.; Payne, A. J.; Siegert, M. J.; Bamber, J. L.
2004-12-01
Grid-based calculations of balance flux and velocity have been widely used to understand the large-scale dynamics of ice masses and as indicators of their state of balance. This research investigates a number of issues relating to their calculation for the West Antarctic Ice Sheet (see below for further details): 1) different topography smoothing techniques; 2) different grid-based flow-apportioning algorithms; 3) the source of the flow direction, whether from smoothed topography, or smoothed gravitational driving stress; 4) different flux routing techniques; and 5) the impact of different topographic datasets. The different algorithms described below lead to significant differences in both ice stream margins and values of fluxes within them. This encourages caution in the use of grid-based balance flux/velocity distributions and values, especially when considering the state of balance of individual ice streams. 1) Most previous calculations have used the same numerical scheme (Budd and Warner, 1996) applied to a smoothed topography in order to incorporate the longitudinal stresses that smooth ice flow. There are two options to consider when smoothing the topography: the size of the averaging filter and the shape of the averaging function. However, this is not a physically-based approach to incorporating smoothed ice flow and also introduces significant flow artefacts when using a variable weighting function. 2) Different algorithms to apportion flow are investigated: using 4 or 8 neighbours, and apportioning flow to all down-slope cells or only 2 (based on derived flow direction). 3) A theoretically more acceptable approach of incorporating smoothed ice flow is to use the smoothed gravitational driving stress in x and y components to derive a flow direction. The flux can then be apportioned using the flow direction approach used above.
4) The original scheme (Budd and Warner, 1996) uses an elevation sort technique to calculate the balance flux contribution from all cells to each individual cell. However, elevation sort is only successful when ice cannot flow uphill. Other possible techniques include using a recursive call for each neighbour or using a sparse matrix solution. 5) Two digital elevation models are used as input data, which have significant differences in coastal and mountainous areas and therefore lead to different calculations. Of particular interest is the difference in the Rutford Ice Stream/Carlson Inlet and Kamb Ice Stream (Ice Stream C) fluxes.
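The grid-based balance-flux approach described above can be sketched roughly as follows: a minimal toy version of elevation-sort flux routing with 4-neighbour apportioning proportional to surface slope (in the spirit of Budd and Warner, 1996). The function and variable names are hypothetical, not from this work:

```python
import numpy as np

def balance_flux(elev, accum):
    """Grid-based balance flux via elevation sort: visit cells from highest
    to lowest, passing each cell's accumulated flux to its down-slope
    4-neighbours in proportion to the local elevation drop.  Elevation
    sort works only when ice never flows uphill, as noted above."""
    ny, nx = elev.shape
    flux = accum.astype(float).copy()          # each cell starts with its own accumulation
    order = np.argsort(elev, axis=None)[::-1]  # highest cell first
    for idx in order:
        i, j = divmod(idx, nx)
        # elevation drops to the four neighbours; only down-slope cells receive flux
        nbrs, drops = [], []
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and elev[ni, nj] < elev[i, j]:
                nbrs.append((ni, nj))
                drops.append(elev[i, j] - elev[ni, nj])
        total = sum(drops)
        if total > 0:
            for (ni, nj), d in zip(nbrs, drops):
                flux[ni, nj] += flux[i, j] * d / total
    return flux
```

On a monotone 1D ramp this reproduces the expected behaviour: flux accumulates cell by cell down-slope.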
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2017-12-01
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
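For a flavour of the WENO idea surveyed here, the following is a minimal sketch of the classic third-order WENO reconstruction at a cell interface. These are standard textbook formulas, not code from this review; the names are illustrative:

```python
def weno3_reconstruct(um1, u0, up1, eps=1e-6):
    """Third-order WENO reconstruction of the left-biased interface value
    u_{i+1/2} from cell averages u_{i-1}, u_i, u_{i+1}.  Two linear
    candidate stencils are blended with nonlinear weights; near a
    discontinuity the weights collapse onto the smoother stencil,
    which is what makes the scheme essentially non-oscillatory."""
    # candidate reconstructions from the two 2-cell stencils
    v0 = -0.5 * um1 + 1.5 * u0      # stencil {i-1, i}
    v1 = 0.5 * u0 + 0.5 * up1       # stencil {i, i+1}
    # smoothness indicators (squared undivided differences)
    b0 = (u0 - um1) ** 2
    b1 = (up1 - u0) ** 2
    # linear (optimal) weights 1/3 and 2/3 give third order on smooth data
    a0 = (1.0 / 3.0) / (eps + b0) ** 2
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * v0 + w1 * v1
```

On smooth data both candidate stencils agree with the high-order result; on constant data the reconstruction is exact.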
A Paradigm shift to an Old Scheme for Outgoing Longwave Radiation
NASA Astrophysics Data System (ADS)
McDonald, Alastair B.
2016-04-01
There are many cases where the climate models do not agree with the empirical data. For instance, the data from radiosondes (and MSUs) do not show the amount of warming in the upper troposphere that is predicted by the models (Thorne et al. 2011). The current scheme for outgoing longwave radiation can be traced back to the great 19th Century French mathematician J-B Joseph Fourier. His anachronistic idea was that the radiation balance at the top of the atmosphere (TOA) is maintained by the conduction of heat from the surface (Fourier 1824). It was based on comparing the atmosphere to the 18th Century Swiss scientist H-B de Saussure's hotbox, which he had invented to show that solar radiation is only slightly absorbed by the atmosphere. Saussure also showed that thermal radiation existed and argued that the warmth of the air near the surface of the Earth is due to absorption of that infrared radiation (Saussure 1786). Hence a paradigm shift to Saussure's scheme, where the thermal radiation is absorbed at the base of the atmosphere rather than throughout the atmosphere as in Fourier's scheme, may solve many climate model problems. In this new paradigm the boundary layer continually exchanges radiation with the surface. Thus only at two instants during the day is there no net gain or loss of heat by the boundary layer from the surface, and so that layer is not in LTE. Moreover, since the absorption of outgoing longwave radiation is saturated within the boundary layer, it has little influence on the TOA balance. That balance is mostly maintained by changes in albedo, e.g. clouds and ice sheets. Use of this paradigm can explain why the excess warming in south-western Europe was caused by water vapour close to the surface (Philipona et al. 2005), and may also explain why there are difficulties in closing the surface radiation balance (Wild et al. 2013) and in modelling abrupt climate change (White et al. 2013). References: Fourier, Joseph. 1824.
'Remarques Générales Sur Les Températures Du Globe Terrestre Et Des Espaces Planétaires.' Annales de Chimie et de Physique 27: 136-67, translated by Raymond T. Pierrehumbert http://www.nature.com/nature/journal/v432/n7018/extref/432677a-s1.pdf Philipona, Rolf, Bruno Dürr, Atsumu Ohmura, and Christian Ruckstuhl. 2005. 'Anthropogenic Greenhouse Forcing and Strong Water Vapor Feedback Increase Temperature in Europe'. Geophysical Research Letters 32 (19): L19809. doi:10.1029/2005GL023624. Saussure, Horace-Benedict de. 1786. 'Chapter XXXV. Des Causes du Froid qui Regne sur les Montagnes'. In Voyages dans les Alpes, II:347-71. Neuchatel: Fauche-Borel. http://gallica.bnf.fr/ark:/12148/bpt6k1029499.r=.langFR, translated by Alastair B. McDonald, http://www.abmcdonald.freeserve.co.uk/saussure/CHAPTER%2035.pdf. Thorne, Peter W., Philip Brohan, Holly A. Titchner, et al. 2011. 'A Quantification of Uncertainties in Historical Tropical Tropospheric Temperature Trends from Radiosondes'. Journal of Geophysical Research: Atmospheres 116 (D12). doi:10.1029/2010JD015487. Wild, Martin, Doris Folini, Christoph Schär, et al. 2013. 'The Global Energy Balance from a Surface Perspective'. Climate Dynamics 40 (11-12): 3107-34. doi:10.1007/s00382-012-1569-8. White, James W.C., Alley, Richard B., Archer, David E., et al. 2013. Abrupt Impacts of Climate Change: Anticipating Surprises. Washington, D.C.: National Academies Press. http://www.nap.edu/catalog/18373.
Energy Efficient Cluster Based Scheduling Scheme for Wireless Sensor Networks
Srie Vidhya Janani, E.; Ganesh Kumar, P.
2015-01-01
The energy utilization of sensor nodes in a large-scale wireless sensor network points to the crucial need for scalable and energy-efficient clustering protocols. Since sensor nodes usually operate on batteries, the maximum utility of the network depends heavily on ideal use of the energy remaining in these nodes. In this paper, we propose an Energy Efficient Cluster Based Scheduling Scheme for wireless sensor networks that balances sensor network lifetime and energy efficiency. In the first phase of the proposed scheme, the cluster topology is discovered and a cluster head is chosen based on remaining energy level. The cluster head monitors the network energy threshold value to identify the energy drain rate of all its cluster members. In the second phase, a scheduling algorithm is presented to allocate time slots to cluster member data packets, avoiding congestion entirely. In the third phase, an energy consumption model is proposed to maintain the maximum residual energy level across the network. Moreover, we also propose a new packet format which is given to all cluster member nodes. The simulation results show that the proposed scheme contributes to maximum network lifetime, high residual energy, reduced overhead, and maximum delivery ratio. PMID:26495417
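The first-phase election rule described above might be sketched as follows. This is a toy illustration assuming residual energy is the only criterion; the names and data layout are hypothetical, not taken from the paper:

```python
def elect_cluster_heads(nodes, threshold):
    """Energy-aware cluster-head election sketch: within each cluster,
    the node with the highest residual energy becomes the head,
    provided its energy exceeds the network energy threshold.
    `nodes` maps cluster id -> list of (node_id, residual_energy)."""
    heads = {}
    for cluster, members in nodes.items():
        node_id, energy = max(members, key=lambda m: m[1])
        if energy >= threshold:
            heads[cluster] = node_id
    return heads
```

Clusters whose best candidate falls below the threshold are left without a head, signalling that re-clustering or energy balancing is needed.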
Scheme for the generation of freely traveling optical trio coherent states
NASA Astrophysics Data System (ADS)
Duc, Truong Minh; Dat, Tran Quang; An, Nguyen Ba; Kim, Jaewan
2013-08-01
Trio coherent states (TCSs) are non-Gaussian three-mode entangled states which can serve as a useful resource for continuous-variable quantum tasks, so their generation is of primary importance. Schemes exist to generate stable TCSs in terms of vibrational motion of a trapped ion inside a crystal. However, to perform quantum communication and distributed quantum computation the states should be shared beforehand among distant parties. That is, their modes should be able to be directed to different desired locations in space. In this work, we propose an experimental setup to generate such free-traveling TCSs in terms of optical fields. Our scheme uses standard physical resources, such as coherent states, balanced beam splitters, phase shifters, nonideal on-off photodetectors, and realistic weak cross-Kerr nonlinearities, without the need of single photons or homodyne or heterodyne measurements. We study the dependences of the fidelity of the state generated by our scheme with respect to the target TCS and the corresponding generation probability for the parameters involved. In theory, the fidelity could be nearly perfect for whatever weak nonlinearities τ and low photodetector efficiency η, provided that the amplitude |α| of an input coherent state is large enough, namely, |α|≥5/(ητ).
Incorporating Plant Phenology Dynamics in a Biophysical Canopy Model
NASA Technical Reports Server (NTRS)
Barata, Raquel A.; Drewry, Darren
2012-01-01
The Multi-Layer Canopy Model (MLCan) is a vegetation model created to capture plant responses to environmental change. The model vertically resolves carbon uptake, water vapor and energy exchange at each canopy level by coupling photosynthesis, stomatal conductance and leaf energy balance. The model is forced by incoming shortwave and longwave radiation, as well as near-surface meteorological conditions. The original formulation of MLCan utilized canopy structural traits derived from observations. This project aims to incorporate a plant phenology scheme within MLCan allowing these structural traits to vary dynamically. In the plant phenology scheme implemented here, plant growth is dependent on environmental conditions such as air temperature and soil moisture. The scheme includes functionality that models plant germination, growth, and senescence. These growth stages dictate the variation in six different vegetative carbon pools: storage, leaves, stem, coarse roots, fine roots, and reproductive. The magnitudes of these carbon pools determine land surface parameters such as leaf area index, canopy height, rooting depth and root water uptake capacity. Coupling this phenology scheme with MLCan allows for a more flexible representation of the structure and function of vegetation as it responds to changing environmental conditions.
Parallelization of implicit finite difference schemes in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel
1990-01-01
Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and less intuitive than for explicit schemes. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
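The serial building block behind such tri-diagonal inversions is the classic Thomas algorithm; a scalar sketch is shown below to make the sequential recurrence, and hence the parallelization difficulty, concrete. This is the standard textbook algorithm, not code from the paper:

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-diagonal
    (length n-1), b the diagonal (length n), c the super-diagonal
    (length n-1), d the right-hand side.  O(n) forward elimination
    followed by back substitution.  Both sweeps are first-order
    recurrences, so they are inherently sequential -- which is exactly
    why partitioning and scheduling schemes are needed in parallel."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```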
A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.
ERIC Educational Resources Information Center
Bobka, Marilyn E.; Subramaniam, J.B.
The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…
Geometric phase in entangled systems: A single-neutron interferometer experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sponar, S.; Klepp, J.; Loidl, R.
2010-04-15
The influence of the geometric phase on a Bell measurement, as proposed by Bertlmann et al. [Phys. Rev. A 69, 032112 (2004)] and expressed by the Clauser-Horne-Shimony-Holt (CHSH) inequality, has been observed for a spin-path-entangled neutron state in an interferometric setup. It is experimentally demonstrated that the effect of the geometric phase can be balanced by a change in Bell angles. The geometric phase is acquired during a time-dependent interaction with a radiofrequency field. Two schemes, polar and azimuthal adjustment of the Bell angles, are realized and analyzed in detail. The former scheme yields a sinusoidal oscillation of the correlation function S, dependent on the geometric phase, such that it varies in the range between 2 and 2√2 and therefore always exceeds the boundary value 2 between quantum mechanics and noncontextual theories. The latter scheme results in a constant, maximal violation of the Bell-like CHSH inequality, where S remains 2√2 for all settings of the geometric phase.
Coordinated single-phase control scheme for voltage unbalance reduction in low voltage network.
Pullaguram, Deepak; Mishra, Sukumar; Senroy, Nilanjan
2017-08-13
Low voltage (LV) distribution systems are typically unbalanced in nature due to unbalanced loading and unsymmetrical line configuration. This situation is further aggravated by single-phase power injections. A coordinated control scheme is proposed for single-phase sources, to reduce voltage unbalance. A consensus-based coordination is achieved using a multi-agent system, where each agent estimates the averaged global voltage and current magnitudes of individual phases in the LV network. These estimated values are used to modify the reference power of individual single-phase sources, to ensure system-wide balanced voltages and proper power sharing among sources connected to the same phase. Further, the high X/R ratio of the filter, used in the inverter of the single-phase source, enables control of reactive power, to minimize voltage unbalance locally. The proposed scheme is validated by simulating a LV distribution network with multiple single-phase sources subjected to various perturbations. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
Finite-Difference Lattice Boltzmann Scheme for High-Speed Compressible Flow: Two-Dimensional Case
NASA Astrophysics Data System (ADS)
Gan, Yan-Biao; Xu, Ai-Guo; Zhang, Guang-Cai; Zhang, Ping; Zhang, Lei; Li, Ying-Jun
2008-07-01
Lattice Boltzmann (LB) modeling of high-speed compressible flows has long been attempted by various authors. One common weakness of most previous models is instability when the Mach number of the flow is large. In this paper we present a finite-difference LB model which works for flows with flexible ratios of specific heats and a wide range of Mach numbers, from 0 to 30 or higher. In addition to the discrete velocity model by Watari [Physica A 382 (2007) 502], a modified Lax-Wendroff finite difference scheme and an artificial viscosity are introduced. The combination of the finite-difference scheme and the added artificial viscosity must balance numerical stability against accuracy. The proposed model is validated by recovering results of some well-known benchmark tests: shock tubes and shock reflections. The new model may be used to track shock waves and/or to study the non-equilibrium processes in the transition between the regular and Mach reflections of shock waves, etc.
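The Lax-Wendroff ingredient can be illustrated on the simplest possible case, linear advection on a periodic grid. This is a generic textbook sketch, not the paper's LB implementation:

```python
import numpy as np

def lax_wendroff_step(u, cfl):
    """One Lax-Wendroff step for linear advection u_t + a u_x = 0 on a
    periodic grid, with cfl = a*dt/dx.  Second-order in space and time;
    near shocks it produces oscillations unless extra (artificial)
    dissipation is added, which is the stability/accuracy balance
    discussed in the abstract."""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)
```

A quick sanity check: at cfl = 1 the scheme reduces to an exact one-cell shift of the profile.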
Modeling and Analysis of the Reverse Water Gas Shift Process for In-Situ Propellant Production
NASA Technical Reports Server (NTRS)
Whitlow, Jonathan E.
2000-01-01
This report focuses on the development of mathematical models and simulation tools for the Reverse Water Gas Shift (RWGS) process. This process is a candidate technology for oxygen production on Mars under the In-Situ Propellant Production (ISPP) project. An analysis of the RWGS process was performed using a material balance for the system. The material balance is very complex due to the downstream separations and subsequent recycle inherent in the process. A numerical simulation was developed for the RWGS process to provide a tool for analysis and optimization of experimental hardware, which will be constructed later this year at Kennedy Space Center (KSC). Attempts to solve the material balance for the system, which can be defined by 27 nonlinear equations, initially failed. A convergence scheme was developed that led to a successful solution of the material balance; however, the simplified equations used for the gas separation membrane were found to be insufficient. More rigorous models were then successfully developed and solved for the membrane separation. Sample results from these models are included in this report, along with recommendations for the experimental work needed for model validation.
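As a toy stand-in for the kind of nonlinear balance solved here, consider the RWGS equilibrium CO2 + H2 ⇌ CO + H2O from an equimolar feed, reduced to a single equation and solved iteratively. This is an illustrative sketch only; the paper's actual balance couples 27 nonlinear equations with separations and recycle:

```python
def rwgs_extent(K, tol=1e-12):
    """Equilibrium extent of reaction x for CO2 + H2 <-> CO + H2O
    starting from 1 mol of each reactant.  The reaction is equimolar,
    so total moles cancel and K = x^2 / (1-x)^2.  Solved by bisection,
    mirroring the iterative convergence scheme the full model needed."""
    f = lambda x: x * x / (1.0 - x) ** 2 - K   # monotonically increasing on (0, 1)
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The analytic answer for this reduced case is x = sqrt(K)/(1 + sqrt(K)), which the iteration reproduces.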
NASA Astrophysics Data System (ADS)
Müller Schmied, Hannes; Döll, Petra
2017-04-01
The estimation of the world's water resources has a long tradition, and numerous methods for quantification exist. The resulting numbers vary significantly, leaving room for improvement. For some decades, global hydrological models (GHMs) have been used for large-scale water budget assessments. GHMs are designed to represent the macro-scale hydrological processes, and many of these models include human water management, e.g. irrigation or reservoir operation, making them currently the first choice for global-scale assessments of the terrestrial water balance within the Anthropocene. The Water - Global Assessment and Prognosis (WaterGAP) is a model framework that comprises both the natural and human water dimensions and has been in development and application since the 1990s. In recent years, efforts were made to assess the sensitivity of water balance components to alternative climate forcing input data and, e.g., how this sensitivity is affected by WaterGAP's calibration scheme. This presentation shows the current best estimate of terrestrial water balance components as simulated with WaterGAP by 1) assessing global and continental water balance components for the climate period 1971-2000 and the IPCC reference period 1986-2005 for the most current WaterGAP version using homogenized climate forcing data, 2) investigating variations of water balance components for a number of state-of-the-art climate forcing data sets and 3) discussing the benefit of the calibration approach for a better observation-constrained global water budget. For the most current WaterGAP version 2.2b and a homogenized combination of the two WATCH Forcing Datasets, global-scale (excluding Antarctica and Greenland) river discharge into oceans and inland sinks (Q) is assessed to be 40 000 km3 yr-1 for 1971-2000 and 39 200 km3 yr-1 for 1986-2005. Actual evapotranspiration (AET) is close in both periods at around 70 600 (70 700) km3 yr-1, as is water consumption at 1000 (1100) km3 yr-1.
The main reason for the differing Q is varying precipitation (P, 111 600 km3 yr-1 vs. 110 900 km3 yr-1). The sensitivity of water balance components to alternative climate forcing data is high. Applying 5 state-of-the-art climate forcing data sets, long-term average P differs globally by 8000 km3 yr-1, mainly due to different handling of precipitation undercatch correction (or neglecting it). AET differs by 5500 km3 yr-1, whereas Q varies by 3000 km3 yr-1. The sensitivity of human water consumption to alternative climate input data is only about 5%. WaterGAP's calibration approach forces simulated long-term river discharge to be approximately equal to observed values at 1319 gauging stations during the time period selected for calibration. This scheme greatly reduces the impact of uncertain climate input on simulated Q in these upstream drainage basins (as well as downstream). In calibration areas, the Q variation among the climate input data sets is much lower (1.6%) than in non-calibrated areas (18.5%). However, variation of Q at the grid-cell level is still high (an average of 37% for Q in grid cells in calibration areas vs. 74% outside). Due to the closed water balance, variation of AET is higher in calibrated areas than in non-calibrated areas. The main challenges in assessing the world's water resources with GHMs like WaterGAP are 1) the need for consistent long-term climate forcing input data sets, especially considering a suitable handling of P undercatch, 2) the accessibility of in-situ data for river discharge or alternative calibration data for currently non-calibrated areas, and 3) improved simulation in semi-arid and arid river basins. As an outlook, a multi-model, multi-forcing study of global water balance components within the frame of the Inter-Sectoral Impact Model Intercomparison Project is proposed.
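The reported global components can be cross-checked against the long-term terrestrial balance P = Q + AET + consumption, using the 1971-2000 figures quoted above:

```python
# Long-term terrestrial water balance components from the abstract
# (km^3/yr, 1971-2000, excluding Antarctica and Greenland):
P, Q, AET, consumption = 111_600, 40_000, 70_600, 1_000
residual = P - (Q + AET + consumption)
print(residual)  # 0 -> the reported components close the balance exactly
```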
Factorized Runge-Kutta-Chebyshev Methods
NASA Astrophysics Data System (ADS)
O'Sullivan, Stephen
2017-05-01
The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures. Preserving 7 digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth-order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
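The idea of composing forward Euler steps with complex stepsizes can be seen in miniature: two substeps with conjugate complex stepsizes already compose to a second-order real update. This is a generic illustration of the mechanism, not the FRKC2 coefficients:

```python
def complex_euler_pair(f, y, h):
    """Two forward Euler substeps with conjugate complex stepsizes
    h(1+i)/2 and h(1-i)/2.  For y' = lam*y the combined amplification
    factor is (1 + z(1+i)/2)(1 + z(1-i)/2) = 1 + z + z^2/2 with
    z = lam*h, i.e. the composition is second-order accurate even
    though each substep is only a first-order Euler step."""
    for hk in (h * (1 + 1j) / 2, h * (1 - 1j) / 2):
        y = y + hk * f(y)
    return y.real  # imaginary parts cancel exactly for a real linear problem
```

For y' = -y with h = 0.1 the amplification per step is 1 - 0.1 + 0.005 = 0.905, matching the second-order Taylor expansion of exp(-h).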
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2015-04-01
The proposed method is illustrated with hydrothermodynamics and atmospheric chemistry models [1,2]. In the development of the existing methods for constructing numerical schemes possessing the properties of total approximation for operators of multiscale process models, we have developed a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes lower by one than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce the decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of adjoint equations serve as integrating factors. The results are the hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotony for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables.
They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each direction on a time step. In each direction, they have tridiagonal structure and are solved by the sweep method. An important advantage of the discrete-analytical schemes is that the values of derivatives at the boundaries of the finite volume are calculated together with the values of the unknown functions. This technique is particularly attractive for problems with dominant convection, as it does not require artificial monotonization and limiters. The same idea of integrating factors is applied in the temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable for problems involving convection-diffusion-reaction operators. The work has been partially supported by the Presidium of RAS under Program 43, and by the RFBR grants 14-01-00125 and 14-01-31482. References: 1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies// Computers and Mathematics with Applications, (2014) V.67, Issue 12, P. 2240-2256. 2. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical analysis and applications, 2013, V. 6, Issue 3, pp 210-220.
Analysis of a decision model in the context of equilibrium pricing and order book pricing
NASA Astrophysics Data System (ADS)
Wagner, D. C.; Schmitt, T. A.; Schäfer, R.; Guhr, T.; Wolf, D. E.
2014-12-01
An agent-based model for financial markets has to incorporate two aspects: decision making and price formation. We introduce a simple decision model and consider its implications in two different pricing schemes. First, we study its parameter dependence within a supply-demand balance setting. We find realistic behavior in a wide parameter range. Second, we embed our decision model in an order book setting. Here, we observe interesting features which are not present in the equilibrium pricing scheme. In particular, we find a nontrivial behavior of the order book volumes which reminds of a trend switching phenomenon. Thus, the decision making model alone does not realistically represent the trading and the stylized facts. The order book mechanism is crucial.
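A minimal supply-demand balance pricing rule of the kind used in the first setting might look like this. The functional form and parameter names are assumptions for illustration only, not the paper's exact update:

```python
def equilibrium_price_step(price, demand, supply, liquidity=100.0):
    """One price update under a supply-demand balance: the price moves
    in proportion to the excess demand, scaled by a market-depth
    (liquidity) parameter.  Excess demand raises the price, excess
    supply lowers it; at balance the price is unchanged."""
    return price * (1.0 + (demand - supply) / liquidity)
```

An order-book mechanism replaces this single aggregate update with explicit limit and market orders, which is where the nontrivial volume behaviour reported above emerges.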
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. In contrast to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.
On the vanishing couplings in ADE affine Toda field theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saitoh, Y.; Shimada, T.
In this paper, the authors show that certain vanishing couplings in the ADE affine Toda field theories remain vanishing even after higher-order corrections are included. This is a requisite property for the Lagrangian formulation of the theory. The authors develop a new perturbative formulation and treat affine Toda field theories as a massless theory with exponential interaction terms. They show that the nonrenormalization comes from the Dynkin automorphism of the Lie algebra associated with these theories. A charge balance condition plays an important role in this scheme. The all-order nonrenormalization of vanishing couplings in the Ā_n affine Toda field theory is also proved in a standard massive scheme.
NASA Astrophysics Data System (ADS)
Ji, Yang; Chen, Hong; Tang, Hongwu
2017-06-01
A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution, and the step size can be much larger.
NASA Astrophysics Data System (ADS)
Mengaldo, Gianmarco; De Grazia, Daniele; Moura, Rodrigo C.; Sherwin, Spencer J.
2018-04-01
This study focuses on the dispersion and diffusion characteristics of high-order energy-stable flux reconstruction (ESFR) schemes via the spatial eigensolution analysis framework proposed in [1]. The analysis is performed for five ESFR schemes, where the parameter 'c' dictating the properties of the specific scheme recovered is chosen such that it spans the entire class of ESFR methods, also referred to as VCJH schemes, proposed in [2]. In particular, we used five values of 'c', two that correspond to its lower and upper bounds and the others that identify three schemes that are linked to common high-order methods, namely the ESFR recovering two versions of discontinuous Galerkin methods and one recovering the spectral difference scheme. The performance of each scheme is assessed when using different numerical intercell fluxes (e.g. different levels of upwinding), ranging from "under-" to "over-upwinding". In contrast to the more common temporal analysis, the spatial eigensolution analysis framework adopted here allows one to grasp crucial insights into the diffusion and dispersion properties of FR schemes for problems involving non-periodic boundary conditions, typically found in open-flow problems, including turbulence, unsteady aerodynamics and aeroacoustics.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersion simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive implicit spatial FD coefficients based on least-squares optimisation. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersion and thus provide more accurate wavefields.
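The Taylor series expansion-based FD coefficients mentioned above are obtained by matching Taylor expansions on the stencil, which reduces to a small linear solve. The following is a standard derivation in exact rational arithmetic, with illustrative names, not the paper's implicit or least-squares coefficients:

```python
from fractions import Fraction

def fd_coefficients(offsets, m):
    """Taylor-series FD weights: solve sum_j c_j * s_j**k / k! = [k == m]
    for k = 0..n-1, where s_j are the stencil offsets (in units of dx)
    and m is the derivative order.  The weights c_j then approximate
    d^m u / dx^m at offset 0 to order n - m.  Solved exactly by
    Gaussian elimination on Fractions."""
    n = len(offsets)
    fact = [1] * n
    for k in range(1, n):
        fact[k] = fact[k - 1] * k
    # augmented system A c = e_m with A[k][j] = s_j^k / k!
    A = [[Fraction(offsets[j]) ** k / fact[k] for j in range(n)]
         + [Fraction(1 if k == m else 0)] for k in range(n)]
    for col in range(n):                       # Gauss-Jordan elimination
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [A[k][n] for k in range(n)]
```

For the central 3-point stencil this recovers the familiar weights (1, -2, 1) for the second derivative and (-1/2, 0, 1/2) for the first.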
Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation
NASA Astrophysics Data System (ADS)
Su, Bo; Tuo, Xianguo; Xu, Ling
2017-08-01
Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives, since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK scheme. The numerical results, which verify the new methods, are superior to those generated by conventional methods in seismic wave modeling.
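The structure of a symplectic PRK time integrator can be illustrated with Störmer-Verlet, the archetypal two-stage symplectic partitioned RK method, applied to a harmonic oscillator as a toy stand-in for the semi-discrete wave Hamiltonian. This is a generic example, not the paper's modified schemes:

```python
def verlet_energies(steps, dt):
    """Stormer-Verlet (kick-drift-kick) for the harmonic oscillator
    H = (p^2 + q^2)/2.  The position and momentum updates are staggered,
    which is what makes the scheme a *partitioned* RK method; being
    symplectic, the energy error stays bounded instead of drifting,
    the key property for long seismic wave simulations."""
    q, p = 1.0, 0.0
    energies = []
    for _ in range(steps):
        p -= 0.5 * dt * q          # half kick:  dp/dt = -dH/dq = -q
        q += dt * p                # drift:      dq/dt =  dH/dp =  p
        p -= 0.5 * dt * q          # half kick
        energies.append(0.5 * (p * p + q * q))
    return energies
```

Over a thousand steps the energy oscillates within an O(dt^2) band around its initial value of 0.5, whereas a non-symplectic explicit scheme of the same order would drift systematically.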
NASA Astrophysics Data System (ADS)
Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.
2018-03-01
In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculation of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast with the results of the DC procedure, the CPU time for the DIAMOND and QUICK schemes using the NWF method is shown to be between 3.8% and 23.1% faster and between 12.6% and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8% and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.
Urban bioclimatology in developing countries.
Jauregui, E
1993-11-15
A brief review of the literature on urban human bioclimatology in the tropics is undertaken. Attempts to chart human bioclimatic conditions on the regional/local scale have been made in several developing countries. The effective temperature scheme (with all its limitations) is the one that has been most frequently applied. The possibilities of application of bioclimatic models based on human heat balance for the tropical urban environment are discussed.
Adam Wolf; Nick Saliendra; Kanat Akshalov; Douglas A. Johnson; Emilio Laca
2008-01-01
Eddy covariance (EC) and modified Bowen ratio (MBR) systems have been shown to yield subtly different estimates of sensible heat (H), latent heat (LE), and CO2 fluxes (Fc). Our study analyzed the discrepancies between these two systems by first considering the role of the data processing algorithm used to estimate fluxes using EC and later...
A Comparison of Some Difference Schemes for a Parabolic Problem of Zero-Coupon Bond Pricing
NASA Astrophysics Data System (ADS)
Chernogorova, Tatiana; Vulkov, Lubin
2009-11-01
This paper describes a comparison of some numerical methods for solving a convection-diffusion equation subject to dynamical boundary conditions, which arises in zero-coupon bond pricing. The one-dimensional convection-diffusion equation is solved by using difference schemes with weights, including standard difference schemes such as the monotone Samarskii scheme, the FTCS method and the Crank-Nicolson method. The schemes are free of spurious oscillations and satisfy the positivity and maximum principles demanded of the financial and diffusive solution. Numerical results are compared with analytical solutions.
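The Crank-Nicolson member of the weighted family (weight 1/2) can be sketched on a periodic model problem that has an exact solution; the bond-pricing setup with dynamical boundary conditions is not modelled here, and the coefficients are illustrative:

```python
import numpy as np

# Crank-Nicolson with central differences for u_t + a u_x = d u_xx on a
# periodic domain:  (I - dt/2 L) u^{n+1} = (I + dt/2 L) u^n,  L = -a D1 + d D2.
# Exact solution for the initial condition sin(x):
#   u(x,t) = exp(-d t) sin(x - a t).

n, a, d = 64, 1.0, 0.1
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
dt, steps = 0.005, 100   # integrate to t = 0.5

D1 = np.zeros((n, n)); D2 = np.zeros((n, n))
for i in range(n):
    D1[i, (i + 1) % n], D1[i, (i - 1) % n] = 1.0 / (2 * h), -1.0 / (2 * h)
    D2[i, (i + 1) % n] = D2[i, (i - 1) % n] = 1.0 / h**2
    D2[i, i] = -2.0 / h**2

L = -a * D1 + d * D2
I = np.eye(n)
A = I - 0.5 * dt * L    # implicit half of the step
B = I + 0.5 * dt * L    # explicit half of the step

u = np.sin(x)
for _ in range(steps):
    u = np.linalg.solve(A, B @ u)

t = steps * dt
exact = np.exp(-d * t) * np.sin(x - a * t)
err = np.max(np.abs(u - exact))
print(err)
```

The scheme is second order in both time and space and unconditionally stable, which is why it is a standard reference point in such comparisons.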
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator windup protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small-perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method examined is the multi-variable version of the single-input, single-output, high-gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection (IWP) methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed, and the advantages and disadvantages of both IWP methods are presented.
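The direction-preserving idea behind the MAW scheme can be sketched in isolation: instead of clipping each actuator command independently (which changes the direction of the command vector, as element-wise limiting in the CAW scheme does), one scalar scales the whole vector until every component respects its limits. The limits and commands below are illustrative, not from the paper:

```python
import numpy as np

def clip_elementwise(u, lo, hi):
    # CAW-style element-wise limiting: the vector direction may change
    return np.clip(u, lo, hi)

def scale_to_limits(u, lo, hi):
    # MAW-style: largest k in (0, 1] with lo <= k*u <= hi
    # (assumes lo < 0 < hi for every actuator)
    k = 1.0
    for ui, l, h in zip(u, lo, hi):
        if ui > h:
            k = min(k, h / ui)
        elif ui < l:
            k = min(k, l / ui)
    return k * u   # direction of u is preserved

u = np.array([3.0, -1.0, 0.5])
lo = np.array([-2.0, -2.0, -2.0])
hi = np.array([2.0, 2.0, 2.0])

u_caw = clip_elementwise(u, lo, hi)   # only the saturated component changes
u_maw = scale_to_limits(u, lo, hi)    # whole vector scaled by k = 2/3
print(u_caw, u_maw)
```

Here element-wise clipping yields [2, -1, 0.5], which is no longer parallel to the commanded vector, while the scalar-modified output (2/3)·u keeps the commanded direction at the cost of shrinking the unconstrained components.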
NASA Technical Reports Server (NTRS)
Abramopoulos, Frank
1988-01-01
The conditions under which finite difference schemes for the shallow water equations can conserve both total energy and potential enstrophy are considered. A method of deriving such schemes using operator formalism is developed. Several such schemes are derived for the A-, B- and C-grids. The derived schemes include second-order schemes and pseudo-fourth-order schemes. The simplest B-grid pseudo-fourth-order schemes are presented.
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2017-07-01
When designing a numerical scheme for the resolution of conservation laws, the selection of a particular source term discretization (STD) may seem irrelevant whenever it ensures convergence with mesh refinement, but it has a decisive impact on the solution. In the framework of the Shallow Water Equations (SWE), well-balanced STDs based on the quiescent equilibrium are unable to converge to physically based solutions, which can be constructed considering energy arguments. Energy-based discretizations can be designed assuming dissipation or conservation, but in either case the STD procedure should not be merely based on ad hoc approximations. The STD proposed in this work is derived from the Generalized Hugoniot Locus obtained from the Generalized Rankine-Hugoniot conditions and the Integral Curve across the contact wave associated with the bed step. In any case, the STD must allow energy-dissipative solutions: steady and unsteady hydraulic jumps, for which some numerical anomalies have been documented in the literature. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump. The former issue can be addressed by a modification of the energy-conservative STD that ensures a correct dissipation rate across the hydraulic jump, whereas the latter is of greater complexity and cannot be fixed by simply choosing a suitable STD, as more variables are involved. The spike of discharge, also known as the slowly-moving shock anomaly, is a well-known problem in the scientific community; it is produced by a nonlinearity of the Hugoniot locus connecting the states at both sides of the jump. However, this issue seems to be more a feature than a problem when considering steady solutions of the SWE containing hydraulic jumps, and the presence of the spurious spike in the discharge has been taken for granted and has become a feature of the solution.
Even though the spike does not disturb the rest of the solution in steady cases, in transient cases it produces a very undesirable shedding of spurious oscillations downstream that should be circumvented. Based on spike-reducing techniques (originally designed for the homogeneous Euler equations) that construct interpolated fluxes in the untrustworthy regions, we design a novel Roe-type scheme for the SWE with discontinuous topography that reduces the presence of the aforementioned spurious spike. The resulting spike-reducing method, in combination with the proposed STD, ensures accurate positioning of steady jumps and provides convergence with mesh refinement, which was not possible for previous methods that cannot avoid the spike.
NASA Astrophysics Data System (ADS)
Guzinski, R.; Anderson, M. C.; Kustas, W. P.; Nieto, H.; Sandholt, I.
2013-07-01
The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two-source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature measurement as input, the approach reduces model sensitivity to errors in absolute temperature retrieval. The original formulation of the DTD required an early morning LST observation (approximately 1 h after sunrise) when surface fluxes are minimal, limiting application to data provided by geostationary satellites at sub-hourly temporal resolution. The DTD model has been applied primarily during the active growth phase of agricultural crops and rangeland grasses, and has not been rigorously evaluated during senescence or in forested ecosystems. In this paper we present modifications to the DTD model that enable applications using thermal observations from polar-orbiting satellites, such as Terra and Aqua, with day and night overpass times over the area of interest. This allows the application of the DTD model in high-latitude regions, where large viewing angles preclude the use of geostationary satellites, and also exploits the higher spatial resolution provided by polar-orbiting satellites. A method for estimating nocturnal surface fluxes and a scheme for estimating the fraction of green vegetation are developed and evaluated. The modification for green vegetation fraction leads to significantly improved estimation of the heat fluxes from the vegetation canopy during senescence and in forests. When the modified DTD model is run with LST measurements acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Terra and Aqua satellites, generally satisfactory agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States.
Finally, regional maps of energy fluxes are produced for the Danish Hydrological ObsErvatory (HOBE) in western Denmark, indicating realistic patterns based on land use.
The use of financial incentives in Australian general practice.
Kecmanovic, Milica; Hall, Jane P
2015-05-18
To examine the uptake of financial incentive payments in general practice, and identify what types of practitioners are more likely to participate in these schemes. Analysis of data on general practitioners and GP registrars from the Medicine in Australia - Balancing Employment and Life (MABEL) longitudinal panel survey of medical practitioners in Australia, from 2008 to 2011. Income received by GPs from government incentive schemes and grants and factors associated with the likelihood of claiming such incentives. Around half of GPs reported receiving income from financial incentives in 2008, and there was a small fall in this proportion by 2011. There was considerable movement into and out of the incentives schemes, with more GPs exiting than taking up grants and payments. GPs working in larger practices with greater administrative support, GPs practising in rural areas and those who were principals or partners in practices were more likely to use grants and incentive payments. Administrative support available to GPs appears to be an increasingly important predictor of incentive use, suggesting that the administrative burden of claiming incentives is large and not always worth the effort. It is, therefore, crucial to consider such costs (especially relative to the size of the payment) when designing incentive payments. As market conditions are also likely to influence participation in incentive schemes, the impact of incentives can change over time and these schemes should be reviewed regularly.
Mukherjee, Lipi; Zhai, Peng-Wang; Hu, Yongxiang; Winker, David M.
2018-01-01
Polarized radiation fields in a turbid medium are influenced by single-scattering properties of scatterers. It is common that media contain two or more types of scatterers, which makes it essential to properly mix single-scattering properties of different types of scatterers in the vector radiative transfer theory. The vector radiative transfer solvers can be divided into two basic categories: the stochastic and deterministic methods. The stochastic method is basically the Monte Carlo method, which can handle scatterers with different scattering properties explicitly. This mixture scheme is called the external mixture scheme in this paper. The deterministic methods, however, can only deal with a single set of scattering properties in the smallest discretized spatial volume. The single-scattering properties of different types of scatterers have to be averaged before they are input to deterministic solvers. This second scheme is called the internal mixture scheme. The equivalence of these two different mixture schemes of scattering properties has not been demonstrated so far. In this paper, polarized radiation fields for several scattering media are solved using the Monte Carlo and successive order of scattering (SOS) methods and scattering media contain two types of scatterers: Rayleigh scatterers (molecules) and Mie scatterers (aerosols). The Monte Carlo and SOS methods employ external and internal mixture schemes of scatterers, respectively. It is found that the percentage differences between radiances solved by these two methods with different mixture schemes are of the order of 0.1%. The differences in Q/I, U/I, and V/I are of the order of 10^-5 to 10^-4, where I, Q, U, and V are the Stokes parameters. Therefore, the equivalence between these two mixture schemes is confirmed to the accuracy level of the radiative transfer numerical benchmarks. 
This result provides important guidelines for many radiative transfer applications that involve the mixture of different scattering and absorptive particles. PMID:29047543
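The internal mixture scheme amounts to a scattering-coefficient-weighted average of single-scattering properties before they enter the deterministic solver. A minimal sketch: the Rayleigh phase function is standard, while a Henyey-Greenstein phase function stands in for a full Mie computation of the aerosol, and the two scattering coefficients are illustrative values:

```python
import numpy as np

# "Internal" mixing of scattering properties for a Rayleigh + aerosol medium:
# scattering coefficients add, and the mixed phase function is the
# scattering-coefficient-weighted average of the component phase functions.

def rayleigh_phase(mu):
    return 0.75 * (1.0 + mu**2)

def henyey_greenstein(mu, g=0.7):
    # Stand-in for a Mie phase function (illustrative assumption)
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def mix_phase(mu, beta_ray, beta_aer):
    return (beta_ray * rayleigh_phase(mu)
            + beta_aer * henyey_greenstein(mu)) / (beta_ray + beta_aer)

# Scattering angles from 180 to 0 degrees -> mu increasing from -1 to 1
mu = np.cos(np.linspace(np.pi, 0.0, 181))
p_mix = mix_phase(mu, beta_ray=0.1, beta_aer=0.05)

# Each component satisfies (1/2) * integral of P(mu) over [-1, 1] = 1,
# and the weighted average preserves that normalization (trapezoid rule).
norm = 0.5 * np.sum(0.5 * (p_mix[1:] + p_mix[:-1]) * np.diff(mu))
print(norm)
```

Because both component phase functions are normalized, the mixed phase function stays normalized, which is the property a deterministic solver relies on when it accepts a single set of averaged optical properties per volume.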
An extended GS method for dense linear systems
NASA Astrophysics Data System (ADS)
Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi
2009-09-01
Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix Ω in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum Ω. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare them with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner PG is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the extended Gauss-Seidel (EGS) method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.
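The baseline these preconditioned and extended variants build on is the plain Gauss-Seidel sweep, sketched below for a dense system. The strictly diagonally dominant test matrix is an illustrative choice that guarantees convergence, not a boundary element system from the paper:

```python
import numpy as np

# Forward Gauss-Seidel sweeps for a dense linear system A x = b.
# Each component update immediately uses the already-updated entries of x.

def gauss_seidel(A, b, iters=100):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x

rng = np.random.default_rng(0)
n = 20
A = rng.random((n, n))
A += n * np.eye(n)        # strictly diagonally dominant -> GS converges
b = rng.random(n)

x = gauss_seidel(A, b)
residual = np.max(np.abs(A @ x - b))
print(residual)
```

The preconditioned schemes compared in the note aim to shrink the spectral radius of the iteration matrix of exactly this kind of sweep, reducing the number of sweeps needed to reach a given residual.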
Improving multivariate Horner schemes with Monte Carlo tree search
NASA Astrophysics Data System (ADS)
Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.
2013-11-01
Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
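The greedy "most-occurring variable first" baseline can be sketched directly as a recursive evaluator: factor out the variable that occurs in the most terms, recurse on the quotient and on the remainder. Polynomials are represented here as lists of (coefficient, exponent-dict) terms, an illustrative representation not taken from the paper:

```python
from collections import Counter

# Greedy multivariate Horner evaluation: pick the most-occurring variable,
# split p = var * (terms containing var, divided by var) + (remaining terms),
# and recurse on both parts.

def horner_eval(terms, values):
    terms = [t for t in terms if t[0] != 0]
    if not terms:
        return 0.0
    occurs = Counter()
    for _, mono in terms:
        for v in mono:
            occurs[v] += 1
    if not occurs:                       # constant polynomial
        return float(sum(c for c, _ in terms))
    var = occurs.most_common(1)[0][0]    # most-occurring variable first
    with_v, without_v = [], []
    for c, mono in terms:
        if var in mono:
            m = dict(mono)
            m[var] -= 1
            if m[var] == 0:
                del m[var]
            with_v.append((c, m))
        else:
            without_v.append((c, mono))
    return values[var] * horner_eval(with_v, values) \
        + horner_eval(without_v, values)

# p = x^2 y + x y + 3 x + 2, factored greedily as x*(y*(x + 1) + 3) + 2
p = [(1, {'x': 2, 'y': 1}), (1, {'x': 1, 'y': 1}), (3, {'x': 1}), (2, {})]
vals = {'x': 2.0, 'y': 3.0}
direct = sum(c * vals['x'] ** m.get('x', 0) * vals['y'] ** m.get('y', 0)
             for c, m in p)
print(horner_eval(p, vals), direct)   # both 26.0
```

The freedom the paper exploits is precisely the choice of `var` at each recursion level: Monte Carlo tree search explores variable orders instead of always taking the greedy pick.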
Symplectic partitioned Runge-Kutta scheme for Maxwell's equations
NASA Astrophysics Data System (ADS)
Huang, Zhi-Xiang; Wu, Xian-Liang
Using the symplectic partitioned Runge-Kutta (PRK) method, we construct for the first time a new scheme for approximating the solution to infinite-dimensional nonseparable Hamiltonian systems arising from Maxwell's equations. The scheme is obtained by discretizing Maxwell's equations in the time direction based on the symplectic PRK method, and then evaluating the equations in the spatial direction with a suitable finite difference approximation. Several numerical examples are presented to verify the efficiency of the scheme.
NASA Astrophysics Data System (ADS)
Zhou, X.; Beljaars, A.; Wang, Y.; Huang, B.; Lin, C.; Chen, Y.; Wu, H.
2017-09-01
Weather Research and Forecasting (WRF) simulations with different selections of subgrid orographic drag over the Tibetan Plateau have been evaluated against observations and the ERA-Interim reanalysis. Results show that the subgrid orographic drag schemes, especially the turbulent orographic form drag (TOFD) scheme, efficiently reduce the 10 m wind speed bias and RMS error with respect to station measurements. With the combination of the gravity wave, flow blocking and TOFD schemes, wind speed is simulated more realistically than with the individual schemes alone. Improvements are also seen in the 2 m air temperature and surface pressure. The gravity wave drag, flow blocking drag, and TOFD schemes combined have the smallest station mean bias (-2.05°C in 2 m air temperature and 1.27 hPa in surface pressure) and RMS error (3.59°C in 2 m air temperature and 2.37 hPa in surface pressure). Meanwhile, the TOFD scheme contributes more to the improvements than the gravity wave drag and flow blocking schemes. The improvements are more pronounced at low levels of the atmosphere than at high levels, owing to the stronger drag enhancement on the low-level flow. The reduced near-surface cold bias and high-pressure bias over the Tibetan Plateau are the result of changes in the low-level wind components associated with the geostrophic balance. The enhanced drag directly weakens the westerlies but also enhances the ageostrophic flow, in this case reducing the northerlies and enhancing the southerlies, which bring more warm air across the Himalayan ranges from South Asia, and less cold air from the north, to the interior Tibetan Plateau.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
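The classical FAST estimate of first-order (main-effect) indices can be sketched compactly: each parameter is driven along the search curve x_i(s) = 1/2 + (1/π)·arcsin(sin(ω_i s)), and the partial variance of parameter i is read off the Fourier spectrum of the model output at the harmonics of ω_i. The test model, frequencies, and harmonic count below are illustrative choices, not from the paper:

```python
import numpy as np

# Classical FAST main-effect sensitivity indices via the search-curve
# sampling approach. Parameters are uniform on [0, 1] along the curve.

def fast_indices(model, freqs, n_samples=2**14, n_harmonics=4):
    s = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    xs = [0.5 + np.arcsin(np.sin(w * s)) / np.pi for w in freqs]
    y = model(*xs)
    total_var = np.var(y)
    indices = []
    for w in freqs:
        d = 0.0
        for p in range(1, n_harmonics + 1):
            a = 2.0 * np.mean(y * np.cos(p * w * s))
            b = 2.0 * np.mean(y * np.sin(p * w * s))
            d += 0.5 * (a * a + b * b)   # variance carried at harmonic p*w
        indices.append(d / total_var)
    return indices

# Linear test model: x1 contributes 4x the output variance of 0.5*x2,
# so the main-effect indices should be near 0.8 and 0.2.
s1, s2 = fast_indices(lambda x1, x2: x1 + 0.5 * x2, freqs=(11, 21))
print(s1, s2)
```

The frequencies must be chosen so their low harmonics do not coincide (here 11, 22, 33, 44 versus 21, 42, 63, 84); the interaction-effect extensions discussed in the paper analyze the cross-frequency part of the same spectrum.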
A classification scheme for risk assessment methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamp, Jason Edwin; Campbell, Philip LaRoche
2004-08-01
This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. The report imposes this structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses, and those arrangements shift gradually as one moves through the table, each cell being optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation at hand. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method', though we often use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows.
In Section 2 we provide context for this report: what a 'method' is and where it fits. In Section 3 we present background for our classification scheme: what other schemes we have found, and the fundamental nature of methods and their necessary incompleteness. In Section 4 we present our classification scheme in the form of a matrix, followed by an analogy that should aid understanding of the scheme, concluding with an explanation of the two dimensions and the nine types in our scheme. In Section 5 we present examples of each of our classification types. In Section 6 we present conclusions.
A new family of high-order compact upwind difference schemes with good spectral resolution
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Yao, Zhaohui; He, Feng; Shen, M. Y.
2007-12-01
This paper presents a new family of high-order compact upwind difference schemes. The unknowns in the proposed schemes are not only the values of the function but also those of its first and higher derivatives. Derivative terms in the schemes appear only on the upwind side of the stencil. When the boundary conditions of the problem are non-periodic, one can calculate all the first derivatives exactly as one does with explicit schemes. When the proposed schemes are applied to periodic problems, only periodic bi-diagonal or periodic block-bi-diagonal matrix inversions are required. Resolution optimization is used to enhance the spectral representation of the first derivative, and this produces a scheme with the highest spectral accuracy among all known compact schemes. For non-periodic boundary conditions, boundary schemes constructed by means of an assistant scheme make the schemes not only stable for any selective length scale at every point in the computational domain but also compliant with the principle of optimal resolution. Also, an improved shock-capturing method is developed. Finally, both the effectiveness of the new hybrid method and the accuracy of the proposed schemes are verified on four benchmark test cases.
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
A modified symplectic PRK scheme for seismic wave modeling
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Ma, Jian
2017-02-01
A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2010-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Tests from the first class indicate the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. 
For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that accuracies of the node centered and the best cell-centered schemes are comparable at equivalent number of degrees of freedom.
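The least-square face gradient reconstruction compared above reduces, at each face or cell, to a small overdetermined fit of a gradient to neighbor value differences. A minimal sketch (function and variable names are illustrative, not code from the paper):

```python
import numpy as np

def lsq_gradient(xc, uc, xnbr, unbr):
    """Least-squares gradient at a point from neighbor values.

    Fits g minimizing sum_j (u_j - u_c - g . (x_j - x_c))^2, the basic
    building block of least-square face/cell gradient reconstruction
    on unstructured grids.
    """
    d = xnbr - xc            # displacement vectors, shape (m, 2)
    du = unbr - uc           # value differences, shape (m,)
    g, *_ = np.linalg.lstsq(d, du, rcond=None)
    return g

# For a linear field u = 2x + 3y the reconstruction is exact
# regardless of (irregular) neighbor placement.
xc = np.array([0.0, 0.0])
xn = np.array([[1.0, 0.1], [-0.3, 1.0], [0.5, -0.8], [-1.0, -0.2]])
u = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
g = lsq_gradient(xc, u(xc), xn, u(xn))
```

Exactness on linear fields is what makes such reconstructions nominally second-order accurate; the anisotropic-grid difficulties discussed above arise when the displacement matrix becomes badly conditioned.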
NASA Technical Reports Server (NTRS)
Ozdogan, Mutlu; Rodell, Matthew; Beaudoing, Hiroko Kato; Toll, David L.
2009-01-01
A novel method is introduced for integrating satellite derived irrigation data and high-resolution crop type information into a land surface model (LSM). The objective is to improve the simulation of land surface states and fluxes through better representation of agricultural land use. Ultimately, this scheme could enable numerical weather prediction (NWP) models to capture land-atmosphere feedbacks in managed lands more accurately and thus improve forecast skill. Here we show that application of the new irrigation scheme over the continental US significantly influences the surface water and energy balances by modulating the partitioning of water between the surface and the atmosphere. In our experiment, irrigation caused a 12% increase in evapotranspiration (QLE) and an equivalent reduction in the sensible heat flux (QH) averaged over all irrigated areas in the continental US during the 2003 growing season. Local effects were more extreme: irrigation shifted more than 100 W/m(sup 2) from QH to QLE in many locations in California, eastern Idaho, southern Washington, and southern Colorado during peak crop growth. In these cases, the changes in ground heat flux (QG), net radiation (RNET), evapotranspiration (ET), runoff (R), and soil moisture (SM) were more than 3 W/m(sup 2), 20 W/m(sup 2), 5 mm/day, 0.3 mm/day, and 100 mm, respectively. These results are highly relevant to continental- to global-scale water and energy cycle studies that, to date, have struggled to quantify the effects of agricultural management practices such as irrigation. Based on the results presented here, we expect that better representation of managed lands will lead to improved weather and climate forecasting skill when the new irrigation scheme is incorporated into NWP models such as NOAA's Global Forecast System (GFS).
NASA Astrophysics Data System (ADS)
Ducousso, Nicolas; Le Sommer, J.; Molines, J.-M.; Bell, M.
2017-12-01
The energy- and enstrophy-conserving momentum advection scheme (EEN) used over the last 10 years in NEMO is subject to a spurious numerical instability. This instability, referred to as the Symmetric Instability of the Computational Kind (SICK), arises from a discrete imbalance between the two components of the vector-invariant form of momentum advection. The properties and the method for removing this instability have been documented by Hollingsworth et al. (1983), but the extent to which the SICK may interfere with processes of interest at mesoscale- and submesoscale-permitting resolutions is still unknown. In this paper, the impact of the SICK in realistic ocean model simulations is assessed by comparing model integrations with different versions of the EEN momentum advection scheme. Investigations are undertaken with a global mesoscale-permitting resolution (1/4 °) configuration and with a regional North Atlantic Ocean submesoscale-permitting resolution (1/60 °) configuration. At both resolutions, the instability is found to alter primarily the most energetic current systems, such as equatorial jets, western boundary currents and coherent vortices. The impact of the SICK is found to increase with model resolution, with a noticeable impact at mesoscale-permitting resolution and a dramatic impact at submesoscale-permitting resolution. The SICK is shown to distort the normal functioning of current systems, by redirecting the slow energy transfer between balanced motions to a spurious energy transfer to internal inertia-gravity waves and to dissipation. Our results indicate that the SICK is likely to have significantly corrupted NEMO solutions (when run with the EEN scheme) at mesoscale-permitting and finer resolutions over the last 10 years.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
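The residual-minimization idea above can be sketched for the simplest case: linear advection with backward differencing in time and linear elements on a uniform periodic grid. Minimizing the L2 norm of the residual R = u_new + a·dt·∂x(u_new) - u_old yields the normal equations assembled below. This is a hedged illustration of the approach, not the paper's Euler-equation code:

```python
import numpy as np

def lsfem_advection_step(u, a, dt, h):
    """One backward-Euler / least-squares FEM step for u_t + a u_x = 0.

    Linear elements on a uniform periodic grid. Minimizing the L2 norm
    of R = u_new + a*dt*u_new_x - u_old over the element space gives
    (M + (a dt)^2 K) u_new = (M - a dt C) u_old, where M, K, C are the
    mass, stiffness, and convection matrices.
    """
    n = len(u)
    I = np.eye(n)
    Sp = np.roll(I, 1, axis=1)          # periodic shift: Sp[i, i+1] = 1
    Sm = Sp.T                           # Sm[i, i-1] = 1
    M = h / 6.0 * (4.0 * I + Sp + Sm)   # mass matrix (hat functions)
    K = (2.0 * I - Sp - Sm) / h         # stiffness matrix
    C = 0.5 * (Sp - Sm)                 # convection matrix, C_ij = int phi_i phi_j'
    A = M + (a * dt) ** 2 * K           # normal-equation matrix
    b = (M - a * dt * C) @ u
    return np.linalg.solve(A, b)

# A constant state satisfies the normal equations exactly.
u0 = np.ones(32)
u1 = lsfem_advection_step(u0, 1.0, 0.01, 1.0 / 32)
```

The (a·dt)²K term is the dissipation "proportional to the time step size" mentioned in the abstract; on a periodic grid the scheme also conserves the discrete mass, since the columns of K and C sum to zero.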
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Lastly, benchmarked test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
NASA Technical Reports Server (NTRS)
Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.
1973-01-01
The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
Noise spectra in balanced optical detectors based on transimpedance amplifiers.
Masalov, A V; Kuzhamuratov, A; Lvovsky, A I
2017-11-01
We present a thorough theoretical analysis and experimental study of the shot and electronic noise spectra of a balanced optical detector based on an operational amplifier connected in a transimpedance scheme. We identify and quantify the primary parameters responsible for the limitations of the circuit, in particular, the bandwidth and shot-to-electronic noise clearance. We find that the shot noise spectrum can be made consistent with the second-order Butterworth filter, while the electronic noise grows with the second power of the frequency. Good agreement between the theory and experiment is observed; however, the capacitances of the operational amplifier input and the photodiodes appear significantly higher than those specified in manufacturers' datasheets. This observation is confirmed by independent tests.
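The spectral shapes described above can be illustrated numerically: a second-order Butterworth roll-off for the shot noise against an electronic noise floor growing with f², so the clearance between them shrinks with frequency. The corner frequency and noise coefficients below are made-up illustrative values, not the paper's measured parameters:

```python
import numpy as np

def shot_noise(f, fc=100e6):
    """Second-order Butterworth shape of the shot-noise spectrum
    (power falls as 1/(1 + (f/fc)^4); fc is an assumed corner)."""
    return 1.0 / (1.0 + (f / fc) ** 4)

def electronic_noise(f, a=1e-3, b=1e-19):
    """Electronic noise floor growing with the second power of the
    frequency (a, b are illustrative coefficients)."""
    return a + b * f ** 2

# Shot-to-electronic noise clearance in dB across the band.
f = np.linspace(1e6, 200e6, 400)
clearance_db = 10.0 * np.log10(shot_noise(f) / electronic_noise(f))
```

Because the shot noise falls while the electronic noise rises, the clearance decreases monotonically with frequency, which is what ultimately bounds the usable detector bandwidth.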
Noise spectra in balanced optical detectors based on transimpedance amplifiers
NASA Astrophysics Data System (ADS)
Masalov, A. V.; Kuzhamuratov, A.; Lvovsky, A. I.
2017-11-01
We present a thorough theoretical analysis and experimental study of the shot and electronic noise spectra of a balanced optical detector based on an operational amplifier connected in a transimpedance scheme. We identify and quantify the primary parameters responsible for the limitations of the circuit, in particular, the bandwidth and shot-to-electronic noise clearance. We find that the shot noise spectrum can be made consistent with the second-order Butterworth filter, while the electronic noise grows with the second power of the frequency. Good agreement between the theory and experiment is observed; however, the capacitances of the operational amplifier input and the photodiodes appear significantly higher than those specified in manufacturers' datasheets. This observation is confirmed by independent tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finn, John M., E-mail: finn@lanl.gov
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
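As a minimal illustration of the implicit midpoint (IM) scheme discussed above, the sketch below integrates a planar rotation field, the simplest field whose circular invariant curves play the role of the tori the scheme should preserve. The implicit equation is solved by fixed-point iteration, an assumption adequate only for small time steps:

```python
import numpy as np

def implicit_midpoint_step(f, x, dt, tol=1e-13, maxit=100):
    """One implicit-midpoint step x' = x + dt*f((x + x')/2), solved by
    fixed-point iteration seeded with an explicit Euler predictor."""
    xn = x + dt * f(x)
    for _ in range(maxit):
        xn_new = x + dt * f(0.5 * (x + xn))
        if np.linalg.norm(xn_new - xn) < tol:
            return xn_new
        xn = xn_new
    return xn

# Pure rotation field: implicit midpoint preserves the radius exactly
# (up to the fixed-point tolerance), so trajectories stay on circles.
field = lambda x: np.array([-x[1], x[0]])
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = implicit_midpoint_step(field, x, 0.05)
radius = np.linalg.norm(x)
```

For this linear field the midpoint rule conserves |x|² exactly at the fixed point, mirroring the torus-preservation property of IM for reversible fields noted in the abstract.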
NASA Astrophysics Data System (ADS)
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear if amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes in the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
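The order-raising effect of Richardson extrapolation can be demonstrated on a toy problem: combining one full step of an order-2 method with two half steps cancels the leading error term, raising the observed order to 3. This generic sketch uses the explicit midpoint rule, not the optimized schemes of the abstract:

```python
import numpy as np

def rk2_step(f, u, h):
    """Explicit midpoint rule: a second-order one-step method."""
    return u + h * f(u + 0.5 * h * f(u))

def rk2_richardson_step(f, u, h):
    """Local Richardson extrapolation of the order-2 step:
    (4*fine - coarse)/3 cancels the O(h^3) local error term."""
    coarse = rk2_step(f, u, h)
    fine = rk2_step(f, rk2_step(f, u, 0.5 * h), 0.5 * h)
    return (4.0 * fine - coarse) / 3.0

def integrate(step, f, u0, T, n):
    u, h = u0, T / n
    for _ in range(n):
        u = step(f, u, h)
    return u

# Convergence check on u' = -u, u(0) = 1: halving h should cut the
# extrapolated error by about 2^3 = 8.
f = lambda u: -u
exact = np.exp(-1.0)
e1 = abs(integrate(rk2_richardson_step, f, 1.0, 1.0, 40) - exact)
e2 = abs(integrate(rk2_richardson_step, f, 1.0, 1.0, 80) - exact)
ratio = e1 / e2
```

The abstract's point is that this gain in formal order does not automatically preserve the optimized dissipation and dispersion properties of the underlying scheme, which is why the combination must be analyzed separately.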
Okabayashi, M.; Zanca, P.; Strait, E. J.; ...
2016-11-25
Disruptions caused by tearing modes (TMs) are considered to be one of the most critical roadblocks to achieving reliable, steady-state operation of tokamak fusion reactors. We have demonstrated a promising scheme to avoid mode locking by utilizing the electro-magnetic (EM) torque produced with 3D coils that are available in many tokamaks. In this scheme, the EM torque is delivered to the modes by a toroidal phase shift between the externally applied field and the excited TM fields, compensating for the mode momentum loss through the interaction with the resistive wall and uncorrected error fields. Fine control of torque balance is provided by a feedback scheme. We have explored this approach in two widely different devices and plasma conditions: DIII-D and RFX-mod operated in tokamak mode. In DIII-D, the plasma target was high βN in a non-circular divertor tokamak. We define βN as βN = β/(Ip/aBt) (%Tm/MA), where β, Ip, a, Bt are the total stored plasma pressure normalized by the magnetic pressure, plasma current, plasma minor radius and toroidal magnetic field at the plasma center, respectively. The RFX-mod plasma was ohmically-heated with ultra-low safety factor in a circular limiter discharge with active feedback coils outside the thick resistive shell. The DIII-D and RFX-mod experiments showed remarkable consistency with theoretical predictions of torque balance. The application to ignition-oriented devices such as the International Thermonuclear Experimental Reactor (ITER) would expand the horizon of its operational regime. Finally, the internal 3D coil set currently under consideration for edge localized mode suppression in ITER would be well suited for this purpose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okabayashi, M.; Zanca, P.; Strait, E. J.
Disruptions caused by tearing modes (TMs) are considered to be one of the most critical roadblocks to achieving reliable, steady-state operation of tokamak fusion reactors. We have demonstrated a promising scheme to avoid mode locking by utilizing the electro-magnetic (EM) torque produced with 3D coils that are available in many tokamaks. In this scheme, the EM torque is delivered to the modes by a toroidal phase shift between the externally applied field and the excited TM fields, compensating for the mode momentum loss through the interaction with the resistive wall and uncorrected error fields. Fine control of torque balance is provided by a feedback scheme. We have explored this approach in two widely different devices and plasma conditions: DIII-D and RFX-mod operated in tokamak mode. In DIII-D, the plasma target was high βN in a non-circular divertor tokamak. We define βN as βN = β/(Ip/aBt) (%Tm/MA), where β, Ip, a, Bt are the total stored plasma pressure normalized by the magnetic pressure, plasma current, plasma minor radius and toroidal magnetic field at the plasma center, respectively. The RFX-mod plasma was ohmically-heated with ultra-low safety factor in a circular limiter discharge with active feedback coils outside the thick resistive shell. The DIII-D and RFX-mod experiments showed remarkable consistency with theoretical predictions of torque balance. The application to ignition-oriented devices such as the International Thermonuclear Experimental Reactor (ITER) would expand the horizon of its operational regime. Finally, the internal 3D coil set currently under consideration for edge localized mode suppression in ITER would be well suited for this purpose.
NASA Astrophysics Data System (ADS)
Okabayashi, M.; Zanca, P.; Strait, E. J.; Garofalo, A. M.; Hanson, J. M.; In, Y.; La Haye, R. J.; Marrelli, L.; Martin, P.; Paccagnella, R.; Paz-Soldan, C.; Piovesan, P.; Piron, C.; Piron, L.; Shiraki, D.; Volpe, F. A.; the DIII-D and RFX-mod Teams
2017-01-01
Disruptions caused by tearing modes (TMs) are considered to be one of the most critical roadblocks to achieving reliable, steady-state operation of tokamak fusion reactors. Here we have demonstrated a promising scheme to avoid mode locking by utilizing the electro-magnetic (EM) torque produced with 3D coils that are available in many tokamaks. In this scheme, the EM torque is delivered to the modes by a toroidal phase shift between the externally applied field and the excited TM fields, compensating for the mode momentum loss through the interaction with the resistive wall and uncorrected error fields. Fine control of torque balance is provided by a feedback scheme. We have explored this approach in two widely different devices and plasma conditions: DIII-D and RFX-mod operated in tokamak mode. In DIII-D, the plasma target was high βN in a non-circular divertor tokamak. Here βN is defined as βN = β/(Ip/aBt) (%Tm/MA), where β, Ip, a, Bt are the total stored plasma pressure normalized by the magnetic pressure, plasma current, plasma minor radius and toroidal magnetic field at the plasma center, respectively. The RFX-mod plasma was ohmically-heated with ultra-low safety factor in a circular limiter discharge with active feedback coils outside the thick resistive shell. The DIII-D and RFX-mod experiments showed remarkable consistency with theoretical predictions of torque balance. The application to ignition-oriented devices such as the International Thermonuclear Experimental Reactor (ITER) would expand the horizon of its operational regime. The internal 3D coil set currently under consideration for edge localized mode suppression in ITER would be well suited for this purpose.
Precoding based channel prediction for underwater acoustic OFDM
NASA Astrophysics Data System (ADS)
Cheng, En; Lin, Na; Sun, Hai-xin; Yan, Jia-quan; Qi, Jie
2017-04-01
The lifetime of underwater cooperative networks has been a hot topic in recent years, and node energy consumption is the key issue in maintaining the energy balance among all nodes. To ensure the energy efficiency of some special nodes and obtain a longer lifetime of the underwater cooperative network, this paper focuses on adopting a precoding strategy to preprocess the signal at the transmitter and simplify the receiver structure. Meanwhile, it takes into account the presence of Doppler shifts and long feedback transmission delay in an underwater acoustic communication system. The precoding technique is applied based on channel prediction to realize energy saving and improve system performance. Different precoding methods are compared. Simulation and experimental results show that the proposed scheme has a better performance, and it can provide a simple receiver and realize energy saving for some special nodes in cooperative communication.
Electron transport through a quantum wire with side-attached asymmetric quantum-dot rings
NASA Astrophysics Data System (ADS)
Rostami, A.; Zabihi, S.; Rasooli S., H.; Seyyedi, S. K.
2011-12-01
The electronic conductance at zero temperature through a quantum wire with side-attached asymmetric quantum rings (as a scattering system) is theoretically studied using the non-interacting Anderson tunneling Hamiltonian method. We show that the asymmetric configuration of the quantum-dot scattering system strongly affects the amplitude and spectrum of the quantum wire nanostructure's transmission characteristics. It is shown that when the balanced number of quantum dots in the two rings is replaced by an unbalanced scheme, the number of forbidden mini-bands in the quantum wire conductance increases and the QW-nanostructure electronic conductance exhibits rich spectral properties due to the appearance of new anti-resonance and resonance points in the spectrum. Choosing a suitable gap between the nano-rings can strengthen the amplitude of the new resonant peaks in the QW conductance spectrum. The asymmetric quantum ring scattering system proposed in this paper opens new insight into designing quantum wire nanostructures for a given electronic conductance.
Data inversion algorithm development for the halogen occultation experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurement requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore a considerable amount of time was spent in instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time-constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives, including high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework can achieve tens of times acceleration in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method can provide fast integral curve retrieval for more complex data, such as unstructured mesh data.
NASA Astrophysics Data System (ADS)
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocity both demand prohibitively small time steps even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. This model applies a semi-implicit, semi-Lagrangian (SISL) scheme in solving the dynamic wave equations and, with the assistance of the multi-mesh method, it also adaptively chooses the dynamic wave equation only in the area of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.
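The CFL restriction motivating the SISL scheme can be made concrete: an explicit scheme's time step is bounded by the cell size over the fastest local wave speed. A small sketch for a shallow-water-type wave speed (illustrative only, not the model's actual implementation):

```python
import numpy as np

def cfl_dt(dx, velocity, depth, g=9.81, cfl=0.9):
    """Largest stable explicit time step under the CFL condition for a
    shallow-water-type dynamic wave: dt <= cfl * dx / max(|u| + sqrt(g*h)).
    Semi-implicit semi-Lagrangian (SISL) schemes relax this limit."""
    wave_speed = np.abs(velocity) + np.sqrt(g * np.maximum(depth, 0.0))
    return cfl * dx / wave_speed.max()

# 1 m cells with, at worst, a 2 m deep flow moving at 1 m/s:
dt = cfl_dt(dx=1.0, velocity=np.array([0.5, 1.0]),
            depth=np.array([1.0, 2.0]))
```

With 1 m cells the explicit step is a fraction of a second, which is why high resolution plus fast flood waves makes long-term explicit simulation prohibitive and motivates the semi-implicit treatment.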
Data-driven modeling of solar-powered urban microgrids
Halu, Arda; Scala, Antonio; Khiyami, Abdulaziz; González, Marta C.
2016-01-01
Distributed generation takes center stage in today’s rapidly changing energy landscape. Particularly, locally matching demand and generation in the form of microgrids is becoming a promising alternative to the central distribution paradigm. Infrastructure networks have long been a major focus of complex networks research with their spatial considerations. We present a systemic study of solar-powered microgrids in the urban context, obeying real hourly consumption patterns and spatial constraints of the city. We propose a microgrid model and study its citywide implementation, identifying the self-sufficiency and temporal properties of microgrids. Using a simple optimization scheme, we find microgrid configurations that result in increased resilience under cost constraints. We characterize load-related failures solving power flows in the networks, and we show the robustness behavior of urban microgrids with respect to optimization using percolation methods. Our findings hint at the existence of an optimal balance between cost and robustness in urban microgrids. PMID:26824071
Data-driven modeling of solar-powered urban microgrids.
Halu, Arda; Scala, Antonio; Khiyami, Abdulaziz; González, Marta C
2016-01-01
Distributed generation takes center stage in today's rapidly changing energy landscape. Particularly, locally matching demand and generation in the form of microgrids is becoming a promising alternative to the central distribution paradigm. Infrastructure networks have long been a major focus of complex networks research with their spatial considerations. We present a systemic study of solar-powered microgrids in the urban context, obeying real hourly consumption patterns and spatial constraints of the city. We propose a microgrid model and study its citywide implementation, identifying the self-sufficiency and temporal properties of microgrids. Using a simple optimization scheme, we find microgrid configurations that result in increased resilience under cost constraints. We characterize load-related failures solving power flows in the networks, and we show the robustness behavior of urban microgrids with respect to optimization using percolation methods. Our findings hint at the existence of an optimal balance between cost and robustness in urban microgrids.
Ozone changes under solar geoengineering: implications for UV exposure and air quality
NASA Astrophysics Data System (ADS)
Nowack, P. J.; Abraham, N. L.; Braesicke, P.; Pyle, J. A.
2015-11-01
Various forms of geoengineering have been proposed to counter anthropogenic climate change. Methods which aim to modify the Earth's energy balance by reducing insolation are often subsumed under the term Solar Radiation Management (SRM). Here, we present results of a standard SRM modelling experiment in which the incoming solar irradiance is reduced to offset the global mean warming induced by a quadrupling of atmospheric carbon dioxide. For the first time in an atmosphere-ocean coupled climate model, we include atmospheric composition feedbacks such as ozone changes under this scenario. Including the composition changes, we find large reductions in surface UV-B irradiance, with implications for vitamin D production, and increases in surface ozone concentrations, both of which could be important for human health. We highlight that both tropospheric and stratospheric ozone changes should be considered in the assessment of any SRM scheme, due to their important roles in regulating UV exposure and air quality.
ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems.
Niethammer, Christoph; Becker, Stefan; Bernreuther, Martin; Buchholz, Martin; Eckhardt, Wolfgang; Heinecke, Alexander; Werth, Stephan; Bungartz, Hans-Joachim; Glass, Colin W; Hasse, Hans; Vrabec, Jadran; Horsch, Martin
2014-10-14
The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.
Modeling North American Ice Sheet Response to Changes in Precession and Obliquity
NASA Astrophysics Data System (ADS)
Tabor, C.; Poulsen, C. J.; Pollard, D.
2012-12-01
Milankovitch theory proposes that changes in insolation due to orbital perturbations dictate the waxing and waning of the ice sheets (Hays et al., 1976). However, variations in solar forcing alone are insufficient to produce the glacial oscillations observed in the climate record. Non-linear feedbacks in the Earth system likely work in concert with the orbital cycles to produce a modified signal (e.g. Berger and Loutre, 1996), but the nature of these feedbacks remains poorly understood. To gain a better understanding of the ice dynamics and climate feedbacks associated with changes in orbital configuration, we use a complex Earth system model consisting of the GENESIS GCM and land surface model (Pollard and Thompson, 1997), the Pennsylvania State University ice sheet model (Pollard and DeConto, 2009), and the BIOME vegetation model (Kaplan et al., 2001). We began this study by investigating ice sheet sensitivity to a range of commonly used ice sheet model parameters, including mass balance and albedo, to optimize simulations for Pleistocene orbital cycles. Our tests indicate that the choice of mass balance and albedo parameterizations can lead to significant differences in ice sheet behavior and volume. For instance, use of an insolation-temperature mass balance scheme (van den Berg, 2008) allows for a larger ice sheet response to orbital changes than the commonly employed positive degree-day method. Inclusion of a large temperature dependent ice albedo, representing phenomena such as melt ponds and dirty ice, also enhances ice sheet sensitivity. Careful tuning of mass balance and albedo parameterizations can help alleviate the problem of insufficient ice sheet retreat during periods of high summer insolation (Horton and Poulsen, 2007) while still accurately replicating the modern climate.
Using our optimized configuration, we conducted a series of experiments with idealized transient orbits in an asynchronous coupling scheme to investigate the influence of obliquity and precession on the Laurentide and Cordillera ice sheets of North America. Preliminary model results show that the ice sheet response to changes in obliquity is larger than for precession despite providing a smaller direct insolation variation in the Northern Hemisphere high latitudes. A combination of an enhanced Northern Hemisphere mid-latitude temperature gradient and longer cycle duration allows for a larger ice sheet response to obliquity than would be expected from insolation forcing alone. Conversely, a shorter duration dampens the ice sheet response to precession. Nevertheless, the precession cycle does cause significant changes in ice volume, a feature not observed in the Early Pleistocene δ18O records (Raymo and Nisancioglu, 2003). Future work will examine the climate response to an idealized transient orbit that includes concurrent variations in obliquity, precession, and eccentricity.
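The positive degree-day (PDD) mass balance method contrasted above with the insolation-temperature scheme can be sketched in a few lines; the degree-day factor below is a typical textbook value for ice chosen for illustration, not the study's calibrated parameter:

```python
import numpy as np

def pdd_melt(daily_temp_c, ddf=0.008):
    """Positive-degree-day surface melt (m water equivalent).

    Sums daily mean temperatures above 0 C (the positive degree-days)
    and multiplies by a degree-day factor ddf in m w.e. per degree-day;
    0.008 is a commonly quoted value for bare ice, used here only as
    an illustration.
    """
    pdd = np.maximum(np.asarray(daily_temp_c), 0.0).sum()
    return ddf * pdd

# A week with three melt days contributing 2 + 1 + 4 = 7 degree-days:
melt = pdd_melt([-5.0, -1.0, 2.0, 1.0, 4.0, -0.5, -3.0])
```

Because PDD responds only to temperature, it cannot see insolation changes directly, which is one reason the abstract finds the insolation-temperature scheme more responsive to orbital forcing.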
Quantum computation with classical light: Implementation of the Deutsch-Jozsa algorithm
NASA Astrophysics Data System (ADS)
Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas
2016-05-01
We propose an optical implementation of the Deutsch-Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.
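The read-out step described above, deciding whether the oracle is balanced or constant, has a compact classical simulation: with a phase oracle, the amplitude remaining in the all-zeros state after the final Hadamards equals the mean of (-1)^f(x) over all inputs. A sketch of that decision rule (not the optical ring-cavity implementation proposed in the paper):

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Classical simulation of the Deutsch-Jozsa test for an n-bit
    oracle f promised to be either constant or balanced.

    In the phase-oracle picture, the |0...0> amplitude after the final
    Hadamards is mean_x (-1)^f(x): magnitude 1 for a constant f and
    exactly 0 for a balanced f, so one "query" of this amplitude
    decides the promise problem.
    """
    xs = np.arange(2 ** n)
    amp0 = np.mean([(-1.0) ** f(int(x)) for x in xs])
    return "constant" if np.isclose(abs(amp0), 1.0) else "balanced"

# A constant oracle and a balanced one (parity of the lowest bit):
verdict_const = deutsch_jozsa(lambda x: 0, 3)
verdict_bal = deutsch_jozsa(lambda x: x & 1, 3)
```

In the optical scheme of the abstract, the lens's Fourier transform plays the role of the final Hadamards, and the on-axis intensity plays the role of the |0...0> amplitude.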