Best Practices for Artifact Versioning in Service-Oriented Systems
2012-01-01
and endpoint [OASIS 2004]. But as Peltz and Anagol-Subbarao warn, "[I]t can be appealing to version down to the very lowest levels in accordance..." There are multiple sources for typical naming schemes in SOA environments: Anagol-Subbarao and Peltz provide service naming schemes, including... service versioning. 3. Extensions must not use the targetNamespace value. Peltz and Anagol-Subbarao provide additional guidance on how to implement...
A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.
Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing
2017-08-23
Security analysis is a very important issue for digital watermarking. Several years ago, according to Kerckhoffs' principle, the famous four security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of the watermarked-only attack. However, up to now there has been little application of the definition of these security levels to the theoretical analysis of the security of SS embedding schemes, due to the difficulty of the theoretical analysis. In this paper, based on the security definition, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes: the classical SS, the improved SS (ISS), the circular extension of ISS, and the nonrobust and robust natural watermarking schemes. The theoretical analyses of these typical SS schemes are successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.
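As an illustration of the embedding model underlying this class of schemes, here is a minimal sketch (not taken from the paper) of classical additive spread-spectrum embedding with a correlation detector. The signal sizes, the gain `alpha`, and the function names are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def ss_embed(host, carrier, bit, alpha=1.0):
    """Classical additive spread-spectrum embedding: y = x + alpha * b * s."""
    return host + alpha * bit * carrier

def ss_detect(signal, carrier):
    """Correlation detector: the sign of <y, s> estimates the embedded bit."""
    return 1 if np.dot(signal, carrier) >= 0 else -1

n = 4096
host = rng.normal(0.0, 1.0, n)          # host signal (e.g. transform coefficients)
carrier = rng.choice([-1.0, 1.0], n)    # pseudo-random spreading sequence (the key)
watermarked = ss_embed(host, carrier, bit=-1, alpha=0.5)
print(ss_detect(watermarked, carrier))  # → -1 (the embedded bit dominates the host correlation)
```

The detector works because the host correlation has standard deviation about √n while the watermark term contributes α·n, so detection is reliable whenever α·√n is large.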
An upwind, kinetic flux-vector splitting method for flows in chemical and thermal non-equilibrium
NASA Technical Reports Server (NTRS)
Eppard, W. M.; Grossman, B.
1993-01-01
We have developed new upwind kinetic difference schemes for flows with non-equilibrium thermodynamics and chemistry. These schemes are derived from the Boltzmann equation, with the resulting Euler schemes developed as moments of the discretized Boltzmann scheme with a locally Maxwellian velocity distribution. Splitting the velocity distribution at the Boltzmann level is seen to result in a flux-split Euler scheme and is called Kinetic Flux Vector Splitting (KFVS). Extensions to flows with finite-rate chemistry and vibrational relaxation are accomplished utilizing nonequilibrium kinetic theory. Computational examples are presented comparing KFVS with the schemes of Van Leer and Roe for a quasi-one-dimensional flow through a supersonic diffuser, inviscid flow through a two-dimensional inlet, and viscous flow over a cone at zero angle-of-attack. Calculations are also shown for the transonic flow over a bump in a channel and the transonic flow over an NACA 0012 airfoil. The results show that even though the KFVS scheme is a Riemann solver at the kinetic level, its behavior at the Euler level is more similar to the existing flux-vector splitting algorithms than to the flux-difference splitting scheme of Roe.
Control of photon storage time using phase locking.
Ham, Byoung S
2010-01-18
A photon echo storage-time extension protocol is presented using a phase-locking method in a three-level backward propagation scheme, where phase locking serves as a conditional stopper of the rephasing process in conventional two-pulse photon echoes. The backward propagation scheme solves the critical problems of extremely low retrieval efficiency and the spontaneous emission noise caused by the pi rephasing pulse in photon-echo-based quantum memories. The physics of the storage-time extension lies in the imminent population transfer from the excited state to an auxiliary spin state by a phase-locking control pulse. We numerically demonstrate that the storage time is extended up to the spin dephasing time.
Sensor data security level estimation scheme for wireless sensor networks.
Ramos, Alex; Filho, Raimir Holanda
2015-01-19
Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.
A color-coded vision scheme for robotics
NASA Technical Reports Server (NTRS)
Johnson, Kelley Tina
1991-01-01
Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
Extension of a System Level Tool for Component Level Analysis
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Schallhorn, Paul
2002-01-01
This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.
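The Poiseuille-flow benchmark mentioned above has a closed-form solution, which makes it a convenient sanity check for any such scheme. Below is a minimal sketch (not the paper's code) of a 1-D finite-difference solve of plane Poiseuille flow compared against the analytic parabola; the viscosity, pressure gradient, and grid size are illustrative values:

```python
import numpy as np

# Plane Poiseuille flow: mu * u''(y) = dp/dx, with u(0) = u(H) = 0.
mu, dpdx, H, n = 1.0e-3, -1.0, 1.0, 101
y = np.linspace(0.0, H, n)
h = y[1] - y[0]

# Tridiagonal second-difference system for the interior nodes.
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1))
b = (dpdx / mu) * h**2 * np.ones(n - 2)
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, b)

# Analytic solution: a parabola, so second differences are exact for it.
u_exact = (dpdx / (2.0 * mu)) * y * (y - H)
print(np.max(np.abs(u - u_exact)))   # error at the level of round-off
```

Because the exact profile is quadratic, the centered second difference reproduces it exactly and the residual error is pure floating-point round-off, which makes this a sharp verification case.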
Generalization of the NpNn scheme to nonyrast levels of even-even nuclei
NASA Astrophysics Data System (ADS)
Zhao, Y. M.; Arima, A.
2003-07-01
In this Brief Report we present the systematics of excitation energies for even-even nuclei in two regions: the 50
NASA Astrophysics Data System (ADS)
Sparaciari, Carlo; Paris, Matteo G. A.
2013-01-01
We address measurement schemes where certain observables X_k are chosen at random within a set of nondegenerate isospectral observables and then measured on repeated preparations of a physical system. Each observable has a probability z_k to be measured, with ∑_k z_k = 1, and the statistics of this generalized measurement is described by a positive operator-valued measure. This kind of scheme is referred to as a quantum roulette, since each observable X_k is chosen at random, e.g., according to the fluctuating value of an external parameter. Here we focus on quantum roulettes for qubits involving the measurements of Pauli matrices, and we explicitly evaluate their canonical Naimark extensions, i.e., their implementation as indirect measurements involving an interaction scheme with a probe system. We thus provide a concrete model to realize the roulette without destroying the signal state, which can be measured again after the measurement or can be transmitted. Finally, we apply our results to the description of Stern-Gerlach-like experiments on a two-level system.
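The POVM structure described above is easy to verify numerically. A minimal sketch, with assumed probabilities z_k, building the roulette's POVM from weighted Pauli eigenprojectors and checking that the elements sum to the identity (the Naimark dilation itself is not constructed here):

```python
import numpy as np

I = np.eye(2, dtype=complex)
pauli = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

z = {"X": 0.5, "Y": 0.3, "Z": 0.2}   # assumed probabilities of choosing each observable

# POVM of the roulette: for each Pauli, its two eigenprojectors weighted by z_k.
povm = []
for k, sigma in pauli.items():
    for sign in (+1, -1):
        povm.append(z[k] * (I + sign * sigma) / 2.0)  # projector onto the ±1 eigenspace

completeness = sum(povm)
print(np.allclose(completeness, I))   # POVM elements sum to the identity

# Outcome statistics on a state rho: p(k, ±) = z_k * Tr[rho * projector].
rho = np.array([[1, 0], [0, 0]], dtype=complex)       # |0><0|
probs = [np.real(np.trace(rho @ E)) for E in povm]
print(np.isclose(sum(probs), 1.0))
```

Since (I + σ)/2 and (I − σ)/2 sum to I for each Pauli, and the z_k sum to one, completeness holds for any choice of the z_k.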
Boidin, B
2015-02-01
This article tackles the perspectives and limits of the extension of health coverage based on community-based health insurance schemes in Africa. Despite their potentially strong contribution to the extension of health coverage, their weaknesses challenge their ability to play an important role in this extension. Three limits are distinguished: financial fragility; insufficient adaptation to the characteristics and needs of poor people; and organizational and institutional failures. Lessons can therefore be learnt from the limits of the institutionalization of community-based health insurance schemes. First, community-based health insurance schemes are to be considered a transitional but insufficient solution. There is also a stronger role to be played by public actors in improving financial support, strengthening health services, and coordinating coverage programs.
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme
NASA Astrophysics Data System (ADS)
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres
2007-10-01
A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure-coordinate equations, constituting a modified Miller-Pearce-White model, in a hybrid-coordinate framework. A neutral background is subtracted in the initial continuous dynamics, yielding modified equations for the geopotential, temperature, and logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time step are derived. A disclosure scheme is presented, which results in an uncoupled diagnostic system consisting of a 3-D Poisson equation for the omega velocity and a 2-D Helmholtz equation for the logarithmic pressure fluctuation. The model is discretized to create a non-hydrostatic extension to the numerical weather prediction model HIRLAM. The discretization schemes, trajectory computation algorithms and interpolation routines, as well as the physical parametrization package, are maintained from the parent hydrostatic HIRLAM. For the stability investigation, the derived SISL model is linearized with respect to the initial, thermally non-equilibrium resting state. Explicit residuals of the linear model prove to be sensitive to the relative departures of temperature and static stability from the reference state. Based on the stability study, the semi-implicit term in the vertical momentum equation is replaced by an implicit term, which increases the stability of the model.
A generalized weight-based particle-in-cell simulation scheme
NASA Astrophysics Data System (ADS)
Lee, W. W.; Jenkins, T. G.; Ethier, S.
2011-03-01
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage for such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
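The noise-reduction argument for δf can be illustrated with a toy velocity-moment estimate: sampling only the perturbation, with the background moment taken analytically, gives a far smaller statistical error than sampling the full distribution. A sketch under assumed Gaussian background and perturbation (none of this is the authors' code; the distributions and sizes are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def f0_pdf(v):   # known Maxwellian background, mean 0
    return np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)

def f1_pdf(v):   # shifted component representing the perturbation
    return np.exp(-(v - 1.0)**2 / 2) / np.sqrt(2 * np.pi)

eps, N, trials = 0.01, 10000, 50
true_moment = eps * 1.0   # exact mean velocity of f = (1 - eps) f0 + eps f1

full_f, delta_f = [], []
for _ in range(trials):
    # full-F: sample the whole distribution; every marker carries weight 1.
    shift = rng.random(N) < eps
    v = rng.normal(0.0, 1.0, N) + np.where(shift, 1.0, 0.0)
    full_f.append(v.mean())

    # delta-f: markers sample f0; weights carry only delta f / f0.
    vm = rng.normal(0.0, 1.0, N)
    w = eps * (f1_pdf(vm) / f0_pdf(vm) - 1.0)
    delta_f.append(np.mean(vm * w))   # background moment is known analytically (= 0)

print(np.std(full_f), np.std(delta_f))   # delta-f scatter is far smaller
```

The full-F estimator's noise scales with the O(1) thermal spread, while the δf weights are O(ε), which is exactly the advantage claimed for the perturbative stage.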
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^{1/2})] in the strong convergence rate for both the speed |v| and the angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the "area-integral" terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
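The strong-convergence gap between Euler-Maruyama and Milstein is easy to reproduce on a scalar test problem. A sketch using geometric Brownian motion, where the exact solution along the same Brownian path is known (a scalar illustration only; the paper's multi-dimensional angular scattering additionally requires the area-integral terms, which vanish in one dimension):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, T, n_steps, n_paths = 0.5, 1.0, 1.0, 100, 5000
dt = T / n_steps

# Geometric Brownian motion dX = mu X dt + sigma X dW, X(0) = 1.
x_em = np.ones(n_paths)    # Euler-Maruyama
x_mil = np.ones(n_paths)   # Milstein: adds the (sigma^2/2) X (dW^2 - dt) correction
w = np.zeros(n_paths)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    w += dw
    x_em = x_em * (1 + mu * dt + sigma * dw)
    x_mil = x_mil * (1 + mu * dt + sigma * dw) + 0.5 * sigma**2 * x_mil * (dw**2 - dt)

# Exact solution driven by the same Brownian increments.
x_exact = np.exp((mu - 0.5 * sigma**2) * T + sigma * w)

err_em = np.mean(np.abs(x_em - x_exact))    # strong error, O(dt^{1/2})
err_mil = np.mean(np.abs(x_mil - x_exact))  # strong error, O(dt)
print(err_em, err_mil)
```

With Δt = 0.01 the Milstein pathwise error is roughly an order of magnitude below Euler-Maruyama, consistent with the O(Δt) vs. O(Δt^{1/2}) rates quoted in the abstract.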
NASA Astrophysics Data System (ADS)
Siswantyo, Sepha; Susanti, Bety Hayat
2016-02-01
Preneel-Govaerts-Vandewalle (PGV) schemes consist of 64 possible single-block-length schemes that can be used to build a hash function based on block ciphers. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack on those 4 secure PGV schemes, which use the RC5 algorithm in their basic construction, to test their collision resistance property. The attack results show that collisions occurred in those 4 secure PGV schemes. Based on the analysis, we indicate that the Feistel structure and data-dependent rotation operation in the RC5 algorithm, the XOR operations in the scheme, and the selection of the additional message block value all contribute to the occurrence of collisions.
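The attack named above exploits the iterated (Merkle-Damgård) structure into which PGV compression functions are plugged. A toy sketch of a length-extension forgery on a made-up 64-bit compression function (purely illustrative: this is not RC5 and not one of the four PGV schemes, but the resumption-from-a-digest mechanics are the same):

```python
import struct

BLOCK = 8                    # toy 8-byte message blocks
IV = 0x0123456789ABCDEF

def compress(h, block):
    """Toy compression function (stand-in for a block-cipher-based construction)."""
    (m,) = struct.unpack(">Q", block)
    for _ in range(4):
        h = ((h ^ m) * 0x9E3779B97F4A7C15 + 0x243F6A8885A308D3) & 0xFFFFFFFFFFFFFFFF
        h = ((h << 13) | (h >> 51)) & 0xFFFFFFFFFFFFFFFF
    return h

def pad(msg_len):
    """Merkle-Damgaard strengthening: 0x80, zeros, then the message bit length."""
    p = b"\x80"
    while (msg_len + len(p) + 8) % BLOCK:
        p += b"\x00"
    return p + struct.pack(">Q", msg_len * 8)

def md_hash(msg, h=IV, prefix_len=0):
    data = msg + pad(prefix_len + len(msg))
    for i in range(0, len(data), BLOCK):
        h = compress(h, data[i:i + BLOCK])
    return h

secret, ext = b"secret-k", b"attacker"
digest = md_hash(secret)                  # all the attacker is given
# Length extension: resume from the digest, knowing only len(secret).
glue = pad(len(secret))
forged = md_hash(ext, h=digest, prefix_len=len(secret) + len(glue))
direct = md_hash(secret + glue + ext)
print(forged == direct)                   # True: extension forged without `secret`
```

The forgery works because the final chaining value fully determines the hash of any extension; resisting it requires breaking this structure (e.g. output transformation or wide-pipe designs), not merely a strong compression function.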
Gadomski, A; Hladyszowski, J
2015-01-01
An extension of the Coulomb-Amontons law is proposed in terms of an interaction-detail-involving renormalization (simplified) n-th level scheme. The coefficient of friction is obtained in a general exponential (nonlinear) form, characteristic of a virtually infinite (or many-body) level of the interaction map. Yet its application to a hydration-repulsion bilayered system, prone to facilitated lubrication, is taken as linearly confined, albeit with the inclusion of a decisive repelling force/pressure factor. Some perspectives toward related systems, fairly outside biotribological issues, have also been addressed.
NASA Astrophysics Data System (ADS)
Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon
2017-06-01
Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.
Moving template analysis of crack growth. 1: Procedure development
NASA Astrophysics Data System (ADS)
Padovan, Joe; Guo, Y. H.
1994-06-01
Based on a moving template procedure, this two-part series develops a method to follow the crack-tip physics in a self-adaptive manner that provides a uniformly accurate prediction of crack growth. For multiple-crack environments, this is achieved by attaching a moving template to each crack tip. The templates are each individually oriented to follow the associated growth orientation and rate. In this part, the essentials of the procedure are derived for application to fatigue crack environments. Overall, the scheme derived possesses several hierarchical levels, i.e., the global model, the interpolatively tied moving template, and a multilevel element death option to simulate the crack wake. To speed up computation, the hierarchical polytree scheme is used to reorganize the global stiffness inversion process. In addition to developing the various features of the scheme, the accuracy of predictions for various crack lengths is also benchmarked. Part 2 extends the scheme to multiple-crack problems. Extensive benchmarking is also presented to verify the scheme.
The Text Encoding Initiative: Flexible and Extensible Document Encoding.
ERIC Educational Resources Information Center
Barnard, David T.; Ide, Nancy M.
1997-01-01
The Text Encoding Initiative (TEI), an international collaboration aimed at producing a common encoding scheme for complex texts, examines the requirement for generality versus the requirement to handle specialized text types. Discusses how documents and users tax the limits of fixed schemes, requiring flexible, extensible encoding to support…
Döntgen, Malte; Schmalz, Felix; Kopp, Wassja A; Kröger, Leif C; Leonhard, Kai
2018-06-13
An automated scheme for obtaining chemical kinetic models from scratch using reactive molecular dynamics and quantum chemistry simulations is presented. This methodology combines the phase space sampling of reactive molecular dynamics with the thermochemistry and kinetics prediction capabilities of quantum mechanics. This scheme provides the NASA polynomial and modified Arrhenius equation parameters for all species and reactions that are observed during the simulation and supplies them in the ChemKin format. The ab initio level of theory for predictions is easily exchangeable and the presently used G3MP2 level of theory is found to reliably reproduce hydrogen and methane oxidation thermochemistry and kinetics data. Chemical kinetic models obtained with this approach are ready-to-use for, e.g., ignition delay time simulations, as shown for hydrogen combustion. The presented extension of the ChemTraYzer approach can be used as a basis for methodologically advancing chemical kinetic modeling schemes and as a black-box approach to generate chemical kinetic models.
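The two parameterizations mentioned above, NASA polynomials for thermochemistry and modified Arrhenius rates for kinetics, are straightforward to evaluate. A sketch with illustrative (not fitted) coefficients; the parameter values below are assumptions for the example, not data from the paper:

```python
import math

R = 8.314462618  # J/(mol K)

def arrhenius(T, A, n, Ea):
    """Modified Arrhenius rate: k(T) = A * T^n * exp(-Ea / (R T))."""
    return A * T**n * math.exp(-Ea / (R * T))

def nasa_cp_over_R(T, a):
    """NASA 7-coefficient polynomial, cp part:
    cp/R = a1 + a2 T + a3 T^2 + a4 T^3 + a5 T^4."""
    return a[0] + a[1] * T + a[2] * T**2 + a[3] * T**3 + a[4] * T**4

# Illustrative parameters: A in s^-1 (unimolecular), Ea in J/mol.
A, n, Ea = 1.0e13, 0.5, 150e3
print(arrhenius(500.0, A, n, Ea) < arrhenius(1000.0, A, n, Ea))  # rates rise with T
print(nasa_cp_over_R(300.0, [3.5, 0.0, 0.0, 0.0, 0.0]))          # constant-cp toy case
```

These are exactly the quantities a ChemKin-format mechanism stores per species and per reaction, which is why the automated scheme emits them.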
Classification of extraterrestrial civilizations
NASA Astrophysics Data System (ADS)
Tang, Tong B.; Chang, Grace
1991-06-01
A scheme of classification of extraterrestrial intelligence (ETI) communities based on the scope of energy accessible to the civilization in question is proposed as an alternative to the Kardashev (1964) scheme, which includes three types of civilization as determined by their levels of energy expenditure. The proposed scheme includes six classes: (1) a civilization that runs essentially on energy exerted by individual beings or by domesticated lower life forms; (2) harnessing of natural sources on the planetary surface with artificial constructions, like water wheels and wind sails; (3) energy from fossils and fissionable isotopes, mined beneath the planet surface; (4) exploitation of nuclear fusion on a large scale, whether on the planet, in space, or from primary solar energy; (5) extensive use of antimatter for energy storage; and (6) energy from spacetime, perhaps via the action of naked singularities.
Extensive Listening in a Colombian University: Process, Product, and Perceptions
ERIC Educational Resources Information Center
Mayora, Carlos A.
2017-01-01
The current paper reports an experience implementing a small-scale narrow listening scheme (one of the varieties of extensive listening) with intermediate learners of English as a foreign language in a Colombian university. The paper presents (a) how the scheme was designed and implemented, including materials and procedures (the process); (b) how…
A novel encryption scheme for high-contrast image data in the Fresnelet domain
Bibi, Nargis; Farwa, Shabieh; Jahngir, Adnan; Usman, Muhammad
2018-01-01
In this paper, a unique and more distinctive encryption algorithm is proposed, based on the complexity of a highly nonlinear S-box in the Fresnelet domain. The nonlinear pattern is transformed further to enhance the confusion in the dummy data using the Fresnelet technique. The security level of the encrypted image is boosted using the algebra of the Galois field in the Fresnelet domain. At the first level, the Fresnelet transform is used to propagate the given information with the desired wavelength at a specified distance. It decomposes the given secret data into four complex sub-bands. These complex sub-bands are separated into two components of real sub-band data and imaginary sub-band data. At the second level, the net sub-band data produced at the first level is deteriorated to a non-linear diffused pattern using the unique S-box defined on the Galois field F(2^8). In the diffusion process, the permuted image is substituted via dynamic algebraic S-box substitution. We prove through various analysis techniques that the proposed scheme enhances the cipher security level extensively. PMID:29608609
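Arithmetic in the Galois field GF(2^8), which underlies S-box constructions of this kind, can be sketched in a few lines. The multiplication below reduces modulo the AES polynomial x^8 + x^4 + x^3 + x + 1 (an assumption for illustration; the paper's field representation may differ) and reproduces the worked example {57}·{83} = {c1} from FIPS-197:

```python
def gf256_mul(a, b, poly=0x11B):
    """Carry-less multiply of two field elements, reduced modulo `poly`
    (0x11B encodes x^8 + x^4 + x^3 + x + 1, the AES polynomial)."""
    r = 0
    while b:
        if b & 1:          # add (XOR) the current multiple of a
            r ^= a
        b >>= 1
        a <<= 1            # multiply a by x
        if a & 0x100:      # degree reached 8: reduce modulo poly
            a ^= poly
    return r

print(hex(gf256_mul(0x57, 0x83)))  # → 0xc1 (worked example from FIPS-197)
```

An S-box over this field is typically built from the multiplicative inverse plus an affine map; the multiply above is the primitive everything else rests on.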
On the Conservation and Convergence to Weak Solutions of Global Schemes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Shu, Chi-Wang
2001-01-01
In this paper we discuss the issue of conservation and convergence to weak solutions of several global schemes, including the commonly used compact schemes and spectral collocation schemes, for solving hyperbolic conservation laws. It is shown that such schemes, if convergent boundedly almost everywhere, will converge to weak solutions. The results are extensions of the classical Lax-Wendroff theorem concerning conservative schemes.
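The conservation property at the heart of the Lax-Wendroff theorem is visible already in a toy finite-volume code: a flux-difference update telescopes over the grid, so the discrete total mass is constant to round-off. A sketch for Burgers' equation with a Lax-Friedrichs flux (illustrative of conservative form only, not one of the global schemes analyzed in the paper):

```python
import numpy as np

# Conservative update u_j^{n+1} = u_j - (dt/dx) (F_{j+1/2} - F_{j-1/2})
# for Burgers' equation f(u) = u^2 / 2, periodic domain.
nx = 200
dx, dt = 1.0 / nx, 0.002
x = (np.arange(nx) + 0.5) * dx
u = np.sin(2 * np.pi * x) + 0.5

def lf_flux(ul, ur):
    """Lax-Friedrichs numerical flux with a global wave-speed bound."""
    alpha = max(np.max(np.abs(ul)), np.max(np.abs(ur)))
    return 0.5 * (ul**2 / 2 + ur**2 / 2) - 0.5 * alpha * (ur - ul)

mass0 = u.sum() * dx
for _ in range(100):
    ul, ur = u, np.roll(u, -1)           # periodic boundary
    F = lf_flux(ul, ur)                  # F[j] = flux through face j+1/2
    u = u - (dt / dx) * (F - np.roll(F, 1))
print(abs(u.sum() * dx - mass0))         # discrete mass conserved to round-off
```

The flux differences cancel pairwise under the periodic sum, which is precisely the telescoping argument behind the classical Lax-Wendroff result that the abstract extends to global schemes.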
SKIRT: Hybrid parallelization of radiative transfer simulations
NASA Astrophysics Data System (ADS)
Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.
2017-07-01
We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
NASA Astrophysics Data System (ADS)
Hahn, S. J.; Fawley, W. M.; Kim, K. J.; Edighoffer, J. A.
1994-12-01
The authors examine the performance of the so-called electron output scheme recently proposed by the Novosibirsk group. In this scheme, the key role of the FEL oscillator is to induce bunching, while an external undulator, called the radiator, then outcouples the bunched electron beam to optical energy via coherent emission. The level of the intracavity power in the oscillator is kept low by employing a transverse optical klystron (TOK) configuration, thus avoiding excessive thermal loading on the cavity mirrors. Time-dependent effects are important in the operation of the electron output scheme because high gain in the TOK oscillator leads to sideband instabilities and chaotic behavior. The authors have carried out an extensive simulation study using 1D and 2D time-dependent codes and find that proper control of the oscillator cavity detuning and cavity loss results in high output bunching with a narrow spectral bandwidth. Large cavity detuning in the oscillator and tapering of the radiator undulator are necessary for optimum output power.
Convergence of generalized MUSCL schemes
NASA Technical Reports Server (NTRS)
Osher, S.
1984-01-01
Semi-discrete generalizations of the second order extension of Godunov's scheme, known as the MUSCL scheme, are constructed, starting with any three point E scheme. They are used to approximate scalar conservation laws in one space dimension. For convex conservation laws, each member of a wide class is proven to be a convergent approximation to the correct physical solution. Comparison with another class of high resolution convergent schemes is made.
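The TVD behavior that such convergence proofs lean on can be checked numerically. A sketch of a MUSCL-type update for linear advection with a minmod-limited reconstruction, verifying that the total variation does not increase (an illustration of the scheme class, not the paper's semi-discrete construction):

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: the smaller-magnitude slope when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tv(u):
    """Total variation on a periodic grid."""
    return np.sum(np.abs(np.diff(np.append(u, u[0]))))

# Linear advection u_t + u_x = 0, periodic grid, MUSCL slopes + upwind flux.
nx, c = 200, 0.4                      # c = CFL number (a dt / dx)
u = np.where((np.arange(nx) > 50) & (np.arange(nx) < 100), 1.0, 0.0)

tv0 = tv(u)
for _ in range(100):
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * slope          # reconstructed left state at face j+1/2
    u = u - c * (u_face - np.roll(u_face, 1))
print(tv(u) <= tv0 + 1e-12)           # total variation does not increase
```

Writing the update in Harten's incremental form shows the minmod-limited coefficients stay in [0, 1] for CFL up to 2/3, so c = 0.4 is safely in the TVD regime.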
ERIC Educational Resources Information Center
Maxime, Francoise; Maze, Armelle
2006-01-01
This article aims to study the design and the organization of auditing systems to develop environmental or quality assurance schemes at the farm level and the role that extension services could play in these processes. It starts by discussing the issue of combining auditing and advisory activities and developing auditing competences. Empirical…
ACCURATE ORBITAL INTEGRATION OF THE GENERAL THREE-BODY PROBLEM BASED ON THE D'ALEMBERT-TYPE SCHEME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minesaki, Yukitaka
2013-03-15
We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.
A fast and accurate dihedral interpolation loop subdivision scheme
NASA Astrophysics Data System (ADS)
Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan
2018-04-01
In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
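The dihedral-angle threshold driving the adaptive refinement is just the angle between the unit normals of a triangular face and its neighbor. A minimal sketch of that geometric test (geometry only, not the subdivision code; the example coordinates are made up):

```python
import numpy as np

def unit_normal(tri):
    """Unit normal of a triangle given as three vertices (consistent winding)."""
    a, b, c = tri
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def dihedral_angle(tri1, tri2):
    """Angle (degrees) between face normals: 0 for coplanar, consistently oriented faces."""
    cosang = np.clip(np.dot(unit_normal(tri1), unit_normal(tri2)), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

p = np.array
# Two faces sharing the edge (0,0,0)-(1,0,0); the second pair is tilted out of plane.
flat = dihedral_angle([p([0., 0, 0]), p([1., 0, 0]), p([0., 1, 0])],
                      [p([1., 0, 0]), p([0., 0, 0]), p([0., -1, 0])])
bent = dihedral_angle([p([0., 0, 0]), p([1., 0, 0]), p([0., 1, 0])],
                      [p([1., 0, 0]), p([0., 0, 0]), p([0., -1, 1])])
print(round(flat, 6), round(bent, 2))  # → 0.0 45.0
```

A refinement criterion then subdivides only faces whose dihedral angle with a neighbor exceeds the chosen threshold, which is how flat regions are spared the exponential face growth.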
Marching iterative methods for the parabolized and thin layer Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Israeli, M.
1985-01-01
Downstream marching iterative schemes for the solution of the Parabolized or Thin Layer (PNS or TL) Navier-Stokes equations are described. Modifications of the primitive equation global relaxation sweep procedure result in efficient second-order marching schemes. These schemes take full account of the reduced order of the approximate equations as they behave like the SLOR for a single elliptic equation. The improved smoothing properties permit the introduction of Multi-Grid acceleration. The proposed algorithm is essentially Reynolds number independent and therefore can be applied to the solution of the subsonic Euler equations. The convergence rates are similar to those obtained by the Multi-Grid solution of a single elliptic equation; the storage is also comparable as only the pressure has to be stored on all levels. Extensions to three-dimensional and compressible subsonic flows are discussed. Numerical results are presented.
Mapping Mangrove Density from RapidEye Data in Central America
NASA Astrophysics Data System (ADS)
Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru
2017-06-01
Mangrove forests provide a wide range of socioeconomic and ecological services for coastal communities. Extensive aquaculture development of mangrove waters in many developing countries has constantly ignored the services of mangrove ecosystems, leading to unintended environmental consequences. Monitoring the current status and distribution of mangrove forests is deemed important for evaluating forest management strategies. This study aims to delineate the density distribution of mangrove forests in the Gulf of Fonseca, Central America with RapidEye data using support vector machines (SVM). The data collected in 2012 for density classification of mangrove forests were processed based on four different band-combination schemes: scheme-1 (bands 1-3, 5, excluding the red-edge band 4), scheme-2 (bands 1-5), scheme-3 (bands 1-3, 5, incorporating the normalized difference vegetation index, NDVI), and scheme-4 (bands 1-3, 5, incorporating the normalized difference red-edge index, NDRI). We also tested whether the contribution of the RapidEye red-edge band could improve the classification results. Three main steps of data processing were employed: (1) data pre-processing, (2) image classification, and (3) accuracy assessment, to evaluate the contribution of the red-edge band to the accuracy of classification results across these four schemes. The classification maps compared with the ground reference data indicated a slightly higher accuracy level for schemes 2 and 4. The overall accuracies and Kappa coefficients were 97% and 0.95 for scheme-2 and 96.9% and 0.95 for scheme-4, respectively.
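The NDVI and red-edge index used in schemes 3 and 4 are simple normalized band ratios. A sketch with toy reflectance values (the formulas are the standard ones; the band values below are made up for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized difference red-edge index: (NIR - RE) / (NIR + RE)."""
    return (nir - red_edge) / (nir + red_edge)

# Toy reflectances: a dense-canopy pixel vs a sparse one.
nir = np.array([0.50, 0.30])
red = np.array([0.05, 0.10])
re  = np.array([0.20, 0.15])
print(ndvi(nir, red))   # denser vegetation -> higher index
print(ndre(nir, re))
```

Stacking an index like this as an extra feature alongside the raw bands is exactly what schemes 3 and 4 do before handing the pixels to the SVM classifier.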
Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
Calculations of steady and unsteady, transonic, turbulent channel flows with a time accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with the use of much smaller artificial dissipations than those used in the previous steady flow solver for steady and unsteady channel flows. The capability to solve time dependent flows is shown by solving very weakly excited and strongly excited, forced oscillatory, channel flows.
Design of an extensive information representation scheme for clinical narratives.
Deléger, Louise; Campillos, Leonardo; Ligozat, Anne-Laure; Névéol, Aurélie
2017-09-11
Knowledge representation frameworks are essential to the understanding of complex biomedical processes and to the analysis of the biomedical texts that describe them. Combined with natural language processing (NLP), they have the potential to contribute to retrospective studies by unlocking important phenotyping information contained in the narrative content of electronic health records (EHRs). This work aims to develop an extensive information representation scheme for the clinical information contained in EHR narratives, and to support secondary use of EHR narrative data to answer clinical questions. We review recent work that proposed information representation schemes and applied them to the analysis of clinical narratives. We then propose a unifying scheme that supports the extraction of information to address a large variety of clinical questions. We devised a new information representation scheme for clinical narratives that comprises 13 entities, 11 attributes and 37 relations. The associated annotation guidelines can be used to consistently apply the scheme to clinical narratives and are available at https://cabernet.limsi.fr/annotation_guide_for_the_merlot_french_clinical_corpus-Sept2016.pdf. The information scheme includes many elements of the major schemes described in the clinical natural language processing literature, as well as a uniquely detailed set of relations.
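The entity/attribute/relation structure of such a scheme can be sketched as a small data model. The type and relation names below are hypothetical stand-ins, not the scheme's actual 13 entities or 37 relations (those are defined in the linked guidelines):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A text span labeled with a clinical entity type (names here are illustrative)."""
    id: str
    etype: str
    start: int
    end: int
    text: str
    attributes: dict = field(default_factory=dict)  # e.g. {"negation": True}

@dataclass
class Relation:
    """A typed link between two annotated entities, by entity id."""
    rtype: str
    source: str
    target: str

# Hypothetical annotation of the snippet "no fever after aspirin":
fever = Entity("T1", "SignOrSymptom", 3, 8, "fever", {"negation": True})
drug = Entity("T2", "Drug", 15, 22, "aspirin")
rel = Relation("temporally_follows", "T2", "T1")
print(fever.attributes["negation"])  # True
```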
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-06-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.
Zhao, Zhenguo; Shi, Wenbo
2014-01-01
Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem, and extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS and UDVS schemes. Through concrete attacks, we demonstrate that neither scheme is unforgeable. The security analysis shows that these schemes are not suitable for practical applications.
Enhancing Vocabulary Acquisition through Reading: A Hierarchy of Text-Related Exercise Types.
ERIC Educational Resources Information Center
Wesche, M.; Paribakht, T. Sima
This paper describes a classification scheme developed to examine the effects of extensive reading on primary and second language vocabulary acquisition and reports on an experiment undertaken to test the model scheme. The classification scheme represents a hypothesized hierarchy of the degree and type of mental processing required by various…
NASA Technical Reports Server (NTRS)
Yee, H. C.
1995-01-01
Two classes of explicit compact high-resolution shock-capturing methods for the multidimensional compressible Euler equations for fluid dynamics are constructed. Some of these schemes can be fourth-order accurate away from discontinuities. For the semi-discrete case their shock-capturing properties are of the total variation diminishing (TVD), total variation bounded (TVB), total variation diminishing in the mean (TVDM), essentially nonoscillatory (ENO), or positive type of scheme for 1-D scalar hyperbolic conservation laws and are positive schemes in more than one dimension. These fourth-order schemes require the same grid stencil as their second-order non-compact cousins. One class does not require the standard matrix inversion or a special numerical boundary condition treatment associated with typical compact schemes. Due to the construction, these schemes can be viewed as approximations to genuinely multidimensional schemes in the sense that they might produce less distortion in spherical type shocks and are more accurate in vortex type flows than schemes based purely on one-dimensional extensions. However, one class has a more desirable high-resolution shock-capturing property and a smaller operation count in 3-D than the other class. The extension of these schemes to coupled nonlinear systems can be accomplished using the Roe approximate Riemann solver, the generalized Steger and Warming flux-vector splitting or the van Leer type flux-vector splitting. Modification to existing high-resolution second- or third-order non-compact shock-capturing computer codes is minimal. High-resolution shock-capturing properties can also be achieved via a variant of the second-order Lax-Friedrichs numerical flux without the use of Riemann solvers for coupled nonlinear systems with comparable operations count to their classical shock-capturing counterparts. The simplest extension to viscous flows can be achieved by using the standard fourth-order compact or non-compact formula for the viscous terms.
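For a concrete sense of the Lax-Friedrichs flux mentioned above, here is a minimal first-order version for a 1-D scalar conservation law. The paper's schemes are higher-order compact variants; this sketch is only the baseline building block, on a periodic grid with illustrative parameters:

```python
def lax_friedrichs_step(u, dx, dt, flux):
    """One first-order Lax-Friedrichs update for u_t + f(u)_x = 0
    on a periodic grid: average the neighbors, then correct with
    the centered flux difference."""
    n = len(u)
    lam = dt / (2.0 * dx)
    return [0.5 * (u[(i - 1) % n] + u[(i + 1) % n])
            - lam * (flux(u[(i + 1) % n]) - flux(u[(i - 1) % n]))
            for i in range(n)]

# Advect a square pulse with the linear flux f(u) = u (CFL = dt/dx = 0.5).
u = [1.0] * 5 + [0.0] * 5
u = lax_friedrichs_step(u, dx=0.1, dt=0.05, flux=lambda v: v)
print(u[4], u[5])  # the jump is smeared: 0.75 0.75
```

The scheme is monotone but diffusive, which is why the paper builds higher-resolution variants on top of it.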
Multigrid calculation of three-dimensional turbomachinery flows
NASA Technical Reports Server (NTRS)
Caughey, David A.
1989-01-01
Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.
Etch Profile Simulation Using Level Set Methods
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
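The core level set idea, evolving an embedded field rather than an explicit surface string, can be sketched in a few lines. This toy update uses central differences and a uniform normal speed; a real profile simulator would use an upwind discretization and flux-dependent etch rates:

```python
import math

def level_set_step(phi, speed, dt, dx):
    """Advance a 2D level set field by one explicit step of
    phi_t + F |grad phi| = 0 (uniform normal speed F), updating
    interior points with central differences."""
    ny, nx = len(phi), len(phi[0])
    out = [row[:] for row in phi]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gx = (phi[j][i + 1] - phi[j][i - 1]) / (2 * dx)
            gy = (phi[j + 1][i] - phi[j - 1][i]) / (2 * dx)
            out[j][i] = phi[j][i] - dt * speed * math.sqrt(gx * gx + gy * gy)
    return out

# phi < 0 inside the solid, phi = 0 on the surface: a flat interface
# at y = 2 recedes uniformly under a positive etch speed.
phi = [[j - 2.0 for _ in range(5)] for j in range(5)]
phi = level_set_step(phi, speed=1.0, dt=0.1, dx=1.0)
print(phi[2][2])  # -0.1: the zero level set has moved into the solid
```

Because the interface is implicit, merging or pinching-off corners needs no special de-looping logic.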
Seino, Junji; Nakai, Hiromi
2012-10-14
The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to a two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X2 and the hydrogen halide molecules (HX)n (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear scaling with respect to the system size and a small prefactor.
NASA Astrophysics Data System (ADS)
Hahn, S. J.; Fawley, W. M.; Kim, K.-J.; Edighoffer, J. A.
1995-04-01
We examine the performance of the so-called electron output scheme recently proposed by the Novosibirsk group [G.I. Erg et al., 15th Int. Free Electron Laser Conf., The Hague, The Netherlands, 1993, Book of Abstracts p. 50; Preprint Budker INP 93-75]. In this scheme, the key role of the FEL oscillator is to induce bunching, while an external undulator, called the radiator, then outcouples the bunched electron beam to optical energy via coherent emission. The level of the intracavity power in the oscillator is kept low by employing a transverse optical klystron (TOK) configuration, thus avoiding excessive thermal loading on the cavity mirrors. Time-dependent effects are important in the operation of the electron output scheme because high gain in the TOK oscillator leads to sideband instabilities and chaotic behavior. We have carried out an extensive simulation study using 1D and 2D time-dependent codes and find that proper control of the oscillator cavity detuning and cavity loss results in high output bunching with a narrow spectral bandwidth. Large cavity detuning in the oscillator and tapering of the radiator undulator are necessary for optimum output power.
Coherent population trapping with polarization modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Peter, E-mail: enxue.yun@obspm.fr; Guérandel, Stéphane; Clercq, Emeric de
Coherent population trapping (CPT) is extensively studied for future vapor cell clocks of high frequency stability. In the constructive polarization modulation CPT scheme, a bichromatic laser field with polarization and phase synchronously modulated is applied to an atomic medium. A high contrast CPT signal is observed in this so-called double-modulation configuration, due to the fact that the atomic population does not leak to the extreme Zeeman states, and that the two CPT dark states, which are produced successively by the alternate polarizations, add constructively. Here, we experimentally investigate CPT signal dynamics first in the usual configuration, a single circular polarization. The double-modulation scheme is then addressed in both cases: one-pulse Rabi interaction and two-pulse Ramsey interaction. The impact and the optimization of the experimental parameters involved in the time sequence are reviewed. We show that a simple seven-level model explains the experimental observations. The double-modulation scheme yields a high contrast similar to that of other high contrast configurations like push-pull optical pumping or the crossed linear polarization scheme, with a setup allowing higher compactness. The constructive polarization modulation is attractive for atomic clock, atomic magnetometer, and high precision spectroscopy applications.
Towards syntactic characterizations of approximation schemes via predicate and graph decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, H.B. III; Stearns, R.E.; Jacob, R.
1998-12-01
The authors present a simple extensible theoretical framework for devising polynomial time approximation schemes for problems represented using natural syntactic (algebraic) specifications endowed with natural graph theoretic restrictions on input instances. Direct application of the technique yields polynomial time approximation schemes for all the problems studied in [LT80, NC88, KM96, Ba83, DTS93, HM+94a, HM+94] as well as the first known approximation schemes for a number of additional combinatorial problems. One notable aspect of the work is that it provides insights into the structure of the syntactic specifications and the corresponding algorithms considered in [KM96, HM+94]. This understanding allows them to extend the class of syntactic specifications for which generic approximation schemes can be developed. The results can be shown to be tight in many cases, i.e. natural extensions of the specifications can be shown to yield non-approximable problems. The results provide a non-trivial characterization of a class of problems having a PTAS and extend the earlier work on this topic by [KM96, HM+94].
An Investigation of High-Order Shock-Capturing Methods for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Casper, Jay; Baysal, Oktay
1997-01-01
Topics covered include: Low-dispersion scheme for nonlinear acoustic waves in nonuniform flow; Computation of acoustic scattering by a low-dispersion scheme; Algorithmic extension of low-dispersion scheme and modeling effects for acoustic wave simulation; The accuracy of shock capturing in two spatial dimensions; Using high-order methods on lower-order geometries; and Computational considerations for the simulation of discontinuous flows.
ERIC Educational Resources Information Center
Lovin, LouAnn H.; Stevens, Alexis L.; Siegfried, John; Wilkins, Jesse L. M.; Norton, Anderson
2018-01-01
In an effort to expand our knowledge base pertaining to pre-K-8 prospective teachers' understanding of fractions, the present study was designed to extend the work on fractions schemes and operations to this population. One purpose of our study was to validate the fractions schemes and operations hierarchy with the pre-K-8 prospective teacher…
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
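Of the building blocks listed, run-length coding is the simplest to illustrate. The sketch below is plain 1D RLE; the paper's generalized variant also handles 2D runs and approximate matches, which is omitted here:

```python
def rle_encode(data):
    """Run-length coding: collapse runs of equal symbols into (symbol, count) pairs."""
    out = []
    for sym in data:
        if out and out[-1][0] == sym:
            out[-1] = (sym, out[-1][1] + 1)
        else:
            out.append((sym, 1))
    return out

def rle_decode(pairs):
    """Invert rle_encode by expanding each (symbol, count) pair."""
    return [s for s, n in pairs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0]
enc = rle_encode(row)
assert rle_decode(enc) == row
print(enc)  # [(0, 3), (255, 2), (0, 1)]
```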
NASA Astrophysics Data System (ADS)
Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva
2018-02-01
Vibration analysis has been used extensively in the recent past for gear fault diagnosis. The vibration signals extracted are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals aids fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases signal to noise ratio (SNR), reduces root mean square error (RMSE), and is effective in denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. This work shows the effectiveness of Principal Component Analysis (PCA) in denoising gear vibration signals. To this end, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD), and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by these four schemes. The fault identification capability, as well as the SNR, kurtosis, and RMSE, of the four denoising schemes have been compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models. The performance of the four denoising schemes was evaluated based on the performance of the ANN models, and the best denoising scheme was identified from the classification accuracy results. PCA proved the most effective denoising scheme in all these regards.
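The SNR and RMSE figures of merit used to compare the denoising schemes are straightforward to compute. A sketch with illustrative signals follows; the constant offset standing in for "noise" is purely for demonstration, not gear-rig data:

```python
import math

def rmse(ref, est):
    """Root mean square error between a reference and a denoised signal."""
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref))

def snr_db(ref, est):
    """Signal-to-noise ratio in dB: 10 log10(signal power / residual power)."""
    p_sig = sum(r * r for r in ref)
    p_res = sum((r - e) ** 2 for r, e in zip(ref, est))
    return 10.0 * math.log10(p_sig / p_res)

ref = [math.sin(0.1 * k) for k in range(100)]
noisy = [r + 0.05 for r in ref]    # constant offset as stand-in "noise"
print(round(rmse(ref, noisy), 3))  # 0.05
print(round(snr_db(ref, noisy), 1))
```

A better denoising scheme yields a higher SNR and lower RMSE against the reference signal.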
Raul, Pramod R; Pagilla, Prabhakar R
2015-05-01
In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, and lamination. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance under changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated by matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with the relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, easy to implement in real time, and automate the tuning process. Extensive experiments were conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine, including trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
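A minimal discrete PI loop illustrates the control law both adaptive schemes build on. The first-order plant and the gains below are illustrative stand-ins, not the paper's web-tension model or its model-reference/relay-tuned designs:

```python
def simulate_pi(kp, ki, setpoint, steps=200, dt=0.01, a=1.0, b=1.0):
    """Discrete PI loop closed around a first-order plant x' = -a x + b u."""
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral  # PI control law
        x += dt * (-a * x + b * u)      # Euler step of the plant
    return x

final = simulate_pi(kp=5.0, ki=10.0, setpoint=1.0)
print(round(final, 3))  # settles close to the setpoint 1.0
```

The adaptive schemes in the paper differ only in how the gains kp and ki are obtained and updated, not in this underlying loop structure.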
Hierarchical content-based image retrieval by dynamic indexing and guided search
NASA Astrophysics Data System (ADS)
You, Jane; Cheung, King H.; Liu, James; Guo, Linong
2003-12-01
This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.
A two-level structure for advanced space power system automation
NASA Technical Reports Server (NTRS)
Loparo, Kenneth A.; Chankong, Vira
1990-01-01
The tasks to be carried out during the three-year project period are: (1) performing extensive simulation using existing mathematical models to build a specific knowledge base of the operating characteristics of space power systems; (2) carrying out the necessary basic research on hierarchical control structures, real-time quantitative algorithms, and decision-theoretic procedures; (3) developing a two-level automation scheme for fault detection and diagnosis, maintenance and restoration scheduling, and load management; and (4) testing and demonstration. The outlines of the proposed system structure that served as a master plan for this project, work accomplished, concluding remarks, and ideas for future work are also addressed.
Performance characteristics of an adaptive controller based on least-mean-square filters
NASA Technical Reports Server (NTRS)
Mehta, Rajiv S.; Merhav, Shmuel J.
1986-01-01
A closed loop, adaptive control scheme that uses a least mean square filter as the controller model is presented, along with simulation results that demonstrate the excellent robustness of this scheme. It is shown that the scheme adapts very well to unknown plants, even those that are marginally stable, responds appropriately to changes in plant parameters, and is not unduly affected by additive noise. A heuristic argument for the conditions necessary for convergence is presented. Potential applications and extensions of the scheme are also discussed.
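The least-mean-square update at the heart of such a controller model can be sketched as identification of an unknown FIR plant; the signals, tap count, and step size below are illustrative:

```python
import random

def lms_identify(x, d, taps=4, mu=0.05):
    """Least-mean-square adaptation: w <- w + mu * e * x_vec, driving
    the filter output toward the desired signal d."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        x_vec = x[n - taps + 1:n + 1][::-1]           # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x_vec))  # filter output
        e = d[n] - y                                  # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)]
    return w

random.seed(0)
true_w = [0.6, -0.3, 0.1, 0.0]  # unknown FIR plant to identify
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
d = [sum(true_w[k] * x[n - k] for k in range(4) if n >= k) for n in range(len(x))]
w = lms_identify(x, d)
print([round(wi, 2) for wi in w])  # converges toward [0.6, -0.3, 0.1, 0.0]
```

With a persistently exciting input and a suitably small step size mu, the weight vector converges to the plant's impulse response, which is what lets the controller adapt to unknown or drifting plants.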
Effectiveness of community health financing in meeting the cost of illness.
Preker, Alexander S.; Carrin, Guy; Dror, David; Jakab, Melitta; Hsiao, William; Arhin-Tenkorang, Dyna
2002-01-01
How to finance and provide health care for the more than 1.3 billion rural poor and informal sector workers in low- and middle-income countries is one of the greatest challenges facing the international development community. This article presents the main findings from an extensive survey of the literature of community financing arrangements, and selected experiences from the Asia and Africa regions. Most community financing schemes have evolved in the context of severe economic constraints, political instability, and lack of good governance. Micro-level household data analysis indicates that community financing improves access by rural and informal sector workers to needed health care and provides them with some financial protection against the cost of illness. Macro-level cross-country analysis gives empirical support to the hypothesis that risk-sharing in health financing matters in terms of its impact on both the level and distribution of health, financial fairness and responsiveness indicators. The background research done for this article points to five key policies available to governments to improve the effectiveness and sustainability of existing community financing schemes. This includes: (a) increased and well-targeted subsidies to pay for the premiums of low-income populations; (b) insurance to protect against expenditure fluctuations and re-insurance to enlarge the effective size of small risk pools; (c) effective prevention and case management techniques to limit expenditure fluctuations; (d) technical support to strengthen the management capacity of local schemes; and (e) establishment and strengthening of links with the formal financing and provider networks. PMID:11953793
Exploring the Perceptions of Success in an Exercise Referral Scheme: A Mixed Method Investigation
ERIC Educational Resources Information Center
Mills, Hayley; Crone, Diane; James, David V. B.; Johnston, Lynne H.
2012-01-01
Background: Exercise referral schemes feature as one of the prevalent primary care physical activity interventions in the United Kingdom, without extensive understanding of how those involved in providing and participating view success. The present research explores and reveals the constituents of "success," through comparison,…
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
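The Gauss-Seidel relaxation used as the smoother can be illustrated on a miniature model problem. The 3x3 discrete Laplacian below is a generic stand-in, not the FVE-discretized heat operator of the study:

```python
def gauss_seidel(A, b, sweeps=50):
    """Gauss-Seidel relaxation for A x = b: sweep through the unknowns,
    updating each in place with the latest values of its neighbors."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# 1D discrete Laplacian with Dirichlet ends; exact solution is [1, 1, 1].
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = gauss_seidel(A, b)
print([round(v, 4) for v in x])  # [1.0, 1.0, 1.0]
```

On its own, Gauss-Seidel damps high-frequency error quickly but low-frequency error slowly; the multigrid V-cycle exists precisely to handle the smooth error components on coarser grids.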
SX User's Manual for SX version 2. 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.A.; Braddy, D.
1993-01-04
Scheme is a lexically scoped, properly tail recursive dialect of the LISP programming language. The PACT implementation is described abstractly in Abelson and Sussman's book, Structure and Interpretation of Computer Programs. It features all of the "essential procedures" described in the "Revised Report on Scheme" which defines the standard for Scheme. In PACT, Scheme is implemented as a library; however, a small driver delivers a stand-alone Scheme interpreter. The PACT implementation features a reference counting incremental garbage collector. This distributes the overhead of memory management throughout the running of Scheme code. It also tends to keep Scheme from trying to grab the entire machine on which it is running, which some garbage collection schemes will attempt to do. SX is perhaps the ultimate PACT statement. It is simply Scheme plus the other parts of PACT. A more precise way to describe it is as a dialect of LISP with extensions for PGS, PDB, PDBX, PML, and PANACEA. What this yields is an interpretive language whose primitive procedures span the functionality of all of PACT. Like the Scheme implementation which it extends, SX provides both a library and a stand-alone application. The stand-alone interpreter is the engine behind applications such as PDBView and PDBDiff. The SX library is the heart of TRANSL, a tool to translate data files from one database format to another. The modularization and layering make it possible to use the PACT components like building blocks. In addition, SX contains functionality which is the generalization of that found in ULTRA II. This means that as the development of SX proceeds, an SX-driven application will be able to perform arbitrary dimensional presentation, analysis, and manipulation tasks. Because of the fundamental unity of these two PACT parts, they are documented in a single manual. The first part covers the standard Scheme functionality and the second part discusses the SX extensions.
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. Comparing the proposed algorithm with the JPEG standard, the FBI wavelet/scalar quantization standard, and the EZW scheme through extensive experiments, we observe a significant improvement in rate-distortion performance and visual quality.
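The energy-compaction step can be illustrated with a one-level 2-D Haar transform, a minimal stand-in for the full wavelet packet transform used in the paper (the function name and the toy ramp image below are ours, not from the paper):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform (rows, then columns)."""
    def split(a):
        lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)  # averages (low-pass)
        hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)  # details (high-pass)
        return np.hstack([lo, hi])
    return split(split(img).T).T

# Smooth toy image: energy concentrates in the low-low (top-left) quadrant.
img = np.outer(np.arange(8.0), np.ones(8))
t = haar2d(img)
ll_energy = float(np.sum(t[:4, :4] ** 2) / np.sum(t ** 2))
```

Because the transform is orthogonal, total energy is preserved, while for smooth inputs most of it collects in the low-low quadrant; this compaction is what makes the subsequent embedded coding of the coefficients effective.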
Finite Volume Methods: Foundation and Analysis
NASA Technical Reports Server (NTRS)
Barth, Timothy; Ohlberger, Mario
2003-01-01
Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered, such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
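As one concrete instance of the monotone-flux building block named above, a first-order finite volume update with the local Lax-Friedrichs (Rusanov) flux for Burgers' equation can be sketched as follows (our own minimal illustration, not code from the article):

```python
import numpy as np

def lax_friedrichs_step(u, dt, dx, flux=lambda u: 0.5 * u ** 2):
    """One finite volume step for u_t + f(u)_x = 0 on a periodic grid,
    using the local Lax-Friedrichs (Rusanov) monotone numerical flux."""
    up = np.roll(u, -1)                                    # cell i+1
    a = np.maximum(np.abs(u), np.abs(up))                  # local wave-speed bound
    fh = 0.5 * (flux(u) + flux(up)) - 0.5 * a * (up - u)   # flux at i+1/2
    return u - dt / dx * (fh - np.roll(fh, 1))

# Monotone schemes are TVD: total variation does not grow in time.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.sin(2 * np.pi * x)
dx = 1.0 / 100
dt = 0.4 * dx / np.max(np.abs(u))                          # CFL-limited step
tv0 = float(np.sum(np.abs(u - np.roll(u, 1))))
for _ in range(50):
    u = lax_friedrichs_step(u, dt, dx)
```

Being monotone and written in conservation form, the update is TVD and preserves the discrete mean on a periodic grid; the higher-order schemes surveyed in the article replace the cell values at each interface with limited reconstructions while keeping this flux structure.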
Arbitrated Quantum Signature with Hamiltonian Algorithm Based on Blind Quantum Computation
NASA Astrophysics Data System (ADS)
Shi, Ronghua; Ding, Wanting; Shi, Jinjing
2018-03-01
A novel arbitrated quantum signature (AQS) scheme is proposed, motivated by the Hamiltonian algorithm (HA) and blind quantum computation (BQC). The signature generation and verification algorithms are designed based on HA, which makes the scheme rely less on computational complexity. It is unnecessary to recover the original messages when verifying signatures since blind quantum computation is applied, which improves the simplicity and operability of the scheme. It is proved that the scheme can be deployed securely, and the extended AQS has extensive applications in E-payment systems, E-government, E-business, etc.
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
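The double delta idea, encoding second differences so that a previous-plus-slope predictor leaves mostly small residuals, can be sketched as follows (the helper names are ours, not the paper's implementation):

```python
def double_delta_encode(samples):
    """Residuals of a previous-plus-slope (second difference) predictor."""
    out, prev, slope = [], 0, 0
    for s in samples:
        out.append(s - (prev + slope))   # transmit prediction error only
        prev, slope = s, s - prev
    return out

def double_delta_decode(residuals):
    """Exact inverse: rebuild each sample from the running prediction."""
    out, prev, slope = [], 0, 0
    for r in residuals:
        s = prev + slope + r
        out.append(s)
        prev, slope = s, s - prev
    return out

# Slowly varying scan-line data -> mostly near-zero residuals,
# which are cheap to entropy-code.
line = [10, 12, 14, 16, 17, 18, 18, 17]
resid = double_delta_encode(line)
```

On correlated image rows the residual stream is dominated by small values, which is what makes the subsequent source coding (and the background skipping refinement) effective.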
NASA Astrophysics Data System (ADS)
Binh, Le Nguyen
2009-04-01
A geometrical and phasor representation technique is presented to illustrate the modulation of the lightwave carrier to generate quadrature amplitude modulated (QAM) signals. The modulation of the amplitude and phase of the lightwave carrier is implemented using only one dual-drive Mach-Zehnder interferometric modulator (MZIM) with the assistance of phasor techniques. Any multilevel modulation scheme can be generated, but we specifically illustrate the multilevel amplitude and differential phase shift keying (MADPSK) signals. The driving voltage levels are estimated for driving the traveling wave electrodes of the modulator. Phasor diagrams are extensively used to demonstrate the effectiveness of modulation schemes. MATLAB Simulink models are formed to generate the multilevel modulation formats, transmission, and detection in optically amplified fiber communication systems. Transmission performance is obtained for the multilevel optical signals and shown to be equivalent to or better than that of binary-level schemes at an equivalent bit rate. Further, the resilience to nonlinear effects is much higher for MADPSK with 50% and 33% pulse widths as compared to non-return-to-zero (NRZ) pulse shaping.
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two dimensional computer code and their accuracy and stability determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes is dependent on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
ERIC Educational Resources Information Center
Claybrook, Billy G.
A new heuristic factorization scheme uses learning to improve the efficiency of determining the symbolic factorization of multivariable polynomials with integer coefficients and an arbitrary number of variables and terms. The factorization scheme makes extensive use of artificial intelligence techniques (e.g., model-building, learning, and…
The category MF in the semistable case
NASA Astrophysics Data System (ADS)
Faltings, G.
2016-10-01
The categories MF over discrete valuation rings were introduced by J. M. Fontaine as crystalline objects one might hope to associate with Galois representations. The definition was later extended to smooth base-schemes. Here we give a further extension to semistable schemes. As an application we show that certain Shimura varieties have semistable models.
NASA Astrophysics Data System (ADS)
Taki, H.; Azou, S.; Hamie, A.; Al Housseini, A.; Alaeddine, A.; Sharaiha, A.
2017-01-01
In this paper, we investigate the use of a semiconductor optical amplifier (SOA) for reach extension of an impulse radio over fiber system. Operating in the saturated regime translates into strong nonlinearities and spectral distortions, which drop the power efficiency of the propagated pulses. After studying the SOA response versus operating conditions, we have enhanced the system performance by applying simple analog pre-distortion schemes for various derivatives of the Gaussian pulse and their combination. A novel pulse shape has also been designed by linearly combining three basic Gaussian pulses, offering a very good spectral efficiency (> 55%) at a high power (0 dBm) at the amplifier input. Furthermore, the potential of our technique has been examined considering 1.5 Gbps OOK and 0.75 Gbps PPM modulation schemes. Pre-distortion proved advantageous for a large extension of the optical link (150 km), with inline amplification via SOA at 40 km.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smirnov, A. G., E-mail: smirnov@lpi.ru
2015-12-15
We develop a general technique for finding self-adjoint extensions of a symmetric operator that respects a given set of its symmetries. Problems of this type naturally arise when considering two- and three-dimensional Schrödinger operators with singular potentials. The approach is based on constructing a unitary transformation diagonalizing the symmetries and reducing the initial operator to the direct integral of a suitable family of partial operators. We prove that symmetry preserving self-adjoint extensions of the initial operator are in a one-to-one correspondence with measurable families of self-adjoint extensions of partial operators obtained by reduction. The general scheme is applied to the three-dimensional Aharonov-Bohm Hamiltonian describing the electron in the magnetic field of an infinitely thin solenoid. We construct all self-adjoint extensions of this Hamiltonian, invariant under translations along the solenoid and rotations around it, and explicitly find their eigenfunction expansions.
An Automatic Detection System of Lung Nodule Based on Multi-Group Patch-Based Deep Learning Network.
Jiang, Hongyang; Ma, He; Qian, Wei; Gao, Mengdi; Li, Yan
2017-07-14
High-efficiency lung nodule detection dramatically contributes to the risk assessment of lung cancer. It is a significant and challenging task to quickly locate the exact positions of lung nodules. Extensive work has been done by researchers around this domain for approximately two decades. However, previous computer aided detection (CADe) schemes are mostly intricate and time-consuming, since they may require many image processing modules, such as computed tomography (CT) image transformation, lung nodule segmentation and feature extraction, to construct a whole CADe system. It is difficult for those schemes to process and analyze enormous amounts of data as the number of medical images continues to increase. Besides, some state-of-the-art deep learning schemes may impose strict requirements on the database. This study proposes an effective lung nodule detection scheme based on multi-group patches cut out from the lung images, which are enhanced by the Frangi filter. By combining two groups of images, a four-channel convolutional neural network (CNN) model is designed to learn the knowledge of radiologists for detecting nodules at four levels. This CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multi-group patch-based learning system is efficient in improving the performance of lung nodule detection and greatly reduces the false positives under a huge amount of image data.
Global biodiversity monitoring: from data sources to essential biodiversity variables
Proenca, Vania; Martin, Laura J.; Pereira, Henrique M.; Fernandez, Miguel; McRae, Louise; Belnap, Jayne; Böhm, Monika; Brummitt, Neil; Garcia-Moreno, Jaime; Gregory, Richard D.; Honrado, Joao P; Jürgens, Norbert; Opige, Michael; Schmeller, Dirk S.; Tiago, Patricia; van Sway, Chris A
2016-01-01
Essential Biodiversity Variables (EBVs) consolidate information from varied biodiversity observation sources. Here we demonstrate the links between data sources, EBVs and indicators and discuss how different sources of biodiversity observations can be harnessed to inform EBVs. We classify sources of primary observations into four types: extensive and intensive monitoring schemes, ecological field studies and satellite remote sensing. We characterize their geographic, taxonomic and temporal coverage. Ecological field studies and intensive monitoring schemes inform a wide range of EBVs, but the former tend to deliver short-term data, while the geographic coverage of the latter is limited. In contrast, extensive monitoring schemes mostly inform the population abundance EBV, but deliver long-term data across an extensive network of sites. Satellite remote sensing is particularly suited to providing information on ecosystem function and structure EBVs. Biases behind data sources may affect the representativeness of global biodiversity datasets. To improve them, researchers must assess data sources and then develop strategies to compensate for identified gaps. We draw on the population abundance dataset informing the Living Planet Index (LPI) to illustrate the effects of data sources on EBV representativeness. We find that long-term monitoring schemes informing the LPI are still scarce outside of Europe and North America and that ecological field studies play a key role in covering that gap. Achieving representative EBV datasets will depend both on the ability to integrate available data, through data harmonization and modeling efforts, and on the establishment of new monitoring programs to address critical data gaps.
The Effects of Reducing Tracking in Upper Secondary School: Evidence from a Large-Scale Pilot Scheme
ERIC Educational Resources Information Center
Hall, Caroline
2012-01-01
By exploiting an extensive pilot scheme that preceded an educational reform, this paper evaluates the effects of introducing a more comprehensive upper secondary school system in Sweden. The reform reduced the differences between academic and vocational tracks through prolonging and increasing the academic content of the latter. As a result, all…
Teaching the Conceptual Scheme "The Particle Nature of Matter" in the Elementary School.
ERIC Educational Resources Information Center
Pella, Milton O.; And Others
Conclusions of an extensive project aimed to prepare lessons and associated materials related to teaching concepts included in the scheme "The Particle Nature of Matter" for grades two through six are presented. The hypothesis formulated for the project was that children in elementary schools can learn theoretical concepts related to the particle…
Singh, Abhinav; Purohit, Bharathi M
2017-06-01
To assess patient satisfaction, self-rated oral health and associated factors, including periodontal status and dental caries, among patients covered for dental insurance through a National Social Security Scheme in New Delhi, India. A total of 1,498 patients participated in the study. Satisfaction levels and self-rated oral-health scores were measured using a questionnaire comprising 12 closed-ended questions. Clinical data were collected using the Community Periodontal Index (CPI) and the decayed, missing and filled teeth (DMFT) index. Regression analysis was conducted to evaluate factors associated with dental caries, periodontal status and self-rated oral health. Areas of concern included poor cleanliness within the hospital, extensive delays for appointments, waiting time in hospital and inadequate interpersonal and communication skills among health-care professionals. Approximately 51% of the respondents rated their oral health as fair to poor. Younger age, no tobacco usage, good periodontal status and absence of dental caries were significantly associated with higher oral health satisfaction, with odds ratios of 3.94, 2.38, 2.58 and 2.09, respectively (P ≤ 0.001). The study indicates poor satisfaction levels with the current dental care system and a poor self-rated oral health status among the study population. Some specific areas of concern have been identified. These findings may facilitate restructuring of the existing dental services under the National Social Security Scheme towards creating a better patient care system. © 2017 FDI World Dental Federation.
Nagy-Soper subtraction scheme for multiparton final states
NASA Astrophysics Data System (ADS)
Chung, Cheng-Han; Robens, Tania
2013-04-01
In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication including the setup of the scheme has been restricted to cases with maximally two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three jet production at lepton colliders at next-to-leading order and present results for the differential C parameter distribution.
Experimental triple-slit interference in a strongly driven V-type artificial atom
NASA Astrophysics Data System (ADS)
Dada, Adetunmise C.; Santana, Ted S.; Koutroumanis, Antonios; Ma, Yong; Park, Suk-In; Song, Jindong; Gerardot, Brian D.
2017-08-01
Rabi oscillations of a two-level atom appear as a quantum interference effect between the amplitudes associated with atomic superpositions, in analogy with the classic double-slit experiment which manifests a sinusoidal interference pattern. By extension, through direct detection of time-resolved resonance fluorescence from a quantum-dot neutral exciton driven in the Rabi regime, we experimentally demonstrate triple-slit-type quantum interference via quantum erasure in a V-type three-level artificial atom. This result is of fundamental interest in the experimental studies of the properties of V-type three-level systems and may pave the way for further insight into their coherence properties as well as applications for quantum information schemes. It also suggests quantum dots as candidates for multipath-interference experiments for probing foundational concepts in quantum physics.
Factorized Runge-Kutta-Chebyshev Methods
NASA Astrophysics Data System (ADS)
O'Sullivan, Stephen
2017-05-01
The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures. Preserving 7 digits for accuracy at 16 digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth-order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
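The idea of gaining order by composing forward Euler steps with complex stepsizes can be illustrated at toy scale. The coefficients h(1 ± i)/2 below are a generic complex-splitting choice of ours, not the actual FRKC coefficients:

```python
import math

def complex_euler_pair(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) with n macro steps, each made of two forward
    Euler substeps of complex-conjugate size h*(1+1j)/2 and h*(1-1j)/2.
    Their composition cancels the O(h^2) local error term, so the scheme
    is second order (take the real part for real problems)."""
    h = (t1 - t0) / n
    y, t = complex(y0), complex(t0)
    for _ in range(n):
        for s in (h * (1 + 1j) / 2, h * (1 - 1j) / 2):
            y += s * f(t, y)
            t += s          # imaginary parts cancel after both substeps
    return y.real

exact = math.exp(-1.0)
err_pair = abs(complex_euler_pair(lambda t, y: -y, 1.0, 0.0, 1.0, 100) - exact)

# Plain forward Euler with the same 200 function evaluations, for comparison:
y, h = 1.0, 1.0 / 200
for _ in range(200):
    y -= h * y
err_euler = abs(y - exact)
```

For the linear test problem y' = -y, the two substeps reproduce the second-order Taylor factor 1 - h + h^2/2 per macro step, so the composed scheme is far more accurate than plain forward Euler at equal cost; the FRKC construction orders many such complex substeps to maximize the stability domain as well.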
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow.
This approach is combined with a curvature based strand shortening strategy in order to qualitatively improve strand grid mesh quality.
Living with a large reduction in permitted loading by using a hydrograph-controlled release scheme
Conrads, P.A.; Martello, W.P.; Sullins, N.R.
2003-01-01
The Total Maximum Daily Load (TMDL) for ammonia and biochemical oxygen demand for the Pee Dee, Waccamaw, and Atlantic Intracoastal Waterway system near Myrtle Beach, South Carolina, mandated a 60-percent reduction in point-source loading. For waters with naturally low background dissolved-oxygen concentrations, South Carolina anti-degradation rules in the water-quality regulations allow a permitted discharger a reduction of dissolved oxygen of 0.1 milligram per liter (mg/L). This is known as the "0.1 rule." Permitted dischargers within this region of the State operate under the "0.1 rule" and cannot cause a cumulative impact greater than 0.1 mg/L on dissolved-oxygen concentrations. For municipal water-reclamation facilities to serve the rapidly growing resort and retirement community near Myrtle Beach, a variable loading scheme was developed to allow dischargers to utilize increased assimilative capacity during higher streamflow conditions while still meeting the requirements of a recently established TMDL. As part of the TMDL development, an extensive real-time data-collection network was established in the lower Waccamaw and Pee Dee River watershed, where continuous measurements of streamflow, water level, dissolved oxygen, temperature, and specific conductance are collected. In addition, the dynamic BRANCH/BLTM models were calibrated and validated to simulate the water quality and tidal dynamics of the system. The assimilative capacities for various streamflows were also analyzed. The variable-loading scheme established total loadings for three streamflow levels. Model simulations show the results from the additional loading to be less than a 0.1 mg/L reduction in dissolved oxygen. As part of the loading scheme, the real-time network was redesigned to monitor streamflow entering the study area and water-quality conditions in the location of dissolved-oxygen "sags."
The study reveals how one group of permit holders used a variable-loading scheme to implement restrictive permit limits without experiencing prohibitive capital expenditures or initiating a lengthy appeals process.
An efficient scheme for automatic web pages categorization using the support vector machine
NASA Astrophysics Data System (ADS)
Bhalla, Vinod Kumar; Kumar, Neeraj
2016-07-01
In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages from the Internet within a fraction of a second. To achieve this goal, an efficient categorization of web page contents is required. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic, and higher levels of accuracy cannot be achieved using them. To achieve these goals, this paper proposes automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, features are first extracted and evaluated, and the feature set is then filtered for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keyword lists developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of keyword ids. Also, stemming of keywords and tag text is done to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy in different categories of web pages.
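The pipeline described (keyword features extracted from pages, then an SVM classifier) can be sketched end to end. Everything below, from the toy keyword list to the Pegasos-style training loop standing in for an SVM solver, is our illustration rather than the authors' code:

```python
import re
from collections import Counter

KEYWORDS = ["goal", "match", "team", "stock", "market", "profit"]  # toy domain list

def features(text):
    """Keyword-count feature vector plus a bias term."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return [float(counts[w]) for w in KEYWORDS] + [1.0]

def train_linear_svm(docs, labels, lam=0.01, epochs=500):
    """Pegasos-style subgradient descent on the SVM hinge loss."""
    w = [0.0] * (len(KEYWORDS) + 1)
    t = 0
    for _ in range(epochs):
        for doc, y in zip(docs, labels):
            x, t = features(doc), t + 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1.0 - eta * lam) * wi for wi in w]        # shrink (regularizer)
            if margin < 1.0:                                # hinge-loss subgradient
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, text):
    return 1 if sum(wi * xi for wi, xi in zip(w, features(text))) >= 0 else -1

docs = ["great goal for the team in the match",
        "the team drew the match without a goal",
        "stock market closed with record profit",
        "profit fell as the stock market slid"]
labels = [1, 1, -1, -1]          # +1 = sports page, -1 = finance page
w = train_linear_svm(docs, labels)
```

The paper's scheme additionally derives its keyword lists from the HTML DOM, applies stemming, and uses an SVM kernel; the sketch keeps only the linear core to show how keyword counts feed a max-margin classifier.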
LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica
NASA Astrophysics Data System (ADS)
Caprio, M. A.
2005-09-01
LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots.
Program summary
Title of program: LevelScheme
Catalogue identifier: ADVZ
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ
Operating systems: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux
Programming language used: Mathematica 4
Number of bytes in distributed program, including test and documentation: 3 051 807
Distribution format: tar.gz
Nature of problem: Creation of level scheme diagrams. Creation of publication-quality multipart figures incorporating diagrams and plots.
Method of solution: A set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
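The implicit midpoint rule discussed above can be sketched with a fixed-point solve per step; the planar rotation field used for the demonstration is our own toy stand-in for a field line flow, not an example from the paper:

```python
import numpy as np

def implicit_midpoint(f, y0, h, steps, fp_iters=40):
    """Implicit midpoint rule: solve y1 = y + h*f((y + y1)/2) at each step
    by fixed-point iteration (which converges for small enough h)."""
    y = np.asarray(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(steps):
        y1 = y + h * f(y)                    # explicit Euler predictor
        for _ in range(fp_iters):
            y1 = y + h * f(0.5 * (y + y1))   # midpoint corrector
        y = y1
        traj.append(y.copy())
    return np.array(traj)

# Divergence-free planar rotation field: trajectories are circles, and
# implicit midpoint preserves the radius to solver tolerance, since for a
# linear field it reduces to a Cayley transform (an exactly orthogonal map).
rot = lambda y: np.array([-y[1], y[0]])
traj = implicit_midpoint(rot, [1.0, 0.0], 0.1, 200)
radii = np.linalg.norm(traj, axis=1)
```

This preservation of quadratic invariants is one reason IM does well at preserving KAM tori; a production field line tracer would replace the fixed-point loop with a Newton solve and add the adaptive stepsize control the paper analyzes.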
Enhancing the intense field control of molecular fragmentation.
Anis, Fatima; Esry, B D
2012-09-28
We describe a pump-probe scheme with which the spatial asymmetry of dissociating molecular fragments, as controlled by the carrier-envelope phase of an intense few-cycle laser pulse, can be enhanced by an order of magnitude or more. We illustrate the scheme using extensive, full-dimensional calculations for dissociation of H₂⁺ and include the averaging necessary for comparison with experiment.
Kim, Bongseok; Kim, Sangdong; Lee, Jonghun
2018-01-01
We propose a novel discrete Fourier transform (DFT)-based direction of arrival (DOA) estimation by a virtual array extension using simple multiplications for frequency modulated continuous wave (FMCW) radar. DFT-based DOA estimation is usually employed in radar systems because it provides the advantage of low complexity for real-time signal processing. In order to enhance the resolution of DOA estimation or to decrease the missed-detection probability, it is essential to have a considerable number of channel signals. However, due to constraints of space and cost, it is not easy to increase the number of channel signals. In order to address this issue, we increase the number of effective channel signals by generating virtual channel signals using simple multiplications of the given channel signals. The increase in channel signals allows the proposed scheme to detect the DOA more accurately than the conventional scheme while using the same number of channel signals. Simulation results show that the proposed scheme achieves improved DOA estimation compared to the conventional DFT-based method. Furthermore, the effectiveness of the proposed scheme in a practical environment is verified through experiment. PMID:29758016
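The virtual-array idea can be sketched in a few lines. The toy below (single unit-amplitude far-field source, noiseless, hypothetical array parameters; not the authors' FMCW pipeline) forms virtual channels whose phases are sums of physical-channel phases, then reads the DOA off the DFT peak:

```python
import numpy as np

N = 4                        # physical channels
d_over_lambda = 0.5          # half-wavelength element spacing
theta_true = 20.0            # source angle, degrees
phi = 2*np.pi*d_over_lambda*np.sin(np.deg2rad(theta_true))  # phase per element

s = np.exp(1j*phi*np.arange(N))   # physical channel signals (unit amplitude)

# Virtual extension: products s[m]*s[n] carry phase (m+n)*phi, emulating
# array elements at positions m+n = 0 .. 2N-2.
M = 2*N - 1
v = np.empty(M, dtype=complex)
for k in range(M):
    m = min(k, N - 1)
    v[k] = s[m] * s[k - m]

def doa_dft(x, nfft=4096):
    """Estimate DOA from the zero-padded DFT peak of the channel vector."""
    spec = np.abs(np.fft.fft(x, nfft))
    k = np.argmax(spec)
    f = k/nfft if k <= nfft//2 else k/nfft - 1.0   # phase per element / 2*pi
    return np.rad2deg(np.arcsin(f / d_over_lambda))

est_phys = doa_dft(s)   # from N physical channels
est_virt = doa_dft(v)   # from 2N-1 effective channels
```

The longer effective aperture narrows the DFT mainlobe, which is what improves resolution and detection in the multi-source case.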
Implementation issues in source coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.
1989-01-01
An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
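A minimal sketch of the DPCM idea underlying such coders (a toy left-neighbour predictor for illustration, not the Mars observer algorithm) shows the lossless encode/decode round trip:

```python
import numpy as np

def dpcm_encode(img):
    """Lossless DPCM: predict each pixel from its left neighbour
    (first column predicted from the pixel above) and store residuals."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]          # left neighbour
    pred[1:, 0] = img[:-1, 0]          # first column: pixel above
    return img - pred

def dpcm_decode(res):
    """Invert the predictor sequentially to recover the image exactly."""
    res = res.astype(np.int32)
    img = np.zeros_like(res)
    for r in range(res.shape[0]):
        for c in range(res.shape[1]):
            if c > 0:
                pred = img[r, c - 1]
            elif r > 0:
                pred = img[r - 1, 0]
            else:
                pred = 0
            img[r, c] = pred + res[r, c]
    return img

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(16, 16))
residuals = dpcm_encode(original)
restored = dpcm_decode(residuals)
```

On smooth imagery the residuals are small and entropy-code well; a lossy mode quantizes the residuals instead of storing them exactly.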
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo
2012-02-01
We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with individual timesteps (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating-point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions with the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
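The core O(N²) direct-summation force calculation that such codes accelerate with SIMD can be sketched in plain numpy (a reference version with an assumed Plummer-type softening, not the AVX Hermite implementation):

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation softened gravitational accelerations (G = 1).
    O(N^2) pairwise sums, vectorised with numpy broadcasting."""
    dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # dx[i,j] = r_j - r_i
    r2 = np.sum(dx**2, axis=-1) + eps**2
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)                        # no self-force
    # a_i = sum_j m_j (r_j - r_i) / |r_ij|^3
    return np.einsum('ijk,ij,j->ik', dx, inv_r3, mass)

rng = np.random.default_rng(1)
N = 64
pos = rng.standard_normal((N, 3))
mass = rng.uniform(0.5, 1.5, N)
acc = accelerations(pos, mass)
# Newton's third law: pairwise forces cancel, so the total force vanishes.
total_force = (mass[:, None] * acc).sum(axis=0)
```

The inner kernel is exactly the loop that the AVX/SSE versions unroll over four or eight particles at a time.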
Multiframe video coding for improved performance over wireless channels.
Budagavi, M; Gibson, J D
2001-01-01
We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability of overcoming some transmission errors and is thus more robust when compared to the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base level H.263 coder and are found to be more robust than the base level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
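The robustness argument can be illustrated with a toy Monte-Carlo model (an assumption-laden simplification of the paper's Markov chain analysis, ignoring intra refresh and error concealment): after one lost frame, randomizing the motion-compensation reference among the last M frames leaves far fewer frames corrupted than always referencing the previous frame:

```python
import random

def propagated_errors(n_frames, lost_frame, max_ref, rng):
    """Count frames corrupted by error propagation when each frame's
    motion-compensation reference is drawn from its last `max_ref` frames."""
    corrupted = [False] * n_frames
    corrupted[lost_frame] = True
    for i in range(lost_frame + 1, n_frames):
        ref = i - rng.randint(1, min(max_ref, i))
        corrupted[i] = corrupted[ref]
    return sum(corrupted)

rng = random.Random(42)
n, loss = 100, 10
single = propagated_errors(n, loss, 1, rng)   # SF-BMC: always previous frame
multi = sum(propagated_errors(n, loss, 5, rng) for _ in range(200)) / 200
```

With a single reference frame every subsequent frame inherits the error; with a randomized reference among five prior frames, many reference chains bypass the lost frame entirely.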
Structure of ⁸¹Ga populated from the β⁻ decay of ⁸¹Zn
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paziy, V.; Mach, H.; Fraile, L. M.
2013-06-10
We report on the results of the β⁻ decay of ⁸¹Zn. The experiment was performed at the CERN ISOLDE facility in the framework of a systematic ultra-fast timing investigation of neutron-rich nuclei populated in the decay of Zn. The present analysis included β-gated γ-ray singles and γ-γ coincidences from the decay of ⁸¹Zn to ⁸¹Ga and leads to a new and much more extensive level scheme of ⁸¹Ga. A new half-life of ⁸¹Zn is provided.
Balanced Central Schemes for the Shallow Water Equations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We present a two-dimensional, well-balanced, central-upwind scheme for approximating solutions of the shallow water equations in the presence of a stationary bottom topography on triangular meshes. Our starting point is the recent central scheme of Kurganov and Petrova (KP) for approximating solutions of conservation laws on triangular meshes. In order to extend this scheme from systems of conservation laws to systems of balance laws one has to find an appropriate discretization of the source terms. We first show that for general triangulations there is no discretization of the source terms that corresponds to a well-balanced form of the KP scheme. We then derive a new variant of a central scheme that can be balanced on triangular meshes. We note in passing that it is straightforward to extend the KP scheme to general unstructured conformal meshes. This extension allows us to recover our previous well-balanced scheme on Cartesian grids. We conclude with several simulations, verifying the second-order accuracy of our scheme as well as its well-balanced properties.
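The well-balanced property at stake here (still water over topography must stay still) can be demonstrated with a much simpler relative of the scheme above: a first-order 1-D finite-volume method with hydrostatic reconstruction in the style of Audusse et al., not the paper's central-upwind triangular scheme. All parameter values are illustrative:

```python
import numpy as np

g = 9.81

def physical_flux(h, q):
    u = q / h if h > 1e-12 else 0.0
    return np.array([q, q * u + 0.5 * g * h * h])

def rusanov(hL, qL, hR, qR):
    """Rusanov (local Lax-Friedrichs) numerical flux."""
    uL = qL / hL if hL > 1e-12 else 0.0
    uR = qR / hR if hR > 1e-12 else 0.0
    a = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
    return 0.5 * (physical_flux(hL, qL) + physical_flux(hR, qR)) \
         - 0.5 * a * np.array([hR - hL, qR - qL])

def step(h, q, b, dx, dt):
    """One forward-Euler step of a first-order well-balanced scheme with
    hydrostatic reconstruction at interfaces, periodic 1-D grid."""
    n = len(h)
    u = np.where(h > 1e-12, q / np.maximum(h, 1e-12), 0.0)
    F = np.zeros((n, 2))
    hL = np.zeros(n)          # reconstructed left depth at interface i+1/2
    hR = np.zeros(n)          # reconstructed right depth at interface i+1/2
    for i in range(n):
        j = (i + 1) % n
        bs = max(b[i], b[j])
        hL[i] = max(0.0, h[i] + b[i] - bs)
        hR[i] = max(0.0, h[j] + b[j] - bs)
        F[i] = rusanov(hL[i], hL[i] * u[i], hR[i], hR[i] * u[j])
    hn, qn = h.copy(), q.copy()
    for i in range(n):
        m = (i - 1) % n
        # balanced source corrections restore the discrete hydrostatic balance
        fl = F[i] + np.array([0.0, 0.5 * g * (h[i]**2 - hL[i]**2)])
        fr = F[m] + np.array([0.0, 0.5 * g * (h[i]**2 - hR[m]**2)])
        hn[i] -= dt / dx * (fl[0] - fr[0])
        qn[i] -= dt / dx * (fl[1] - fr[1])
    return hn, qn

# Lake-at-rest test: still water over a bottom bump must stay still.
n, dx, dt = 50, 0.1, 0.005
xs = dx * np.arange(n)
b = 0.5 * np.exp(-5.0 * (xs - 2.5)**2)     # bottom topography
h = 2.0 - b                                # flat free surface h + b = 2
q = np.zeros(n)
for _ in range(200):
    h, q = step(h, q, b, dx, dt)
surface_error = np.max(np.abs(h + b - 2.0))
velocity_error = np.max(np.abs(q))
```

A naive pointwise discretization of the bed-slope source term would generate spurious currents in this test; the interface reconstruction cancels flux and source exactly at rest.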
A Novel IEEE 802.15.4e DSME MAC for Wireless Sensor Networks
Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin
2017-01-01
IEEE 802.15.4e standard proposes Deterministic and Synchronous Multichannel Extension (DSME) mode for wireless sensor networks (WSNs) to support industrial, commercial and health care applications. In this paper, a new channel access scheme and beacon scheduling schemes are designed for IEEE 802.15.4e-enabled WSNs in star topology to reduce the network discovery time and energy consumption. In addition, a new dynamic guaranteed retransmission slot allocation scheme is designed for devices with failed Guaranteed Time Slot (GTS) transmissions to reduce the retransmission delay. To evaluate our schemes, analytical models are designed to analyze the performance of WSNs in terms of reliability, delay, throughput and energy consumption. Our schemes are validated with simulation and analytical results, and the simulation results are observed to match the analytical ones well. The evaluated results of our designed schemes show significant improvements in reliability, throughput, delay, and energy consumption. PMID:28275216
Searchable attribute-based encryption scheme with attribute revocation in cloud storage.
Wang, Shangping; Zhao, Duqiao; Zhang, Yaling
2017-01-01
Attribute-based encryption (ABE) is a good way to achieve flexible and secure access control to data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has important applications in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation in cloud storage. The keyword search in our scheme is attribute-based with access control: when the search succeeds, the cloud server returns the corresponding ciphertext to the user, who can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes the scheme more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.
Forecasting residential solar photovoltaic deployment in California
Dong, Changgui; Sigrin, Benjamin; Brinkman, Gregory
2016-12-06
Residential distributed photovoltaic (PV) deployment in the United States has experienced robust growth, and policy changes impacting the value of solar are likely to occur at the federal and state levels. To establish a credible baseline and evaluate impacts of potential new policies, this analysis employs multiple methods to forecast residential PV deployment in California, including a time-series forecasting model, a threshold heterogeneity diffusion model, a Bass diffusion model, and National Renewable Energy Laboratory's dSolar model. As a baseline, the residential PV market in California is modeled to peak in the early 2020s, with a peak annual installation of 1.5-2 GW across models. We then use the baseline results from the dSolar model and the threshold model to gauge the impact of the recent federal investment tax credit (ITC) extension, the newly approved California net energy metering (NEM) policy, and a hypothetical value-of-solar (VOS) compensation scheme. We find that the recent ITC extension may increase annual PV installations by 12%-18% (roughly 500 MW) for the California residential sector in 2019-2020. The new NEM policy only has a negligible effect in California due to the relatively small new charges (< 100 MW in 2019-2020). Moreover, impacts of the VOS compensation scheme (0.12 cents per kilowatt-hour) are larger, reducing annual PV adoption by 32% (or 900-1300 MW) in 2019-2020.
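The Bass diffusion model used among the forecasting methods above has a closed-form cumulative adoption curve, sketched below with illustrative, uncalibrated parameters (not the study's fitted values):

```python
import numpy as np

def bass_cumulative(t, p, q, m):
    """Cumulative adoption under the Bass diffusion model:
    F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p) exp(-(p+q)t)), scaled by market size m."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Hypothetical coefficients: innovation p, imitation q, ultimate market size m (MW).
p, q, m = 0.01, 0.4, 20000.0
years = np.arange(0, 26)
cumulative = bass_cumulative(years, p, q, m)
annual = np.diff(cumulative)                  # yearly installations, MW
peak_year = years[1:][np.argmax(annual)]      # analytic peak: ln(q/p)/(p+q) ~ 9
```

The annual-installation curve rises, peaks, and declines as the market saturates, which is the qualitative shape behind the "peak in the early 2020s" baseline.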
Traditional risk-sharing arrangements and informal social insurance in Eritrea.
Habtom, GebreMichael Kibreab; Ruys, Pieter
2007-01-01
In Eritrea neither the state nor the market is effective in providing health insurance to low-income people (in rural areas and the informal job sector). Schemes intended for the informal sector are confronted with the low and irregular incomes of target populations and consequently negligible potential for profit making. Because of this, there are no formal health insurance systems in Eritrea that cover people in the traditional (or informal) sector of the economy. In the absence of formal safety nets, traditional Eritrean societies use their local social capital to alleviate unexpected social costs. In Eritrea traditional risk-sharing arrangements are made within extended families and mutual aid community associations. This study reveals that in a situation where the state no longer provides free public health services and access to private insurance is denied, the extension of voluntary mutual aid community associations to Mahber-based health insurance schemes at the local level is a viable way to provide modern health services.
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part II: numerical testing
NASA Astrophysics Data System (ADS)
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres; Zirk, Marko
2007-10-01
The semi-implicit semi-Lagrangian (SISL), two-time-level, non-hydrostatic numerical scheme, based on the non-hydrostatic, semi-elastic pressure-coordinate equations, is tested in model experiments with flow over given orography (an elliptical hill, a mountain ridge, a system of successive ridges) in a rectangular domain, with emphasis on numerical accuracy and the capability to represent non-hydrostatic effects. Comparison demonstrates good (in strong primary wave generation) to satisfactory (in weak secondary wave reproduction in some cases) consistency of the numerical modelling results with known stationary linear test solutions. Numerical stability of the developed model is investigated with respect to the choice of reference state, modelling the dynamics of a stationary front. The horizontally area-mean reference temperature proves to be the optimal guarantor of stability. The numerical scheme with an explicit residual in the vertical forcing term becomes unstable for cross-frontal temperature differences exceeding 30 K. Stability is restored if the vertical forcing is treated implicitly, which makes it possible to use time steps comparable with the hydrostatic SISL.
Experimental multiplexing of quantum key distribution with classical optical communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Liu-Jun; Chen, Luo-Kan; Ju, Lei
2015-02-23
We demonstrate the realization of quantum key distribution (QKD) when combined with classical optical communication, and synchronous signals within a single optical fiber. In the experiment, the classical communication sources use Fabry-Pérot (FP) lasers, which are implemented extensively in optical access networks. To perform QKD, multistage band-stop filtering techniques are developed, and a wavelength-division multiplexing scheme is designed for the multi-longitudinal-mode FP lasers. We have managed to maintain sufficient isolation among the quantum channel, the synchronous channel and the classical channels to guarantee good QKD performance. Finally, the quantum bit error rate remains below a level of 2% across the entire practical application range. The proposed multiplexing scheme can ensure low classical light loss, and enables QKD over fiber lengths of up to 45 km simultaneously when the fibers are populated with bidirectional FP laser communications. Our demonstration paves the way for application of QKD to current optical access networks, where FP lasers are widely used by the end users.
Dopamine-dependent non-linear correlation between subthalamic rhythms in Parkinson's disease.
Marceglia, S; Foffani, G; Bianchi, A M; Baselli, G; Tamma, F; Egidi, M; Priori, A
2006-03-15
The basic information architecture in the basal ganglia circuit is under debate. Whereas anatomical studies quantify extensive convergence/divergence patterns in the circuit, suggesting an information sharing scheme, neurophysiological studies report an absence of linear correlation between single neurones in normal animals, suggesting a segregated parallel processing scheme. In 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-treated monkeys and in parkinsonian patients single neurones become linearly correlated, thus leading to a loss of segregation between neurones. Here we propose a possible integrative solution to this debate, by extending the concept of functional segregation from the cellular level to the network level. To this end, we recorded local field potentials (LFPs) from electrodes implanted for deep brain stimulation (DBS) in the subthalamic nucleus (STN) of parkinsonian patients. By applying bispectral analysis, we found that in the absence of dopamine stimulation STN LFP rhythms became non-linearly correlated, thus leading to a loss of segregation between rhythms. Non-linear correlation was particularly consistent between the low-beta rhythm (13-20 Hz) and the high-beta rhythm (20-35 Hz). Levodopa administration significantly decreased these non-linear correlations, therefore increasing segregation between rhythms. These results suggest that the extensive convergence/divergence in the basal ganglia circuit is physiologically necessary to sustain LFP rhythms distributed in large ensembles of neurones, but is not sufficient to induce correlated firing between neurone pairs. Conversely, loss of dopamine generates pathological linear correlation between neurone pairs, alters the patterns within LFP rhythms, and induces non-linear correlation between LFP rhythms operating at different frequencies. 
The pathophysiology of information processing in the human basal ganglia therefore involves not only activities of individual rhythms, but also interactions between rhythms.
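The bispectral analysis used in this study can be sketched as a segment-averaged bicoherence estimate (a generic implementation on synthetic data, not the authors' LFP pipeline; frequencies and noise level are assumptions). Quadratic phase coupling between two rhythms and their sum frequency yields bicoherence near one, while an uncoupled frequency pair stays near zero:

```python
import numpy as np

def bicoherence(x, nperseg):
    """Segment-averaged bicoherence b(f1,f2) =
    |<X(f1)X(f2)X*(f1+f2)>| / sqrt(<|X(f1)X(f2)|^2> <|X(f1+f2)|^2>)."""
    segs = x[:len(x)//nperseg*nperseg].reshape(-1, nperseg)
    X = np.fft.rfft(segs * np.hanning(nperseg), axis=1)
    n = X.shape[1]
    num = np.zeros((n//2, n//2), dtype=complex)
    d1 = np.zeros((n//2, n//2))
    d2 = np.zeros((n//2, n//2))
    for f1 in range(n//2):
        for f2 in range(n//2):
            if f1 + f2 < n:
                t = X[:, f1] * X[:, f2]
                num[f1, f2] = np.mean(t * np.conj(X[:, f1 + f2]))
                d1[f1, f2] = np.mean(np.abs(t)**2)
                d2[f1, f2] = np.mean(np.abs(X[:, f1 + f2])**2)
    return np.abs(num) / np.sqrt(d1 * d2 + 1e-30)

rng = np.random.default_rng(0)
fs, nper, nseg = 256, 256, 300      # 1 Hz per bin since nper == fs
t = np.arange(nper) / fs
pieces = []
for _ in range(nseg):
    p1, p2 = rng.uniform(0, 2*np.pi, 2)
    # 16 Hz and 24 Hz rhythms phase-coupled to their 40 Hz sum frequency
    coupled = np.cos(2*np.pi*16*t + p1) + np.cos(2*np.pi*24*t + p2) \
            + np.cos(2*np.pi*40*t + p1 + p2)
    pieces.append(coupled + 0.1*rng.standard_normal(nper))
x = np.concatenate(pieces)
b = bicoherence(x, nper)
b_coupled = b[16, 24]   # coupled triple (16, 24, 40) Hz
b_control = b[16, 30]   # no rhythm at 46 Hz, so no coupling
```

This is the quantity that distinguishes non-linearly correlated rhythms (high bicoherence) from independent rhythms that merely coexist in the power spectrum.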
Extensive Reading through the Internet: Is It Worth the While?
ERIC Educational Resources Information Center
Silva, Juan Pino
2009-01-01
Reading materials written in English is the prime goal of many reading programs around the world. Extensive reading (ER) has for years aided new students at my institution to gradually acquire large vocabularies and other sub-skills that are needed to read fluently. To continue to do that effectively, a new scheme involving the use of…
Computing Evans functions numerically via boundary-value problems
NASA Astrophysics Data System (ADS)
Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin
2018-03-01
The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.
A high-order gas-kinetic Navier-Stokes flow solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Qibing, E-mail: lqb@tsinghua.edu.c; Xu Kun, E-mail: makxu@ust.h; Fu Song, E-mail: fs-dem@tsinghua.edu.c
2010-09-20
The foundation for the development of modern compressible flow solver is based on the Riemann solution of the inviscid Euler equations. The high-order schemes are basically related to high-order spatial interpolation or reconstruction. In order to overcome the low-order wave interaction mechanism due to the Riemann solution, the temporal accuracy of the scheme can be improved through the Runge-Kutta method, where the dynamic deficiencies in the first-order Riemann solution are alleviated through the sub-step spatial reconstruction in the Runge-Kutta process. The close coupling between the spatial and temporal evolution in the original nonlinear governing equations seems weakened due to its spatial and temporal decoupling. Many recently developed high-order methods require a Navier-Stokes flux function under piece-wise discontinuous high-order initial reconstruction. However, the piece-wise discontinuous initial data and the hyperbolic-parabolic nature of the Navier-Stokes equations seem inconsistent mathematically, such as the divergence of the viscous and heat conducting terms due to initial discontinuity. In this paper, based on the Boltzmann equation, we are going to present a time-dependent flux function from a high-order discontinuous reconstruction. The theoretical basis for such an approach is due to the fact that the Boltzmann equation has no specific requirement on the smoothness of the initial data and the kinetic equation has the mechanism to construct a dissipative wave structure starting from an initially discontinuous flow condition on a time scale being larger than the particle collision time. The current high-order flux evaluation method is an extension of the second-order gas-kinetic BGK scheme for the Navier-Stokes equations (BGK-NS). The novelty for the easy extension from a second-order to a higher order is due to the simple particle transport and collision mechanism on the microscopic level.
This paper will present a hierarchy to construct such a high-order method. The necessity to couple spatial and temporal evolution nonlinearly in the flux evaluation can be clearly observed through the numerical performance of the scheme for viscous flow computations.
Minimizing Dispersion in FDTD Methods with CFL Limit Extension
NASA Astrophysics Data System (ADS)
Sun, Chen
The CFL extension in FDTD methods is receiving considerable attention in order to reduce the computational effort and save the simulation time. One of the major issues in the CFL extension methods is the increased dispersion. We formulate a decomposition of FDTD equations to study the behaviour of the dispersion. A compensation scheme to reduce the dispersion in CFL extension is constructed and proposed. We further study the CFL extension in a FDTD subgridding case, where we improve the accuracy by acting only on the FDTD equations of the fine grid. Numerical results confirm the efficiency of the proposed method for minimising dispersion.
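The dispersion behaviour at issue can already be seen in the closed-form dispersion relation of the standard 1-D Yee scheme (a textbook relation, not the compensation scheme proposed above): the numerical phase velocity is exact at the Courant limit S = 1 and degrades as the scheme moves away from it, which is why CFL-extension methods must manage increased dispersion:

```python
import numpy as np

def numerical_phase_velocity(ppw, S, c=1.0):
    """Numerical phase velocity of the 1-D Yee FDTD scheme from its
    dispersion relation sin(w*dt/2)/(c*dt) = sin(k*dx/2)/dx.
    ppw: grid points per wavelength; S = c*dt/dx is the Courant number."""
    dx = 1.0
    dt = S * dx / c
    k = 2*np.pi / (ppw * dx)
    w = (2.0/dt) * np.arcsin(S * np.sin(k*dx/2))
    return w / k

v_magic = numerical_phase_velocity(ppw=10, S=1.0)   # 'magic' time step: exact
v_small = numerical_phase_velocity(ppw=10, S=0.5)   # reduced Courant number
```

At S = 1 the discrete relation collapses to w = c*k, so waves propagate without dispersion error; at S = 0.5 and ten points per wavelength the phase velocity is already about 1.3% slow.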
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
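The numerical diffusion that the Prather moment scheme is designed to avoid is easy to exhibit with a first-order upwind baseline (an illustrative contrast, not the moment scheme itself): after one full revolution around a periodic domain, upwind conserves the tracer total exactly but badly flattens the peak:

```python
import numpy as np

def advect_upwind(u, courant, nsteps):
    """First-order upwind advection on a periodic grid (0 < courant <= 1)."""
    for _ in range(nsteps):
        u = u - courant * (u - np.roll(u, 1))
    return u

n = 100
x = np.arange(n) / n
u0 = np.exp(-((x - 0.3) / 0.05)**2)     # narrow Gaussian tracer, peak 1.0
c = 0.5
u1 = advect_upwind(u0.copy(), c, nsteps=int(n / c))   # one full revolution
```

The exact solution after one revolution is the initial profile; the upwind result keeps the total mass but loses roughly half the peak amplitude to numerical diffusion, which is exactly the failure mode that moment-conserving schemes eliminate.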
2011-06-24
extensively studied by ultrafast laser spectroscopy. More recently the structures of the LH2 complexes have revealed the nonameric and octameric arrangement of...Scheme 1). Scheme 1. Compartmentalization of light harvesting and charge separation. The antenna complexes (LH2, LH1-RC) efficiently...realize various photosynthetic functions using cofactors (BChl a and carotenoid) assembled into the apoproteins (LH1 and LH2). The light-harvesting
2017-04-03
setup in terms of temporal and spatial discretization. The second component was an extension of existing depth-integrated wave models to describe...equations (Abbott, 1976). Discretization schemes involve numerical dispersion and dissipation that distort the true character of the governing equations...represent a leading-order approximation of the Boussinesq-type equations. Tam and Webb (1993) proposed a wavenumber-based discretization scheme to preserve
Issues and solutions for storage, retrieval, and searching of MPEG-7 documents
NASA Astrophysics Data System (ADS)
Chang, Yuan-Chi; Lo, Ming-Ling; Smith, John R.
2000-10-01
The ongoing MPEG-7 standardization activity aims at creating a standard for describing multimedia content in order to facilitate the interpretation of the associated information content. Attempting to address a broad range of applications, MPEG-7 has defined a flexible framework consisting of Descriptors, Description Schemes, and a Description Definition Language. Descriptors and Description Schemes describe features, structure and semantics of multimedia objects. They are written in the Description Definition Language (DDL). In the most recent revision, DDL applies XML (Extensible Markup Language) Schema with MPEG-7 extensions. DDL has constructs that support inclusion, inheritance, reference, enumeration, choice, sequence, and abstract types of Description Schemes and Descriptors. In order to enable multimedia systems to use MPEG-7, a number of important problems in storing, retrieving and searching MPEG-7 documents need to be solved. This paper reports initial findings on issues and solutions for storing and accessing MPEG-7 documents. In particular, we discuss the benefits of using a virtual document management framework based on an XML Access Server (XAS) in order to bridge MPEG-7 multimedia applications and database systems. The need arises partly because MPEG-7 descriptions need customized storage schemas, indexing and search engines. We also discuss issues arising in managing dependence and cross-description-scheme search.
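Storing and retrieving descriptor values from an MPEG-7-style XML description can be sketched with a standard XML parser. The element names below are illustrative only, not the normative MPEG-7 schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified MPEG-7-style description of one image.
doc = """
<Mpeg7>
  <Description>
    <MultimediaContent>
      <Image id="img1">
        <Descriptor type="DominantColor">
          <Value>128 64 32</Value>
        </Descriptor>
        <Descriptor type="TextureBrowsing">
          <Value>2 1 3 0 1</Value>
        </Descriptor>
      </Image>
    </MultimediaContent>
  </Description>
</Mpeg7>
"""

root = ET.fromstring(doc)
# Flatten every Descriptor into {type: list-of-integer-values} for indexing.
descriptors = {
    d.get("type"): [int(v) for v in d.findtext("Value").split()]
    for d in root.iter("Descriptor")
}
```

A storage layer such as the XAS discussed above would map structures like this onto database schemas and indexes rather than re-parsing documents at query time.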
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least-perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of a MATLAB implementation of the introduced algorithm.
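The flavour of such stable time stepping can be conveyed by a 1-D semi-implicit Fourier-spectral step for the scalar Cahn–Hilliard equation (a linear cousin of Eyre-type convex–concave splitting; an illustrative sketch, not the paper's Schur-complement multi-component method). The k = 0 mode is untouched by the update, so mass is conserved exactly:

```python
import numpy as np

def cahn_hilliard_1d(u, dt, eps, nsteps):
    """Semi-implicit Fourier-spectral steps for
    u_t = (u^3 - u)_xx - eps^2 u_xxxx on [0, 2*pi):
    the stiff fourth-order linear term is implicit, the nonlinearity explicit."""
    n = len(u)
    k = np.fft.fftfreq(n, d=2*np.pi/n) * 2*np.pi   # integer wavenumbers
    k2, k4 = k**2, k**4
    for _ in range(nsteps):
        nl = np.fft.fft(u**3 - u)
        u_hat = (np.fft.fft(u) - dt * k2 * nl) / (1.0 + dt * eps**2 * k4)
        u = np.real(np.fft.ifft(u_hat))
    return u

rng = np.random.default_rng(3)
u0 = 0.05 * rng.standard_normal(128)     # small random perturbation of u = 0
u1 = cahn_hilliard_1d(u0.copy(), dt=1e-3, eps=0.1, nsteps=2000)
mass_drift = abs(u1.mean() - u0.mean())  # conserved mode k = 0
```

The implicit treatment of the eps² k⁴ term removes the dt ~ dx⁴ stiffness restriction of a fully explicit scheme while the solution coarsens toward the two phases u = ±1.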
NASA Astrophysics Data System (ADS)
Li, Xiaoyun; Hu, Haihua; Xu, Lingbo; Cui, Can; Qian, Degui; Li, Shuang; Zhu, Wenzhe; Wang, Peng; Lin, Ping; Pan, Jiaqi; Li, Chaorong
2018-05-01
Artificial Z-scheme systems inspired by natural photosynthesis in green plants have attracted extensive attention owing to advantages such as simultaneous wide-range light absorption, highly efficient charge separation and strong redox ability. In this paper, we report the synthesis of a novel all-solid-state direct Z-scheme photocatalyst of Ag3PO4/CeO2/TiO2 by depositing Ag3PO4 nanoparticles (NPs) on CeO2/TiO2 hierarchical branched nanowires (BNWs), where the CeO2/TiO2 BNWs act as a novel substrate for the well-dispersed nano-size Ag3PO4. The Ag3PO4/CeO2/TiO2 photocatalyst exhibits excellent ability for photocatalytic oxygen evolution from pure-water splitting. It is suggested that the Z-scheme charge transfer route between CeO2/TiO2 and Ag3PO4 improves the redox ability. On the other hand, the cascade energy level alignment in CeO2/TiO2 BNWs expedites spatial charge separation, and hence suppresses the photocatalytic backward reaction. However, it is difficult to realize a perfect excitation balance in Ag3PO4/CeO2/TiO2, and the composite still suffers photo-corrosion in the photocatalysis reaction. Nevertheless, our results provide an innovative strategy for constructing a Z-scheme system from a type-II heterostructure and a highly efficient oxygen evolution catalyst.
Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
1997-01-01
An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + aU_x = αU_xx. Accuracy is also verified on the nonlinear problem, U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
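The model equation quoted above can be used to exercise a time integrator directly. A small sketch, assuming classical RK4 in time and second-order periodic central differences in space (the paper itself uses higher-order and compact operators), verified against the exact decaying-traveling-wave solution:

```python
import numpy as np

def rk4_convection_diffusion(u, a, alpha, dx, dt, steps):
    """Classical RK4 in time, 2nd-order periodic central differences in
    space, for the model equation u_t + a u_x = alpha u_xx used to
    assess stability and accuracy of interior operators."""
    def rhs(v):
        return (-a * (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
                + alpha * (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2)
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

n = 128
dx = 1.0 / n
x = np.arange(n) * dx
t, steps = 0.5, 250
u_num = rk4_convection_diffusion(np.sin(2 * np.pi * x),
                                 a=1.0, alpha=0.01, dx=dx, dt=t / steps, steps=steps)
# Exact solution: the sine mode advects with speed a and decays at rate alpha k^2.
u_exact = np.exp(-0.01 * (2 * np.pi)**2 * t) * np.sin(2 * np.pi * (x - t))
err = np.max(np.abs(u_num - u_exact))
```

With this grid the single Fourier mode is well resolved, so the measured error is dominated by the spatial dispersion/dissipation error of the second-order stencil.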
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finn, John M., E-mail: finn@lanl.gov
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
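The implicit midpoint rule discussed above is simple to sketch. Below is a minimal version with the implicit equation solved by fixed-point iteration; the demonstration field is a planar divergence-free (Hamiltonian) field, a two-dimensional simplification of field-line integration, and the tolerance settings are arbitrary choices.

```python
import numpy as np

def implicit_midpoint(f, x0, h, steps, tol=1e-13, max_iter=50):
    """Implicit midpoint rule x_{n+1} = x_n + h f((x_n + x_{n+1}) / 2),
    a self-adjoint (hence at least second-order) scheme.  The implicit
    equation is solved by fixed-point iteration."""
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for _ in range(steps):
        y = x + h * f(x)                       # explicit Euler predictor
        for _ in range(max_iter):
            y_new = x + h * f(0.5 * (x + y))
            done = np.max(np.abs(y_new - y)) < tol
            y = y_new
            if done:
                break
        x = y
        traj.append(x)
    return np.array(traj)

# Divergence-free planar field f = (dH/dy, -dH/dx) with H = (x^2 + y^2)/2:
# field lines are circles, and IM conserves this quadratic invariant.
f = lambda z: np.array([z[1], -z[0]])
traj = implicit_midpoint(f, [1.0, 0.0], h=0.1, steps=1000)
H = 0.5 * np.sum(traj**2, axis=1)
drift = np.max(np.abs(H - H[0]))
```

For this linear field the midpoint map is exactly norm-preserving, so the only drift in H comes from the fixed-point tolerance and roundoff, illustrating the invariant-preservation the abstract attributes to IM.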
Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model
Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong
2014-01-01
Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. The node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem to provide multiple sensing services while maximizing the network lifetime. We firstly introduce how to model the data correlation for different services by using the Markov Random Field (MRF) model. Secondly, we formulate the service-oriented node scheduling issue into three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, concerned with selecting a number of active nodes while determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Thirdly, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005
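The MRF-based algorithms (MDD, RSD, MMNS) are beyond a short sketch, but the round-by-round flavor of service-oriented scheduling can be conveyed with a toy greedy cover. The coverage sets, the unit-energy model, and the tie-breaking rule below are all invented for illustration and are not the paper's algorithms.

```python
def schedule_rounds(coverage, energy, targets):
    """Toy service-oriented scheduling: each round, greedily activate a
    small set of nodes (preferring large residual coverage, then larger
    residual energy) that jointly provide all required services; each
    activation costs one unit of energy.  Returns lifetime in rounds."""
    energy = dict(energy)
    lifetime = 0
    while True:
        uncovered, chosen = set(targets), set()
        while uncovered:
            alive = [n for n in sorted(coverage)
                     if energy[n] > 0 and n not in chosen
                     and coverage[n] & uncovered]
            if not alive:
                return lifetime     # required services no longer coverable
            best = max(alive, key=lambda n: (len(coverage[n] & uncovered),
                                             energy[n]))
            chosen.add(best)
            uncovered -= coverage[best]
        for n in chosen:            # only a complete round costs energy
            energy[n] -= 1
        lifetime += 1

coverage = {'A': {1, 2}, 'B': {2, 3}, 'C': {1, 2, 3}, 'D': {3}}
energy = {'A': 2, 'B': 2, 'C': 1, 'D': 2}
life = schedule_rounds(coverage, energy, targets={1, 2, 3})
```

On this instance the greedy schedule survives three rounds, whereas naively activating every live node each round exhausts the network after two — the kind of lifetime gain node scheduling targets.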
Two-Level Verification of Data Integrity for Data Storage in Cloud Computing
NASA Astrophysics Data System (ADS)
Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping
Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. As the loss or corruption of stored files may happen, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verifying tasks for the auditor. Moreover, users also need to pay an extra fee for these verifying tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to routinely verify the data integrity by users and arbitrate the challenge between the user and the cloud provider by the auditor according to the MACs and ϕ values. The extensive performance simulations show that the proposed scheme obviously decreases the auditor's verifying tasks and the ratio of wrong arbitration.
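The abstract does not specify the MAC construction or the ϕ values; a toy sketch of the user-level (first-level) check using one HMAC tag per stored block, with all names hypothetical, could look like:

```python
import hmac
import hashlib

def tag_blocks(key, blocks):
    """User side, level 1: keep one HMAC-SHA256 tag per stored block."""
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def verify_blocks(key, blocks, tags):
    """Recompute tags over the blocks returned by the cloud.  Any
    mismatching index is a candidate dispute to escalate to the
    auditor (the second verification level)."""
    return [i for i, (b, t) in enumerate(zip(blocks, tags))
            if not hmac.compare_digest(
                hmac.new(key, b, hashlib.sha256).digest(), t)]

key = b'user-secret-key'
blocks = [b'block-0', b'block-1', b'block-2']
tags = tag_blocks(key, blocks)
assert verify_blocks(key, blocks, tags) == []        # intact storage
tampered = [b'block-0', b'block-X', b'block-2']
disputed = verify_blocks(key, tampered, tags)        # index of the bad block
```

Only disputed indices would then generate work for the auditor, which matches the paper's goal of reducing the auditor's verification load.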
A Fine-Grained and Privacy-Preserving Query Scheme for Fog Computing-Enhanced Location-Based Service
Yin, Fan; Tang, Xiaohu
2017-01-01
Location-based services (LBS), as one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain fine-grained search results satisfying not only the given spatial range but also the search content. Detailed privacy analysis shows that our proposed scheme indeed achieves privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, outperforming existing state-of-the-art schemes. Hence, our proposed scheme is more suitable for real-time LBS searching. PMID:28696395
A Survey of Research Progress and Development Tendency of Attribute-Based Encryption
Pang, Liaojun; Yang, Jie; Jiang, Zhengtao
2014-01-01
With the development of cryptography, attribute-based encryption (ABE) has drawn widespread attention from researchers in recent years. The ABE scheme, which belongs to the public key encryption mechanism, takes attributes as the public key and associates them with the ciphertext or the user's secret key. It is an efficient way to solve open problems in access control scenarios, for example, how to provide data confidentiality and expressive access control at the same time. In this paper, we survey the basic ABE scheme and its two variants: the key-policy ABE (KP-ABE) scheme and the ciphertext-policy ABE (CP-ABE) scheme. We also pay attention to other research relating to ABE schemes, including multiauthority, user/attribute revocation, accountability, and proxy reencryption, with an extensive comparison of their functionality and performance. Finally, possible directions for future work are pointed out and some conclusions are drawn. PMID:25101313
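The pairing-based cryptography behind ABE cannot be sketched briefly, but the access-structure side can: KP-ABE and CP-ABE both rely on threshold-gate policy trees over attributes. The tree encoding below is an illustrative assumption, not any particular scheme's format.

```python
def satisfies(policy, attrs):
    """Evaluate a threshold-gate access tree of the kind used in ABE
    access structures: a leaf is an attribute string; an inner node is
    (k, children) and is satisfied when at least k children are
    satisfied.  AND is k == len(children); OR is k == 1."""
    if isinstance(policy, str):
        return policy in attrs
    k, children = policy
    return sum(satisfies(c, attrs) for c in children) >= k

# ("doctor" AND "cardiology") OR "admin"
policy = (1, [(2, ["doctor", "cardiology"]), "admin"])
ok1 = satisfies(policy, {"doctor", "cardiology"})   # satisfied via the AND gate
ok2 = satisfies(policy, {"doctor"})                 # not satisfied
ok3 = satisfies(policy, {"admin"})                  # satisfied via the OR branch
```

In KP-ABE such a tree is embedded in the user's key and attributes label the ciphertext; in CP-ABE the roles are swapped — the satisfaction test itself is the same.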
A Blind Reversible Robust Watermarking Scheme for Relational Databases
Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen
2013-01-01
Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as “histogram shifting of adjacent pixel difference” (APD), is used to obtain reversibility. The proposed scheme can successfully detect 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks. PMID:24223033
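The paper applies APD to relational tuples; the underlying histogram-shifting primitive is easier to see on a 1-D pixel sequence. The sketch below embeds bits into adjacent differences equal to a peak bin and shifts larger differences up by one to make room; over/underflow handling and the database mapping of the paper are omitted.

```python
import numpy as np

def apd_embed(pixels, bits, peak=0):
    """Histogram shifting of adjacent differences: differences above the
    peak bin are shifted up by one to open a gap; a difference equal to
    the peak carries one payload bit (0 keeps it, 1 moves it into the gap)."""
    d = np.diff(pixels.astype(int))
    payload = iter(bits)
    nd = []
    for di in d:
        if di > peak:
            nd.append(di + 1)
        elif di == peak:
            nd.append(di + next(payload, 0))   # pad with 0-bits past the payload
        else:
            nd.append(di)
    return np.concatenate(([int(pixels[0])], pixels[0] + np.cumsum(nd)))

def apd_extract(wpixels, peak=0):
    """Blind, reversible extraction: recover both the embedded bits and
    the exact original sequence from the watermarked one."""
    d = np.diff(wpixels.astype(int))
    bits, rd = [], []
    for di in d:
        if di == peak:
            bits.append(0); rd.append(peak)
        elif di == peak + 1:
            bits.append(1); rd.append(peak)
        elif di > peak + 1:
            rd.append(di - 1)
        else:
            rd.append(di)
    orig = np.concatenate(([int(wpixels[0])], wpixels[0] + np.cumsum(rd)))
    return bits, orig

p = np.array([100, 100, 101, 101, 101, 103, 102, 102, 104, 104])
w = apd_embed(p, [1, 0, 1, 1, 0])
bits, restored = apd_extract(w)
```

The round trip is exact: the extracted bits match the payload and the restored sequence equals the original, which is the reversibility property the abstract describes.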
Multiswitching compound antisynchronization of four chaotic systems
NASA Astrophysics Data System (ADS)
Khan, Ayub; Khattar, Dinesh; Prajapati, Nitish
2017-12-01
Based on a three-drive, one-response system model, in this article the authors investigate a novel synchronization scheme for a class of chaotic systems. The new scheme, multiswitching compound antisynchronization (MSCoAS), is a notable extension of the earlier multiswitching schemes concerning only a one-drive, one-response system model. The concept of multiswitching synchronization is extended to a compound synchronization scheme such that the state variables of three drive systems antisynchronize with different state variables of the response system, simultaneously. The study involving multiswitching of three drive systems and one response system is the first of its kind. Various switched modified function projective antisynchronization schemes are obtained as special cases of MSCoAS, for a suitable choice of scaling factors. Using suitable controllers and Lyapunov stability theory, a sufficient condition is obtained to achieve MSCoAS between four chaotic systems, and the corresponding theoretical proof is given. Numerical simulations are performed using the Lorenz system in MATLAB to demonstrate the validity of the presented method.
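The full three-drive MSCoAS construction is involved, but the Lyapunov-based control idea can be conveyed with a single-drive, single-response antisynchronization of two Lorenz systems. The gain k and initial states below are arbitrary choices, and the controller is the textbook active-control form, not the paper's multiswitching controller.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def coupled(state, k=5.0):
    """Drive x and response y integrated together.  The active controller
    u = -f(y) - f(x) - k (y + x) gives response dynamics y' = f(y) + u,
    so the antisynchronization error e = y + x obeys e' = -k e
    (Lyapunov function V = e.e / 2, V' = -k e.e < 0)."""
    x, y = state[:3], state[3:]
    dx = lorenz(x)
    dy = -lorenz(x) - k * (y + x)   # = f(y) + u after cancellation
    return np.concatenate([dx, dy])

def rk4(f, s, dt, steps):
    for _ in range(steps):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

s0 = np.array([1.0, 1.0, 1.0, -8.0, 7.0, 26.0])   # [drive, response]
s = rk4(coupled, s0, dt=0.01, steps=2000)
err = np.linalg.norm(s[:3] + s[3:])               # ||x + y|| after t = 20
```

Because e' = -k e exactly in the coupled system, the error decays to roundoff level while the drive remains chaotic — the response tracks the negative of the drive state.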
A no-fault compensation system for medical injury is long overdue.
Weisbrot, David; Breen, Kerry J
2012-09-03
The 2011 report of the Productivity Commission (PC) recommended the establishment of a no-fault national injury insurance scheme limited to "catastrophic" injury, including medical injury. The report is welcome, but represents a missed opportunity to establish simultaneously a much-needed no-fault scheme for all medical injuries. The existing indemnity scheme based on negligence remains a slow, costly, inefficient, ill-targeted and stress-creating system. A fault-based negligence scheme cannot deter non-intentional errors and does little to identify or prevent systems failures. In addition, it discourages reporting, and thus is antithetical to the modern focus on universal patient safety. A no-fault scheme has the potential to be fairer, quicker and no more costly, and to contribute to patient safety. No-fault schemes have been in place in at least six developed countries for many years. This extensive experience in comparable countries should be examined to assist Australia in designing an effective, comprehensive system. Before implementing the recommendations of the PC, the federal government should ask the Commission to study and promptly report on an ancillary no-fault scheme that covers all medical injury.
TripSense: A Trust-Based Vehicular Platoon Crowdsensing Scheme with Privacy Preservation in VANETs
Hu, Hao; Lu, Rongxing; Huang, Cheng; Zhang, Zonghua
2016-01-01
In this paper, we propose a trust-based vehicular platoon crowdsensing scheme, named TripSense, in VANETs. The proposed TripSense scheme introduces a trust-based system to evaluate vehicles' sensing abilities and then selects the more capable vehicles in order to improve the accuracy of sensing results. In addition, the sensing tasks are accomplished by platoon member (PM) vehicles and preprocessed by platoon head vehicles before the data are uploaded to the server. Hence, it is less time-consuming and more efficient than having the data submitted by individual PM vehicles, and it is therefore more suitable for ephemeral networks like VANETs. Moreover, our proposed TripSense scheme integrates unlinkable pseudo-ID techniques to achieve PM vehicle identity privacy, and employs a privacy-preserving sensing vehicle selection scheme that does not involve the PM vehicle's trust score, to keep its location privacy. Detailed security analysis shows that our proposed TripSense scheme not only achieves the desirable privacy requirements but also resists attacks launched by adversaries. In addition, extensive simulations are conducted to show the correctness and effectiveness of our proposed scheme. PMID:27258287
NASA Technical Reports Server (NTRS)
Kaushik, Dinesh K.; Baysal, Oktay
1997-01-01
Accurate computation of acoustic wave propagation may be more efficiently performed when their dispersion relations are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.
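Dispersion preservation of a spatial operator is commonly quantified through its modified wavenumber: the effective wavenumber the discrete stencil actually propagates. A small sketch comparing explicit central first-derivative stencils (a simpler diagnostic than the paper's full analysis):

```python
import numpy as np

def modified_wavenumber(kh, order):
    """Effective (modified) wavenumber k*h of explicit central
    first-derivative stencils applied to exp(i k x).  A scheme with good
    dispersion properties keeps k*h close to the exact kh over a wide band."""
    if order == 2:
        return np.sin(kh)
    if order == 4:
        return (8 * np.sin(kh) - np.sin(2 * kh)) / 6
    if order == 6:
        return (45 * np.sin(kh) - 9 * np.sin(2 * kh) + np.sin(3 * kh)) / 30
    raise ValueError(order)

kh = 1.0   # a moderately resolved wave, about 6.3 points per wavelength
errs = {p: abs(modified_wavenumber(kh, p) - kh) for p in (2, 4, 6)}
```

At this resolution the sixth-order stencil's wavenumber error is under one percent while the second-order stencil's exceeds fifteen percent, which is why higher-order and optimized stencils dominate computational aeroacoustics.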
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied for a CIR task, although the subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images together to learn a reliable subspace. We formally formulate this problem into a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Sleeping money: investigating the huge surpluses of social health insurance in China.
Liu, JunQiang; Chen, Tao
2013-12-01
The spreading of social health insurance (SHI) worldwide poses challenges for fledging public administrators. Inefficiency, misuse and even corruption threaten the stewardship of those newly established health funds. This article examines a tricky situation faced by China's largest SHI program: the basic health insurance (BHI) scheme for urban employees. BHI accumulated a 406 billion yuan surplus by 2009, although the reimbursement level was still low. Using a provincial level panel database, we find that the huge BHI surpluses are related to the (temporarily) decreasing dependency ratio, the steady growth of average wages, the extension of BHI coverage, and progress in social insurance agency building. The financial situations of local governments and risk pooling level also matter. Besides, medical savings accounts result in about one third of BHI surpluses. Although these findings are not causal, lessons drawn from this study can help to improve the governance and performance of SHI programs in developing countries.
NASA Astrophysics Data System (ADS)
Dinesh Kumar, S.; Nageshwar Rao, R.; Pramod Chakravarthy, P.
2017-11-01
In this paper, we consider a boundary value problem for a singularly perturbed delay differential equation of reaction-diffusion type. We construct an exponentially fitted numerical method using Numerov finite difference scheme, which resolves not only the boundary layers but also the interior layers arising from the delay term. An extensive amount of computational work has been carried out to demonstrate the applicability of the proposed method.
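The exponentially fitted Numerov scheme with a delay term is beyond a short sketch. As a baseline, the non-delay model problem -ε²u″ + u = 1 with boundary layers at both ends can be solved with a plain central-difference tridiagonal system; a fitted scheme improves on this as ε → 0, and the problem data below are illustrative assumptions.

```python
import numpy as np

def reaction_diffusion_bvp(eps=0.1, n=800):
    """Central-difference solve of the singularly perturbed problem
    -eps^2 u'' + u = 1 on (0,1) with u(0) = u(1) = 0, which develops
    boundary layers of width O(eps) at both ends."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    m = n - 1                            # interior unknowns
    c = eps**2 / h**2
    A = np.zeros((m, m))
    for i in range(m):
        A[i, i] = 2.0 * c + 1.0
        if i > 0:
            A[i, i - 1] = -c
        if i < m - 1:
            A[i, i + 1] = -c
    u = np.zeros(n + 1)                  # homogeneous boundary values
    u[1:-1] = np.linalg.solve(A, np.ones(m))
    return x, u

x, u = reaction_diffusion_bvp()
# Closed-form solution of the model problem for comparison:
exact = 1.0 - np.cosh((x - 0.5) / 0.1) / np.cosh(5.0)
err = np.max(np.abs(u - exact))
```

For small ε an unfitted uniform mesh needs many points inside the O(ε) layers, which is exactly the cost that exponentially fitted schemes like the one in the paper avoid.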
Traditional schemes for treatment of psoriatic arthritis.
McHugh, Neil J
2009-08-01
Prior to the availability of biologic agents such as anti-tumor necrosis factor (TNF), traditional treatment schemes for psoriatic arthritis were not extensively evaluated. While it appears that the newer forms of treatment are more effective, conventional agents still need to be scrutinized with similar methodology and will still have a role in those patients with less progressive disease, in combination with biologic agents, and in patients where biologics are not tolerated or have failed.
NASA Astrophysics Data System (ADS)
Bejaoui, Najoua
Pressurized water reactors (PWRs) constitute the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses tied to the geometric complexity of the core remain unresolved, such as the analysis of the neutron flux's behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross sections for the assemblies, condensed to a reduced number of energy groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level; this step is called the full-core (or whole-core) calculation. This decoupling of the two calculation steps is the origin of methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to generate the cross-section libraries becomes less pertinent for assemblies adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps to slow down neutrons leaving the reactor and return them to the core. This effect leads to two fission peaks in the fuel assemblies located at the core-reflector interface, the fission rate increasing due to the greater proportion of re-entrant neutrons. This change in the neutron spectrum arises deep inside the fuel located on the outskirts of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes the environment of the peripheral assemblies into account and generates equivalent neutronic properties for the reflector.
This scheme is tested on a core without control mechanisms and charged with fresh fuel. The results of this study show that the explicit representation of the reflector and the calculation of the peripheral assembly with our advanced scheme correct the energy spectrum at the core interface and increase the peripheral power by up to 12% compared with the reference scheme.
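The second (core-level) step can be caricatured with a toy one-group, 1-D diffusion eigenvalue problem for a reflected slab, solved by power iteration. All material data below are illustrative round numbers, not PWR values, and a real core calculation is multigroup and three-dimensional.

```python
import numpy as np

def reflected_slab_keff(core_cm=80.0, refl_cm=20.0, h=1.0):
    """One-group finite-difference diffusion eigenvalue problem for a
    slab core with a reflector on each side and zero flux at the outer
    edges: -(D phi')' + Sa phi = (1/k) nuSf phi, by power iteration."""
    L = core_cm + 2.0 * refl_cm
    n = int(L / h) - 1                       # interior mesh points
    x = np.linspace(h, L - h, n)
    in_core = (x > refl_cm) & (x < refl_cm + core_cm)
    D    = np.where(in_core, 1.0, 1.2)       # diffusion coefficient (cm)
    Sa   = np.where(in_core, 0.060, 0.010)   # absorption (1/cm)
    nuSf = np.where(in_core, 0.065, 0.0)     # fission production (1/cm)
    A = np.zeros((n, n))
    for i in range(n):
        Dm = 0.5 * (D[i] + D[i - 1]) if i > 0 else D[i]
        Dp = 0.5 * (D[i] + D[i + 1]) if i < n - 1 else D[i]
        A[i, i] = (Dm + Dp) / h**2 + Sa[i]
        if i > 0:
            A[i, i - 1] = -Dm / h**2
        if i < n - 1:
            A[i, i + 1] = -Dp / h**2
    phi, k = np.ones(n), 1.0
    for _ in range(300):                     # power iteration on the fission source
        phi_new = np.linalg.solve(A, nuSf * phi / k)
        k *= (nuSf @ phi_new) / (nuSf @ phi)
        phi = phi_new / phi_new.max()
    return x, phi, k

x, phi, k = reflected_slab_keff()
```

Even this crude model reproduces the qualitative role of the reflector described above: neutrons leaking from the core are returned, flattening the flux shape relative to a bare slab and raising the multiplication factor.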
Gandhi, Nilima; Bhavsar, Satyendra P; Reiner, Eric J; Chen, Tony; Morse, Dave; Arhonditsis, George B; Drouillard, Ken G
2015-01-06
Polychlorinated biphenyls (PCBs) remain chemicals of concern more than three decades after the ban on their production. Technical mixture-based total PCB measurements are unreliable due to weathering and degradation, while detailed full congener-specific measurements can be time-consuming and costly for large studies. Measurements using a subset of indicator PCBs (iPCBs) have been considered appropriate; however, inclusion of different PCB congeners in various iPCB schemes makes it challenging to readily compare data. Here, using an extensive data set, we examine the performance of the existing iPCB3 (PCB 138, 153, and 180), iPCB6 (iPCB3 plus 28, 52, and 101) and iPCB7 (iPCB6 plus 118) schemes, and new iPCB schemes, in estimating total PCB congener (∑PCB) and dioxin-like PCB toxic equivalent (dlPCB-TEQ) concentrations in sport fish fillets and the whole body of juvenile fish. The coefficients of determination (R²) for regressions conducted using logarithmically transformed data suggest that inclusion of an increased number of PCBs in an iPCB improves the relationship with ∑PCB but not with dlPCB-TEQs. Overall, the novel iPCB3 (PCB 95, 118, and 153), iPCB4 (iPCB3 plus 138) and iPCB5 (iPCB4 plus 110) presented in this study and the existing iPCB6 and iPCB7 are the most optimal indicators, while the current iPCB3 should be avoided. Measurement of ∑PCB based on a more detailed analysis (50+ congeners) is also overall a good approach for assessing PCB contamination and for tracking PCB origin in fish. Relationships among the existing and new iPCB schemes have been presented to facilitate their interconversion. The iPCB6-equivalent levels for the 6.5 and 10 pg/g benchmarks of dlPCB-TEQ05 are about 50 and 120 ng/g ww, respectively, which are lower than the corresponding iPCB6 limits of 125 and 300 ng/g ww set by the European Union.
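The regression design described here — ordinary least squares on logarithmically transformed totals against an indicator-subset sum — can be sketched on synthetic stand-in data; no real congener data are used below, and the coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Synthetic stand-in for congener data: log10 of an iPCB6-style indicator
# sum, and a log10 total-PCB that tracks it with modest scatter.
log_ipcb = rng.normal(2.0, 0.8, n)
log_total = 0.35 + 1.02 * log_ipcb + rng.normal(0.0, 0.1, n)

# OLS on the logarithmically transformed data, mirroring the abstract's
# regression design; beta[0] is the intercept, beta[1] the slope.
X = np.column_stack([np.ones(n), log_ipcb])
beta, *_ = np.linalg.lstsq(X, log_total, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((log_total - pred)**2) / np.sum((log_total - log_total.mean())**2)
```

The fitted slope and R² are the quantities the study compares across iPCB schemes; interconversion between two schemes amounts to chaining two such fits.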
Upwind schemes and bifurcating solutions in real gas computations
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
The area of high speed flow is seeing a renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), Space Shuttle, and future civil transport concepts. Upwind schemes to solve such flows have become increasingly popular in the last decade due to their excellent shock capturing properties. In the first part of this paper the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on the above scheme are presented to demonstrate the feasibility, accuracy and efficiency of the proposed scheme. One of the test problems is a Chapman-Jouguet detonation problem for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends both on the upwinding scheme used and the limiter employed to obtain second order accuracy. For example, the Osher scheme gives the correct CJ solution when the superbee limiter is used, but gives the spurious solution when the van Leer limiter is used. With the Roe scheme the spurious solution is obtained for all limiters.
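The limiter sensitivity noted above can be reproduced in miniature on linear advection: a Sweby-type flux-limited upwind scheme where only the limiter changes. The grid, CFL number, and square-wave data below are arbitrary choices, and this scalar setting only illustrates the limiter's effect, not the detonation bifurcation itself.

```python
import numpy as np

def advect_tvd(u, nu, steps, limiter):
    """Flux-limited upwind scheme (Sweby form) for u_t + a u_x = 0 with
    a > 0, periodic boundaries, and CFL number nu = a*dt/dx.  A TVD
    limiter suppresses the oscillations of an unlimited 2nd-order scheme."""
    for _ in range(steps):
        dm = u - np.roll(u, 1)                   # u_i - u_{i-1}
        dp = np.roll(u, -1) - u                  # u_{i+1} - u_i
        r = dm / np.where(np.abs(dp) > 1e-12, dp, 1e-12)
        # Limited flux (divided by a): f_i = u_i + 0.5*(1-nu)*phi(r_i)*dp_i
        f = u + 0.5 * (1.0 - nu) * limiter(r) * dp
        u = u - nu * (f - np.roll(f, 1))
    return u

minmod = lambda r: np.maximum(0.0, np.minimum(1.0, r))
superbee = lambda r: np.maximum.reduce(
    [np.zeros_like(r), np.minimum(1.0, 2.0 * r), np.minimum(2.0, r)])

n = 100
x = (np.arange(n) + 0.5) / n
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # square wave
u_mm = advect_tvd(u0.copy(), nu=0.4, steps=250, limiter=minmod)    # one period
u_sb = advect_tvd(u0.copy(), nu=0.4, steps=250, limiter=superbee)
```

Both limiters keep the solution oscillation-free, but superbee's sharper (compressive) behavior preserves the discontinuities noticeably better than minmod over a full period, illustrating how much of the discrete solution the limiter choice controls.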
Active control of the lifetime of excited resonance states by means of laser pulses.
García-Vela, A
2012-04-07
Quantum control of the lifetime of a system in an excited resonance state is investigated theoretically by creating coherent superpositions of overlapping resonances. This control scheme exploits the quantum interference occurring between the overlapping resonances, which can be controlled by varying the width of the laser pulse that creates the superposition state. The scheme is applied to a realistic model of the Br2(B)-Ne predissociation decay dynamics through a three-dimensional wave packet method. It is shown that extensive control of the system lifetime is achievable, both enhancing and damping it remarkably. An experimental realization of the control scheme is suggested.
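The lifetime-control mechanism — interference between overlapping decaying resonances — can be sketched with a two-resonance survival probability. The energies, widths, and superposition coefficients below are illustrative numbers, not Br2-Ne values.

```python
import numpy as np

def survival(t, c, E, Gamma):
    """Survival probability of a coherent superposition of decaying
    resonances, P(t) = |sum_j c_j exp(-i E_j t - Gamma_j t / 2)|^2
    (hbar = 1).  For overlapping resonances the cross terms make P(t)
    depend on the coefficients c_j, which is the handle the pulse
    width provides experimentally."""
    amp = np.sum(c * np.exp(-1j * E * t - 0.5 * Gamma * t))
    return np.abs(amp)**2

E = np.array([0.0, 1.0])          # resonance positions (arbitrary units)
Gamma = np.array([0.2, 0.5])      # widths: overlapping since |E2 - E1| ~ Gamma
c_plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
c_minus = np.array([1.0, -1.0]) / np.sqrt(2.0)
p_plus = survival(2.0, c_plus, E, Gamma)
p_minus = survival(2.0, c_minus, E, Gamma)
```

Flipping the relative sign of the coefficients flips the sign of the interference term, so one superposition decays markedly faster than the other at the same time — the enhancement/damping of the effective lifetime the abstract refers to.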
Isbarn, Hendrik; Briganti, Alberto; De Visschere, Pieter J L; Fütterer, Jurgen J; Ghadjar, Pirus; Giannarini, Gianluca; Ost, Piet; Ploussard, Guillaume; Sooriakumaran, Prasanna; Surcel, Christian I; van Oort, Inge M; Yossepowitch, Ofer; van den Bergh, Roderick C N
2015-04-01
Prostate biopsy (PB) is the gold standard for the diagnosis of prostate cancer (PCa). However, the optimal number of biopsy cores remains debatable. We sought to compare contemporary standard (10-12 cores) vs. saturation (≥18 cores) schemes on initial as well as repeat PB. A non-systematic review of the literature was performed from 2000 through 2013. Studies of highest evidence (randomized controlled trials, prospective non-randomized studies, and retrospective reports of high quality) comparing standard vs. saturation schemes on initial and repeat PB were evaluated. Outcome measures were overall PCa detection rate, detection rate of insignificant PCa, and procedure-associated morbidity. On initial PB, there is growing evidence that a saturation scheme is associated with a higher PCa detection rate compared to a standard one in men with lower PSA levels (<10 ng/ml), larger prostates (>40 cc), or lower PSA density values (<0.25 ng/ml/cc). However, these cut-offs are not uniform and differ among studies. Detection rates of insignificant PCa do not differ in a significant fashion between standard and saturation biopsies. On repeat PB, PCa detection rate is likewise higher with saturation protocols. Estimates of insignificant PCa vary widely due to differing definitions of insignificant disease. However, the rates of insignificant PCa appear to be comparable for the schemes in patients with only one prior negative biopsy, while saturation biopsy seems to detect more cases of insignificant PCa compared to standard biopsy in men with two or more prior negative biopsies. Very extensive sampling is associated with a high rate of acute urinary retention, whereas other severe adverse events, such as sepsis, appear not to occur more frequently with saturation schemes. Current evidence suggests that saturation schemes are associated with a higher PCa detection rate compared to standard ones on initial PB in men with lower PSA levels or larger prostates, and on repeat PB. 
Since most data are derived from retrospective studies, other endpoints such as detection rate of insignificant disease - especially on repeat PB - show broad variations throughout the literature and must, thus, be interpreted with caution. Future prospective controlled trials should be conducted to compare extended templates with newer techniques, such as image-guided sampling, in order to optimize PCa diagnostic strategy.
Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif
2008-03-01
High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets, and the second level provides high-level knowledge management and reasoning services. We then present Cellular Imaging Markup Language, an XML-based language for the modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
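A sketch of what an XML-based spatiotemporal representation and a simple event extraction can look like; the element names below are invented for illustration and are not the actual CIML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical CIML-style fragment: one cell object whose position is
# sampled at two time points, from which a simple spatiotemporal event
# ("movement") can be derived.
doc = """
<cell id="c42">
  <observation t="0.0"><pos x="10.0" y="5.0"/></observation>
  <observation t="1.0"><pos x="13.0" y="9.0"/></observation>
</cell>
"""

root = ET.fromstring(doc)
samples = [(float(o.get("t")),
            float(o.find("pos").get("x")),
            float(o.find("pos").get("y"))) for o in root.iter("observation")]
(t0, x0, y0), (t1, x1, y1) = samples
speed = ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5 / (t1 - t0)
moved = speed > 1.0        # toy "movement event" predicate on the track
```

Composing such primitive events (movement, division, contact) into higher-level patterns is the kind of spatiotemporal event composition and matching the paper describes.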
A positive and entropy-satisfying finite volume scheme for the Baer-Nunziato model
NASA Astrophysics Data System (ADS)
Coquel, Frédéric; Hérard, Jean-Marc; Saleh, Khaled
2017-02-01
We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer-Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer-Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer-Nunziato model, namely Schwendeman-Wahle-Kapila's Godunov-type scheme [39] and Tokareva-Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.
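The "fully computable CFL condition" referred to above amounts to bounding the time step by the fastest characteristic speed on the grid. A minimal sketch (the function name and the 0.9 safety factor are illustrative, not taken from the paper):

```python
import numpy as np

def cfl_time_step(u, c, dx, cfl=0.9):
    """Largest stable time step for an explicit scheme on a uniform grid.

    u  : array of phase velocities per cell
    c  : array of sound speeds per cell
    dx : cell width
    The time step is bounded by the fastest characteristic speed |u| + c.
    """
    max_speed = np.max(np.abs(u) + c)
    return cfl * dx / max_speed
```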
A positive and entropy-satisfying finite volume scheme for the Baer–Nunziato model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coquel, Frédéric, E-mail: frederic.coquel@cmap.polytechnique.fr; Hérard, Jean-Marc, E-mail: jean-marc.herard@edf.fr; Saleh, Khaled, E-mail: saleh@math.univ-lyon1.fr
We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer–Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer–Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer–Nunziato model, namely Schwendeman–Wahle–Kapila's Godunov-type scheme [39] and Tokareva–Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.
The Osher scheme for real gases
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1990-01-01
An extension of Osher's approximate Riemann solver to include gases with an arbitrary equation of state is presented. By a judicious choice of thermodynamic variables, the Riemann invariants are reduced to quadratures which are then approximated numerically. The extension is rigorous and does not involve any further assumptions or approximations over the ideal gas case. Numerical results are presented to demonstrate the feasibility and accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Maire, Pierre-Henri; Abgrall, Rémi; Breil, Jérôme; Loubère, Raphaël; Rebourcet, Bernard
2013-02-01
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic-plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure, which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order accuracy is achieved by developing a two-dimensional Lagrangian extension of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
NASA Technical Reports Server (NTRS)
Hornberger, G. M.; Rastetter, E. B.
1982-01-01
A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.
A Higher Order Iterative Method for Computing the Drazin Inverse
Soleymani, F.; Stanimirović, Predrag S.
2013-01-01
A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, along with its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper. PMID:24222747
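The paper's higher-order scheme is not reproduced here, but the family it belongs to is easy to illustrate with the classical second-order Newton-Schulz iteration X_{k+1} = X_k(2I - AX_k) for the matrix inverse (a sketch of the general idea, not the paper's method):

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Approximate A^{-1} via the quadratically convergent Newton-Schulz
    iteration X <- X (2I - A X). The starting guess X0 = A^T / (||A||_1
    ||A||_inf) is a standard choice that guarantees convergence for
    nonsingular A."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```

Higher-order variants of the kind studied in the paper replace the inner polynomial with one of higher degree, trading more matrix products per step for fewer steps.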
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2017-12-01
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
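The strong stability preserving Runge-Kutta time stepping mentioned in the review can be sketched in a few lines. This is the standard third-order Shu-Osher form, shown as a generic illustration rather than code from any particular astrophysics package:

```python
def ssp_rk3_step(u, L, dt):
    """One step of the third-order strong-stability-preserving Runge-Kutta
    scheme (Shu-Osher form) for du/dt = L(u). Each stage is a convex
    combination of forward-Euler updates, so any stability property of the
    Euler step carries over under the same CFL restriction."""
    u1 = u + dt * L(u)                        # first Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))  # convex blend with stage two
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))
```

For the linear test problem du/dt = -u the scheme reproduces the exponential decay to third order in dt.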
Architecture of security management unit for safe hosting of multiple agents
NASA Astrophysics Data System (ADS)
Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques
1999-04-01
In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices are exceptions, but they lack the processing power and memory needed for such applications (e.g. smart cards). This paper proposes an architecture for a secure processor, in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems, and can be used for both general applications and critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and do not require it to be modified.
Two-level schemes for the advection equation
NASA Astrophysics Data System (ADS)
Vabishchevich, Petr N.
2018-06-01
The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms; this advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by numerical results for a model two-dimensional problem.
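The explicit Lax-Wendroff scheme whose stability conditions the paper establishes can be sketched for constant-coefficient 1-D advection on a periodic grid (a finite-difference illustration, not the paper's finite-element variant):

```python
import numpy as np

def lax_wendroff_step(u, a, dt, dx):
    """One explicit Lax-Wendroff step for u_t + a u_x = 0, periodic grid.

    Second-order in space and time; stable under the CFL condition
    |a| dt / dx <= 1. The update is the Taylor expansion in time with
    spatial derivatives replaced by centered differences."""
    c = a * dt / dx
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
```

On a periodic grid the update conserves the discrete total of u exactly, since both difference stencils telescope to zero.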
Vision and the representation of the surroundings in spatial memory
Tatler, Benjamin W.; Land, Michael F.
2011-01-01
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, the combination of visual input, egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience. PMID:21242146
Error suppression via complementary gauge choices in Reed-Muller codes
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Jochym-O'Connor, Tomas
2017-09-01
Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.
Heralded creation of photonic qudits from parametric down-conversion using linear optics
NASA Astrophysics Data System (ADS)
Yoshikawa, Jun-ichi; Bergmann, Marcel; van Loock, Peter; Fuwa, Maria; Okada, Masanori; Takase, Kan; Toyama, Takeshi; Makino, Kenzo; Takeda, Shuntaro; Furusawa, Akira
2018-05-01
We propose an experimental scheme to generate, in a heralded fashion, arbitrary quantum superpositions of two-mode optical states with a fixed total photon number n based on weakly squeezed two-mode squeezed state resources (obtained via weak parametric down-conversion), linear optics, and photon detection. Arbitrary d -level (qudit) states can be created this way where d =n +1 . Furthermore, we experimentally demonstrate our scheme for n =2 . The resulting qutrit states are characterized via optical homodyne tomography. We also discuss possible extensions to more than two modes concluding that, in general, our approach ceases to work in this case. For illustration and with regards to possible applications, we explicitly calculate a few examples such as NOON states and logical qubit states for quantum error correction. In particular, our approach enables one to construct bosonic qubit error-correction codes against amplitude damping (photon loss) with a typical suppression of √{n }-1 losses and spanned by two logical codewords that each correspond to an n -photon superposition for two bosonic modes.
Qiu, Shuming; Xu, Guoai; Ahmad, Haseeb; Guo, Yanhui
2018-01-01
The Session Initiation Protocol (SIP) is a widely used and well-regarded communication protocol employed to regulate signaling and to control multimedia communication sessions. Recently, Kumari et al. proposed an improved smart-card-based authentication scheme for SIP based on Farash's scheme. Farash claimed that his protocol is resistant against various known attacks. However, we observe notable flaws in Farash's protocol. We point out that Farash's protocol is prone to key-compromise impersonation attacks and is unable to provide pre-verification in the smart card, efficient password change, and perfect forward secrecy. To overcome these limitations, in this paper we present an enhanced authentication mechanism based on Kumari et al.'s scheme. We prove that the proposed protocol not only overcomes the issues in Farash's scheme, but can also resist all known attacks. We also provide a security analysis of the proposed scheme with the help of the widespread AVISPA (Automated Validation of Internet Security Protocols and Applications) software. Finally, comparing with earlier proposals in terms of security and efficiency, we conclude that the proposed protocol is efficient and more secure.
Advection of Microphysical Scalars in Terminal Area Simulation System (TASS)
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2011-01-01
The Terminal Area Simulation System (TASS) is a large eddy scale atmospheric flow model with extensive turbulence and microphysics packages. It has been applied successfully in the past to a diverse set of problems ranging from prediction of severe convective events (Proctor et al. 2002), tracking storms and for simulating weapons effects such as the dispersion and fallout of fission debris (Bacon and Sarma 1991), etc. More recently, TASS has been used for predicting the transport and decay of wake vortices behind aircraft (Proctor 2009). An essential part of the TASS model is its comprehensive microphysics package, which relies on the accurate computation of microphysical scalar transport. This paper describes an evaluation of the Leonard scheme implemented in the TASS model for transporting microphysical scalars. The scheme is validated against benchmark cases with exact solutions and compared with two other schemes - a Monotone Upstream-centered Scheme for Conservation Laws (MUSCL)-type scheme after van Leer and LeVeque's high-resolution wave propagation method. Finally, a comparison between the schemes is made against an incident of severe tornadic super-cell convection near Del City, Oklahoma.
A New Approach for Constructing Highly Stable High Order CESE Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2010-01-01
A new approach is devised to construct high order CESE schemes which would avoid the common shortcomings of traditional high order schemes, including: (a) susceptibility to computational instabilities; (b) computational inefficiency due to their local implicit nature (i.e., at each mesh point, a system of linear/nonlinear equations involving all the mesh variables associated with that point must be solved); (c) use of large and elaborate stencils, which complicates boundary treatments and also makes efficient parallel computing much harder; (d) difficulties in applications involving complex geometries; and (e) use of problem-specific techniques which are needed to overcome stability problems but often cause undesirable side effects. In fact it will be shown that, with the aid of a conceptual leap, one can build from a given 2nd-order CESE scheme its 4th-, 6th-, 8th-,... order versions which have the same stencil and same stability conditions as the 2nd-order scheme, and also retain all other advantages of the latter scheme. A sketch of multidimensional extensions will also be provided.
Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme
NASA Astrophysics Data System (ADS)
Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen
2016-06-01
Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In existing studies, many typical game models, such as the prisoner's dilemma, the battle of the sexes, and the Hawk-Dove game, have been extensively explored using the quantization approach. Following a similar method, here several game models of opinion formation are quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can fascinatingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.
A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
White, J. A.; Morrison, J. H.
1999-01-01
A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized forms of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space-marching algorithms developed to improve convergence rate and/or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximation storage, full multi-grid scheme is also described, which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high aspect ratio grids.
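The Gauss-Seidel relaxation that the multi-grid cycle accelerates is simple to sketch. This scalar dense-matrix version is illustrative only (unrelated to VULCAN's actual implementation) and converges for diagonally dominant systems:

```python
import numpy as np

def gauss_seidel(A, b, x, sweeps=50):
    """Gauss-Seidel relaxation sweeps for A x = b.

    Each unknown is updated in place using the freshest available values
    of the other unknowns; multigrid uses a few such sweeps as a smoother
    on each grid level."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```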
Nagy-Soper Subtraction: a Review
NASA Astrophysics Data System (ADS)
Robens, Tania
2013-07-01
We review an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters, and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e- → 3 jets at NLO, which were recently published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.
Feature Visibility Limits in the Non-Linear Enhancement of Turbid Images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.
2003-01-01
The advancement of non-linear processing methods for generic automatic clarification of turbid imagery has led us from extensions of entirely passive multiscale Retinex processing to a new framework of active measurement and control of the enhancement process called the Visual Servo. In the process of testing this new non-linear computational scheme, we have identified that feature visibility limits in the post-enhancement image now simplify to a single signal-to-noise figure of merit: a feature is visible if the feature-background signal difference is greater than the RMS noise level. In other words, a signal-to-noise limit of approximately unity constitutes a lower limit on feature visibility.
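The visibility limit described above reduces to a one-line check (the function name is ours, for illustration):

```python
def feature_visible(feature_level, background_level, rms_noise):
    """Visibility criterion from the paper: a feature is visible when the
    feature-background signal difference exceeds the RMS noise level,
    i.e. the post-enhancement signal-to-noise ratio exceeds unity."""
    return abs(feature_level - background_level) > rms_noise
```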
Generation of multiphoton Greenberger-Horne-Zeilinger state and its two kinds of teleportation
NASA Astrophysics Data System (ADS)
Song, Jia-Min; Chang, Di; Huang, Yong-Chang
2012-02-01
We propose a comprehensive experimental scheme to generate and teleport GHZ states of any number of photons as well as to accomplish the process of open-destination teleportation of a single photon's arbitrary state. The equipment and techniques which are used in our scheme are all feasible under current technology. Moreover, we make a direct extension of the above cases and investigate the open-destination teleportation of any M-photon general GHZ states with a brief diagram.
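Numerically, the target n-photon GHZ state is easy to write down in the computational basis; this illustrates the state itself, not the optical generation scheme:

```python
import numpy as np

def ghz_state(n):
    """State vector of the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)
    in the 2^n-dimensional computational basis."""
    psi = np.zeros(2 ** n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi
```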
NASA Astrophysics Data System (ADS)
Padula, S.; Harou, J. J.
2012-12-01
Water utilities in England and Wales are regulated natural monopolies called 'water companies'. Water companies must obtain periodic regulatory approval for all investments (new supply infrastructure or demand management measures). Both water companies and their regulators use results from least economic cost capacity expansion optimisation models to develop or assess water supply investment plans. This presentation first describes the formulation of a flexible supply-demand planning capacity expansion model for water system planning. The model uses a mixed integer linear programming (MILP) formulation to choose the least-cost schedule of future supply schemes (reservoirs, desalination plants, etc.), demand management (DM) measures (leakage reduction, water efficiency and metering options) and bulk transfers. Decisions include what schemes to implement, when to do so, how to size schemes and how much to use each scheme during each year of an n-year planning horizon (typically 30 years). In addition to capital and operating (fixed and variable) costs, the estimated social and environmental costs of schemes are considered. Each proposed scheme is costed discretely at one or more capacities following regulatory guidelines. The model uses a node-link network structure: water demand nodes are connected to supply and demand management (DM) options (represented as nodes) or to other demand nodes (transfers). Yields from existing and proposed schemes are estimated separately using detailed water resource system simulation models evaluated over the historical period. The model simultaneously considers multiple demand scenarios to ensure demands are met at required reliability levels; use levels of each scheme are evaluated for each demand scenario and weighted by scenario likelihood so that operating costs are accurately evaluated. Multiple interdependency relationships between schemes (pre-requisites, mutual exclusivity, start dates, etc.)
can be accounted for by additional constraints. User-defined annual water-saving profiles are used for DM schemes so that water conservation 'yields' can follow observed patterns. A two-stage optimization procedure is applied to deal with network infeasibilities which appear in large applications. We apply the model to a regional system of seven water companies in the South East of England, the driest part of the UK with its largest and fastest growing population. The model's spatial units are water supply zones, i.e., interconnected zones of equal supply reliability; each company contains between 3 and 8 of these. Economic benefits of greater sharing of resources among water companies (regional water transfers) are evaluated by considering bi-directional interconnections between all neighboring supply zones. Next we describe an extension of the model that investigates how current regulations incentivize companies to invest, in an attempt to understand how better regulations could incentivize more water transfers. Finally, the current assumption of perfect cooperation between companies is relaxed to reflect the fact that each company is a private firm seeking to maximize its own benefits. Limitations and advantages of the formulations are discussed and recommendations for capacity expansion modeling are made.
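At toy scale, the least-cost scheme-selection core of such a MILP can be brute-forced, which makes the structure of the decision explicit. The scheme names, costs and yields below are invented for illustration; a real model would use a MILP solver and add scheduling, sizing and interdependency constraints:

```python
from itertools import product

def least_cost_plan(schemes, demand_shortfall):
    """Pick the cheapest subset of candidate schemes whose combined yield
    covers the demand shortfall. Each scheme is a dict with 'name',
    'cost' and 'yield'. Exhaustive search over all subsets (toy analogue
    of the MILP's binary build decisions)."""
    best = None
    for choice in product([0, 1], repeat=len(schemes)):
        total_yield = sum(s["yield"] * c for s, c in zip(schemes, choice))
        total_cost = sum(s["cost"] * c for s, c in zip(schemes, choice))
        if total_yield >= demand_shortfall and (best is None or total_cost < best[0]):
            best = (total_cost, [s["name"] for s, c in zip(schemes, choice) if c])
    return best
```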
Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus
NASA Astrophysics Data System (ADS)
Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta
2006-11-01
The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
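A generic member of the predictor-corrector family for such SDEs pairs an Euler-Maruyama predictor with a trapezoidal drift corrector that reuses the same Wiener increment. This is a sketch of the general idea; the paper's schemes are derived via Itô calculus and differ in detail:

```python
import numpy as np

def predictor_corrector_step(x, drift, dt, rng):
    """One predictor-corrector step for dX = drift(X) dt + dW (unit diffusion).

    The Euler-Maruyama predictor is refined by averaging the drift at the
    current and predicted points, which removes the leading first-order
    bias in the time step dt."""
    dw = rng.normal(0.0, np.sqrt(dt), size=np.shape(x))
    x_pred = x + drift(x) * dt + dw                        # predictor
    return x + 0.5 * (drift(x) + drift(x_pred)) * dt + dw  # corrector
```

For an Ornstein-Uhlenbeck drift, an ensemble started at x = 1 decays toward the exact mean e^{-t}.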
NASA Astrophysics Data System (ADS)
Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun
2018-05-01
This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.
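The baseline that such watermarking schemes build on is additive spread-spectrum embedding with a correlation detector. A generic digital sketch (not the PT-LCT optical method itself; function names and the 0.05 strength are illustrative):

```python
import numpy as np

def ss_embed(host, key, strength=0.05):
    """Additive spread-spectrum embedding: add a key-generated +/-1
    pseudo-random carrier, scaled by the embedding strength, to the host."""
    carrier = np.random.default_rng(key).choice([-1.0, 1.0], size=host.shape)
    return host + strength * carrier, carrier

def ss_detect(signal, carrier):
    """Normalized correlation detector; the embedded mark shifts the
    correlation up by exactly the embedding strength."""
    return float(np.dot(signal, carrier)) / len(carrier)
```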
A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.
Lim, Meng-Hui; Teoh, Andrew Beng Jin
2013-02-01
Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.
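The separability shortfall of BRGC that motivates LSSC is easy to demonstrate: adjacent interval labels differ in one bit, but so can the labels of far-apart intervals, so Hamming distance does not track interval distance. A minimal sketch:

```python
def brgc(i, bits):
    """Binary Reflected Gray Code label of quantization interval i,
    as a fixed-width bit string."""
    return format(i ^ (i >> 1), "0{}b".format(bits))

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

# 2-bit BRGC labels intervals 0..3 as 00, 01, 11, 10. Intervals 0 and 3
# are maximally far apart, yet their labels differ in a single bit:
# the distance-preservation defect discussed above.
```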
A second-order shock-adaptive Godunov scheme based on the generalized Lagrangian formulation
NASA Astrophysics Data System (ADS)
Lepage, Claude
Application of the Godunov scheme to the Euler equations of gas dynamics, based on the Eulerian formulation of flow, smears discontinuities (especially sliplines) over several computational cells, while the accuracy in the smooth flow regions is of the order of the cell width. Based on the generalized Lagrangian formulation (GLF), the Godunov scheme yields far superior results. By the use of coordinate streamlines in the GLF, the slipline (itself a streamline) is resolved crisply. Infinite shock resolution is achieved through the splitting of shock cells, while the accuracy in the smooth flow regions is improved using a nonconservative formulation of the governing equations coupled to a second-order extension of the Godunov scheme. Furthermore, the GLF requires no grid generation for boundary value problems, and the simple structure of the solution to the Riemann problem in the GLF is exploited in the numerical implementation of the shock-adaptive scheme. Numerical experiments reveal high efficiency and unprecedented resolution of shock and slipline discontinuities.
Modelling debris transport within glaciers by advection in a full-Stokes ice flow model
NASA Astrophysics Data System (ADS)
Wirbel, Anna; Jarosch, Alexander H.; Nicholson, Lindsey
2018-01-01
Glaciers with extensive surface debris cover respond differently to climate forcing than those without supraglacial debris. In order to include debris-covered glaciers in projections of glaciogenic runoff and sea level rise and to understand the paleoclimate proxy recorded by such glaciers, it is necessary to understand the manner and timescales over which a supraglacial debris cover develops. Because debris is delivered to the glacier by processes that are heterogeneous in space and time, and these debris inclusions are altered during englacial transport through the glacier system, correctly determining where, when and how much debris is delivered to the glacier surface requires knowledge of englacial transport pathways and deformation. To achieve this, we present a model of englacial debris transport in which we couple an advection scheme to a full-Stokes ice flow model. The model performs well in numerical benchmark tests, and we present both 2-D and 3-D glacier test cases that, for a set of prescribed debris inputs, reproduce the englacial features, deformation thereof and patterns of surface emergence predicted by theory and observations of structural glaciology. In a future step, coupling this model to (i) a debris-aware surface mass balance scheme and (ii) a supraglacial debris transport scheme will enable the co-evolution of debris cover and glacier geometry to be modelled.
Performance-cost evaluation methodology for ITS equipment deployment
DOT National Transportation Integrated Search
2000-09-01
Although extensive Intelligent Transportation Systems (ITS) technology is being deployed in the field, little analysis is being performed to evaluate the benefits of implementation schemes. Benefit analysis is particularly in need for one popular ITS...
Visual tracking using objectness-bounding box regression and correlation filters
NASA Astrophysics Data System (ADS)
Mbelwa, Jimmy T.; Zhao, Qingjie; Lu, Yao; Wang, Fasheng; Mbise, Mercy
2018-03-01
Visual tracking is a fundamental problem in computer vision with extensive application domains in surveillance and intelligent systems. Recently, correlation filter-based tracking methods have shown a great achievement in terms of robustness, accuracy, and speed. However, such methods have a problem of dealing with fast motion (FM), motion blur (MB), illumination variation (IV), and drifting caused by occlusion (OCC). To solve this problem, a tracking method that integrates objectness-bounding box regression (O-BBR) model and a scheme based on kernelized correlation filter (KCF) is proposed. The scheme based on KCF is used to improve the tracking performance of FM and MB. For handling drift problem caused by OCC and IV, we propose objectness proposals trained in bounding box regression as prior knowledge to provide candidates and background suppression. Finally, scheme KCF as a base tracker and O-BBR are fused to obtain a state of a target object. Extensive experimental comparisons of the developed tracking method with other state-of-the-art trackers are performed on some of the challenging video sequences. Experimental comparison results show that our proposed tracking method outperforms other state-of-the-art tracking methods in terms of effectiveness, accuracy, and robustness.
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caron, Justin; Cohen, Stuart M; Brown, Maxwell
2018-02-01
This paper provides a comprehensive exploration of the impacts of economy-wide CO2 taxes in the U.S., simulated using a detailed electric sector model [the National Renewable Energy Laboratory's Regional Energy Deployment System (ReEDS)] linked with a computable general equilibrium model of the U.S. economy [the Massachusetts Institute of Technology's U.S. Regional Energy Policy (USREP) model]. We implement various tax trajectories and options for using the revenue collected by the tax and describe their impact on household welfare and its distribution across income levels. Overall, we find that linking our top-down/bottom-up models affects estimates of the distribution and cost of emission reductions as well as the amount of revenue collected, but that these are mostly insensitive to the way the revenue is recycled. We find that substantial abatement opportunities through fuel switching and renewable penetration in the electricity sector allow the economy to accommodate extensive emissions reductions at relatively low cost. While welfare impacts are largely determined by the choice of revenue recycling scheme, all tax levels and schemes provide net benefits when accounting for the avoided global climate change benefits of emission reductions. Recycling revenue through capital income tax rebates is more efficient than labor income tax rebates or uniform transfers to households. While capital tax rebates substantially reduce the overall costs of emission abatement, they benefit high-income households the most and are regressive. We more generally identify a clear trade-off between equity and efficiency across the various recycling options. However, we show through a set of hybrid recycling schemes that it is possible to limit inequalities in impacts, particularly those on the lowest income households, at relatively little incremental cost.
Some results on numerical methods for hyperbolic conservation laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Huanan
1989-01-01
This dissertation contains some results on the numerical solutions of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions, otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolutions of the contact discontinuities at very little additional costs. (3) He introduces a space-time mesh refinement method for time dependent problems.
Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.
Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin
2005-03-01
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c_0 2^(-c_1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
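The prune step of an R-D optimized binary-tree segmentation can be sketched in a few lines: split recursively, fit a polynomial per segment, and keep the parent whenever its Lagrangian cost D + λR beats its children's. Everything below (the crude bits-per-coefficient rate model, the λ value, the function names) is an illustrative assumption of ours, not the paper's coder.

```python
# Toy R-D prune for binary-tree segmentation of a 1-D piecewise-polynomial
# signal (illustrative sketch; rate model and constants are invented).
import numpy as np

def fit_cost(x, y, deg, lam, bits_per_coef=16):
    """Lagrangian cost D + lam*R of one polynomial segment."""
    coef = np.polyfit(x, y, deg)
    dist = float(np.sum((np.polyval(coef, x) - y) ** 2))
    rate = bits_per_coef * (deg + 1)          # crude rate model
    return dist + lam * rate

def prune_tree(x, y, deg=1, lam=1e-3, min_len=4):
    """Return the number of leaf segments after R-D pruning."""
    if len(x) <= min_len:
        return 1
    mid = len(x) // 2
    parent = fit_cost(x, y, deg, lam)
    kids = fit_cost(x[:mid], y[:mid], deg, lam) + fit_cost(x[mid:], y[mid:], deg, lam)
    if parent <= kids:
        return 1                               # prune: one segment suffices
    return prune_tree(x[:mid], y[:mid], deg, lam, min_len) + \
           prune_tree(x[mid:], y[mid:], deg, lam, min_len)

x = np.linspace(0, 1, 64)
y = np.where(x < 0.5, 2 * x, 2 * x - 1)        # piecewise-linear test signal
print(prune_tree(x, y))                        # → 2
```

A piecewise-linear signal with one breakpoint is represented by exactly two segments, which is the behavior behind the exponentially decaying D(R) bound.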
Nuclear Data Sheets for A = 42
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jun; Singh, Balraj
The experimental data are evaluated for known nuclides of mass number A = 42 (Al, Si, P, S, Cl, Ar, K, Ca, Sc, Ti, V, Cr). Detailed evaluated level properties and related information are presented, including adopted values of level and γ–ray energies, decay data (energies, intensities and placement of radiations), and other spectroscopic data. This work supersedes earlier full evaluations of A = 42 published by B. Singh, J.A. Cameron – Nucl.Data Sheets 92, 1 (2001) and P.M. Endt – Nucl. Phys. A521, 1 (1990); Errata and Addenda Nucl. Phys. A529, 763 (1991); Errata Nucl. Phys. A564, 609 (1993) (also P.M. Endt – Nucl. Phys. A633, 1 (1998) update). No excited states are known in {sup 42}Al, {sup 42}P, {sup 42}V and {sup 42}Cr, and structure information for {sup 42}Si and {sup 42}S is quite limited. There are no decay schemes available for the decay of {sup 42}Al, {sup 42}Si, {sup 42}P, {sup 42}V and {sup 42}Cr, while the decay schemes of {sup 42}Cl and {sup 42}Ti are incomplete in view of scarcity of data, and large gap between their Q–values and the highest energy levels populated in corresponding daughter nuclei. Structures of {sup 42}Ca, {sup 42}K, {sup 42}Sc and {sup 42}Ar nuclides remain the most extensively studied via many different nuclear reactions and decays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Pierre-Henri, E-mail: maire@celia.u-bordeaux1.fr; Abgrall, Rémi, E-mail: remi.abgrall@math.u-bordeau1.fr; Breil, Jérôme, E-mail: breil@celia.u-bordeaux1.fr
2013-02-15
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic–plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
Self-adjoint realisations of the Dirac-Coulomb Hamiltonian for heavy nuclei
NASA Astrophysics Data System (ADS)
Gallone, Matteo; Michelangeli, Alessandro
2018-02-01
We derive a classification of the self-adjoint extensions of the three-dimensional Dirac-Coulomb operator in the critical regime of the Coulomb coupling. Our approach is based solely upon the Kreĭn-Višik-Birman extension scheme, or equivalently on Grubb's universal classification theory, as opposed to previous works within the standard von Neumann framework. This lets the boundary condition of self-adjointness emerge, neatly and intrinsically, as a multiplicative constraint between the regular and singular parts of the functions in the domain of the extension, with the multiplicative constant also giving immediate information on the invertibility property and on the resolvent and spectral gap of the extension.
Pavement marking extensions for deceleration lanes.
DOT National Transportation Integrated Search
1974-01-01
Pavement markings have definite and important functions in a proper scheme of traffic control. One such marking, the pavement edge line, has received much favorable public reaction. One of the limitations of the edge line as conventionally applied is...
High order accurate solutions of viscous problems
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Turkel, Eli
1993-01-01
We consider a fourth order extension to MacCormack's scheme. The original extension was fourth order only for the inviscid terms but was second order for the viscous terms. We show how to modify the viscous terms so that the scheme is uniformly fourth order in the spatial derivatives. Applications are given to some boundary layer flows. In addition, for applications to shear flows the effect of the outflow boundary conditions are very important. We compare the accuracy of several of these different boundary conditions for both boundary layer and shear flows. Stretching at the outflow usually increases the oscillations in the numerical solution but the addition of a filtered sponge layer (with or without stretching) reduces such oscillations. The oscillations are generated by insufficient resolution of the shear layer. When the shear layer is sufficiently resolved then oscillations are not generated and there is less of a need for a nonreflecting boundary condition.
PRESAGE: PRivacy-preserving gEnetic testing via SoftwAre Guard Extension.
Chen, Feng; Wang, Chenghong; Dai, Wenrui; Jiang, Xiaoqian; Mohammed, Noman; Al Aziz, Md Momin; Sadat, Md Nazmus; Sahinalp, Cenk; Lauter, Kristin; Wang, Shuang
2017-07-26
Advances in DNA sequencing technologies have prompted a wide range of genomic applications to improve healthcare and facilitate biomedical research. However, privacy and security concerns have emerged as a challenge for utilizing cloud computing to handle sensitive genomic data. We present one of the first implementations of Software Guard Extension (SGX) based securely outsourced genetic testing framework, which leverages multiple cryptographic protocols and minimal perfect hash scheme to enable efficient and secure data storage and computation outsourcing. We compared the performance of the proposed PRESAGE framework with the state-of-the-art homomorphic encryption scheme, as well as the plaintext implementation. The experimental results demonstrated significant performance over the homomorphic encryption methods and a small computational overhead in comparison to plaintext implementation. The proposed PRESAGE provides an alternative solution for secure and efficient genomic data outsourcing in an untrusted cloud by using a hybrid framework that combines secure hardware and multiple crypto protocols.
A simple extension of Roe's scheme for real gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arabi, Sina, E-mail: sina.arabi@polymtl.ca; Trépanier, Jean-Yves; Camarero, Ricardo
The purpose of this paper is to develop a highly accurate numerical algorithm to model real gas flows in local thermodynamic equilibrium (LTE). The Euler equations are solved using a finite volume method based on Roe's flux difference splitting scheme including real gas effects. A novel algorithm is proposed to calculate the Jacobian matrix which satisfies the flux difference splitting exactly in the average state for a general equation of state. This algorithm increases the robustness and accuracy of the method, especially around the contact discontinuities and shock waves where the gas properties jump appreciably. The results are compared with an exact solution of the Riemann problem for the shock tube which considers the real gas effects. In addition, the method is applied to a blunt cone to illustrate the capability of the proposed extension in solving two dimensional flows.
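For intuition, the ideal-gas special case that Roe's original scheme handles builds its average state from square-root-of-density weighted averages of velocity and total enthalpy. The sketch below shows only that averaging step, under stated ideal-gas assumptions; it is not the paper's real-gas Jacobian construction.

```python
# Sketch: Roe-averaged state for an ideal (gamma-law) gas, the special case
# that the paper's extension generalizes to arbitrary equations of state.
import math

def roe_average(rhoL, uL, hL, rhoR, uR, hR):
    """Density-weighted (sqrt-rho) Roe averages of velocity and total enthalpy."""
    wL, wR = math.sqrt(rhoL), math.sqrt(rhoR)
    u = (wL * uL + wR * uR) / (wL + wR)
    h = (wL * hL + wR * hR) / (wL + wR)
    return u, h

# Consistency check: equal left/right states must average to themselves.
u, h = roe_average(1.0, 0.5, 2.5, 1.0, 0.5, 2.5)
print(u, h)  # → 0.5 2.5
```

The design point of the paper is that for a general equation of state no such closed-form average exists, so the Jacobian must be constructed to satisfy the flux difference splitting exactly.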
Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.
Hoya, T; Chambers, J A
2001-01-01
In many pattern classification problems, an intelligent neural system is required which can learn the newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In the paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking, which are motivated from biological study of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
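The underlying GRNN is essentially a Nadaraya-Watson kernel regressor over stored patterns, so "network growing" amounts to appending patterns to the store. A minimal sketch under those assumptions (class and method names are ours, and the shrinking stage is omitted):

```python
# Minimal GRNN sketch: Gaussian-kernel regression over stored patterns,
# with incremental growing via add() (illustrative; not the paper's scheme).
import math

class GRNN:
    def __init__(self, sigma=0.5):
        self.sigma, self.X, self.y = sigma, [], []

    def add(self, x, target):
        """Network growing: store one (pattern, target) pair."""
        self.X.append(x)
        self.y.append(target)

    def predict(self, x):
        """Kernel-weighted average of stored targets (Nadaraya-Watson)."""
        w = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, p))
                      / (2 * self.sigma ** 2)) for p in self.X]
        return sum(wi * yi for wi, yi in zip(w, self.y)) / sum(w)

net = GRNN(sigma=0.3)
net.add((0.0, 0.0), 0.0)
net.add((1.0, 1.0), 1.0)
print(net.predict((0.5, 0.5)))  # → 0.5 (equidistant from both patterns)
```

Because every stored pattern contributes a kernel unit, pruning redundant patterns (the paper's dual-stage shrinking) directly controls network size.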
Positivity-preserving numerical schemes for multidimensional advection
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1993-01-01
This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
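A one-dimensional toy illustrates the positivity property at stake: a first-order upwind update is a convex combination of neighboring values for Courant numbers in [0, 1], so it cannot create negative values. This sketch is ours, not UTOPIA itself; the report's difficulty is retaining this guarantee at third order in multiple dimensions.

```python
# Sketch: first-order upwind advection (periodic) is positivity-preserving
# for 0 <= c <= 1, since each update is a convex combination of cell values.
def upwind_step(q, c):
    """One explicit upwind step with Courant number c, periodic boundary."""
    return [qi - c * (qi - q[i - 1]) for i, qi in enumerate(q)]

q = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]   # non-negative step profile
for _ in range(10):
    q = upwind_step(q, 0.5)
print(min(q) >= 0.0)  # → True: the field stays non-negative
```

Unlimited higher-order polynomial schemes break this convex-combination structure, which is why a multidimensional flux limiter is needed.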
Umar, Nasir; Mohammed, Shafiu
2011-09-05
The need for health care reforms and alternative financing mechanisms in many low- and middle-income countries has been advocated. This led to the introduction of the national health insurance scheme (NHIS) in Nigeria, at first with the enrollment of formal-sector employees. A qualitative study was conducted to assess enrollees' perceptions of the quality of health care before and after enrollment. Initial results revealed that respondents (heads of households) generally viewed the NHIS favorably, but consistently expressed dissatisfaction over the terms of coverage. Specifically, the NHIS enrollment covers only the primary insured person, their spouse, and up to four biological children (a child being defined as <18 years of age), in a setting where extended families are common. Dissatisfaction of enrollees could affect their willingness to participate in the insurance scheme, which may potentially affect the success and future extension of the scheme.
Identification of hybrid node and link communities in complex networks
He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong
2015-01-01
Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately. PMID:25728010
Zhang, Lei; Zhang, Jing
2017-08-07
A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance but also brings about the risk of leaking users' private information. Therefore, improving the individual power requirement and distribution efficiency to ensure communication reliability while preserving user privacy is a new challenge for SG. Based on this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed for better fit to each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying the individual requirement in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes.
Historical extension of operational NDVI products for livestock insurance in Kenya
NASA Astrophysics Data System (ADS)
Vrieling, Anton; Meroni, Michele; Shee, Apurba; Mude, Andrew G.; Woodard, Joshua; de Bie, C. A. J. M. (Kees); Rembold, Felix
2014-05-01
Droughts induce livestock losses that severely affect Kenyan pastoralists. Recent index insurance schemes have the potential of being a viable tool for insuring pastoralists against drought-related risk. Such schemes require as input a forage scarcity (or drought) index that can be reliably updated in near real-time, and that strongly relates to livestock mortality. Generally, a long record (>25 years) of the index is needed to correctly estimate mortality risk and calculate the related insurance premium. Data from current operational satellites used for large-scale vegetation monitoring span a maximum of 15 years, a time period that is considered insufficient for accurate premium computation. This study examines how operational NDVI datasets compare to, and could be combined with, the recently constructed non-operational 30-year GIMMS AVHRR record (1981-2011) to provide a near-real-time drought index with a long-term archive for the arid lands of Kenya. We compared six freely available, near-real-time NDVI products: five from MODIS and one from SPOT-VEGETATION. Prior to comparison, all datasets were averaged in time for the two vegetative seasons in Kenya, and aggregated spatially at the administrative division level at which the insurance is offered. The feasibility of extending the resulting aggregated drought indices back in time was assessed using jackknifed R2 statistics (leave-one-year-out) for the overlapping period 2002-2011. We found that division-specific models were more effective than a global model for linking the division-level temporal variability of the index between NDVI products. Based on our results, good scope exists for historically extending the aggregated drought index, thus providing a longer operational record for insurance purposes. We showed that this extension may have large effects on the calculated insurance premium. Finally, we discuss several possible improvements to the drought index.
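The jackknifed (leave-one-year-out) R² used to assess historical extendability can be sketched as follows; the per-division NDVI series here are invented stand-ins, and the function name is ours.

```python
# Sketch: leave-one-year-out R^2 for a per-division linear model linking
# two aggregated NDVI series (data are synthetic, for illustration only).
import numpy as np

def jackknife_r2(x, y):
    preds = []
    for k in range(len(x)):
        keep = [i for i in range(len(x)) if i != k]
        b, a = np.polyfit(x[keep], y[keep], 1)   # fit without year k
        preds.append(b * x[k] + a)               # predict the held-out year
    preds = np.array(preds)
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.6, 10)                    # e.g. GIMMS seasonal NDVI
y = 0.9 * x + 0.05 + rng.normal(0, 0.01, 10)     # e.g. a strongly related product
print(jackknife_r2(x, y) > 0.8)  # → True
```

Holding each year out guards against the overfitting that a plain in-sample R² would hide on such short (10-year) overlap records.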
NASA Astrophysics Data System (ADS)
Kim, Tae-Wook; Park, Sang-Gyu; Choi, Byong-Deok
2011-03-01
The previously proposed pixel-level digital-to-analog conversion (DAC) scheme, which implements part of the DAC inside the pixel circuit, has proven very efficient at reducing the peripheral area of an integrated data driver fabricated with low-temperature polycrystalline silicon thin-film transistors (LTPS TFTs). However, because LTPS TFTs suffer from random variations in their characteristics, an open issue is whether the pixel-level DAC is compatible with existing pixel circuits, including the schemes that compensate for TFT variations and IR drops on the supply rails, which are of primary importance for active-matrix organic light-emitting diode (AMOLED) displays. In this paper, we show that the pixel-level DAC scheme can be successfully combined with previous compensation schemes by giving two examples of voltage- and current-programmed pixels. Previous pixel-level DAC schemes require two additional TFTs and one capacitor, but for the newly proposed pixel circuits the overhead is no more than two TFTs, as the already existing capacitor is reused. In addition, a detailed analysis shows that the pixel-level DAC can be extended to a 4-bit resolution, or applied together with 1:2 demultiplexing driving, for 6- to 8-in. diagonal XGA AMOLED display panels.
NASA Astrophysics Data System (ADS)
Altenkamp, Lukas; Boggia, Michele; Dittmaier, Stefan
2018-04-01
We consider an extension of the Standard Model by a real singlet scalar field with a ℤ2-symmetric Lagrangian and spontaneous symmetry breaking with vacuum expectation value for the singlet. Considering the lighter of the two scalars of the theory to be the 125 GeV Higgs particle, we parametrize the scalar sector by the mass of the heavy Higgs boson, a mixing angle α, and a scalar Higgs self-coupling λ12. Taking into account theoretical constraints from perturbativity and vacuum stability, we compute next-to-leading-order electroweak and QCD corrections to the decays h → WW/ZZ → 4 fermions of the light Higgs boson for some scenarios proposed in the literature. We formulate two renormalization schemes and investigate the conversion of the input parameters between the schemes, finding sizeable effects. Solving the renormalization-group equations for the MS-bar parameters α and λ12, we observe a significantly reduced scale and scheme dependence in the next-to-leading-order results. For some scenarios suggested in the literature, the total decay width for the process h → 4f is computed as a function of the mixing angle and compared to the width of a corresponding Standard Model Higgs boson, revealing deviations below 10%. Differential distributions do not show significant distortions by effects beyond the Standard Model. The calculations are implemented in the Monte Carlo generator Prophecy4f, which is ready for applications in data analyses in the framework of the singlet extension.
Physical angular momentum separation for QED
NASA Astrophysics Data System (ADS)
Sun, Weimin
2017-04-01
We study the non-uniqueness problem of the gauge-invariant angular momentum separation for the case of QED, which stems from the recent controversy concerning the proper definitions of the orbital angular momentum and spin operator of the individual parts of a gauge field system. For the free quantum electrodynamics without matter, we show that the basic requirement of Euclidean symmetry selects a unique physical angular momentum separation scheme from the multitude of the possible angular momentum separation schemes constructed using the various gauge-invariant extensions (GIEs). Based on these results, we propose a set of natural angular momentum separation schemes for the case of interacting QED by invoking the formalism of asymptotic fields. Some perspectives on such a problem for the case of QCD are briefly discussed.
NASA Technical Reports Server (NTRS)
Reed, M. A.
1974-01-01
The need for an obstacle detection system on the Mars roving vehicle was assumed, and a practical scheme was investigated and simulated. The principal sensing device on this vehicle was taken to be a laser range finder. Both existing and original algorithms, ending with thresholding operations, were used to obtain the outlines of obstacles from the raw data of this laser scan. A theoretical analysis was carried out to show how a proper threshold value may be chosen. Computer simulations considered various mid-range boulders, for which the scheme was quite successful. The extension to other types of obstacles, such as craters, was considered. The special problems of bottom-edge detection and scanning procedure are discussed.
Four-level conservative finite-difference schemes for Boussinesq paradigm equation
NASA Astrophysics Data System (ADS)
Kolkovska, N.
2013-10-01
In this paper a two-parameter family of four-level conservative finite-difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed to evaluate the numerical solution. Preservation of the discrete energy by this method is proved. The schemes have been tested numerically on a single-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second-order convergence in both space and time in the discrete maximum norm.
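The hallmark of a conservative multilevel scheme, exact preservation of a discrete energy, can be illustrated on a much simpler model. The sketch below uses the three-level leapfrog scheme for the 1D wave equation u_tt = u_xx on a periodic grid (not the paper's four-level Boussinesq family); the quantity monitored is the standard leapfrog energy, which this scheme conserves to roundoff.

```python
import numpy as np

# Three-level leapfrog scheme for u_tt = u_xx on a periodic grid.
N, dx = 128, 1.0 / 128
dt = 0.4 * dx                        # CFL-stable time step
x = np.arange(N) * dx
lap = lambda u: (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

u_prev = np.sin(2 * np.pi * x)       # u(x, 0)
u_curr = u_prev + 0.0                # zero initial velocity (O(dt) start)

def energy(u_new, u_old):
    """Discrete leapfrog energy, exactly conserved by the scheme."""
    kinetic = 0.5 * np.sum(((u_new - u_old) / dt) ** 2)
    potential = -0.5 * np.sum(lap(u_new) * u_old)
    return (kinetic + potential) * dx

energies = []
for _ in range(500):
    u_next = 2 * u_curr - u_prev + dt**2 * lap(u_curr)
    energies.append(energy(u_next, u_curr))
    u_prev, u_curr = u_curr, u_next

drift = max(energies) - min(energies)   # should be roundoff-level
```

The same test, energy drift over many steps, is how conservation claims like the one in this paper are typically verified numerically.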
Performance of a TKE diffusion scheme in ECMWF IFS Single Column Model
NASA Astrophysics Data System (ADS)
Svensson, Jacob; Bazile, Eric; Sandu, Irina; Svensson, Gunilla
2015-04-01
Numerical weather prediction (NWP) models as well as climate models are used for decision making at all levels of society, and their performance and accuracy are of great importance for both economic and safety reasons. Today's extensive use of weather apps and websites that directly use model output further highlights the importance of realistic output parameters. The turbulent atmospheric boundary layer (ABL) includes many physical processes which occur on a subgrid scale and need to be parameterized. As the vast majority of the biosphere is located in the ABL, it is of great importance that these subgrid processes are parameterized so that they give realistic values of, e.g., temperature and wind at the levels close to the surface. The GEWEX (Global Energy and Water Exchange Project) Atmospheric Boundary Layer Study (GABLS) has the overall objective to improve the understanding and representation of atmospheric boundary layers in climate models. The study has pointed out that there is a need for a better understanding and representation of stable atmospheric boundary layers (SBLs). Therefore, four test cases have been designed to highlight the performance of, and differences between, a number of climate and NWP models in the SBL. In the experiments, most global NWP and climate models have been shown to be too diffusive in stable conditions and thus give too weak temperature gradients, too strong momentum mixing and too weak ageostrophic Ekman flow. The reason for this is that the models need enhanced diffusion to create enough friction for the large-scale weather systems, which otherwise would be too fast and too active. In the GABLS test cases, turbulence schemes that use turbulent kinetic energy (TKE) have been shown to be more skilful than schemes that only use stability and gradients. TKE as a prognostic variable allows for advection both vertically and horizontally and gives a "memory" from previous time steps. Therefore, e.g.
the ECMWF-GABLS workshop in 2011 recommended a move for global NWP models towards a TKE scheme. Here a TKE diffusion scheme (based on the implementation in the ARPEGE model by Météo-France) is compared to ECMWF's operational first-order IFS scheme and to a less diffusive version, using a single-column version of the ECMWF IFS model. Results from the test cases GABLS 1, 3 and 4, together with the Diurnal land/atmosphere coupling experiment (DICE), are presented.
A level set approach for shock-induced α-γ phase transition of RDX
NASA Astrophysics Data System (ADS)
Josyula, Kartik; Rahul; De, Suvranu
2018-02-01
We present a thermodynamically consistent level-set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux, embedded within the level-set evolution equation, that maintains the signed-distance property of the level-set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level-set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero-level-set evolution in the three-dimensional model. The level-set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level-set approach is efficient, robust, and easy to implement.
(t, n) Threshold d-Level Quantum Secret Sharing.
Song, Xiu-Li; Liu, Yan-Bing; Deng, Hong-Yao; Xiao, Yong-Gang
2017-07-25
Most quantum secret sharing (QSS) schemes are (n, n) threshold 2-level schemes, in which the 2-level secret cannot be reconstructed until all n shares are collected. In this paper, we propose a (t, n) threshold d-level QSS scheme, in which the d-level secret can be reconstructed only if at least t shares are collected. Compared with (n, n) threshold 2-level QSS, the proposed QSS provides better universality, flexibility, and practicability. Moreover, in this scheme, no participant knows the other participants' shares; even the trusted reconstructor Bob1 is no exception. The transformation of the particles involves only simple operations such as the d-level CNOT, the Quantum Fourier Transform (QFT), the Inverse Quantum Fourier Transform (IQFT), and generalized Pauli operators. The transformed particles need not be transmitted from one participant to another over the quantum channel. Security analysis shows that the proposed scheme can resist intercept-resend, entangle-measure, collusion, and forgery attacks. Performance comparison shows that it has lower computation and communication costs than other similar schemes when 2 < t < n - 1.
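The classical backbone of any (t, n) threshold scheme, reconstruction from any t shares via polynomial interpolation, can be sketched with Shamir's scheme over a prime field. This is a classical analogue for intuition only, not the quantum protocol of the paper; the prime and secret below are arbitrary.

```python
import random

P = 2**61 - 1  # a large prime; all arithmetic is over GF(P)

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: the secret is the constant term of a
    random degree-(t-1) polynomial; share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat, P prime)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```

Any subset of t shares reconstructs the secret, while t-1 shares are consistent with every possible secret, which is exactly the threshold property the quantum scheme lifts to d-level systems.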
Mapping quadrupole collectivity in the Cd isotopes: The breakdown of harmonic vibrational motion
NASA Astrophysics Data System (ADS)
Garrett, P. E.; Green, K. L.; Bangay, J.; Varela, A. Diaz; Sumithrarachchi, C. S.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D. S.; Bianco, L.; Colosimo, S.; Cross, D. S.; Demand, G. A.; Finlay, P.; Garnsworthy, A. B.; Grinyer, G. F.; Hackman, G.; Kulp, W. D.; Leach, K. G.; Morton, A. C.; Orce, J. N.; Pearson, C. J.; Phillips, A. A.; Schumaker, M. A.; Svensson, C. E.; Triambak, S.; Wong, J.; Wood, J. L.; Yates, S. W.
2011-10-01
The stable Cd isotopes have long been used as paradigms for spherical vibrational motion. Extensive investigations with in-beam γ spectroscopy have resulted in very-well-established level schemes, including many lifetimes or lifetime limits. A programme has been initiated to complement these studies with very-high-statistics β decay using the 8π spectrometer at the TRIUMF radioactive beam facility. The decays of 112In and 112Ag have been studied with an emphasis on the observation of, or the placement of stringent limits on, low-energy branches between potential multi-phonon levels. A lack of suitable 0+ or 2+ three-phonon candidates has been revealed. Further, the sum of the B(E2) strength from spin 0+ and 2+ states up to 3 MeV in excitation energy to the assigned two-phonon levels falls far short of the harmonic-vibrational expectations. This lack of strength points to the failing of collective models based on vibrational phonon structures.
Hardman, Chloe J; Harrison, Dominic P G; Shaw, Pete J; Nevard, Tim D; Hughes, Brin; Potts, Simon G; Norris, Ken
2016-02-01
Restoration and maintenance of habitat diversity have been suggested as conservation priorities in farmed landscapes, but how this should be achieved and at what scale are unclear. This study makes a novel comparison of the effectiveness of three wildlife-friendly farming schemes for supporting local habitat diversity and species richness on 12 farms in England. The schemes were: (i) Conservation Grade (a prescriptive, non-organic, biodiversity-focused scheme), (ii) organic agriculture and (iii) a baseline of Entry Level Stewardship (a flexible, widespread government scheme). Conservation Grade farms supported a quarter higher habitat diversity at the 100-m radius scale compared to Entry Level Stewardship farms. Conservation Grade and organic farms both supported a fifth higher habitat diversity at the 250-m radius scale compared to Entry Level Stewardship farms. Habitat diversity at the 100-m and 250-m scales significantly predicted species richness of butterflies and plants. Habitat diversity at the 100-m scale also significantly predicted species richness of birds in winter and solitary bees. There were no significant relationships between habitat diversity and species richness for bumblebees or birds in summer. Butterfly species richness was significantly higher on organic farms (50% higher) and marginally higher on Conservation Grade farms (20% higher), compared with farms in Entry Level Stewardship. Organic farms supported significantly more plant species than Entry Level Stewardship farms (70% higher) but Conservation Grade farms did not (10% higher). There were no significant differences between the three schemes for species richness of bumblebees, solitary bees or birds. Policy implications.
The wildlife-friendly farming schemes which included compulsory changes in management, Conservation Grade and organic, were more effective at increasing local habitat diversity and species richness compared with the less prescriptive Entry Level Stewardship scheme. We recommend that wildlife-friendly farming schemes should aim to enhance and maintain high local habitat diversity, through mechanisms such as option packages, where farmers are required to deliver a combination of several habitats.
An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional
NASA Astrophysics Data System (ADS)
van Gennip, Yves
2018-06-01
We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE-inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass-conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ-converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
Progress in multi-dimensional upwind differencing
NASA Technical Reports Server (NTRS)
Vanleer, Bram
1992-01-01
Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
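The one-dimensional first-order upwind scheme that the review takes as its starting point can be sketched in a few lines for linear convection on a periodic grid. It is stable and mass-conserving at CFL ≤ 1, but visibly diffusive; the smearing seen here is the 1D counterpart of the shock and shear smearing that motivates the multi-dimensional work. The grid and pulse parameters are illustrative.

```python
import numpy as np

# First-order upwind scheme for u_t + a u_x = 0, a > 0, periodic grid.
# The flux is biased toward the upwind (left) neighbour: stable for
# 0 <= a*dt/dx <= 1, but with numerical diffusion that smears the pulse.
N, a = 200, 1.0
dx = 1.0 / N
dt = 0.5 * dx / a                    # CFL number 0.5
nu = a * dt / dx

x = (np.arange(N) + 0.5) * dx
u = np.exp(-200 * (x - 0.3) ** 2)    # smooth initial pulse at x = 0.3
u0_max = u.max()
mass0 = u.sum()                      # discrete mass, exactly conserved

steps = int(round(0.4 / (a * dt)))   # advect the pulse by 0.4
for _ in range(steps):
    u = u - nu * (u - np.roll(u, 1))

peak_pos = x[np.argmax(u)]           # should sit near x = 0.7
```

The pulse arrives at the right location with the right total mass, but with a reduced peak, which is the first-order diffusion the fluctuation-based and multi-directional schemes in the review aim to overcome without losing robustness.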
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abou El-Maaref, A., E-mail: aahmh@hotmail.com; Ahmad, Mahmoud; Allam, S.H.
Energy levels, oscillator strengths, and transition probabilities for transitions among the 14 LS states belonging to configurations of sulfur-like iron, Fe XI, have been calculated. These states are represented by configuration interaction wavefunctions and have configurations 3s²3p⁴, 3s3p⁵, 3s²3p³3d, 3s²3p³4s, 3s²3p³4p, and 3s²3p³4d, which give rise to 123 fine-structure energy levels. Extensive configuration interaction calculations using the CIV3 code have been performed. To assess the importance of relativistic effects, the intermediate coupling scheme by means of the Breit–Pauli Hamiltonian terms, such as the one-body mass correction and Darwin term, and the spin–orbit, spin–other-orbit, and spin–spin corrections, is incorporated within the code. These incorporations adjusted the energy levels, so the calculated values are close to the available experimental data. Comparisons between the present calculated energy levels, as well as oscillator strengths, and both experimental and theoretical data have been performed. Our results show good agreement with earlier works, and they might be useful in thermonuclear fusion research and astrophysical applications. Highlights: • Accurate atomic data of iron ions are needed for identification of the solar corona. • Extensive configuration interaction wavefunctions including 123 fine-structure levels have been calculated. • Relativistic effects by means of the Breit–Pauli Hamiltonian terms are incorporated. • This incorporation adjusts the energy levels, so the calculated values are close to experimental values.
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
NASA Astrophysics Data System (ADS)
Ghasemi, S.; Khorasani, K.
2015-10-01
In this paper, the problem of fault detection and isolation (FDI) in the attitude control subsystem (ACS) of spacecraft formation flying systems is considered. For developing the FDI schemes, an extended Kalman filter (EKF), which belongs to a class of nonlinear state estimation methods, is utilised. Three architectures, namely centralised, decentralised, and semi-decentralised, are considered, and the corresponding FDI strategies are designed and constructed. Appropriate residual generation techniques and threshold selection criteria are proposed for these architectures. The capabilities of the proposed architectures for accomplishing the FDI tasks are studied through extensive numerical simulations for a team of four satellites in formation flight. Using a confusion matrix evaluation criterion, it is shown that the centralised architecture achieves the most reliable results relative to the semi-decentralised and decentralised architectures, at the expense of requiring a centralised processing module with access to the entire team's information set. On the other hand, the semi-decentralised performance is close to that of the centralised scheme without relying on the availability of the entire team's information set. Furthermore, the results confirm that FDI in formations with angular-velocity measurement sensors achieves higher levels of accuracy, true-faulty detection, and precision, along with lower levels of false-healthy misclassification, compared to formations that use attitude measurement sensors.
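Residual generation with a fixed detection threshold, the core of each FDI architecture, can be sketched with a scalar linear Kalman filter standing in for the paper's EKF on attitude dynamics. The noise levels, the injected sensor bias, and the threshold choice below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Residual-based fault detection with a scalar Kalman filter on a
# random-walk state x_{k+1} = x_k + w, measurement z = x + v.
# A sensor bias fault is injected halfway through; an alarm fires when
# the innovation (residual) exceeds a fixed threshold.
rng = np.random.default_rng(7)
q, r = 0.01, 0.1                  # process / measurement noise variances
x_true, x_est, P = 0.0, 0.0, 1.0
threshold = 4 * np.sqrt(r)        # simple fixed threshold on the residual

alarms = []
for k in range(400):
    x_true += rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r))
    if k >= 200:
        z += 3.0                  # injected sensor bias fault
    # Kalman predict/update
    P += q
    innovation = z - x_est        # the residual used for detection
    S = P + r
    K = P / S
    x_est += K * innovation
    P *= (1 - K)
    alarms.append(abs(innovation) > threshold)

fault_detected = any(alarms[200:210])   # alarm shortly after injection
false_alarms = sum(alarms[:200])        # spurious alarms before the fault
```

In the paper's setting the residuals come from EKF state estimates of the attitude dynamics, and the centralised/decentralised variants differ in which measurements feed each filter, but the detect-by-thresholding logic is the same.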
Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab
2016-01-01
In this article, a novel and accurate scheme for fault detection, classification and fault-distance estimation on a fixed-series-compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal-oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN; such an approach has not been used in earlier fault-analysis algorithms. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as input to the ANN for fault-distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten fault types in a test power-system model, at different fault inception angles and over numerous fault locations. Real transmission-system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.
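The second-order Crank-Nicolson scheme that the 4-4 family generalizes can be sketched for linear convection on a periodic grid. With a central (skew-symmetric) space difference, the update is a Cayley transform of a skew-symmetric matrix, so the discrete L2 norm is conserved exactly: the scheme is nondissipative and stable for any time step, which the sketch checks by running at a CFL number of 4, far beyond explicit limits. Grid sizes are illustrative.

```python
import numpy as np

# Crank-Nicolson time stepping for u_t + a u_x = 0, periodic grid,
# central differencing in space.
N, a = 64, 1.0
dx = 1.0 / N
dt = 4.0 * dx / a                      # CFL = 4: deliberately large

# Skew-symmetric central difference matrix D (periodic).
D = np.zeros((N, N))
for i in range(N):
    D[i, (i + 1) % N] = 1.0 / (2 * dx)
    D[i, (i - 1) % N] = -1.0 / (2 * dx)

A = np.eye(N) + 0.5 * a * dt * D       # implicit side
B = np.eye(N) - 0.5 * a * dt * D       # explicit side
step = np.linalg.solve(A, B)           # u^{n+1} = step @ u^n

x = np.arange(N) * dx
u = np.sin(2 * np.pi * x)
norm0 = np.linalg.norm(u)
for _ in range(100):
    u = step @ u
norm_drift = abs(np.linalg.norm(u) - norm0)   # roundoff-level
```

The 4-4 schemes of the paper keep exactly this nondissipative, unconditionally stable character while raising the order of accuracy to four in both time and space.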
NASA Technical Reports Server (NTRS)
Constantinescu, George S.; Lele, S. K.
2001-01-01
Numerical methods for solving the flow equations in cylindrical or spherical coordinates should be able to capture the behavior of the exact solution near the regions where the particular form of the governing equations is singular. In this work we focus on the treatment of these numerical singularities for finite-difference methods by reinterpreting the regularity conditions developed in the context of pseudo-spectral methods. A generally applicable numerical method is presented for treating the singularities present at the polar axis when nonaxisymmetric flows are solved in cylindrical coordinates using highly accurate finite-difference schemes (e.g., Padé schemes) on non-staggered grids. Governing equations for the flow at the polar axis are derived using series expansions near r=0. The only information needed to calculate the coefficients in these equations is the values of the flow variables and their radial derivatives at the previous iteration (or time) level. These derivatives, which are multi-valued at the polar axis, are calculated without reducing the accuracy of the numerical method by using a mapping of the flow domain from (0,R)×(0,2π) to (-R,R)×(0,π), where R is the radius of the computational domain. This allows the radial derivatives to be evaluated using high-order differencing schemes (e.g., compact schemes) at points located on the polar axis. The proposed technique is illustrated by results from simulations of laminar forced jets and turbulent compressible jets using large-eddy simulation (LES) methods.
In terms of the general robustness of the numerical method and the smoothness of the solution close to the polar axis, the present results compare very favorably to similar calculations in which the equations are solved in Cartesian coordinates at the polar axis, or in which the singularity is removed by employing a staggered mesh in the radial direction without a mesh point at r=0, following the method proposed recently by Mohseni and Colonius (1). Extension of the method described here to incompressible flows, or to any other set of equations that are solved on a non-staggered mesh in cylindrical or spherical coordinates with finite-difference schemes of various levels of accuracy, is immediate.
2018-01-01
The Session Initiation Protocol (SIP) is a widely used communication protocol employed to regulate signaling and to control multimedia communication sessions. Recently, Kumari et al. proposed an improved smart-card-based authentication scheme for SIP based on Farash's scheme. Farash claimed that his protocol is resistant against various known attacks, but we observe some notable flaws in it. We point out that Farash's protocol is prone to a key-compromise impersonation attack and is unable to provide pre-verification in the smart card, efficient password change, and perfect forward secrecy. To overcome these limitations, in this paper we present an enhanced authentication mechanism based on Kumari et al.'s scheme. We prove that the proposed protocol not only overcomes the issues in Farash's scheme but can also resist all known attacks. We also provide a security analysis of the proposed scheme with the help of the widely used AVISPA (Automated Validation of Internet Security Protocols and Applications) software. Finally, comparing with the earlier proposals in terms of security and efficiency, we conclude that the proposed protocol is efficient and more secure. PMID:29547619
Discretisation Schemes for Level Sets of Planar Gaussian Fields
NASA Astrophysics Data System (ADS)
Beliaev, D.; Muirhead, S.
2018-01-01
Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.
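A toy version of such a discretisation scheme: sample an approximate random plane wave (a sum of many unit-frequency plane waves with random directions and phases) on a square mesh, and record the sign of the field at each vertex, the crudest level-set information a mesh can carry. The window size, mesh resolution, and number of summands are illustrative assumptions.

```python
import numpy as np

# Approximate random plane wave: sum of M unit-frequency plane waves
# with random directions and phases, normalised to unit variance.
rng = np.random.default_rng(1)
M = 500
theta = rng.uniform(0, 2 * np.pi, M)     # random propagation directions
phi = rng.uniform(0, 2 * np.pi, M)       # random phases

n = 200                                  # mesh points per side
xs = np.linspace(0, 20, n)               # window side ~3 wavelengths (2*pi)
X, Y = np.meshgrid(xs, xs)
field = np.zeros_like(X)
for k in range(M):
    field += np.cos(X * np.cos(theta[k]) + Y * np.sin(theta[k]) + phi[k])
field *= np.sqrt(2.0 / M)                # unit-variance normalisation

# Simplest discretised level-set data: the sign of the field at each
# vertex.  By symmetry, roughly half the vertices should be positive.
signs = field > 0
pos_fraction = signs.mean()
```

The schemes analysed in the paper are far more refined (they recover topological information about the level sets with controlled probability for a given mesh size), but they all start from exactly this kind of vertex data on a sufficiently fine mesh.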
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
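One such pitfall can be reproduced in a few lines: a Monte Carlo experiment (illustrative, not the paper's simulations) with a nearest-class-mean classifier on pure-noise data, where the features carry no class information, so the true accuracy is 50%. Resubstitution, i.e. testing on the training cases, reports a strongly optimistic accuracy, while leave-one-out stays near chance.

```python
import numpy as np

rng = np.random.default_rng(42)

def nearest_mean_predict(X_train, y_train, X_test):
    """Assign each test case to the nearer of the two class means."""
    m0 = X_train[y_train == 0].mean(axis=0)
    m1 = X_train[y_train == 1].mean(axis=0)
    d0 = ((X_test - m0) ** 2).sum(axis=1)
    d1 = ((X_test - m1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

n, p, trials = 20, 10, 200               # small sample, pure-noise features
resub_acc, loo_acc = [], []
for _ in range(trials):
    X = rng.normal(size=(n, p))
    y = np.array([0, 1] * (n // 2))
    # Resubstitution: train and test on the same cases (optimistic).
    resub_acc.append((nearest_mean_predict(X, y, X) == y).mean())
    # Leave-one-out: each case predicted by a model trained without it.
    hits = [nearest_mean_predict(np.delete(X, i, 0), np.delete(y, i),
                                 X[i:i + 1])[0] == y[i] for i in range(n)]
    loo_acc.append(np.mean(hits))

resub_mean, loo_mean = np.mean(resub_acc), np.mean(loo_acc)
```

The gap between resub_mean and loo_mean on data with no real signal is the bias the paper quantifies; its further point is that even leave-one-out can be biased when steps such as feature selection are performed outside the cross-validation loop.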
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
NASA Astrophysics Data System (ADS)
Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.
2015-02-01
A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. 
Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.
Nuclear Data Sheets for A = 70
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gürdal, G.; McCutchan, E. A.
2016-09-01
We evaluated spectroscopic data for all nuclei with mass number A = 70, and the corresponding level schemes from radioactive decay and reaction studies are presented. Since the previous evaluation, the half-life of 70Mn has been measured and excited states in 70Fe observed for the first time. Excited states in 70Ni have also been studied extensively, while Coulomb excitation and collinear laser spectroscopy measurements in 70Cu have allowed firm Jπ assignments. Despite new measurements, some discrepancies remain in the half-lives of low-lying states in 70Zn. New measurements have extended the knowledge of high-spin band structures in 70Ge and 70As. Our evaluation supersedes the prior A = 70 evaluation of 2004Tu09.
Service-Oriented Architecture for NVO and TeraGrid Computing
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew
2008-01-01
The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
Jia, Di; Li, Yanlin; Wang, Guoliang; Gao, Huanyu; Yu, Yang
2016-01-01
To summarize the reasons for revision of unicompartmental knee arthroplasty (UKA) identified using computer-assisted technology, so as to provide a reference for reducing the incidence of revision and improving surgical technique and rehabilitation. The relevant literature of recent years on analyzing UKA revision reasons with computer-assisted technology was extensively reviewed. The revision reasons identified by computer-assisted technology are fracture of the medial tibial plateau, progressive osteoarthritis of the preserved compartment, dislocation of the mobile bearing, prosthesis loosening, polyethylene wear, and unexplained persistent pain. Computer-assisted technology can be used to analyze the reasons for UKA revision and to guide the choice of operative method and rehabilitation scheme by simulating the operative process and knee joint activities.
Characterisation of the PXIE Allison-type emittance scanner
D'Arcy, R.; Alvarez, M.; Gaynier, J.; ...
2016-01-26
An Allison-type emittance scanner has been designed for PXIE at FNAL with the goal of providing fast and accurate phase space reconstruction. The device has been modified from previous LBNL/SNS designs to operate in both pulsed and DC modes with the addition of water-cooled front slits. Extensive calibration techniques and error analysis allowed confinement of uncertainty to the <5% level (with known caveats). With a 16-bit, 1 MHz electronics scheme the device is able to analyse a pulse with a resolution of 1 μs, allowing for analysis of neutralisation effects. Finally, this paper describes a detailed breakdown of the R&D, as well as post-run analysis techniques.
Location verification algorithm of wearable sensors for wireless body area networks.
Wang, Hua; Wen, Yingyou; Zhao, Dazhe
2018-01-01
Knowledge of the location of sensor devices is crucial for many medical applications of wireless body area networks, as wearable sensors are designed to monitor vital signs of a patient while the wearer still has the freedom of movement. However, clinicians or patients can misplace the wearable sensors, thereby causing a mismatch between their physical locations and their correct target positions. An error of more than a few centimeters raises the risk of mistreating patients. The present study aims to develop a scheme to calculate and detect the position of wearable sensors without beacon nodes. A new scheme was proposed to verify the location of wearable sensors mounted on the patient's body by inferring differences in atmospheric air pressure and received signal strength indication measurements from wearable sensors. Extensive two-sample t tests were performed to validate the proposed scheme. The proposed scheme could easily recognize a 30-cm horizontal body range and a 65-cm vertical body range to correctly perform sensor localization and limb identification. All experiments indicate that the scheme is suitable for identifying wearable sensor positions in an indoor environment.
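A minimal sketch of the air-pressure component of such a scheme, assuming a hypothetical vertical pressure gradient of roughly 12 Pa per metre (so ~8 Pa across a 65 cm height difference) and synthetic barometer noise; all values are illustrative, not taken from the paper. A Welch two-sample t statistic separates readings taken at different heights on the body:

```python
import numpy as np

def welch_t(a, b):
    # Welch's two-sample t statistic (unequal variances allowed)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(1)
# ~65 cm of height corresponds to roughly 8 Pa of pressure difference near sea level
chest = 101325.0 + rng.normal(0, 2, 200)        # reference sensor at chest level
ankle = 101325.0 + 8 + rng.normal(0, 2, 200)    # candidate sensor lower on the body

t = welch_t(ankle, chest)                        # large |t|: different heights
same_level = abs(welch_t(chest[:100], chest[100:]))  # small |t|: same height
print(t, same_level)
```

Thresholding |t| then gives a simple decision rule for whether two sensors sit at the same vertical body level; the paper's actual scheme additionally fuses received signal strength measurements.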
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both discrete modes and continuous state estimates of a hybrid dynamic system either an IMM extended Kalman filter (IMM-EKF) or an IMM based derivative-free Kalman filters is proposed in this study. The efficacy of the proposed IMM based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on the two-tank hybrid system and switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Soh, Woo-Yung; Yoon, Seokkwan
1989-01-01
A finite-volume lower-upper (LU) implicit scheme is used to simulate an inviscid flow in a turbine cascade. This approximate factorization scheme requires only the inversion of sparse lower and upper triangular matrices, which can be done efficiently without extensive storage. As an implicit scheme it allows a large time step to reach the steady state. An interactive grid generation program (TURBO), which is being developed, is used to generate grids. This program uses the control point form of algebraic grid generation which uses a sparse collection of control points from which the shape and position of coordinate curves can be adjusted. A distinct advantage of TURBO compared with other grid generation programs is that it allows the easy change of local mesh structure without affecting the grid outside the domain of independence. Sample grids are generated by TURBO for a compressor rotor blade and a turbine cascade. The turbine cascade flow is simulated by using the LU implicit scheme on the grid generated by TURBO.
Zhu, Wensheng; Yuan, Ying; Zhang, Jingwen; Zhou, Fan; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-02-01
The aim of this paper is to systematically evaluate a biased sampling issue associated with genome-wide association analysis (GWAS) of imaging phenotypes in most imaging genetic studies, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Specifically, the original sampling scheme of these imaging genetic studies is primarily the retrospective case-control design, whereas most existing statistical analyses of these studies ignore this sampling scheme by directly correlating imaging phenotypes (the so-called secondary traits) with genotype. Although it has been well documented in genetic epidemiology that ignoring the case-control sampling scheme can produce highly biased estimates, and subsequently lead to misleading results and suspicious associations, such findings are not well documented in imaging genetics. We use extensive simulations and a large-scale imaging genetic data analysis of the Alzheimer's Disease Neuroimaging Initiative (ADNI) data to evaluate the effects of the case-control sampling scheme on GWAS results based on some standard statistical methods, such as linear regression, while comparing them with several advanced statistical methods that appropriately adjust for the case-control sampling scheme. Copyright © 2016 Elsevier Inc. All rights reserved.
Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs
NASA Astrophysics Data System (ADS)
Zhu, Yanfeng; Niu, Zhisheng
Much research has shown that a carefully designed auto-rate medium access control can utilize the underlying physical multi-rate capability to exploit the time variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS-based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto-rate medium access control schemes, called FARM and FARM+, from the viewpoint of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed by the signal-to-noise ratio of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the distribution of SNR varies across stations. Extensive simulation results show that the proposed schemes outperform the existing throughput/time-share fair auto-rate schemes in time-varying channel conditions.
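The receiver-side rate selection idea can be sketched as a threshold lookup (the SNR thresholds below are hypothetical placeholders for illustration, not values from the paper or from any 802.11 standard):

```python
# Hypothetical SNR thresholds (dB) mapped to 802.11b/g-style rates in Mbit/s.
# Real thresholds depend on hardware, modulation, and the channel model.
RATE_TABLE = [(25.0, 54), (18.0, 24), (10.0, 11), (6.0, 5.5), (4.0, 2), (0.0, 1)]

def select_rate(snr_db: float) -> float:
    """Return the highest rate whose SNR threshold the RTS measurement meets."""
    for threshold, rate in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return 1  # fall back to the base rate

print(select_rate(20.0))  # 24
```

In the FARM schemes the receiver performs this probing on the received RTS frame and feeds the chosen rate back in the CTS, so no extra control frames are needed.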
A novel 2.5D finite difference scheme for simulations of resistivity logging in anisotropic media
NASA Astrophysics Data System (ADS)
Zeng, Shubin; Chen, Fangzhou; Li, Dawei; Chen, Ji; Chen, Jiefu
2018-03-01
The objective of this study is to develop a method to model 3D resistivity well logging problems in 2D formation with anisotropy, known as 2.5D modeling. The traditional 1D forward modeling extensively used in practice lacks the capability of modeling 2D formation. A 2.5D finite difference method (FDM) solving all the electric and magnetic field components simultaneously is proposed. Compared to other previous 2.5D FDM schemes, this method is more straightforward in modeling fully anisotropic media and easy to be implemented. Fourier transform is essential to this FDM scheme, and by employing Gauss-Legendre (GL) quadrature rule the computational time of this step can be greatly reduced. In the numerical examples, we first demonstrate the validity of the FDM scheme with GL rule by comparing with 1D forward modeling for layered anisotropic problems, and then we model a complicated 2D formation case and find that the proposed 2.5D FD scheme is much more efficient than 3D numerical methods.
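The Gauss-Legendre step can be sketched in isolation (generic quadrature only; the actual spectral-domain integrand of the 2.5D scheme is not reproduced here). With n nodes the rule integrates polynomials up to degree 2n-1 exactly, which is why a smooth, rapidly decaying spectral integrand needs far fewer samples than a uniform rule:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    # n-node Gauss-Legendre quadrature of f over [a, b]
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)       # affine map to [a, b]
    return xr * np.sum(w * f(xm + xr * x))

# Smooth, rapidly decaying stand-in for a spectral-domain field
f = lambda k: np.exp(-k**2) * np.cos(k)
exact = np.sqrt(np.pi) * np.exp(-0.25)          # integral of e^(-k^2) cos(k) over the real line

approx = gauss_legendre(f, -7.0, 7.0, 60)       # tails beyond +/-7 are negligible
print(abs(approx - exact))
```

For the 2.5D scheme, each quadrature node corresponds to one wavenumber at which the 2D problem must be solved, so reducing the node count directly reduces the total computational cost.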
Murmur intensity in adult dogs with pulmonic and subaortic stenosis reflects disease severity.
Caivano, D; Dickson, D; Martin, M; Rishniw, M
2018-03-01
The aims of this study were to determine whether murmur intensity in adult dogs with pulmonic stenosis or subaortic stenosis reflects echocardiographic disease severity and to determine whether a six-level murmur grading scheme provides clinical advantages over a four-level scheme. In this retrospective multi-investigator study on adult dogs with pulmonic stenosis or subaortic stenosis, murmur intensity was compared to echocardiographically determined pressure gradient across the affected valve. Disease severity, based on pressure gradients, was assessed between sequential murmur grades to identify redundancy in classification. A simplified four-level murmur intensity classification scheme ('soft', 'moderate', 'loud', 'palpable') was evaluated. In total, 284 dogs (153 with pulmonic stenosis, 131 with subaortic stenosis) were included; 55 dogs had soft, 59 had moderate, 72 had loud and 98 had palpable murmurs. 95 dogs had mild stenosis, 46 had moderate stenosis, and 143 had severe stenosis. No dogs with soft murmurs of either pulmonic or subaortic stenosis had transvalvular pressure gradients greater than 50 mmHg. Dogs with loud or palpable murmurs mostly, but not always, had severe stenosis. Stenosis severity increased with increasing murmur intensity. The traditional six-level murmur grading scheme provided no additional clinical information than the four-level descriptive murmur grading scheme. A simplified descriptive four-level murmur grading scheme differentiated stenosis severity without loss of clinical information, compared to the traditional six-level scheme. Soft murmurs in dogs with pulmonic or subaortic stenosis are strongly indicative of mild lesions. Loud or palpable murmurs are strongly suggestive of severe stenosis. © 2017 British Small Animal Veterinary Association.
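The collapse from six murmur grades to four descriptive levels can be sketched as a lookup table (the grade boundaries chosen here are an assumption for illustration, not taken from the paper):

```python
# Hypothetical mapping from the traditional six-level (Levine-style) grades to
# the four descriptive categories used in the study; the exact boundaries are
# an assumption for illustration.
SIX_TO_FOUR = {1: "soft", 2: "soft", 3: "moderate", 4: "loud", 5: "palpable", 6: "palpable"}

def describe_murmur(grade: int) -> str:
    """Collapse a six-level murmur grade into a four-level descriptive category."""
    if grade not in SIX_TO_FOUR:
        raise ValueError("murmur grade must be an integer from 1 to 6")
    return SIX_TO_FOUR[grade]

print(describe_murmur(3))  # moderate
```

The study's finding is precisely that such a many-to-one collapse loses no clinical information for grading stenosis severity in these dogs.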
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1993-01-01
A new numerical framework for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods--i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to avoid several key limitations to the above traditional methods. An explicit model scheme for solving a simple 1-D unsteady convection-diffusion equation is constructed and used to illuminate major differences between the current method and those mentioned above. Unexpectedly, its amplification factors for the pure convection and pure diffusion cases are identical to those of the Leapfrog and the DuFort-Frankel schemes, respectively. Also, this explicit scheme and its Navier-Stokes extension have the unusual property that their stabilities are limited only by the CFL condition. Moreover, despite the fact that it does not use any flux-limiter or slope-limiter, the Navier-Stokes solver is capable of generating highly accurate shock tube solutions with shock discontinuities being resolved within one mesh interval. An accurate Euler solver also is constructed through another extension. It has many unusual properties, e.g., numerical diffusion at all mesh points can be controlled by a set of local parameters.
Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Sethian, James A.
1997-01-01
Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations is considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.
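The monotone, positive-coefficient idea is easiest to see in one dimension (a sketch, not any of the paper's triangulated-domain schemes): the Godunov-type update for the Eikonal equation |u'(x)| = 1 with zero boundary data, iterated by Gauss-Seidel sweeps, reproduces the distance-to-boundary solution min(x, 1-x):

```python
import numpy as np

# Minimal monotone upwind solver for |u'(x)| = 1 on [0, 1], u(0) = u(1) = 0.
# Each update takes the smaller neighbour value plus the mesh width h,
# which is the 1-D instance of a positive-coefficient (monotone) scheme.

n = 101
h = 1.0 / (n - 1)
u = np.full(n, 1e9)          # "infinity" initial guess
u[0] = u[-1] = 0.0           # boundary data

for _ in range(2 * n):       # forward/backward sweeps to the fixed point
    for i in list(range(1, n - 1)) + list(range(n - 2, 0, -1)):
        u[i] = min(u[i], min(u[i - 1], u[i + 1]) + h)

x = np.linspace(0.0, 1.0, n)
exact = np.minimum(x, 1.0 - x)
print(np.max(np.abs(u - exact)))
```

Monotonicity of the update (each new value is a nondecreasing function of its neighbours) is what yields the discrete maximum principle discussed in the abstract; the paper's contribution is extending such constructions to unstructured triangulations.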
NASA Astrophysics Data System (ADS)
Liu, Yin; Zhang, Wei
2016-12-01
This study develops a proper way to incorporate Atmospheric Infrared Sounder (AIRS) ozone data into the bogus data assimilation (BDA) initialization scheme for improving hurricane prediction. First, the observation operator at the model levels with the highest correlation coefficients is established to assimilate AIRS ozone data, based on the correlation between total column ozone and potential vorticity (PV) from the 400 to 50 hPa levels. Second, AIRS ozone data act as an augmentation to a BDA procedure using a four-dimensional variational (4D-Var) data assimilation system. Case studies of several hurricanes are performed to demonstrate the effectiveness of the bogus and ozone data assimilation (BODA) scheme. The statistical result indicates that assimilating AIRS ozone data at 4, 5, or 6 model levels can produce a significant improvement in hurricane track and intensity prediction, with reasonable computation time for the hurricane initialization. Moreover, a detailed analysis of how the BODA scheme affects hurricane prediction is conducted for Hurricane Earl (2010). It is found that the new scheme developed in this study generates significant adjustments in the initial conditions (ICs) from the lower levels to the upper levels, compared with the BDA scheme. With the BODA scheme, hurricane development is found to be much more sensitive to the number of ozone data assimilation levels. In particular, the experiment with the assimilation of AIRS ozone data at a proper number of model levels shows great capability in reproducing the intensity and intensity changes of Hurricane Earl, as well as improving the track prediction. These results suggest that AIRS ozone data convey valuable meteorological information in the upper troposphere, which can be assimilated into a numerical model to improve hurricane initialization when the low-level bogus data are included.
Chao, Eunice; Krewski, Daniel
2008-12-01
This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.
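Inter-rater concordance of the kind evaluated here is commonly quantified with Cohen's kappa, which corrects observed agreement for agreement expected by chance (a generic sketch with invented ratings; the paper's own concordance statistic and data are not reproduced):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical assignments."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Invented concern-level ratings (1-4) for 20 hypothetical reference GM foods
rater_a = [1, 1, 2, 2, 3, 3, 4, 4, 1, 2, 3, 4, 1, 2, 3, 4, 2, 3, 1, 4]
rater_b = [1, 1, 2, 3, 3, 3, 4, 4, 1, 2, 3, 4, 1, 2, 2, 4, 2, 3, 1, 4]
print(round(cohens_kappa(rater_a, rater_b), 3))
```

With 18 of 20 agreements and four equally represented categories, kappa here is (0.9 - 0.25) / 0.75, about 0.867, which would conventionally be read as very good agreement.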
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1972-01-01
A set of equations which transform position and angular orientation of the centroid of the payload platform of a six-degree-of-freedom motion simulator into extensions of the simulator's actuators has been derived and is based on a geometrical representation of the system. An iterative scheme, Newton-Raphson's method, has been successfully used in a real time environment in the calculation of the position and angular orientation of the centroid of the payload platform when the magnitude of the actuator extensions is known. Sufficient accuracy is obtained by using only one Newton-Raphson iteration per integration step of the real time environment.
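A planar two-actuator analogue of the Newton-Raphson step can be sketched as follows (illustrative geometry, not the simulator's actual six-degree-of-freedom system): the residuals compare squared point-to-anchor distances with the measured actuator lengths, and the Jacobian is linear in the unknown position.

```python
import numpy as np

# Two actuators anchored at known base points reach a platform point p.
# Given measured lengths L, recover p by Newton-Raphson on the residuals
#   f_i(p) = |p - b_i|^2 - L_i^2,   Jacobian rows: 2 (p - b_i).

B = np.array([[0.0, 0.0], [4.0, 0.0]])     # actuator base anchors

def lengths(p):
    return np.linalg.norm(p - B, axis=1)

def solve_position(L, p0, iters=10):
    p = np.asarray(p0, float)
    for _ in range(iters):
        f = np.sum((p - B)**2, axis=1) - L**2   # residuals
        J = 2.0 * (p - B)                       # Jacobian
        p = p - np.linalg.solve(J, f)           # Newton step
    return p

p_true = np.array([1.0, 2.0])
L = lengths(p_true)
p_est = solve_position(L, p0=[1.5, 1.5])
print(p_est)
```

As the abstract notes for the real system, in a real-time loop a single Newton iteration per integration step can give sufficient accuracy, because the previous frame's solution is an excellent starting guess.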
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Rivera-Gómez, M. Abdelaly; Díaz-González, Lorena; Pandarinath, Kailasa; Amezcua-Valdez, Alejandra; Rosales-Rivera, Mauricio; Verma, Sanjeet K.; Quiroz-Ruiz, Alfredo; Armstrong-Altrin, John S.
2017-05-01
A new multidimensional scheme consistent with the International Union of Geological Sciences (IUGS) is proposed for the classification of igneous rocks in terms of four magma types: ultrabasic, basic, intermediate, and acid. Our procedure is based on an extensive database of major element composition of a total of 33,868 relatively fresh rock samples having a multinormal distribution (initial database with 37,215 samples). Multinormality of the database in terms of log-ratios of samples was ascertained by a new computer program, DOMuDaF, in which the discordancy test was applied at the 99.9% confidence level. Isometric log-ratio (ilr) transformation was used to provide overall percent correct classification of 88.7%, 75.8%, 88.0%, and 80.9% for ultrabasic, basic, intermediate, and acid rocks, respectively. Given the known mathematical and uncertainty propagation properties, this transformation could be adopted for routine applications. The incorrect classification was mainly for the "neighbour" magma types, e.g., basic for ultrabasic and vice versa. Some of these misclassifications do not have any effect on multidimensional tectonic discrimination. For an efficient application of this multidimensional scheme, a new computer program MagClaMSys_ilr (MagClaMSys-Magma Classification Major-element based System) was written, which is available for on-line processing on http://tlaloc.ier.unam.mx/index.html. This classification scheme was tested with newly compiled data for relatively fresh Neogene igneous rocks and was found to be consistent with the conventional IUGS procedure. The new scheme was successfully applied to inter-laboratory data for three geochemical reference materials (basalts JB-1 and JB-1a, and andesite JA-3) from Japan and showed that the inferred magma types are consistent with the rock name (basic for basalts JB-1 and JB-1a and intermediate for andesite JA-3).
The scheme was also successfully applied to five case studies of older Archaean to Mesozoic igneous rocks. Similar or more reliable results were obtained from existing tectonomagmatic discrimination diagrams when used in conjunction with the new computer program as compared to the IUGS scheme. The application to three case studies of igneous provenance of sedimentary rocks was demonstrated as a novel approach. Finally, we show that the new scheme is more robust for post-emplacement compositional changes than the conventional IUGS procedure.
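The isometric log-ratio transform at the heart of the scheme can be sketched directly (a generic ilr in the standard Helmert-type basis; the sample composition below is invented for illustration). It maps a D-part composition to unconstrained (D-1)-dimensional real space, where multinormality tests and linear discrimination are statistically valid:

```python
import numpy as np

# Isometric log-ratio (ilr) transform in the standard Helmert-type basis:
#   z_j = sqrt(j / (j + 1)) * ln( g(x_1..x_j) / x_{j+1} ),  j = 1..D-1,
# where g(.) is the geometric mean of the first j parts.

def ilr(x):
    x = np.asarray(x, float)
    if np.any(x <= 0):
        raise ValueError("compositional parts must be strictly positive")
    logx = np.log(x)
    D = len(x)
    z = np.empty(D - 1)
    for j in range(1, D):
        gmean_log = logx[:j].mean()   # log of geometric mean of the first j parts
        z[j - 1] = np.sqrt(j / (j + 1)) * (gmean_log - logx[j])
    return z

# A made-up 4-part major-element composition (wt%), for illustration only
print(ilr([50.0, 15.0, 10.0, 25.0]))
```

Because the transform depends only on ratios of parts, it is invariant under closure (rescaling the composition to any constant sum), which is the key property for compositional data such as major-element oxides.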
Discrete diffusion Lyman α radiative transfer
NASA Astrophysics Data System (ADS)
Smith, Aaron; Tsang, Benny T.-H.; Bromm, Volker; Milosavljević, Miloš
2018-06-01
Due to its accuracy and generality, Monte Carlo radiative transfer (MCRT) has emerged as the prevalent method for Lyα radiative transfer in arbitrary geometries. The standard MCRT encounters a significant efficiency barrier in the high optical depth, diffusion regime. Multiple acceleration schemes have been developed to improve the efficiency of MCRT but the noise from photon packet discretization remains a challenge. The discrete diffusion Monte Carlo (DDMC) scheme has been successfully applied in state-of-the-art radiation hydrodynamics (RHD) simulations. Still, the established framework is not optimal for resonant line transfer. Inspired by the DDMC paradigm, we present a novel extension to resonant DDMC (rDDMC) in which diffusion in space and frequency are treated on equal footing. We explore the robustness of our new method and demonstrate a level of performance that justifies incorporating the method into existing Lyα codes. We present computational speedups of ~10^2-10^6 relative to contemporary MCRT implementations with schemes that skip scattering in the core of the line profile. This is because the rDDMC runtime scales with the spatial and frequency resolution rather than the number of scatterings; the latter is typically ∝ τ0 for static media, or ∝ (aτ0)^(2/3) with core-skipping. We anticipate new frontiers in which on-the-fly Lyα radiative transfer calculations are feasible in 3D RHD. More generally, rDDMC is transferable to any computationally demanding problem amenable to a Fokker-Planck approximation of frequency redistribution.
NASA Astrophysics Data System (ADS)
Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl
2018-06-01
The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
Optical Network Virtualisation Using Multitechnology Monitoring and SDN-Enabled Optical Transceiver
NASA Astrophysics Data System (ADS)
Ou, Yanni; Davis, Matthew; Aguado, Alejandro; Meng, Fanchao; Nejabati, Reza; Simeonidou, Dimitra
2018-05-01
We introduce real-time multi-technology transport-layer monitoring to facilitate the coordinated virtualisation of optical and Ethernet networks supported by optical virtualise-able transceivers (V-BVT). A monitoring and network-resource configuration scheme is proposed that includes hardware monitoring in both the Ethernet and optical layers. The scheme depicts the data and control interactions among multiple network layers in a software-defined networking (SDN) context, as well as the application that analyses the monitored data obtained from the database. We also present a re-configuration algorithm to adaptively modify the composition of virtual optical networks based on two criteria. The proposed monitoring scheme is experimentally demonstrated with OpenFlow (OF) extensions for a holistic (re-)configuration across both layers in Ethernet switches and V-BVTs.
Multidimensional FEM-FCT schemes for arbitrary time stepping
NASA Astrophysics Data System (ADS)
Kuzmin, D.; Möller, M.; Turek, S.
2003-05-01
The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
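The low-order operator construction by "elimination of negative off-diagonal entries" can be sketched as discrete upwinding (a generic sketch; the matrix entries below are invented). Symmetric artificial diffusion D with zero row sums is added to the transport operator K, so that L = K + D has only nonnegative off-diagonal entries while conservation is preserved:

```python
import numpy as np

# Discrete upwinding: d_ij = max(0, -k_ij, -k_ji) for i != j,
# d_ii = -sum_{j != i} d_ij, and the low-order operator is L = K + D.

def low_order_operator(K):
    K = np.asarray(K, float)
    D = np.maximum(0.0, np.maximum(-K, -K.T))  # symmetric artificial diffusion
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))        # zero row sums: conservative
    return K + D

K = np.array([[ 1.0, -0.5,  0.2],
              [ 0.3,  0.5, -0.8],
              [-0.1,  0.4,  0.9]])
L = low_order_operator(K)
print(L)
```

In the FEM-FCT algorithm the flux limiter then blends this nonoscillatory low-order operator with the accurate high-order discretization, adding back as much of the removed diffusion as the positivity constraints allow.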
Anetoh, Maureen Ugonwa; Jibuaku, Chiamaka Henrietta; Nduka, Sunday Odunke; Uzodinma, Samuel Uchenna
2017-01-01
Tertiary Institutions' Social Health Insurance Programme (TISHIP) is an arm of the National Health Insurance Scheme (NHIS), which provides quality healthcare to students in Nigerian higher institutions. The success of this scheme depends on the students' knowledge and awareness of its existence as well as the level of its implementation by healthcare providers. This study was therefore designed to assess students' knowledge of and attitude towards TISHIP and its level of implementation among health workers in Nnamdi Azikiwe University Medical Centre. Using a stratified random sampling technique, 420 undergraduate students of Nnamdi Azikiwe University, Awka were assessed on their level of awareness and general assessment of TISHIP through an adapted and validated questionnaire instrument. The level of implementation of the scheme was then assessed among 50 randomly selected staff of the University Medical Centre. Data collected were analyzed using Statistical Package for Social Sciences (SPSS) version 20 software. Whereas the students in general showed a high level of TISHIP awareness, more than half of them (56.3%) had never benefited from the scheme, with 52.8% expressing dissatisfaction with the quality of care offered under the scheme. However, an overwhelming number of the students (87.9%) opined that the scheme should continue. On the other hand, the University Medical Centre staff responses showed a satisfactory level of scheme implementation. The study found satisfactory TISHIP awareness but poor attitudes among Nnamdi Azikiwe University students. Furthermore, the University Medical Centre health workers showed a strong commitment to the objectives of the scheme.
NASA Astrophysics Data System (ADS)
Etilé, A.; Verney, D.; Arsenyev, N. N.; Bettane, J.; Borzov, I. N.; Cheikh Mhamed, M.; Cuong, P. V.; Delafosse, C.; Didierjean, F.; Gaulard, C.; Van Giai, Nguyen; Goasduff, A.; Ibrahim, F.; Kolos, K.; Lau, C.; Niikura, M.; Roccia, S.; Severyukhin, A. P.; Testov, D.; Tusseau-Nenez, S.; Voronov, V. V.
2015-06-01
The β decay of 82Ge was re-investigated using the newly commissioned tape station BEDO at the electron-driven ISOL (isotope separation on line) facility ALTO operated by the Institut de Physique Nucléaire, Orsay. The original motivation of this work was focused on the sudden occurrence in the light N =49 odd-odd isotonic chain of a large number of J ≤1 states (positive or negative parity) in 80Ga by providing a reliable intermediate example, viz., 82As. The extension of the 82As level scheme towards higher energies from the present work has revealed three potential 1+ states above the already known one at 1092 keV. In addition our data allow ruling out the hypothesis that the 843 keV level could be a 1+ state. A detailed analysis of the level scheme using both an empirical core-particle coupling model and a fully microscopic treatment within a Skyrme-QRPA (quasiparticle random-phase approximation) approach using the finite-rank separable approximation was performed. From this analysis two conclusions can be drawn: (i) the presence of a large number of low-lying low-spin negative parity states is due to intruder states stemming from above the N =50 shell closure, and (ii) the sudden increase, from 82As to 80Ga, of the number of low-lying 1+ states and the corresponding Gamow-Teller fragmentation are naturally reproduced by the inclusion of tensor correlations and couplings to 2p-2h excitations.
Chem/bio sensing with non-classical light and integrated photonics.
Haas, J; Schwartz, M; Rengstl, U; Jetter, M; Michler, P; Mizaikoff, B
2018-01-29
Modern quantum technology currently experiences extensive advances in applicability in communications, cryptography, computing, metrology and lithography. Harnessing this technology platform for chem/bio sensing scenarios is an appealing opportunity enabling ultra-sensitive detection schemes. This is further facilitated by the progress in fabrication, miniaturization and integration of visible and infrared quantum photonics. In particular, the combination of efficient single-photon sources with waveguiding/sensing structures serving as active optical transducers, as well as advanced detector materials, promises integrated quantum photonic chem/bio sensors. Besides the intrinsic molecular selectivity and non-destructive character of visible and infrared light based sensing schemes, chem/bio sensors taking advantage of non-classical light sources promise sensitivities beyond the standard quantum limit. In the present review, recent achievements towards on-chip chem/bio quantum photonic sensing platforms based on N00N states are discussed along with appropriate recognition chemistries, facilitating the detection of relevant (bio)analytes at ultra-trace concentration levels. After evaluating recent developments in this field, a perspective for a potentially promising sensor testbed is discussed for reaching integrated quantum sensing with two fiber-coupled GaAs chips together with semiconductor quantum dots serving as single-photon sources.
Adapting Active Shape Models for 3D segmentation of tubular structures in medical images.
de Bruijne, Marleen; van Ginneken, Bram; Viergever, Max A; Niessen, Wiro J
2003-07-01
Active Shape Models (ASM) have proven to be an effective approach for image segmentation. In some applications, however, the linear model of gray level appearance around a contour that is used in ASM is not sufficient for accurate boundary localization. Furthermore, the statistical shape model may be too restricted if the training set is limited. This paper describes modifications to both the shape and the appearance model of the original ASM formulation. Shape model flexibility is increased, for tubular objects, by modeling the axis deformation independent of the cross-sectional deformation, and by adding supplementary cylindrical deformation modes. Furthermore, a novel appearance modeling scheme that effectively deals with a highly varying background is developed. In contrast with the conventional ASM approach, the new appearance model is trained on both boundary and non-boundary points, and the probability that a given point belongs to the boundary is estimated non-parametrically. The methods are evaluated on the complex task of segmenting thrombus in abdominal aortic aneurysms (AAA). Shape approximation errors were successfully reduced using the two shape model extensions. Segmentation using the new appearance model significantly outperformed the original ASM scheme; average volume errors are 5.1% and 45% respectively.
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2017-01-01
In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection; the first three are piecewise linear, and the last two piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells, and can be considered a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the ongoing "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counterintuitive since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number of degrees of freedom n per cell, in spite of the different piecewise polynomial degrees, share the same sets of eigenvalues and thus have the same stability and accuracy. Moreover, these schemes are accurate to order 2n-1, which is higher than the expected order of n.
On Space-Time Inversion Invariance and its Relation to Non-Dissipatedness of a CESE Core Scheme
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2006-01-01
The core motivating ideas of the space-time CESE method are clearly presented and critically analyzed. It is explained why these ideas result in all the simplifying and enabling features of the CESE method. A thorough discussion of the a scheme, a two-level non-dissipative CESE solver of a simple advection equation with two independent mesh variables and two equations per mesh point is also presented. It is shown that the scheme possesses some rather intriguing properties such as: (i) its two independent mesh variables separately satisfy two decoupled three-level leapfrog schemes and (ii) it shares with the leapfrog scheme the same amplification factors, even though the a scheme and the leapfrog scheme have completely different origins and structures. It is also explained why the leapfrog scheme is not as robust as the a scheme. The amplification factors/matrices of several non-dissipative schemes are carefully studied and the key properties that contribute to their non-dissipatedness are clearly spelled out. Finally we define and establish space-time inversion (STI) invariance for several non-dissipative schemes and show that their non-dissipatedness is a result of their STI invariance.
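The claim that the a scheme shares the leapfrog scheme's amplification factors can be checked against the standard textbook analysis: for the leapfrog discretization of the advection equation, the factors g solve g² + 2iν sin(θ)g − 1 = 0 (ν the Courant number, θ the phase angle), and both roots lie on the unit circle whenever |ν| ≤ 1. A small numerical check (generic von Neumann analysis, not the paper's derivation):

```python
import numpy as np

def leapfrog_amplification(nu, theta):
    """Roots of g**2 + 2j*nu*sin(theta)*g - 1 = 0, the two amplification
    factors of the leapfrog scheme for u_t + a u_x = 0."""
    s = nu * np.sin(theta)
    disc = np.sqrt(1.0 - s * s + 0j)
    return -1j * s + disc, -1j * s - disc

# For |nu| <= 1 both roots have unit modulus: the scheme is non-dissipative.
for nu in (0.2, 0.5, 0.9):
    for theta in np.linspace(0.0, np.pi, 50):
        g1, g2 = leapfrog_amplification(nu, theta)
        assert abs(abs(g1) - 1) < 1e-12 and abs(abs(g2) - 1) < 1e-12
```

For |ν| > 1 the discriminant turns negative for some θ and one root leaves the unit circle, which is the usual instability threshold.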
Lahariya, Chandrakant; Mishra, Ashok; Nandan, Deoki; Gautam, Praveen; Gupta, Sanjay
2011-01-01
Conditional Cash Transfer (CCT) schemes have shown largely favorable changes in health-seeking behavior. This evaluation study assesses the process and performance of an Additional Cash Incentive (ACI) scheme within an ongoing CCT scheme in India, and documents lessons. A controlled before-and-during design study was conducted in Madhya Pradesh state of India, from August 2007 to March 2008, with an increase in institutional deliveries as the primary outcome. In-depth interviews, focus group discussions and household surveys were done for data collection. Lack of awareness about the ACI scheme among the general population and beneficiaries, a cumbersome cash disbursement procedure, intricate eligibility criteria, extensive paperwork, and insufficient focus on community involvement were the major implementation challenges. There were anecdotal reports of political interference and possible scope for corruption. At the end of the implementation period, the overall rate of institutional deliveries had increased in both target and control populations; however, the differences were not statistically significant. No cause-and-effect association could be proven by this study. Poor planning and coordination, and lack of public awareness about the scheme, resulted in low utilization. Thus, proper IEC and training, a detailed implementation plan, orientation training for implementers, sufficient budgetary allocation, and community participation should be an integral part of successful implementation of any such scheme. The lessons learned from this evaluation study may be useful in any developing-country setting and may be utilized for planning and implementation of any ACI scheme in future.
An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows
NASA Astrophysics Data System (ADS)
Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard
2018-06-01
In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block diagonal structure, with mass-species and electronic-vibrational energy being completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be substituted by the same scheme operating on the momentum-energy subsystem matrix and repeated application of a scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative test cases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
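The counter-based dynamic load balancing idea is simple to sketch: workers repeatedly perform an atomic fetch-and-increment on a shared task counter, so faster workers naturally absorb more contingency cases. A minimal thread-based sketch (function names and the `analyze` callback are illustrative, not from the paper):

```python
import threading

def run_contingencies(n_cases, n_workers, analyze):
    """Counter-based dynamic load balancing: each worker atomically grabs
    the next unprocessed contingency index until none remain."""
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                      # atomic fetch-and-increment
                i = counter["next"]
                if i >= n_cases:
                    return
                counter["next"] = i + 1
            results[i] = analyze(i)         # uneven cost per case is fine

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Unlike static partitioning, no worker idles while long-running contingencies pile up elsewhere; the counter is the only shared state.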
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
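The RBF mesh movement idea above can be sketched in a few lines: solve a small interpolation system for weights at the (reduced) surface points, then evaluate the resulting smooth displacement field at every volume node. This is a generic sketch using Wendland C2 basis functions, not the thesis's implementation; all names are illustrative.

```python
import numpy as np

def rbf_mesh_deform(surf_pts, surf_disp, vol_pts, r0=1.0):
    """Propagate surface displacements into the volume mesh with
    Wendland C2 radial basis functions of support radius r0."""
    def phi(r):
        x = np.clip(r / r0, 0.0, 1.0)
        return (1 - x) ** 4 * (4 * x + 1)    # compactly supported, C2

    d = np.linalg.norm(surf_pts[:, None, :] - surf_pts[None, :, :], axis=-1)
    M = phi(d)                               # interpolation matrix
    w = np.linalg.solve(M, surf_disp)        # one weight column per axis

    d = np.linalg.norm(vol_pts[:, None, :] - surf_pts[None, :, :], axis=-1)
    return phi(d) @ w                        # displacements of volume nodes
```

Using a reduced surface point set shrinks `M` (the main cost), which is exactly where the parameterization error discussed above comes from: surface points left out of the system are only approximately recovered.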
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
There has been some recent work to develop two and three-dimensional alternating direction implicit (ADI) FDTD schemes. These ADI schemes are based upon the original ADI concept developed by Peaceman and Rachford and Douglas and Gunn, which is a popular solution method in Computational Fluid Dynamics (CFD). These ADI schemes work well and they require solution of a tridiagonal system of equations. A new approach proposed in this paper applies a LU/AF approximate factorization technique from CFD to Maxwell's equations in flux conservative form for one space dimension. The result is a scheme that will retain its unconditional stability in three space dimensions, but does not require the solution of tridiagonal systems. The theory for this new algorithm is outlined in a one-dimensional context for clarity. An extension to two- and three-dimensional cases is discussed. Results of Fourier analysis are discussed for both stability and dispersion/damping properties of the algorithm. Results are presented for a one-dimensional model problem, and the explicit FDTD algorithm is chosen as a convenient reference for comparison.
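The tridiagonal solves that the LU/AF approach avoids are typically performed with the Thomas algorithm, one per ADI sweep and grid line. A generic sketch of that solve (standard textbook algorithm, not the paper's code):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main,
    c = super-diagonal, d = right-hand side. a[0] and c[-1] are unused.
    Assumes the system is diagonally dominant, as in stable ADI sweeps."""
    n = len(d)
    cp = [0.0] * n            # modified super-diagonal
    dp = [0.0] * n            # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The LU/AF factorization replaces these recursive sweeps with explicit matrix applications, which is what removes the sequential tridiagonal dependency in 3D.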
NASA Astrophysics Data System (ADS)
Heo, Jino; Kang, Min-Sung; Hong, Chang-Ho; Yang, Hyeon; Choi, Seong-Gon
2017-01-01
We propose quantum information processing schemes based on cavity quantum electrodynamics (QED) for quantum communication. First, to generate entangled states (Bell and Greenberger-Horne-Zeilinger [GHZ] states) between flying photons and three-level atoms inside optical cavities, we utilize a controlled phase flip (CPF) gate that can be implemented via cavity QED. Subsequently, we present an entanglement swapping scheme that can be realized using single-qubit measurements and CPF gates via optical cavities. These schemes can be directly applied to construct an entanglement channel for a communication system between two users. Consequently, it is possible for the trust center, which holds the quantum nodes, to establish the linked channel (entanglement channel) between the two separate long-distance users via the distribution of Bell states and entanglement swapping. Furthermore, in our schemes, the main physical component is the CPF gate between the photons and the three-level atoms in cavity QED, which is feasible in practice. Thus, our schemes can be experimentally realized with current technology.
A suggested color scheme for reducing perception-related accidents on construction work sites.
Yi, June-seong; Kim, Yong-woo; Kim, Ki-aeng; Koo, Bonsang
2012-09-01
Changes in workforce demographics have led to the need for more sophisticated approaches to addressing the safety requirements of the construction industry. Despite extensive research in other industry domains, where perception-related accidents have been effectively diminished by implementing appropriate color schemes, the construction industry has been passive in exploring their impact. The research demonstrated that the use of appropriate color schemes could improve the actions and psychology of workers on site, thereby increasing their perceptions of potentially dangerous situations. As a preliminary study, the objects selected by rigorous analysis of accident reports were workwear, safety net, gondola, scaffolding, and safety passage. The colors adopted on site for temporary facilities were drawn from existing theoretical and empirical research that suggests the use of certain colors and their combinations to improve visibility and conspicuity while minimizing work fatigue. The color schemes were also tested and confirmed through two workshops with workers and managers currently involved in actual projects. The impacts of the color schemes suggested in this paper are summarized as follows. First, the color schemes improve the conspicuity of facilities relative to other on-site components, enabling workers to quickly discern and orient themselves in their work environment. Secondly, the color schemes have been selected to minimize the visual work fatigue and monotony that can potentially increase accidents. Copyright © 2011 Elsevier Ltd. All rights reserved.
Zhou, Jian; Wang, Lusheng; Wang, Weidong; Zhou, Qingfeng
2017-01-01
In future scenarios of heterogeneous and dense networks, randomly-deployed small star networks (SSNs) become a key paradigm, whose system performance is restricted to inter-SSN interference and requires an efficient resource allocation scheme for interference coordination. Traditional resource allocation schemes do not specifically focus on this paradigm and are usually too time consuming in dense networks. In this article, a very efficient graph-based scheme is proposed, which applies the maximal independent set (MIS) concept in graph theory to help divide SSNs into almost interference-free groups. We first construct an interference graph for the system based on a derived distance threshold indicating for any pair of SSNs whether there is intolerable inter-SSN interference or not. Then, SSNs are divided into MISs, and the same resource can be repetitively used by all the SSNs in each MIS. Empirical parameters and equations are set in the scheme to guarantee high performance. Finally, extensive scenarios both dense and nondense are randomly generated and simulated to demonstrate the performance of our scheme, indicating that it outperforms the classical max K-cut-based scheme in terms of system capacity, utility and especially time cost. Its achieved system capacity, utility and fairness can be close to the near-optimal strategy obtained by a time-consuming simulated annealing search. PMID:29113109
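The core of the scheme above — dividing SSNs into maximal independent sets of the interference graph so that each group can reuse the same resource — can be sketched with a simple greedy MIS extraction. This is a generic illustration of the MIS-grouping idea; the paper's empirical parameters and distance-threshold derivation are not reproduced.

```python
def mis_groups(adj):
    """Partition the nodes of an interference graph into maximal
    independent sets; all nodes in one group can share a resource.
    adj: dict mapping node -> set of interfering neighbors."""
    remaining = set(adj)
    groups = []
    while remaining:
        group = set()
        # Greedy heuristic: consider nodes with fewest remaining interferers first.
        for v in sorted(remaining, key=lambda v: len(adj[v] & remaining)):
            if not (adj[v] & group):     # v interferes with no one already chosen
                group.add(v)
        groups.append(group)             # group is maximal within `remaining`
        remaining -= group
    return groups
```

Each extracted set is interference-free by construction, so a single resource block can be repeated across all of its SSNs; the number of groups bounds the number of distinct resources needed.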
DOT National Transportation Integrated Search
2015-02-01
Although the freeway travel time data has been validated extensively in recent years, the quality of arterial travel time data is not well known. This project presents a comprehensive validation scheme for arterial travel time data based on GPS...
Landmark-based elastic registration using approximating thin-plate splines.
Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H
2001-06-01
We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
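For isotropic landmark errors, the approximating extension amounts to adding a regularization term λ to the diagonal of the thin-plate spline kernel matrix: λ = 0 recovers exact interpolation, λ > 0 trades fidelity for smoothness. A minimal 2D sketch of that system (one displacement component; symbols illustrative, not the paper's notation):

```python
import numpy as np

def tps_fit(pts, vals, lam=0.0):
    """Thin-plate spline in 2D: lam = 0 interpolates the landmarks,
    lam > 0 approximates them (tolerating localization error)."""
    n = len(pts)
    r = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    K = np.where(r > 0, r**2 * np.log(r + (r == 0)), 0.0)   # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), pts])                   # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + lam * np.eye(n)                         # regularized kernel
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]          # kernel weights w, affine coefficients a

def tps_eval(pts, w, a, x):
    r = np.linalg.norm(x[:, None] - pts[None, :], axis=-1)
    K = np.where(r > 0, r**2 * np.log(r + (r == 0)), 0.0)
    return K @ w + a[0] + x @ a[1:]
```

The affine transformation falls out as the special case w = 0, consistent with the abstract's remark that optimal affine transformations are contained in the scheme.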
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-09-07
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter (CF) methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the behavior of CFs under target motion and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional time cost. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably compared with top-ranked trackers.
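The correlation-filter core underlying KCF-style trackers can be illustrated with a single-channel MOSSE-style filter: train a filter in the Fourier domain so that correlating it with the template produces a sharp Gaussian response, then locate the target in a new patch at the response peak. A minimal sketch (not the paper's kernelized or fused scheme; names illustrative):

```python
import numpy as np

def train_filter(f, g, lam=1e-2):
    """MOSSE-style correlation filter: find H so that correlating the
    template f with H reproduces the desired response g.
    lam regularizes the per-frequency division."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # conjugate filter

def respond(z, Hc):
    """Correlation response of a search patch z; its peak locates the target."""
    return np.real(np.fft.ifft2(np.fft.fft2(z) * Hc))
```

All operations are element-wise FFT products, which is why CF trackers run in real time; KCF generalizes this with kernels and circulant-structure tricks.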
ATTDES: An Expert System for Satellite Attitude Determination and Control. 2
NASA Technical Reports Server (NTRS)
Mackison, Donald L.; Gifford, Kevin
1996-01-01
The design, analysis, and flight operations of satellite attitude determination and attitude control systems require extensive mathematical formulations, optimization studies, and computer simulation. This is best done by an analyst with extensive education and experience. The development of programs such as ATTDES permits the use of advanced techniques by those with less experience. Typical tasks include mission analysis to select stabilization and damping schemes, attitude determination sensors and algorithms, and control system designs to meet program requirements. ATTDES is a system that supports all of these activities, including high fidelity orbit environment models that can be used for preliminary analysis, parameter selection, stabilization schemes, the development of estimators, covariance analyses, and optimization, and it can support ongoing on-orbit activities. The modification of existing simulations to model new configurations for these purposes can be an expensive, time-consuming activity that becomes a pacing item in the development and operation of such new systems. The use of an integrated tool such as ATTDES significantly reduces the effort and time required for these tasks.
Sampling design for long-term regional trends in marine rocky intertidal communities
Irvine, Gail V.; Shelley, Alice
2013-01-01
Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.
Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F
2017-10-01
One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M T1 = 68.8%; M T2 = 73.9%), and were poor at the descriptor level (M T1 = 58.5%; M T2 = 64.1%). Mean criterion referenced validity scores at the system level were acceptable (M T1 = 73.9%; M T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M T1 = 67.6%; M T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factors classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ancestral inference from haplotypes and mutations.
Griffiths, Robert C; Tavaré, Simon
2018-04-25
We consider inference about the history of a sample of DNA sequences, conditional upon the haplotype counts and the number of segregating sites observed at the present time. After deriving some theoretical results in the coalescent setting, we implement rejection sampling and importance sampling schemes to perform the inference. The importance sampling scheme addresses an extension of the Ewens Sampling Formula for a configuration of haplotypes and the number of segregating sites in the sample. The implementations include both constant and variable population size models. The methods are illustrated by two human Y chromosome datasets. Copyright © 2018. Published by Elsevier Inc.
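The flavor of the rejection sampling scheme can be conveyed with the simplest version of the problem: simulate constant-size coalescent genealogies, superimpose mutations at rate θ/2 per unit branch length, and retain only trees whose segregating-site count matches the observation. This is a generic illustration conditioning on the number of segregating sites only, not the paper's joint haplotype-and-sites scheme; all names are illustrative.

```python
import math
import random

def _poisson(rng, mean):
    """Knuth's method; adequate for the small means used here."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def sample_tmrca_given_S(n, theta, s_obs, n_accept=200, seed=1):
    """Rejection sampling: simulate n-coalescent trees and keep the TMRCA
    of trees whose simulated segregating-site count equals s_obs."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        tmrca, total_len = 0.0, 0.0
        for k in range(n, 1, -1):
            t = rng.expovariate(k * (k - 1) / 2.0)  # time while k lineages remain
            tmrca += t
            total_len += k * t
        # Number of segregating sites is Poisson(theta * L / 2) given tree length L.
        if _poisson(rng, theta * total_len / 2.0) == s_obs:
            accepted.append(tmrca)
    return accepted
```

The importance sampling scheme in the paper replaces this accept/reject step with weighted proposals, which matters once the conditioning event (exact haplotype counts and site counts) becomes too rare to hit by rejection.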
A Regev-type fully homomorphic encryption scheme using modulus switching.
Chen, Zhigang; Wang, Jian; Chen, Liqun; Song, Xinxia
2014-01-01
A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using the modulus switching technique to design and implement a FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge, this step has drawn very little attention in the existing FHE research literature. The contributions of this paper are twofold. On one hand, we propose a function for the lower bound of the dimension value in the modulus switching technique, depending on the specific LWE security level. On the other hand, as a case study, we modify the Brakerski FHE scheme (in Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our result shows that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level.
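The per-coefficient rounding at the heart of modulus switching can be sketched independently of the full scheme: scale each ciphertext coefficient from modulus q to a smaller p, rounding to the nearest integer with the same parity so the plaintext bit (mod 2) survives while the noise shrinks by roughly p/q. A minimal sketch of just that step, assuming coefficients are kept in a centered range (not a full BGV/Brakerski implementation):

```python
def mod_switch(c, q, p):
    """One coefficient of BGV-style modulus switching: the integer closest
    to (p/q)*c that has the same parity as c, so decryption mod 2 is
    preserved while the noise magnitude scales by ~p/q."""
    target = p * c / q
    approx = round(target)
    if approx % 2 != c % 2:                 # fix parity with a +/-1 step
        approx += 1 if target > approx else -1
    return approx
```

The parity correction costs at most 1 in absolute error, which is why the switched ciphertext carries only a small additive rounding noise on top of the scaled-down original noise.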
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden from extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy.
The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
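The two-step selection logic can be sketched with stand-in similarity metrics: a cheap low-resolution score trims the collection to the augmented subset, and an expensive full-resolution score refines it into the fusion set. The metrics here (downsampled vs. full-resolution MSE) are illustrative stand-ins for the paper's simple vs. full-fledged registration scores.

```python
import numpy as np

def two_stage_select(target, atlases, m_aug, k_fuse):
    """Two-stage atlas selection: a cheap similarity trims the collection
    to m_aug candidates; an expensive similarity picks the k_fuse atlases
    actually used for fusion. Returns (fusion set, augmented subset)."""
    def cheap(a):       # preliminary metric: MSE on 4x-downsampled images
        return -np.mean((a[::4, ::4] - target[::4, ::4]) ** 2)

    def full(a):        # refined metric: full-resolution MSE
        return -np.mean((a - target) ** 2)

    order = sorted(range(len(atlases)), key=lambda i: cheap(atlases[i]),
                   reverse=True)
    augmented = order[:m_aug]                        # preliminary selection
    refined = sorted(augmented, key=lambda i: full(atlases[i]),
                     reverse=True)
    return refined[:k_fuse], augmented
```

The paper's inference model governs how large `m_aug` must be so the truly relevant atlases survive the cheap stage with high probability; here it is simply a parameter.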
NASA Astrophysics Data System (ADS)
Kunisetti, V. Praveen Kumar; Thippiripati, Vinay Kumar
2018-01-01
Open End Winding Induction Motors (OEWIM) are popular for electric vehicle and ship propulsion applications due to the lower DC-link voltage required. These applications demand ripple-free torque. In this article, an enhanced three-level voltage switching state scheme for a direct torque controlled OEWIM drive is implemented to reduce torque and flux ripples. The limitations of conventional Direct Torque Control (DTC) are possible problems during low speeds and starting, operation at variable switching frequency due to hysteresis controllers, and higher torque and flux ripple. The proposed DTC scheme can abate the problems of conventional DTC with an enhanced voltage switching state scheme. Three-level inversion is obtained by operating the inverters with equal DC-link voltages, producing 18 voltage space vectors. These 18 vectors are divided into low and high frequencies of operation based on rotor speed. The hardware results prove the validity of the proposed DTC scheme during steady state and transients. From simulation and experimental results, the proposed DTC scheme gives lower torque and flux ripples in comparison with two-level DTC. The proposed DTC is implemented using a dSPACE DS-1104 control board interfaced with the MATLAB/SIMULINK-RTI model.
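The hysteresis front end that any DTC variant shares — a three-valued torque comparator, a two-valued flux comparator, and the 60° sector of the stator flux vector — can be sketched generically. The paper's 18-vector three-level switching table would consume these three outputs; the table itself is scheme-specific and is not reproduced here.

```python
import math

def dtc_commands(torque_err, flux_err, flux_angle, t_band, f_band):
    """Hysteresis stage of conventional DTC: returns (dT, dF, sector),
    where dT in {-1, 0, 1} demands torque decrease/hold/increase,
    dF in {0, 1} demands flux decrease/increase, and sector in 0..5
    is the 60-degree sector containing the stator flux vector."""
    dT = 1 if torque_err > t_band else (-1 if torque_err < -t_band else 0)
    dF = 1 if flux_err > f_band else 0
    # Sector 0 is centered on angle 0 (spans -30 deg .. +30 deg).
    sector = int(((flux_angle + math.pi / 6) % (2 * math.pi)) // (math.pi / 3))
    return dT, dF, sector
```

Because the comparators run every control cycle with fixed bands, the switching frequency varies with operating point, which is exactly the conventional-DTC limitation the abstract cites.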
The a(3) Scheme--A Fourth-Order Space-Time Flux-Conserving and Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2008-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To initiate a systematic CESE development of high-order schemes, in this paper we provide a thorough discussion of the structure, consistency, stability, phase error, and accuracy of a new 4th-order space-time flux-conserving and neutrally stable CESE solver of a 1D scalar advection equation. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables (the numerical analogues of the dependent variable and its 1st-order and 2nd-order spatial derivatives, respectively) and three equations per mesh point, the new scheme is referred to as the a(3) scheme. Through von Neumann analysis, it is shown that the a(3) scheme is stable if and only if the Courant number is less than 0.5. Moreover, it is established numerically that the a(3) scheme is 4th-order accurate.
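The von Neumann analysis used above checks that every Fourier mode is non-amplifying. The a(3) scheme's 3x3 amplification matrix is not reproduced in the abstract, so the sketch below illustrates the technique on the simpler first-order upwind scheme for the same 1D advection equation, where the classical result is stability for Courant numbers 0 <= c <= 1.

```python
import numpy as np

# Von Neumann stability check, illustrated on the first-order upwind scheme
# u_j^{n+1} = u_j^n - c (u_j^n - u_{j-1}^n). Substituting the Fourier mode
# u_j^n = G^n exp(i j theta) yields the amplification factor G(theta).

def upwind_amplification(c, theta):
    """Amplification factor G(theta) of the upwind scheme at Courant number c."""
    return 1.0 - c * (1.0 - np.exp(-1j * theta))

def is_stable(c, n_modes=1000):
    """The scheme is stable iff |G(theta)| <= 1 for every wavenumber theta."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_modes)
    return bool(np.all(np.abs(upwind_amplification(c, theta)) <= 1.0 + 1e-12))
```

For the a(3) scheme the same procedure is applied to the eigenvalues of its 3x3 amplification matrix, giving the c < 0.5 bound quoted in the abstract.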
A semi-implicit level set method for multiphase flows and fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri; Maitre, Emmanuel
2016-06-01
In this paper we present a novel semi-implicit time discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes must satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.
NASA Astrophysics Data System (ADS)
Habtu, Solomon; Ludi, Eva; Jamin, Jean Yves; Oates, Naomi; Fissahaye Yohannes, Degol
2014-05-01
Practicing various innovations pertinent to irrigated farming at the local field scale is instrumental in increasing productivity and yield for smallholder farmers in Africa. However, the translation of innovations from the local scale to the scale of a jointly operated irrigation scheme is far from trivial. It requires insight into the drivers for adoption of local innovations within the wider farmer communities. Participatory methods are expected not only to improve the acceptance of locally developed innovations within the wider farmer communities, but also to allow an estimation of the extent to which changes will occur within the entire irrigation scheme. On this basis, more realistic scenarios of future water productivity within an irrigation scheme operated by smallholder farmers can be estimated. An initial participatory problem and innovation appraisal was conducted in the Gumselassa small-scale irrigation scheme, Ethiopia, from Feb 27 to March 3, 2012 as part of the EC-funded EAU4FOOD project. The objective was to identify and appraise problems which hinder sustainable water management to enhance production and productivity, and to identify future research strategies. Workshops were conducted at both local (Community of Practices) and regional (Learning Practice Alliance) levels. At the local level, intensive collaboration with farmers using participatory methods produced problem trees, and a "Photo Safari" documented a range of problems that negatively impact productive irrigated farming. A range of participatory methods were also used to identify local innovations. At the regional level, a Learning Platform was established that includes a wide range of stakeholders (technical experts from various government ministries, policy makers, farmers, extension agents, researchers). This stakeholder group also carried out a range of exercises to identify major problems related to irrigated smallholder farming and already-identified innovations.
Both groups identified similar obstacles to productive smallholder irrigation: soil nutrient depletion, salinization, disease and pest problems resulting from inefficient irrigation practices, and infrastructure problems leading to a reduction in the size of the command area and a decrease in reservoir volume. The major causes have been poor irrigation infrastructure, poor on-farm soil and water management, prevalence of various crop pests and diseases, lack of inputs, and reservoir siltation. On-farm participatory research focusing on soil, crop and water management issues, including technical, institutional and managerial aspects, to identify the best-performing innovations while taking care of the environment was recommended. Currently, a range of interlinked activities are implemented at multiple scales, combining participatory and scientific approaches towards innovation development and up-scaling of promising technologies and institutional and managerial approaches from local to regional scales. Keywords: irrigation scheme, productivity, innovation, participatory method, Gumselassa, Ethiopia
Pulse design for multilevel systems by utilizing Lie transforms
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan
2018-03-01
We put forward a scheme to design pulses to manipulate multilevel systems with Lie transforms. A formula for the reverse construction of a control Hamiltonian is given and applied to pulse design in three- and four-level systems as examples. To demonstrate the validity of the scheme, we perform numerical simulations, which show that the population transfers for cascaded three-level and N-type four-level Rydberg atoms can be completed successfully with high fidelities. Therefore, the scheme may benefit quantum information tasks based on multilevel systems.
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
The two popular packet-combining-based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: the PC scheme has better throughput than the APC scheme, but suffers from a higher packet error rate. The wireless channel state changes all the time, and because of this random and time-varying nature, individual application of the SR ARQ, PC, or APC scheme cannot give the desired level of throughput. Better throughput can be achieved if the appropriate transmission scheme is used based on the condition of the channel. Based on this approach, an adaptive packet combining scheme has been proposed. The proposed scheme adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme to achieve better throughput. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC, and APC schemes.
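The adaptation idea — pick the transmission scheme that matches the current channel state — can be sketched as a simple selector. The bit-error-rate thresholds below are hypothetical illustrations, not values from the paper.

```python
# Hedged sketch of channel-adaptive scheme selection: a clean channel uses
# plain SR ARQ retransmission, a moderate channel combines erroneous copies
# (PC), and a bad channel uses aggressive bit-wise combining (APC).
# The BER thresholds are illustrative assumptions.

def select_scheme(estimated_ber):
    """Choose a transmission scheme from an estimated bit error rate."""
    if estimated_ber < 1e-4:      # good channel: retransmission suffices
        return "SR ARQ"
    elif estimated_ber < 1e-2:    # moderate channel: packet combining
        return "PC"
    else:                         # bad channel: aggressive packet combining
        return "APC"
```

In a real system the BER estimate would itself come from the three-state channel model tracked at the receiver.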
VOLATILE ORGANIC COMPOUND EMISSIONS FROM 46 IN-USE PASSENGER CARS
Emissions from automobiles have long been considered a prime source of pollutants involved in smog formation and ozone production. The reactive potential of the species emitted has been studied extensively, and many reactivity schemes have been proposed. Most of the data on the d...
The EU Emissions Trading Scheme: A Challenge to U.S. Sovereignty
2012-02-07
biofuels, and fuel-conserving winglets.51 The technological improvements are not insignificant. The IPCC assumed that advances in aircraft...16, 2012.) 51 Winglets are extensions added to the ends of an aircraft's wings. They disrupt the wingtip vortices created during the production of lift
Two-Carbon Homologation of Ketones to 3-Methyl Unsaturated Aldehydes
USDA-ARS?s Scientific Manuscript database
The usual scheme of two-carbon homologation of ketones to 3-methyl unsaturated aldehydes by Horner-Wadsworth-Emmons condensations with phosphonate esters, such as triethyl-2-phosphonoacetate, involves three steps. The phosphonate condensation step results in extension of the carbon chain by two carb...
Medical image enhancement using resolution synthesis
NASA Astrophysics Data System (ADS)
Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.
2011-03-01
We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution synthesis (RS) interpolation algorithm [1]. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high-quality image scanned at a high dose level. Image enhancement is achieved by predicting the high-quality image through classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image detail. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
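The core regression step — a ridge-regularized linear predictor mapping low-dose features to high-quality targets — can be sketched as below. The per-class assignment and MDL-based model selection from the paper are omitted; this is only the regularized least-squares piece.

```python
import numpy as np

# Ridge-regularized linear predictor: w = (X^T X + lam I)^{-1} X^T y,
# where rows of X are feature vectors extracted from the low-dose image
# and y holds the corresponding high-quality target values.

def ridge_fit(X, y, lam=1.0):
    """Solve the regularized normal equations for the predictor weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_predict(X, w):
    """Apply the learned linear predictor to new feature vectors."""
    return X @ w
```

The regularization parameter `lam` trades off fidelity against robustness of the predictor design, which is the role the abstract assigns to ridge regression.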
Development of Implicit Methods in CFD NASA Ames Research Center 1970's - 1980's
NASA Technical Reports Server (NTRS)
Pulliam, Thomas H.
2010-01-01
The focus here is on the early development (mid 1970's-1980's) at NASA Ames Research Center of implicit methods in Computational Fluid Dynamics (CFD). A class of implicit finite difference schemes of the Beam and Warming approximate factorization type will be addressed. The emphasis will be on the Euler equations. A review of material pertinent to the solution of the Euler equations within the framework of implicit methods will be presented. The eigensystem of the equations will be used extensively in developing a framework for various methods applied to the Euler equations. The development and analysis of various aspects of this class of schemes will be given along with the motivations behind many of the choices. Various acceleration and efficiency modifications such as matrix reduction, diagonalization and flux split schemes will be presented.
NASA Astrophysics Data System (ADS)
Hajarolasvadi, Setare; Elbanna, Ahmed E.
2017-11-01
The finite difference (FD) and spectral boundary integral (SBI) methods have been used extensively to model spontaneously propagating shear cracks in a variety of engineering and geophysical applications. In this paper, we propose a new modelling approach in which these two methods are combined through consistent exchange of boundary tractions and displacements. Benefiting from the flexibility of FD and the efficiency of SBI methods, the proposed hybrid scheme can solve a wide range of problems in a computationally efficient way. We demonstrate the validity of the approach using two examples of dynamic rupture propagation: one in the presence of a low-velocity layer and one in which off-fault plasticity is permitted. We discuss potential uses of the hybrid scheme in earthquake cycle simulations as well as an exact absorbing boundary condition.
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2014-01-01
A time-dependent extension of the first-order hyperbolic system method for advection-diffusion problems is introduced. Diffusive/viscous terms are written and discretized as a hyperbolic system, which recovers the original equation in the steady state. The resulting scheme offers advantages over traditional schemes: a dramatic simplification in the discretization, high-order accuracy in the solution gradients, and orders-of-magnitude convergence acceleration. The hyperbolic advection-diffusion system is discretized by the second-order upwind residual-distribution scheme in a unified manner, and the system of implicit residual equations is solved by Newton's method over every physical time step. Numerical results are presented for linear and nonlinear advection-diffusion problems, demonstrating solutions and gradients produced to the same order of accuracy, with rapid convergence over each physical time step, typically fewer than five Newton iterations.
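For the scalar model equation $u_t + a\,u_x = \nu\,u_{xx}$, one common form of the hyperbolic reformulation in the first-order hyperbolic system literature introduces an auxiliary gradient variable $p$ and a relaxation time $T_r$; the notation here is illustrative and may differ from the paper's:

```latex
\begin{aligned}
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x}
  &= \nu\,\frac{\partial p}{\partial x},\\[4pt]
\frac{\partial p}{\partial t}
  &= \frac{1}{T_r}\left(\frac{\partial u}{\partial x} - p\right).
\end{aligned}
```

In the relaxed limit $p \to \partial u/\partial x$ the original advection-diffusion equation is recovered, which is why the scheme delivers the gradients to the same order of accuracy as the solution.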
Factorizable Schemes for the Equations of Fluid Flow
NASA Technical Reports Server (NTRS)
Sidilkover, David
1999-01-01
We present an upwind high-resolution factorizable (UHF) discrete scheme for the compressible Euler equations that distinguishes between full-potential and advection factors at the discrete level. The scheme approximates the equations in their general conservative form and is related to the family of genuinely multidimensional upwind schemes developed previously and demonstrated to have good shock-capturing capabilities. The factorizability property facilitates the construction of optimally efficient multigrid solvers, achieved through a relaxation procedure that exploits this property.
Cryptanalysis of Chatterjee-Sarkar Hierarchical Identity-Based Encryption Scheme at PKC 06
NASA Astrophysics Data System (ADS)
Park, Jong Hwan; Lee, Dong Hoon
In 2006, Chatterjee and Sarkar proposed a hierarchical identity-based encryption (HIBE) scheme which can support an unbounded number of identity levels. This property is particularly useful in providing forward secrecy by embedding time components within hierarchical identities. In this paper we show that their scheme does not provide the claimed property. Our analysis shows that if the number of identity levels becomes larger than the value of a fixed public parameter, an unintended receiver can reconstruct a new valid ciphertext and decrypt the ciphertext using his or her own private key. The analysis is similarly applied to a multi-receiver identity-based encryption scheme presented as an application of Chatterjee and Sarkar's HIBE scheme.
NASA Astrophysics Data System (ADS)
Farrell, Patricio; Koprucki, Thomas; Fuhrmann, Jürgen
2017-10-01
We compare three thermodynamically consistent numerical fluxes known in the literature, appearing in a Voronoï finite volume discretization of the van Roosbroeck system with general charge carrier statistics. Our discussion includes an extension of the Scharfetter-Gummel scheme to non-Boltzmann (e.g. Fermi-Dirac) statistics. It is based on the analytical solution of a two-point boundary value problem obtained by projecting the continuous differential equation onto the interval between neighboring collocation points. Hence, it serves as a reference flux. The exact solution of the boundary value problem can be approximated by computationally cheaper fluxes which modify certain physical quantities. One alternative scheme averages the nonlinear diffusion (caused by the non-Boltzmann nature of the problem), another one modifies the effective density of states. To study the differences between these three schemes, we analyze the Taylor expansions, derive an error estimate, visualize the flux error and show how the schemes perform for a carefully designed p-i-n benchmark simulation. We present strong evidence that the flux discretization based on averaging the nonlinear diffusion has an edge over the scheme based on modifying the effective density of states.
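The classical Scharfetter-Gummel flux referenced above (the Boltzmann-statistics case that the paper extends) can be sketched with the Bernoulli function B(x) = x/(exp(x) - 1). Sign and normalization conventions vary in the literature; this sketch follows one common choice, with `dpsi` the potential difference between neighboring collocation points scaled by the thermal voltage.

```python
import math

# Scharfetter-Gummel two-point flux for drift-diffusion with Boltzmann
# statistics. bernoulli() is evaluated with a series near x = 0 to avoid
# the 0/0 singularity of x / (exp(x) - 1).

def bernoulli(x):
    if abs(x) < 1e-8:
        return 1.0 - 0.5 * x        # leading terms of the Taylor series
    return x / math.expm1(x)        # expm1 keeps precision for small |x|

def sg_flux(n_left, n_right, dpsi, D=1.0, h=1.0):
    """Flux between two nodes a distance h apart; dpsi = (psi_R - psi_L)/V_T."""
    return (D / h) * (bernoulli(-dpsi) * n_right - bernoulli(dpsi) * n_left)
```

For `dpsi = 0` the expression reduces to a pure central diffusion flux, which is the consistency check the generalized (Fermi-Dirac) fluxes in the paper must also satisfy.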
NASA Astrophysics Data System (ADS)
Canestrelli, Alberto; Dumbser, Michael; Siviglia, Annunziato; Toro, Eleuterio F.
2010-03-01
In this paper, we study the numerical approximation of the two-dimensional morphodynamic model governed by the shallow water equations and bed-load transport, following a coupled solution strategy. The resulting system of governing equations contains non-conservative products and is solved simultaneously within each time step. The numerical solution is obtained using a new high-order accurate centered scheme of the finite volume type on unstructured meshes, which is an extension of the one-dimensional PRICE-C scheme recently proposed in Canestrelli et al. (2009) [5]. The resulting first-order accurate centered method is then extended to high order of accuracy in space via a high-order WENO reconstruction technique, and in time via a local continuous space-time Galerkin predictor method. The scheme is applied to the shallow water equations and the well-balanced properties of the method are investigated. Finally, we apply the new scheme to different test cases with both fixed and movable bed. An attractive feature of the proposed method is that it is particularly suitable for engineering applications, since it allows practitioners to adopt the sediment transport formula that best fits the field data.
Towards a Low-Cost Remote Memory Attestation for the Smart Grid
Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing
2015-01-01
In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most existing schemes depend on the incremental response time in the attestation process, which is sensitive to data transmission delay and leads to high computation and network overhead. To address this issue, in this paper we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters under real-time network delay while achieving low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated by investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromise probability of each node, reducing the total number of attestations while keeping computation and network overhead low. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that the proposed scheme achieves better detection capacity and lower computation and network overhead than existing schemes. PMID:26307998
Privacy-Aware Image Encryption Based on Logistic Map and Data Hiding
NASA Astrophysics Data System (ADS)
Sun, Jianglin; Liao, Xiaofeng; Chen, Xin; Guo, Shangwei
The increasing need for image communication and storage has created a great necessity for securely transmitting and storing images over a network. Whereas traditional image encryption algorithms usually consider the security of the whole plain image, region of interest (ROI) encryption schemes, which are of great importance in practical applications, protect the privacy regions of plain images. Existing ROI encryption schemes usually adopt approximate techniques to detect the privacy region and measure the quality of encrypted images; however, their performance is usually inconsistent with the human visual system (HVS) and is sensitive to statistical attacks. In this paper, we propose a novel privacy-aware ROI image encryption (PRIE) scheme based on logistic mapping and data hiding. The proposed scheme utilizes salient object detection to automatically, adaptively and accurately detect the privacy region of a given plain image. After the private pixels have been encrypted using chaotic cryptography, the significant bits are embedded into the non-privacy region of the plain image using data hiding. Extensive experiments are conducted to illustrate the consistency between our automatic ROI detection and the HVS. Our experimental results also demonstrate that the proposed scheme exhibits satisfactory security performance.
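The chaotic-cryptography ingredient can be sketched with the logistic map, x -> r x (1 - x), driving a byte keystream that is XORed with the privacy-region pixels. The parameters and quantization below are illustrative toys, not the paper's construction, and this sketch carries none of the salient-object detection or data-hiding steps.

```python
# Toy logistic-map keystream cipher for a sequence of pixel bytes.
# x0 (the key) and r are illustrative; r near 4 keeps the map chaotic.

def logistic_keystream(length, x0=0.3141, r=3.99):
    stream, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)             # iterate the logistic map
        stream.append(int(x * 256) % 256) # quantize the state to one byte
    return stream

def xor_cipher(pixels, x0=0.3141):
    """XOR pixels with the keystream; applying it twice decrypts."""
    ks = logistic_keystream(len(pixels), x0)
    return bytes(p ^ k for p, k in zip(pixels, ks))
```

Because XOR is an involution, the same function with the same key both encrypts and decrypts, which makes round-trip testing straightforward.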
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as transonic small disturbance (TSD) analyses, transonic full-potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for accelerating a Newton-iteration time-marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; preliminary calculations indicate up to a 65 percent reduction in computer time requirements over the existing class of explicit and implicit time-marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
NASA Astrophysics Data System (ADS)
Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya
2017-01-01
In this paper, a new chaos-based partial image encryption scheme is proposed, based on substitution boxes (S-boxes) constructed from a chaotic system and a Linear Fractional Transform (LFT). It encrypts only the requisite parts of the sensitive information in the Lifting Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys, instead of the fixed keys used in other approaches, control the encryption process and substantially raise the cost of attacks. The new S-box was constructed by mixing a chaotic map and an LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of the S-box and chaotic systems strengthens the whole encryption performance and enlarges the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem shows high performance and great potential for cryptographic applications.
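One simple way to obtain a bijective 8-bit S-box from a chaotic trajectory is to rank the visited states. The paper combines a chaotic map with an LFT; the ranking construction below is a different, illustrative route to the same kind of object (a keyed permutation of 0..255).

```python
# Illustrative chaotic S-box: iterate the logistic map 256 times and take
# the argsort of the visited states as a permutation of 0..255. The seed
# x0 acts as the key; x0 and r are illustrative values.

def chaotic_sbox(x0=0.7, r=3.99):
    xs, x = [], x0
    for _ in range(256):
        x = r * x * (1.0 - x)
        xs.append(x)
    # Argsort of the trajectory: always a permutation, hence bijective.
    return sorted(range(256), key=lambda i: xs[i])

def invert_sbox(sbox):
    """Inverse permutation, needed for the decryption direction."""
    inv = [0] * 256
    for i, s in enumerate(sbox):
        inv[s] = i
    return inv
```

Bijectivity is what makes the substitution phase reversible; real designs additionally screen candidate S-boxes for nonlinearity and differential uniformity, which this sketch does not do.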
Non-Hermitian optics in atomic systems
NASA Astrophysics Data System (ADS)
Zhang, Zhaoyang; Ma, Danmeng; Sheng, Jiteng; Zhang, Yiqi; Zhang, Yanpeng; Xiao, Min
2018-04-01
A wide class of non-Hermitian Hamiltonians can possess entirely real eigenvalues when they have parity-time (PT) symmetric potentials. Recently, this family of non-Hermitian systems has attracted considerable attention in diverse areas of physics due to their extraordinary properties, especially in optical systems based on solid-state materials, such as coupled gain-loss waveguides and microcavities. Considering the desired refractive index can be effectively manipulated through atomic coherence, it is important to realize such non-Hermitian optical potentials and further investigate their distinct properties in atomic systems. In this paper, we review the recent theoretical and experimental progress of non-Hermitian optics with coherently prepared multi-level atomic configurations. The realizations of (anti-) PT symmetry with different schemes have extensively demonstrated the special optical properties of non-Hermitian optical systems with atomic coherence.
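The PT-symmetry condition for the optical potentials discussed above is conventionally stated on the complex refractive index: the real (index) part must be even in position and the imaginary (gain/loss) part odd,

```latex
n(x) = n^{*}(-x)
\quad\Longleftrightarrow\quad
\operatorname{Re} n(x) = \operatorname{Re} n(-x),
\qquad
\operatorname{Im} n(x) = -\operatorname{Im} n(-x).
```

In atomic realizations, both parts of $n(x)$ are tailored through atomic coherence rather than through solid-state gain and loss, which is the point of the schemes reviewed here.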
Nuclear Data Sheets for A = 70
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gürdal, G.; McCutchan, E.A.
2016-09-15
Spectroscopic data for all nuclei with mass number A = 70 have been evaluated, and the corresponding level schemes from radioactive decay and reaction studies are presented. Since the previous evaluation, the half-life of {sup 70}Mn has been measured and excited states in {sup 70}Fe observed for the first time. Excited states in {sup 70}Ni have been more extensively studied, while Coulomb excitation and collinear laser spectroscopy measurements in {sup 70}Cu have allowed firm Jπ assignments. Despite new measurements, some discrepancies remain in the half-lives of low-lying states in {sup 70}Zn. New measurements have extended the knowledge of high-spin band structures in {sup 70}Ge and {sup 70}As. This evaluation supersedes the prior A = 70 evaluation of 2004Tu09.
Land cover classification of VHR airborne images for citrus grove identification
NASA Astrophysics Data System (ADS)
Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.
Managing land resources using remote sensing techniques is becoming a common practice. However, data analysis procedures should satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be extensively used. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the first citrus fruit producer in Europe and the fourth in the world. In particular, citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (campaign 2006-2007). The citrus GIS inventory, created in 2001, needs to be regularly updated in order to monitor changes quickly enough, and allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update, whose processing scheme is summarized as follows. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a high enough level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
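The agreement-based update rule — accept the automatic label only when the classifier ensemble agrees strongly enough, otherwise defer to a photo-interpreter — can be sketched as below. The `min_agree` threshold is an illustrative assumption; the paper's confidence measure is based on agreement among its specific trained classifiers.

```python
from collections import Counter

# Per-parcel decision: majority label if agreement is high enough,
# otherwise flag the parcel for expert photo-interpretation.

def decide(parcel_votes, min_agree=3):
    """parcel_votes: one predicted label per classifier for this parcel."""
    label, count = Counter(parcel_votes).most_common(1)[0]
    return label if count >= min_agree else "MANUAL_REVIEW"
```

Routing only low-agreement parcels to experts is what lets the pipeline update ~85% of parcels automatically while keeping accuracy above the level policy makers demand.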
Mediator- and co-catalyst-free direct Z-scheme composites of Bi2WO6-Cu3P for solar-water splitting.
Rauf, Ali; Ma, Ming; Kim, Sungsoon; Sher Shah, Md Selim Arif; Chung, Chan-Hwa; Park, Jong Hyeok; Yoo, Pil J
2018-02-08
Exploring new single active photocatalysts for solar water splitting is highly desirable to expedite current research on solar-to-chemical energy conversion. In particular, Z-scheme-based composites (ZBCs) have attracted extensive attention due to their unique charge transfer pathway, broader redox range, and stronger redox power compared with conventional heterostructures. In the present report, we have for the first time explored Cu 3 P as a new single photocatalyst for solar water splitting applications. Moreover, a novel ZBC system composed of Bi 2 WO 6 -Cu 3 P was designed using a simple ball-milling complexation method. The synthesized materials were examined through various microscopic, spectroscopic, and surface-area characterization methods, which confirmed the successful hybridization between Bi 2 WO 6 and Cu 3 P and the formation of a ZBC system with an ideal position of energy levels for solar water splitting. Notably, the Bi 2 WO 6 -Cu 3 P ZBC is a mediator- and co-catalyst-free photocatalyst system. The improved photocatalytic efficiency obtained with this system compared with mediator- and co-catalyst-assisted ZBC systems establishes the critical importance of interfacial solid-solid contact and a well-balanced position of energy levels for solar water splitting. The promising solar water splitting under optimum composition conditions highlights the relationship between effective charge separation and composition.
A metadata schema for data objects in clinical research.
Canham, Steve; Ohmann, Christian
2016-11-24
A large number of stakeholders have accepted the need for greater transparency in clinical research and, in the context of various initiatives and systems, have developed a diverse and expanding number of repositories for storing the data and documents created by clinical studies (collectively known as data objects). To make the best use of such resources, we assert that it is also necessary for stakeholders to agree and deploy a simple, consistent metadata scheme. The relevant data objects and their likely storage are described, and the requirements for metadata to support data sharing in clinical research are identified. Issues concerning persistent identifiers, for both studies and data objects, are explored. A scheme is proposed that is based on the DataCite standard, with extensions to cover the needs of clinical researchers, specifically to provide (a) study identification data, including links to clinical trial registries; (b) data object characteristics and identifiers; and (c) data covering location, ownership and access to the data object. The components of the metadata scheme are described. The metadata schema is proposed as a natural extension of a widely agreed standard to fill a gap not tackled by other standards related to clinical research (e.g., Clinical Data Interchange Standards Consortium, Biomedical Research Integrated Domain Group). The proposal could be integrated with, but is not dependent on, other moves to better structure data in clinical research.
An Extension of the Time-Spectral Method to Overset Solvers
NASA Technical Reports Server (NTRS)
Leffell, Joshua Isaac; Murman, Scott M.; Pulliam, Thomas
2013-01-01
Relative motion in the Cartesian or overset framework causes certain spatial nodes to move in and out of the physical domain as they are dynamically blanked by moving solid bodies. This poses a problem for the conventional Time-Spectral approach, which expands the solution at every spatial node into a Fourier series spanning the period of motion. The proposed extension to the Time-Spectral method treats unblanked nodes in the conventional manner but expands the solution at dynamically blanked nodes in a basis of barycentric rational polynomials spanning partitions of contiguously defined temporal intervals. Rational polynomials avoid Runge's phenomenon on the equidistant time samples of these sub-periodic intervals. Fourier- and rational polynomial-based differentiation operators are used in tandem to provide a consistent hybrid Time-Spectral overset scheme capable of handling relative motion. The hybrid scheme is tested with a linear model problem and implemented within NASA's OVERFLOW Reynolds-averaged Navier-Stokes (RANS) solver. The hybrid Time-Spectral solver is then applied to inviscid and turbulent RANS cases of plunging and pitching airfoils and compared to time-accurate and experimental data. A limiter was applied in the turbulent case to avoid undershoots in the undamped turbulent eddy viscosity while maintaining accuracy. The hybrid scheme matches the performance of the conventional Time-Spectral method and converges to the time-accurate results with increased temporal resolution.
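The conventional Time-Spectral building block can be illustrated with the standard Fourier spectral differentiation matrix on an odd number of equispaced samples per period (a generic numerical sketch with our own variable names; it omits the paper's barycentric rational extension for blanked nodes):

```python
import numpy as np

def fourier_diff_matrix(N, T=2 * np.pi):
    """Spectral time-derivative operator for N (odd) equispaced samples over period T."""
    D = np.zeros((N, N))
    for j in range(N):
        for k in range(N):
            if j != k:
                # classical cotangent-free form for odd N on a periodic grid
                D[j, k] = 0.5 * (-1) ** (j - k) / np.sin(np.pi * (j - k) / N)
    return D * (2 * np.pi / T)

N, T = 9, 2 * np.pi
t = np.arange(N) * T / N
D = fourier_diff_matrix(N, T)
# For a band-limited periodic signal the derivative is exact to machine precision:
err = np.max(np.abs(D @ np.sin(t) - np.cos(t)))
print(err < 1e-12)  # True
```

Applying D at every unblanked spatial node couples all time instances of the periodic solution, which is what makes the method spectrally accurate for time-periodic flows.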
Fadlallah, Racha; El-Jardali, Fadi; Hemadi, Nour; Morsi, Rami Z; Abou Samra, Clara Abou; Ahmad, Ali; Arif, Khurram; Hishi, Lama; Honein-AbouHaidar, Gladys; Akl, Elie A
2018-01-29
Community-based health insurance (CBHI) has evolved as an alternative health financing mechanism to out-of-pocket payments in low- and middle-income countries (LMICs), particularly in areas where government or employer-based health insurance is minimal. This systematic review aimed to assess the barriers and facilitators to implementation, uptake and sustainability of CBHI schemes in LMICs. We searched six electronic databases and grey literature. We included both quantitative and qualitative studies written in English and published after 1992. Two reviewers worked in duplicate and independently to complete study selection, data abstraction, and assessment of methodological features. We synthesized the findings based on thematic analysis and categorized them according to the ecological model into individual, interpersonal, community and systems levels. Of 15,510 citations, 51 met the eligibility criteria. Individual factors included awareness and understanding of the concept of CBHI, trust in the scheme and scheme managers, perceived service quality, and demographic characteristics, which influenced enrollment and sustainability. Interpersonal factors such as household dynamics, other family members enrolled in the scheme, and social solidarity influenced enrollment and renewal of membership. Community-level factors such as culture and community involvement in scheme development influenced enrollment and sustainability of the scheme. Systems-level factors encompassed governance, financial and delivery arrangements. Government involvement, accountability of scheme management, and strong policymaker-implementer relations facilitated implementation and sustainability of the scheme. Packages that covered outpatient and inpatient care and those tailored to community needs contributed to increased enrollment.
The amount and timing of premium collection were reported to negatively influence enrollment, while factors reported as threats to sustainability included facility bankruptcy, operating on small budgets, rising healthcare costs, small risk pools, irregular contributions, and overutilization of services. At the delivery level, accessibility of facilities, the facility environment, and health personnel influenced enrollment, service utilization and dropout rates. There are a multitude of interrelated factors at the individual, interpersonal, community and systems levels that drive the implementation, uptake and sustainability of CBHI schemes. We discuss the implications of the findings at the policy and research level. The review protocol is registered in the PROSPERO International prospective register of systematic reviews (ID = CRD42015019812).
DAnTE: a statistical tool for quantitative analysis of –omics data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polpitiya, Ashoka D.; Qian, Weijun; Jaitly, Navdeep
2008-05-03
DAnTE (Data Analysis Tool Extension) is a statistical tool designed to address challenges unique to quantitative bottom-up, shotgun proteomics data. This tool has also been demonstrated for microarray data and can easily be extended to other high-throughput data types. DAnTE features selected normalization methods, missing value imputation algorithms, peptide to protein rollup methods, an extensive array of plotting functions, and a comprehensive ANOVA scheme that can handle unbalanced data and random effects. The Graphical User Interface (GUI) is designed to be very intuitive and user friendly.
Identification of Organic Colorants in Art Objects by Solution Spectrophotometry: Pigments.
ERIC Educational Resources Information Center
Billmeyer, Fred W., Jr.; And Others
1981-01-01
Describes solution spectrophotometry as a simple, rapid identification technique for organic paint pigments. Reports research which includes analytical schemes for the extraction and separation of organic pigments based on their solubilities, and the preparation of an extensive reference collection of spectral curves allowing their identification.…
Cluster Housing for Adults with Intellectual Disabilities
ERIC Educational Resources Information Center
Emerson, Eric
2004-01-01
While there is extensive evidence on the overall benefits of deinstitutionalisation, the move from institutional care to providing accommodation and support in small to medium sized dispersed housing schemes has not gone uncontested. Recently, a number of commentators have argued for the development of cluster housing on the basis that it may…
ERIC Educational Resources Information Center
Ingham, Donald
1995-01-01
Describes a long-term scheme to develop a pond, nature trail, and tree-planting project (in Cornwall, England). The project was designed by teams of students. Plans included a large pond, meadow area, sequential cuttings of school fields to encourage insects, butterfly garden, extensive tree plantings (including a dwindling native species), and a…
DOT National Transportation Integrated Search
2016-08-01
A steel girder twin bridge structure located near Park City, Kansas, has experienced extensive distortion-induced fatigue cracking in its web-gap regions. Due to the bridge's skewed, staggered configuration, the majority of these cracks have ...
ERIC Educational Resources Information Center
Halstead, D. Kent
This study presents a scheme for yearly, comparative, computation of state and local government tax capacity and effort. Figures for all states for fiscal year 1975 are presented in extensive tables. The system used is a simplified version of the Representative Tax System, which identifies tax bases, determines national average tax rates for those…
77 FR 29755 - Additional Designations, Foreign Narcotics Kingpin Designation Act
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-18
..., Fisherman Colony Mahim, Mumbai, India; House No. C-201, Extension-A, Karachi Development Scheme, Karachi, Pakistan; DOB 24 Nov 1960; POB Mumbai (Bombay), India; nationality India; Passport AA762402 (Pakistan); alt... Manzil, 78 Temkar Street, Nagpada, Mumbai, India; DOB 31 Dec 1955; alt. DOB 1960; POB Mumbai (Bombay...
PBF Cubicle 13. Shield wall details illustrate shielding technique of ...
PBF Cubicle 13. Shield wall details illustrate shielding technique of stepped penetrations and brick layout scheme for valve stem extension sleeve. Aerojet Nuclear Company. Date: May 1976. INEEL index no. 761-0620-00-400-195280 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
Experiments in encoding multilevel images as quadtrees
NASA Technical Reports Server (NTRS)
Lansing, Donald L.
1987-01-01
Image storage requirements for several encoding methods are investigated and the use of quadtrees with multiple-gray-level or multicolor images is explored. The results of encoding a variety of images having up to 256 gray levels using three schemes (full raster, run-length and quadtree) are presented. Although there is considerable literature on the use of quadtrees to store and manipulate binary images, their application to multilevel images is relatively undeveloped. The potential advantage of quadtree encoding is that an entire area with a uniform gray level may be encoded as a unit. A pointerless quadtree encoding scheme is described. Data are presented on the size of the quadtree required to encode selected images and on the relative storage requirements of the three encoding schemes. A segmentation scheme based on the statistical variation of gray levels within a quadtree quadrant is described. This parametric scheme may be used to control the storage required by an encoded image and to preprocess a scene for feature identification. Several sets of black-and-white and pseudocolor images obtained by varying the segmentation parameter are shown.
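A minimal sketch of the uniform-quadrant idea behind quadtree encoding (our own toy construction, not the paper's pointerless scheme): a 2^n x 2^n image is split recursively into quadrants, and any quadrant holding a single gray level is stored as one leaf identified by its quadrant path, which is where the storage savings come from.

```python
def quadtree_encode(img, r0=0, c0=0, size=None, path=""):
    """Encode a square 2^n x 2^n image as (path, value) leaves.

    `path` is a string of quadrant digits: 0=NW, 1=NE, 2=SW, 3=SE.
    """
    if size is None:
        size = len(img)
    block = [img[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)]
    if min(block) == max(block):               # uniform quadrant -> single leaf
        return [(path, block[0])]
    h = size // 2
    leaves = []
    for d, (dr, dc) in enumerate([(0, 0), (0, h), (h, 0), (h, h)]):
        leaves += quadtree_encode(img, r0 + dr, c0 + dc, h, path + str(d))
    return leaves

img = [[7, 7, 0, 1],
       [7, 7, 2, 3],
       [5, 5, 5, 5],
       [5, 5, 5, 5]]
print(quadtree_encode(img))
# [('0', 7), ('10', 0), ('11', 1), ('12', 2), ('13', 3), ('2', 5), ('3', 5)]
```

The three uniform quadrants collapse to one leaf each, while the busy NE quadrant splits down to single pixels; a segmentation threshold on gray-level variation (as in the paper) would merge near-uniform quadrants as well, trading fidelity for storage.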
Quantum iSWAP gate in optical cavities with a cyclic three-level system
NASA Astrophysics Data System (ADS)
Yan, Guo-an; Qiao, Hao-xue; Lu, Hua
2018-04-01
In this paper we present a scheme to directly implement the iSWAP gate by passing a cyclic three-level system across a two-mode cavity quantum electrodynamics (QED) setup. In the scheme, a three-level Δ-type atom ensemble prepared in its ground state mediates the interaction between the two cavity modes. For this theoretical model, we also analyze its performance under practical noise, including spontaneous emission and the decay of the cavity modes. It is shown that our scheme may have a high fidelity under practical noise.
NASA Astrophysics Data System (ADS)
Firestone, R. B.; Gilat, J.; Nitschke, J. M.; Wilmarth, P. A.; Vierinen, K. S.
1991-03-01
The electron-capture and β+-decay branchings (EC/β+) and delayed proton decays of A=142 isotopes with 61 ≤ Z ≤ 66 and A=140 isotopes with 63 ≤ Z ≤ 65 were investigated with the OASIS facility on-line at the Lawrence Berkeley Laboratory SuperHILAC. Electron capture and positron-decay emission probabilities have been determined for 142Pm and 142Sm decays, and extensive decay schemes have been constructed for 142gEu (2.34 ± 0.12 s), 142Gd (70.2 ± 0.6 s), 140Eu (1.51 ± 0.02 s), and 140Gd (15.8 ± 0.4 s). Decay schemes for the new isotopes 142gTb (597 ± 17 ms), 142mTb (303 ± 17 ms), 142Dy (2.3 ± 0.3 s), 140mEu (125 ± 2 ms), and 140Tb (2.4 ± 0.2 s) are also presented. We have assigned γ rays to these isotopes on the basis of γγ and xγ coincidences, and from half-life determinations. Electron-capture and β+-decay branchings were measured for each decay, and β-delayed proton branchings were determined for 142Dy, 142Tb, and 140Tb decays. QEC values derived from the measured EC/β+ branchings and the level schemes are compared with those from the Wapstra and Audi mass evaluation and the Liran and Zeldes mass calculation. The systematics of the N=77 isomer decays are discussed, and the intense 0+ → 1+ and 1+ → 0+ ground-state beta decays are compared with shell-model predictions for simple spin-flip transitions.
NASA Astrophysics Data System (ADS)
Borge, Rafael; Alexandrov, Vassil; José del Vas, Juan; Lumbreras, Julio; Rodríguez, Encarnacion
Meteorological inputs play a vital role in regional air quality modelling. An extensive sensitivity analysis of the Weather Research and Forecasting (WRF) model was performed in the framework of the Integrated Assessment Modelling System for the Iberian Peninsula (SIMCA) project. Up to 23 alternative model configurations, including Planetary Boundary Layer schemes, microphysics, land-surface models, radiation schemes, sea surface temperature and four-dimensional data assimilation, were tested in a 3 km spatial resolution domain. Model results for the most significant meteorological variables were assessed through a series of common statistics. The physics options identified as producing better results (Yonsei University Planetary Boundary Layer, WRF Single-Moment 6-class microphysics, Noah land-surface model, Eta Geophysical Fluid Dynamics Laboratory longwave radiation and MM5 shortwave radiation schemes) along with other relevant user settings (time-varying sea surface temperature and combined grid-observational nudging) were included in a "best case" configuration. This setup was tested and found to produce more accurate estimation of temperature, wind and humidity fields at surface level than any other configuration for the two episodes simulated. Planetary Boundary Layer height predictions showed a reasonable agreement with estimations derived from routine atmospheric soundings. Although some seasonal and geographical differences were observed, the model showed an acceptable behaviour overall. Besides being useful to define the most appropriate setup of the WRF model for air quality modelling over the Iberian Peninsula, this study provides a general overview of WRF sensitivity and can constitute a reference for future mesoscale meteorological modelling exercises.
Maternal healthcare financing: Gujarat's Chiranjeevi Scheme and its beneficiaries.
Bhat, Ramesh; Mavalankar, Dileep V; Singh, Prabal V; Singh, Neelu
2009-04-01
Maternal mortality is an important public-health issue in India, specifically in Gujarat. Contributing factors are the Government's inability to operationalize the First Referral Units and to provide an adequate level of skilled birth attendants, especially to the poor. In response, the Gujarat state has developed a unique public-private partnership called the Chiranjeevi Scheme. This scheme focuses on institutional delivery, specifically emergency obstetric care for the poor. The objective of the study was to explore the targeting of the scheme, its coverage, and socioeconomic profile of the beneficiaries and to assess financial protection offered by the scheme, if any, in Dahod, one of the initial pilot districts of Gujarat. A household-level survey of beneficiaries (n=262) and non-users (n=394) indicated that the scheme is well-targeted to the poor but many poor people do not use the services. The beneficiaries saved more than Rs 3000 (US$ 75) in delivery-related expenses and were generally satisfied with the scheme. The study provided insights on how to improve the scheme further. Such a financing scheme could be replicated in other states and countries to address the cost barrier, especially in areas where high numbers of private specialists are available.
Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder
2017-09-04
Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images; incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in the identification of species include processing specimen images, extracting identifying features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared the different methods used at each step of automated identification and classification systems for species images. The selection of methods is influenced by many variables such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques used in building such systems for biodiversity studies.
Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng
2017-04-01
Accurate classification of the different anatomical structures of teeth from medical images provides crucial information for stress analysis in dentistry. Usually, the anatomical structures of teeth are manually labeled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method, improved by fully utilizing 3-dimensional (3D) information, and to classify the tooth by employing unsupervised learning (the k-means++ method). In order to evaluate the proposed method, experiments were conducted on extensive datasets of mandibular molars. The experimental results show that our method can achieve higher accuracy and robustness compared to three other clustering methods.
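The k-means++ seeding used in the unsupervised classification step can be sketched generically (the standard algorithm, not the authors' implementation; 1-D points are used for brevity): each new center is drawn with probability proportional to the squared distance from the nearest center already chosen.

```python
import random

def kmeans_pp_init(points, k, rng):
    """k-means++ seeding: D^2-weighted sampling of k initial centers."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r, acc = rng.uniform(0, sum(d2)), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:                # sample proportionally to d2
                centers.append(p)
                break
        else:                           # degenerate fallback (r at the far edge)
            centers.append(points[-1])
    return centers

# two tight, well-separated groups: the seeds almost surely land one per group
points = [0.0, 0.1, 0.2, 1000.0, 1000.1, 1000.2]
centers = sorted(kmeans_pp_init(points, 2, random.Random(0)))
print(centers)
```

The D^2 weighting is what makes the subsequent Lloyd iterations robust to poor random starts, which matters when clustering tooth voxels with very unbalanced anatomical classes.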
A Regev-Type Fully Homomorphic Encryption Scheme Using Modulus Switching
Chen, Zhigang; Wang, Jian; Song, Xinxia
2014-01-01
A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using the modulus switching technique to design and implement an FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge this step has drawn very little attention in the existing FHE literature. The contributions of this paper are twofold. On one hand, we propose a function of the lower bound of the dimension value in the switching technique depending on the LWE-specific security levels. On the other hand, as a case study, we modify the Brakerski FHE scheme (in Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our result shows that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level. PMID:25093212
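The modulus switching idea can be shown on a toy LWE ciphertext (our illustration with invented toy parameters, not the paper's recommended ones): both moduli are odd, the message bit lives in the least significant bit, and every ciphertext entry is scaled by p/q and then nudged to preserve its parity, which keeps the decrypted bit unchanged while shrinking the noise along with the modulus.

```python
import random

def cmod(x, m):                      # centered residue in (-m/2, m/2]
    r = x % m
    return r - m if r > m // 2 else r

def decrypt(c, s, m):
    a, b = c[:-1], c[-1]
    return cmod(b - sum(ai * si for ai, si in zip(a, s)), m) % 2

def mod_switch(c, q, p):
    out = []
    for x in c:
        y = (2 * p * x + q) // (2 * q)   # round(p*x/q) for x >= 0
        if (y - x) % 2:                  # keep y congruent to x mod 2
            y += 1
        out.append(y)
    return out

q, p, n = 1048575, 4095, 4               # both moduli odd, q >> p
rng = random.Random(1)
for _ in range(20):
    s = [rng.randint(0, 1) for _ in range(n)]
    m = rng.randint(0, 1)
    a = [rng.randrange(q) for _ in range(n)]
    e = rng.randint(-10, 10)
    b = (sum(ai * si for ai, si in zip(a, s)) + 2 * e + m) % q
    c = a + [b]
    assert decrypt(c, s, q) == m
    assert decrypt(mod_switch(c, q, p), s, p) == m
print("modulus switching preserves the plaintext bit")
```

The correctness argument needs the noise plus the rounding error (bounded by the l1-norm of the secret) to stay below p/2, which is exactly the kind of constraint that drives the concrete parameter choices studied in the paper.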
77 FR 63355 - Proposed Revision to Emergency Action Level Development Guidance Document
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-16
... action level (EAL) scheme. The NRC is publishing this proposed revision to inform the public and solicit... EAL scheme using site-specific information. Dated at Rockville, Maryland, this 10th day of October...
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
Rugged Metropolis sampling with simultaneous updating of two dynamical variables
NASA Astrophysics Data System (ADS)
Berg, Bernd A.; Zhou, Huan-Xiang
2005-07-01
The rugged Metropolis (RM) algorithm is a biased updating scheme which aims at directly hitting the most likely configurations in a rugged free-energy landscape. Details of the one-variable (RM1) implementation of this algorithm are presented. This is followed by an extension to simultaneous updating of two dynamical variables (RM2). In a test with the brain peptide Met-Enkephalin in vacuum, RM2 improves conventional Metropolis simulations by a factor of about 4. Correlations between three or more dihedral angles appear to prevent larger improvements at low temperatures. We also investigate a multi-hit Metropolis scheme, which spends more CPU time on variables with large autocorrelation times.
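A plain Metropolis scheme with simultaneous two-variable updates can be sketched as follows (a generic toy in the spirit of RM2, not the peptide simulation; the target here is a 2D standard Gaussian with "energy" E(x, y) = (x^2 + y^2)/2):

```python
import math, random

def metropolis2(n_steps, step=1.0, seed=2):
    """Metropolis sampling with both dynamical variables proposed jointly."""
    rng = random.Random(seed)
    x = y = 0.0
    samples = []
    for _ in range(n_steps):
        xp = x + rng.uniform(-step, step)   # simultaneous two-variable move
        yp = y + rng.uniform(-step, step)
        dE = 0.5 * (xp * xp + yp * yp - x * x - y * y)
        if dE <= 0 or rng.random() < math.exp(-dE):
            x, y = xp, yp
        samples.append((x, y))
    return samples

s = metropolis2(200000)
mx = sum(p[0] for p in s) / len(s)
vx = sum(p[0] ** 2 for p in s) / len(s)
print(abs(mx) < 0.1, abs(vx - 1.0) < 0.1)
```

The RM bias replaces the symmetric uniform proposal above with one informed by estimates of the one- and two-variable marginal distributions, which is what lets it hit likely configurations directly in a rugged landscape.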
Optimized multilayered wideband absorbers with graded fractal FSS
NASA Astrophysics Data System (ADS)
Vinoy, K. J.; Jose, K. A.; Varadan, Vijay K.; Varadan, Vasundara V.
2001-08-01
Various approaches have been followed for the reduction of radar cross section (RCS), especially of aircraft and missiles. In this paper we present the use of multiple layers of FSS-like fractal geometries printed on dielectric substrates for the same goal. The experimental results shown here indicate 15 dB reduction in the reflection of a flat surface, by the use of this configuration with low loss dielectrics. An extensive optimization scheme is required for extending the angle coverage as well as the bandwidth of the absorber. A brief investigation of such a scheme involving genetic algorithm for this purpose is also presented here.
The Impact of Microphysics on Intensity and Structure of Hurricanes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Shi, Jainn; Lang, Steve; Peters-Lidard, Christa
2006-01-01
During the past decade, both research and operational numerical weather prediction models, e.g. the Weather Research and Forecast (WRF) model, have started using more complex microphysical schemes originally developed for high-resolution cloud resolving models (CRMs) with horizontal resolutions of 1-2 km or less. WRF is a next-generation mesoscale forecast model and assimilation system that has incorporated a modern software framework, advanced dynamics, numerics and data assimilation techniques, a multiple moveable nesting capability, and improved physics packages. The WRF model can be used for a wide range of applications, from idealized research to operational forecasting, with an emphasis on horizontal grid sizes in the range of 1-10 km. The current WRF includes several different microphysics options such as the Lin et al. (1983), WSM 6-class and Thompson microphysics schemes. We have recently implemented three sophisticated cloud microphysics schemes into WRF. The cloud microphysics schemes have been extensively tested and applied to different mesoscale systems in different geographical locations. The performances of these schemes have been compared to those from other WRF microphysics options. We are performing sensitivity tests using WRF to examine the impact of six different cloud microphysical schemes on hurricane track, intensity and rainfall forecasts. We are also performing inline tracer calculations to understand the physical processes (i.e., the boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes.
NASA Astrophysics Data System (ADS)
Desiraju, Naveen Kumar; Doclo, Simon; Wolff, Tobias
2017-12-01
Acoustic echo cancellation (AEC) is a key speech enhancement technology in speech communication and voice-enabled devices. AEC systems employ adaptive filters to estimate the acoustic echo paths between the loudspeakers and the microphone(s). In applications involving surround sound, the computational complexity of an AEC system may become demanding due to the multiple loudspeaker channels and the necessity of using long filters in reverberant environments. In order to reduce the computational complexity, the approach of partially updating the AEC filters is considered in this paper. In particular, we investigate tap selection schemes which exploit the sparsity present in the loudspeaker channels for partially updating subband AEC filters. The potential for exploiting signal sparsity across three dimensions, namely time, frequency, and channels, is analyzed. A thorough analysis of different state-of-the-art tap selection schemes is performed and insights about their limitations are gained. A novel tap selection scheme is proposed which overcomes these limitations by exploiting signal sparsity while not ignoring any filters for update in the different subbands and channels. Extensive simulation results using both artificial as well as real-world multichannel signals show that the proposed tap selection scheme outperforms state-of-the-art tap selection schemes in terms of echo cancellation performance. In addition, it yields almost identical echo cancellation performance as compared to updating all filter taps at a significantly reduced computational cost.
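The basic tap selection idea can be sketched with an M-max partial-update NLMS filter (a standard scheme from this literature, simplified to a single fullband channel rather than the paper's subband setting): per iteration only the M taps aligned with the largest-magnitude input samples are updated, reducing computation at a small cost in convergence speed.

```python
import numpy as np

def mmax_nlms(x, d, n_taps, M, mu=0.5, eps=1e-8):
    """Identify an FIR echo path, updating only the M 'loudest' taps per step."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # regressor, most recent sample first
        e = d[n] - w @ u
        sel = np.argsort(np.abs(u))[-M:]        # M-max tap selection
        w[sel] += mu * e * u[sel] / (eps + u @ u)
        # unselected taps keep their previous values
    return w

rng = np.random.default_rng(3)
w_true = np.array([0.5, -0.3, 0.2, 0.1, -0.4])  # invented toy echo path
x = rng.standard_normal(20000)
d = np.convolve(x, w_true)[:len(x)]             # noise-free microphone signal
w = mmax_nlms(x, d, n_taps=5, M=3)
print(np.max(np.abs(w - w_true)) < 1e-3)
```

The proposed scheme generalizes this selection across the time, frequency, and channel dimensions while guaranteeing that no subband or channel filter is starved of updates.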
Centrifuge: rapid and sensitive classification of metagenomic sequences
Song, Li; Breitwieser, Florian P.
2016-01-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
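The BWT/FM-index machinery Centrifuge builds on can be illustrated with a textbook construction (a hedged generic sketch, far simpler than Centrifuge's compressed, space-optimized index): backward search counts pattern occurrences using only the BWT of the text.

```python
from bisect import bisect_left

def bwt(text):
    """Burrows-Wheeler transform via sorted rotations of text + sentinel."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def fm_count(bwt_str, pattern):
    """Count pattern occurrences with FM-index backward search."""
    sorted_bwt = sorted(bwt_str)
    # C[ch] = number of characters in the text strictly smaller than ch
    C = {ch: bisect_left(sorted_bwt, ch) for ch in set(bwt_str)}
    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):
        if ch not in C:
            return 0
        lo = C[ch] + bwt_str[:lo].count(ch)   # Occ(ch, lo)
        hi = C[ch] + bwt_str[:hi].count(ch)   # Occ(ch, hi)
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("banana")
print(b)                      # annb$aa
print(fm_count(b, "ana"))     # 2
print(fm_count(b, "nan"))     # 1
```

A production index replaces the quadratic rotation sort with suffix-array construction and the linear `count` scans with sampled occurrence tables, which is how Centrifuge keeps thousands of genomes in a few gigabytes.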
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
A depictive neural model for the representation of motion verbs.
Rao, Sunil; Aleksander, Igor
2011-11-01
In this paper, we present a depictive neural model for the representation of motion verb semantics in neural models of visual awareness. The problem of modelling motion verb representation is shown to be one of function application, mapping a set of given input variables defining the moving object and the path of motion to a defined output outcome in the motion recognition context. The particular function-applicative implementation and consequent recognition model design presented are seen as arising from a noun-adjective recognition model enabling the recognition of colour adjectives as applied to a set of shapes representing objects to be recognised. The presence of such a function application scheme and a separately implemented position identification and path labelling scheme are accordingly shown to be the primitives required to enable the design and construction of a composite depictive motion verb recognition scheme. Extensions to the presented design to enable the representation of transitive verbs are also discussed.
An upstream burst-mode equalization scheme for 40 Gb/s TWDM PON based on optimized SOA cascade
NASA Astrophysics Data System (ADS)
Sun, Xiao; Chang, Qingjiang; Gao, Zhensen; Ye, Chenhui; Xiao, Simiao; Huang, Xiaoan; Hu, Xiaofeng; Zhang, Kaibin
2016-02-01
We present a novel upstream burst-mode equalization scheme based on an optimized SOA cascade for 40 Gb/s TWDM PON. The power equalizer is placed at the OLT and consists of two SOAs, two circulators, an optical NOT gate, and a variable optical attenuator. The first SOA operates in the linear region and acts as a pre-amplifier to let the second SOA operate in the saturation region. The upstream burst signals are equalized through the second SOA via nonlinear amplification. From theoretical analysis, this scheme gives sufficient dynamic range suppression up to 16.7 dB without any dynamic control or signal degradation. In addition, a total power budget extension of 9.3 dB for loud packets and 26 dB for soft packets has been achieved to allow longer transmission distance and increased splitting ratio.
Filter Bank Multicarrier (FBMC) for long-reach intensity modulated optical access networks
NASA Astrophysics Data System (ADS)
Saljoghei, Arsalan; Gutiérrez, Fernando A.; Perry, Philip; Barry, Liam P.
2017-04-01
Filter Bank Multi Carrier (FBMC) is a modulation scheme which has recently attracted significant interest in both wireless and optical communications. The interest in optical communications arises from FBMC's capability to operate without a Cyclic Prefix (CP) and its high resilience to synchronisation errors. However, the operation of FBMC in optical access networks has not been extensively studied in either the downstream or the upstream direction. In this work we experimentally investigate the operation of FBMC in intensity modulated Passive Optical Networks (PONs) employing direct detection in conjunction with both direct and external modulation schemes. The data rates and propagation lengths employed here vary from 8.4 to 14.8 Gb/s and from 0 to 75 km, respectively. The results suggest that by using FBMC it is possible to accomplish CP-less transmission over up to 75 km of SSMF in passive links using cost-effective intensity modulation and detection schemes.
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
A RONI Based Visible Watermarking Approach for Medical Image Authentication.
Thanki, Rohit; Borra, Surekha; Dwivedi, Vedvyas; Borisagar, Komal
2017-08-09
Nowadays medical image files are often exchanged between hospitals for use in telemedicine and diagnosis. Visible watermarking is extensively used for Intellectual Property identification of such medical images, but it leads to serious issues if proper regions for watermark insertion are not identified. In this paper, Region of Non-Interest (RONI) based visible watermarking for medical image authentication is proposed. In this technique, the RONI of the cover medical image is first identified using a Human Visual System (HVS) model. A watermark logo is then visibly inserted into the RONI of the cover medical image to obtain the watermarked medical image. Finally, the watermarked medical image is compared with the original medical image to measure the imperceptibility and authenticity of the proposed scheme. The experimental results showed that the proposed scheme reduces computational complexity and improves the PSNR compared to many existing schemes.
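As a toy illustration of the visible-insertion step described above, the sketch below alpha-blends a logo into a fixed region of a grayscale image. The function name, blending factor, and hard-coded region coordinates are illustrative assumptions; the HVS-driven RONI selection itself is not reproduced here.

```python
def embed_visible_watermark(image, logo, top, left, alpha=0.35):
    """Alpha-blend a logo into a chosen region of a grayscale image.
    In the RONI approach the region would come from an HVS-based
    non-interest map; here the coordinates are supplied directly."""
    out = [row[:] for row in image]          # keep the cover image intact
    for i, logo_row in enumerate(logo):
        for j, logo_val in enumerate(logo_row):
            blended = (1 - alpha) * image[top + i][left + j] + alpha * logo_val
            out[top + i][left + j] = round(blended)
    return out

img = [[100] * 6 for _ in range(6)]          # flat gray cover image
logo = [[255, 255], [255, 255]]              # tiny white "logo"
wm = embed_visible_watermark(img, logo, top=4, left=4)
print(wm[4][4], wm[0][0])                    # watermarked pixel vs. untouched pixel
```

Pixels outside the chosen region are untouched, which is exactly why inserting into a region of non-interest preserves diagnostic content.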
NASA Astrophysics Data System (ADS)
Schoenberg Ferrier, Brad; Tao, Wei-Kuo; Simpson, Joanne
1995-04-01
Part I of this study described a detailed four-class bulk ice scheme (4ICE) developed to simulate the hydrometeor profiles of convective and stratiform precipitation associated with mesoscale convective systems. In Part II, the 4ICE scheme is incorporated into the Goddard Cumulus Ensemble (GCE) model and applied without any `tuning' to two squall lines occurring in widely different environments, namely, one over the tropical ocean in the Global Atmospheric Research Program's (GARP) Atlantic Tropical Experiment (GATE) and the other over a midlatitude continent in the Cooperative Huntsville Meteorological Experiment (COHMEX). Comparisons were made both with earlier three-class ice formulations and with observations. In both cases, the 4ICE scheme interacted with the dynamics so as to resemble the observations much more closely than did the model runs with either of the three-class ice parameterizations. The following features were well simulated in the COHMEX case: a lack of stratiform rain at the surface ahead of the storm, reflectivity maxima near 60 dBZ in the vicinity of the melting level, and intense radar echoes up to near the tropopause. These features were in strong contrast with the GATE simulation, which showed extensive trailing stratiform precipitation containing a horizontally oriented radar bright band. Peak reflectivities were below the melting level, rarely exceeding 50 dBZ, with a steady decrease in reflectivity with height above. With the other bulk formulations, the large stratiform rain areas were not reproduced in the GATE conditions. The microphysical structure of the model clouds in both environments was more realistic than that of earlier modeling efforts. Number concentrations of ice of O(100 L⁻¹) occurred above 6 km in the GATE model clouds as a result of ice enhancement and rime splintering in the 4ICE runs.
These processes were more effective in the GATE simulation, because near the freezing level the weaker updrafts were comparable in magnitude to the fall speeds of newly frozen drops. Many of the ice crystals initiated at relatively warm temperatures (above −15°C) grew rapidly by deposition into sizes large enough to be converted to snow. In contrast, in the more intense COHMEX updrafts, very large numbers of small ice crystals were initiated at colder temperatures (below −15°C) by nucleation and stochastic freezing of droplets, such that relatively few ice crystals grew by deposition to sizes large enough to be converted to snow. In addition, the large number of frozen drops of O(5 L⁻¹) in the 4ICE run is consistent with airborne microphysical data in intense COHMEX updrafts. Numerous sensitivity experiments were made with the four-class and three-class ice schemes, varying fall speed relationships, particle characteristics, and ice collection efficiencies. These tests provide strong support for the conclusion that the 4ICE scheme gives improved resemblance to observations despite present uncertainties in a number of important microphysical parameters.
Techniques to control and position laser targets. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, T.B.
1978-06-01
The purpose of the work was to investigate the potential role of various electrohydrodynamic phenomena in the fabrication of small spherical particles and shells for laser target applications. A number of topics were considered. These included charged droplet levitation, specifically the combined effects of the Rayleigh limit and droplet elongation in the presence of electric fields. Two new levitation schemes for uncharged dielectric particles were studied. A dynamic dielectrophoretic levitation scheme was proposed, and unsuccessful attempts were made to observe levitation with it. Another static dielectrophoretic levitation scheme was studied and used extensively. A theory was developed for this type of levitation, and a dielectric constant measurement scheme was proposed. A charged droplet generator for the production of single droplets (< 1 mm dia) of insulating liquids was developed. The synchronous DEP pumping of bubbles and spheres has been considered. Finally, some preliminary experiments with SiH4/O2 bubbles in Viscasil silicone fluid were conducted to learn about the possibility of using silane to form SiO2 microballoons from bubbles.
Path scheduling for multiple mobile actors in wireless sensor network
NASA Astrophysics Data System (ADS)
Trapasiya, Samir D.; Soni, Himanshu B.
2017-05-01
In wireless sensor networks (WSNs), energy is the main constraint. In this work we address this issue for single as well as multiple mobile sensor-actor networks. We propose a Rendezvous Point Selection Scheme (RPSS) in which Rendezvous Nodes are selected by a set-covering approach, and from these, Rendezvous Points are selected so as to reduce the tour length. The mobile actor's tour is scheduled to pass through those Rendezvous Points as a Travelling Salesman Problem (TSP). We also propose a novel rendezvous node rotation scheme for fair utilisation of all the nodes. We compared RPSS with the Stationary Actor scheme as well as RD-VT, RD-VT-SMT and WRP-SMT on performance metrics such as energy consumption, network lifetime and route length, and found better outcomes in all cases for a single actor. We also applied RPSS to multiple mobile actor cases, namely Multi-Actor Single Depot (MASD) termination and Multi-Actor Multiple Depot (MAMD) termination, and observed through extensive simulation that MAMD saves network energy in an optimised way and enhances network lifetime compared to all other schemes.
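The tour-scheduling step can be sketched with a simple nearest-neighbour TSP heuristic over candidate Rendezvous Points. The coordinates and function names below are illustrative, not taken from the paper, which may use a different TSP solver.

```python
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    total = 0.0
    for i in range(len(order)):
        x1, y1 = points[order[i]]
        x2, y2 = points[order[(i + 1) % len(order)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP heuristic: always move to the closest unvisited point.
    Ties are broken by the lowest point index for determinism."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        cx, cy = points[order[-1]]
        nxt = min(sorted(unvisited),
                  key=lambda j: math.hypot(points[j][0] - cx, points[j][1] - cy))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Hypothetical rendezvous point coordinates on a 10x10 field
rendezvous_points = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
order = nearest_neighbour_tour(rendezvous_points)
print(order, round(tour_length(rendezvous_points, order), 2))
```

A 2-opt pass or an exact solver could shorten the tour further; the point is only that the actor visits every Rendezvous Point on one closed route.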
Ahmed, Shakil; Khan, M Mahmud
2011-01-01
It is now more than 2 years since the Ministry of Health and Family Welfare of the Government of Bangladesh implemented the Maternal Health Voucher Scheme, a specialized form of demand-side financing programme. To analyse the early lessons from the scheme, information was obtained through semi-structured interviews with stakeholders at the sub-district level. The analysis identified a number of factors affecting the efficiency and performance of the scheme in the program area: delay in the release of voucher funds, selection criteria used for enrolling pregnant women in the programme, incentives created by the reimbursement system, etc. One of the objectives of the scheme was to encourage market competition among health care providers, but it failed to increase market competitiveness in the area. The resources made available through the scheme did not attract any new providers into the market and public facilities remained the only eligible provider both before and after scheme implementation. However, incentives provided through the voucher system did motivate public providers to offer a higher level of services. The beneficiaries expressed their overall satisfaction with the scheme as well. Since the local facility was not technically ready to provide all types of maternal health care services, providing vouchers may not improve access to care for many pregnant women. To improve the performance of the demand-side strategy, it has become important to adopt some supply-side interventions. In poor developing countries, a demand-side strategy may not be very effective without significant expansion of the service delivery capacity of health facilities at the sub-district level.
Age-Related Evolution Patterns in Online Handwriting
2016-01-01
Characterizing age from handwriting (HW) has important applications, as it is key to distinguishing normal HW evolution with age from abnormal HW change, potentially triggered by neurodegenerative decline. We propose, in this work, an original approach for online HW style characterization based on a two-level clustering scheme. The first level generates writer-independent word clusters from raw spatial-dynamic HW information. At the second level, each writer's words are converted into a Bag of Prototype Words that is augmented by an interword stability measure. This two-level HW style representation is input to an unsupervised learning technique, aiming at uncovering HW style categories and their correlation with age. To assess the effectiveness of our approach, we propose information theoretic measures to quantify the gain on age information from each clustering layer. We have carried out extensive experiments on a large public online HW database, augmented by HW samples acquired at Broca Hospital in Paris from people mostly between 60 and 85 years old. Unlike previous works claiming that there is only one pattern of HW change with age, our study reveals three major aging HW styles, one specific to aged people and the two others shared by other age groups. PMID:27752277
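The two-level idea above (word-level clustering, then a per-writer Bag of Prototype Words) can be sketched on a single scalar word feature. This is a hypothetical miniature: the authors' features, clustering algorithm, and stability measure are not reproduced.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means: level 1, clustering a scalar per-word feature
    (e.g. average writing speed) into prototype words."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def bag_of_prototypes(writer_words, centers):
    """Level 2: represent one writer as a normalized histogram over prototypes."""
    hist = [0] * len(centers)
    for v in writer_words:
        hist[min(range(len(centers)), key=lambda c: abs(v - centers[c]))] += 1
    total = sum(hist)
    return [h / total for h in hist]

word_features = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]   # pooled words from all writers
centers = kmeans_1d(word_features, k=2)
writer_style = bag_of_prototypes([1.0, 1.1, 5.0], centers)
print(centers, writer_style)
```

The writer histogram (not the raw words) is what the second, style-level clustering would consume.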
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bathke, C. G.; Wallace, R. K.; Ireland, J. R.
2010-09-01
This paper is an extension to earlier studies [1,2] that examined the attractiveness of materials mixtures containing special nuclear materials (SNM) and alternate nuclear materials (ANM) associated with the PUREX, UREX, COEX, THOREX, and PYROX reprocessing schemes. This study extends the figure of merit (FOM) for evaluating attractiveness to cover a broad range of proliferant state and sub-national group capabilities. The primary conclusion of this study is that all fissile material needs to be rigorously safeguarded to detect diversion by a state and provided the highest levels of physical protection to prevent theft by sub-national groups; no “silver bullet” has been found that will permit the relaxation of current international safeguards or national physical security protection levels. This series of studies has been performed at the request of the United States Department of Energy (DOE) and is based on the calculation of "attractiveness levels" that are expressed in terms consistent with, but normally reserved for, nuclear materials in DOE nuclear facilities [3]. The expanded methodology and updated findings are presented. Additionally, how these attractiveness levels relate to proliferation resistance and physical security is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bathke, Charles G; Wallace, Richard K; Ireland, John R
2009-01-01
This paper is an extension to earlier studies that examined the attractiveness of materials mixtures containing special nuclear materials (SNM) and alternate nuclear materials (ANM) associated with the PUREX, UREX, COEX, THOREX, and PYROX reprocessing schemes. This study extends the figure of merit (FOM) for evaluating attractiveness to cover a broad range of proliferant state and sub-national group capabilities. The primary conclusion of this study is that all fissile material needs to be rigorously safeguarded to detect diversion by a state and provided the highest levels of physical protection to prevent theft by sub-national groups; no 'silver bullet' has been found that will permit the relaxation of current international safeguards or national physical security protection levels. This series of studies has been performed at the request of the United States Department of Energy (DOE) and is based on the calculation of 'attractiveness levels' that are expressed in terms consistent with, but normally reserved for, nuclear materials in DOE nuclear facilities. The expanded methodology and updated findings are presented. Additionally, how these attractiveness levels relate to proliferation resistance and physical security is discussed.
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data.
We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
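The replacement scheme described above, substituting each partial-derivative term with its numerical-solution equation, can be illustrated on a 1-D diffusion (cable-like) equation. The explicit Euler stepper and mirrored boundaries below are an illustrative choice, not the generator's actual output.

```python
def step_diffusion(u, d, dt, dx):
    """Advance u_t = d * u_xx by one explicit Euler step.
    The partial-derivative term u_xx is *replaced* by its central-difference
    discretization, mirroring the code-generation idea. Zero-flux (Neumann)
    boundary conditions are imposed by mirroring the edge neighbours."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]            # mirror at left boundary
        right = u[i + 1] if i < n - 1 else u[n - 2]   # mirror at right boundary
        u_xx = (left - 2 * u[i] + right) / dx ** 2    # replaced derivative term
        new[i] = u[i] + dt * d * u_xx
    return new

u = [0.0, 0.0, 1.0, 0.0, 0.0]       # initial "voltage" spike on a 1-D fiber
u1 = step_diffusion(u, d=1.0, dt=0.1, dx=1.0)
print(u1)
```

With zero-flux boundaries the total of u is conserved, a quick sanity check that the boundary replacement was generated correctly.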
Seismic waves in heterogeneous material: subcell resolution of the discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Castro, Cristóbal E.; Käser, Martin; Brietzke, Gilbert B.
2010-07-01
We present an important extension of the arbitrary high-order discontinuous Galerkin (DG) finite-element method to model 2-D elastic wave propagation in highly heterogeneous material. In this new approach we include space-variable coefficients to describe smooth or discontinuous material variations inside each element, using the same numerical approximation strategy as for the velocity-stress variables in the formulation of the elastic wave equation. The combination of the DG method with a time integration scheme based on the solution of arbitrary accuracy derivatives Riemann problems still provides an explicit, one-step scheme which achieves arbitrary high-order accuracy in space and time. Compared to previous formulations the new scheme contains two additional terms in the form of volume integrals. We show that the increased computational cost per element can be more than offset by the improved material representation inside each element, as coarser meshes can be used, which reduces the total number of elements and therefore the computational time needed to reach a desired error level. We confirm the accuracy of the proposed scheme by performing convergence tests and several numerical experiments considering smooth and highly heterogeneous material. As the approximation of the velocity and stress variables in the wave equation and of the material properties in the model can be chosen independently, we investigate the influence of the polynomial material representation on the accuracy of the synthetic seismograms with respect to computational cost. Moreover, we study the behaviour of the new method on strong material discontinuities, in the case where the mesh is not aligned with such a material interface. In this case second-order linear material approximation seems to be the best choice, with higher-order intra-cell approximation leading to potentially unstable behaviour.
For all test cases we validate our solution against the well-established standard fourth-order finite difference and spectral element method.
One size fits all? An assessment tool for solid waste management at local and national levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broitman, Dani; Ayalon, Ofira; Kan, Iddo
2012-10-15
Highlights:
• Waste management schemes are generally implemented at national or regional level.
• Local conditions, characteristics and constraints are often neglected.
• We developed an economic model able to compare multi-level waste management options.
• A detailed test case with real economic data and a best-fit scenario is described.
• The most efficient schemes combine clear national directives with local-level flexibility.
Abstract: As environmental awareness rises, integrated solid waste management (WM) schemes are increasingly being implemented all over the world. The different WM schemes usually address issues such as landfilling restrictions (mainly due to methane emissions and competing land use), packaging directives and compulsory recycling goals. These schemes are, in general, designed at a national or regional level, whereas local conditions and constraints are sometimes neglected. When national WM top-down policies, in addition to setting goals, also dictate the methods by which they are to be achieved, local authorities lose their freedom to optimize their operational WM schemes according to their specific characteristics. There are a myriad of implementation options at the local level, and by carrying out a bottom-up approach the overall national WM system will be optimal on economic and environmental scales. This paper presents a model for optimizing waste strategies at a local level and evaluates this effect at a national level. This is achieved by using a waste assessment model which enables us to compare both the economic viability of several WM options at the local (single municipal authority) level, and aggregated results for regional or national levels.
A test case based on various WM approaches in Israel (several implementations of mixed and separated waste) shows that local characteristics significantly influence WM costs, and therefore the optimal scheme is one under which each local authority is able to implement its best-fitting mechanism, given that national guidelines are kept. The main result is that strict national/regional WM policies may be less efficient, unless some type of local flexibility is implemented. Our model is designed both for top-down and bottom-up assessment, and can be easily adapted for a wide range of WM option comparisons at different levels.
2013-01-01
Background The Government of the Lao People's Democratic Republic (Lao PDR) has embarked on a path to achieve universal health coverage (UHC) through implementation of four risk-protection schemes. One of these schemes is community-based health insurance (CBHI) – a voluntary scheme that targets roughly half the population. However, after 12 years of implementation, coverage through CBHI remains very low. Increasing coverage of the scheme would require expansion to households in both villages where CBHI is currently operating, and new geographic areas. In this study we explore the prospects of both types of expansion by examining household and district level data. Methods Using a household survey based on a case-comparison design of 3000 households, we examine the determinants of enrolment at the household level in areas where the scheme is currently operating. We model the determinants of enrolment using a probit model and predicted probabilities. Findings from focus group discussions are used to explain the quantitative findings. To examine the prospects for geographic scale-up, we use secondary data to compare characteristics of districts with and without insurance, using a combination of univariate and multivariate analyses. The multivariate analysis is a probit model, which models the factors associated with roll-out of CBHI to the districts. Results The household findings show that enrolment is concentrated among the better off and that adverse selection is present in the scheme. The district level findings show that to date, the scheme has been implemented in the most affluent areas, in closest proximity to the district hospitals, and in areas where quality of care is relatively good. Conclusions The household-level findings indicate that the scheme suffers from poor risk-pooling, which threatens financial sustainability.
The district-level findings call into question whether or not the Government of Laos can successfully expand to more remote, less affluent districts, with lower population density. We discuss the policy implications of the findings and specifically address whether CBHI can serve as a foundation for a national scheme, while exploring alternative approaches to reaching the informal sector in Laos and other countries attempting to achieve UHC. PMID:24344925
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
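The conflict-free particle-to-grid idea can be sketched in serial form: binning particles by cell before accumulation is the pattern a GPU kernel uses so that each cell's contributions are summed by a single thread instead of racing atomic adds. The 1-D charge deposition with linear (CIC) shape functions below is a simplified stand-in, not jasmine's actual current deposition.

```python
from collections import defaultdict

def deposit_charge(positions, weights, n_cells, dx):
    """Scatter particle charge to a 1-D grid with linear (CIC) shape functions.
    Particles are first binned by cell, so all contributions touching a cell
    are accumulated in one serial pass -- the conflict-free reduction pattern
    a GPU kernel would use instead of atomic adds."""
    bins = defaultdict(list)
    for pos, w in zip(positions, weights):
        bins[int(pos / dx)].append((pos, w))
    rho = [0.0] * (n_cells + 1)               # grid nodes
    for cell, plist in sorted(bins.items()):
        for pos, w in plist:
            frac = pos / dx - cell            # offset within the cell, in [0, 1)
            rho[cell] += w * (1.0 - frac)     # share to the left node
            rho[cell + 1] += w * frac         # share to the right node
    return rho

rho = deposit_charge([0.5, 1.25, 1.25], [1.0, 1.0, 2.0], n_cells=3, dx=1.0)
print(rho)
```

Total deposited charge equals total particle weight, the usual invariant for a charge-conserving scatter.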
NASA Astrophysics Data System (ADS)
Jin, Wei; Zhang, Chongfu; Yuan, Weicheng
2016-02-01
We propose a physically enhanced secure scheme for direct-detection orthogonal frequency division multiplexing passive optical networks (DD-OFDM-PON) and long-reach coherent-detection orthogonal frequency division multiplexing passive optical networks (LRCO-OFDM-PON), employing noise-based encryption and channel/phase estimation. Noise data generated by chaos mapping are used to substitute the training sequences in the preamble, realizing channel estimation and frame synchronization, and are also embedded on a variable number of key-selected, randomly spaced pilot subcarriers to implement phase estimation. Consequently, the information used for signal recovery is totally hidden as unpredictable noise within the OFDM frames, masking useful information and preventing illegal users from correctly performing OFDM demodulation, thereby enhancing resistance to attackers. The levels of illegal-decryption complexity and implementation complexity are theoretically discussed. Through extensive simulations, the performance of the proposed channel/phase estimation and the security introduced by encrypted pilot carriers have been investigated in both DD-OFDM and LRCO-OFDM systems. In addition, in the proposed secure DD-OFDM/LRCO-OFDM PON models, both legal and illegal receiving scenarios have been considered. These results show that, by utilizing the proposed scheme, the resistance to attackers can be significantly enhanced in DD-OFDM-PON and LRCO-OFDM-PON systems without performance degradation.
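A minimal sketch of the keyed-chaos idea: a logistic map seeded by a shared key selects the pilot subcarrier positions, so transmitter and receiver agree on them while an attacker without the key cannot predict the pattern. The map parameter and index mapping below are illustrative assumptions, not the paper's chaos mapping.

```python
def select_pilots(key_x0, n_subcarriers, n_pilots, r=3.99):
    """Derive pilot subcarrier positions from a logistic-map sequence
    x <- r*x*(1-x), seeded by the shared key key_x0 in (0, 1).
    The same seed reproduces the same pilot pattern at the receiver."""
    pilots = []
    x = key_x0
    while len(pilots) < n_pilots:
        x = r * x * (1.0 - x)                      # chaotic iteration
        idx = int(x * n_subcarriers) % n_subcarriers
        if idx not in pilots:                      # keep pilot positions distinct
            pilots.append(idx)
    return sorted(pilots)

tx_pilots = select_pilots(0.31, n_subcarriers=64, n_pilots=8)
rx_pilots = select_pilots(0.31, n_subcarriers=64, n_pilots=8)
print(tx_pilots == rx_pilots)
```

Because the map is sensitive to the seed, an eavesdropper guessing the key cannot reliably locate the pilots needed for phase estimation.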
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bathke, C. G.; Jarvinen, G. D.; Wallace, R. K.
2008-10-01
This paper summarizes the results of an extension to an earlier study [ ] that examined the attractiveness of materials mixtures containing special nuclear materials (SNM) associated with the PUREX, UREX+, and COEX reprocessing schemes. This study focuses on the materials associated with the UREX, COEX, THOREX, and PYROX reprocessing schemes. This study also examines what is required to render plutonium as “unattractive.” Furthermore, combining the results of this study with those from the earlier study permits a comparison of the uranium and thorium based fuel cycles on the basis of the attractiveness of the SNM associated with each fuel cycle. Both studies were performed at the request of the United States Department of Energy (DOE), and are based on the calculation of “attractiveness levels” that has been couched in terms chosen for consistency with those normally used for nuclear materials in DOE nuclear facilities [ ]. The methodology and key findings will be presented. Additionally, how these attractiveness levels relate to proliferation resistance (e.g. by increasing impediments to the diversion, theft, undeclared production of SNM for the purpose of acquiring a nuclear weapon), and how they could be used to help inform policy makers, will be discussed.
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Bishop, Robert H.
1996-01-01
A recently developed rendezvous navigation fusion filter that optimally exploits existing distributed filters for rendezvous and GPS navigation to achieve the relative and inertial state accuracies of both in a global solution is utilized here to process actual flight data. Space Shuttle Mission STS-69 was the first mission to date which gathered data from both the rendezvous and Global Positioning System filters allowing, for the first time, a test of the fusion algorithm with real flight data. Furthermore, a precise best estimate of trajectory is available for portions of STS-69, making possible a check on the performance of the fusion filter. In order to successfully carry out this experiment with flight data, two extensions to the existing scheme were necessary: a fusion edit test based on differences between the filter state vectors, and an underweighting scheme to accommodate the suboptimal perfect target assumption made by the Shuttle rendezvous filter. With these innovations, the flight data was successfully fused from playbacks of downlinked and/or recorded measurement data through ground analysis versions of the Shuttle rendezvous filter and a GPS filter developed for another experiment. The fusion results agree with the best estimate of trajectory at approximately the levels of uncertainty expected from the fusion filter's covariance matrix.
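The underweighting idea can be sketched in a scalar Kalman measurement update: inflating the predicted-measurement variance term reduces the gain, so the filter takes smaller steps toward measurements whose underlying model (here, the suboptimal perfect-target assumption) is suspect. The factor and numbers below are illustrative, not the Shuttle filter's values.

```python
def kalman_update(x, p, z, h, r, underweight=1.0):
    """Scalar Kalman measurement update with optional underweighting.
    underweight > 1 inflates the h*p*h term of the innovation variance,
    lowering the gain so the filter trusts the measurement less."""
    s = underweight * h * p * h + r      # innovation variance
    k = p * h / s                        # Kalman gain
    x_new = x + k * (z - h * x)          # state update
    p_new = (1.0 - k * h) * p            # covariance update
    return x_new, p_new

# Same prior and measurement, with and without underweighting
x0, p0 = 0.0, 4.0
x1, p1 = kalman_update(x0, p0, z=1.0, h=1.0, r=1.0)
x2, p2 = kalman_update(x0, p0, z=1.0, h=1.0, r=1.0, underweight=2.0)
print(x1, x2)
```

The underweighted update moves less toward the measurement and retains more state uncertainty, which is the desired conservative behaviour.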
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pancholi, S. C.; Martin, M. J.
A review of information available on level schemes and decay characteristics for all nuclei with mass number A = 212 is presented. Experimental data and their evaluation, adopted values, comparison with theory, and arguments for spin and parity assignments are given. Inconsistencies and discrepancies in the level schemes are discussed.
γ5 in the four-dimensional helicity scheme
NASA Astrophysics Data System (ADS)
Gnendiger, C.; Signer, A.
2018-05-01
We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.
Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang
2017-01-01
By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP and the corresponding discrete-time recurrent neural network. PMID:28955217
Proof of cipher text ownership based on convergence encryption
NASA Astrophysics Data System (ADS)
Zhong, Weiwei; Liu, Zhusong
2017-08-01
Cloud storage systems save disk space and bandwidth through deduplication, but this technology has been the target of security attacks: an attacker can obtain a file by presenting only its hash value, deceiving the server into granting file ownership. To solve these security problems and to meet the differing security requirements of files in cloud storage systems, an efficient, information-theoretically secure proof-of-ownership scheme is proposed. The scheme protects the data through convergent encryption and uses an improved block-level proof-of-ownership protocol, enabling block-level client-side deduplication and thereby an efficient and secure cloud storage deduplication scheme.
NASA Astrophysics Data System (ADS)
Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-02-01
In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, the selected quantum states must satisfy two conditions to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-01-01
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem CFs face with motion and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF while adding the ability to handle these special scenarios. Finally, extensive experimental results on VOT benchmark datasets show that our algorithm performs advantageously compared with the top-ranked trackers. PMID:27618046
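The abstract does not define its "point sharpness function"; a common stand-in for patch sharpness is the mean squared response of a discrete Laplacian, which drops as motion blur smooths edges. A minimal stdlib-only sketch on nested lists (the metric name and form are assumptions, not the paper's exact function):

```python
def laplacian_sharpness(patch):
    """Mean squared discrete Laplacian of a 2-D grayscale patch (list of lists).
    Blurred patches give smaller values than sharp ones."""
    h, w = len(patch), len(patch[0])
    total, n = 0.0, 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # 5-point Laplacian stencil at (i, j)
            lap = (patch[i - 1][j] + patch[i + 1][j]
                   + patch[i][j - 1] + patch[i][j + 1]
                   - 4 * patch[i][j])
            total += lap * lap
            n += 1
    return total / n
```

A tracker can threshold this score per frame to decide whether the target is in a blurred/fast-motion state and switch handling accordingly.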
Hose, D R; Lawford, P V; Narracott, A J; Penrose, J M T; Jones, I P
2003-01-01
Fluid-solid interaction is a primary feature of cardiovascular flows. There is increasing interest in the numerical solution of these systems as the extensive computational resource required for such studies becomes available. One form of coupling is an external weak coupling of separate solid and fluid mechanics codes. Information about the stress tensor and displacement vector at the wetted boundary is passed between the codes, and an iterative scheme is employed to move towards convergence of these parameters at each time step. This approach has the attraction that separate codes with the most extensive functionality for each of the separate phases can be selected, which might be important in the context of the complex rheology and contact mechanics that often feature in cardiovascular systems. Penrose and Staples describe a weak coupling of CFX for computational fluid mechanics to ANSYS for solid mechanics, based on a simple Jacobi iteration scheme. It is important to validate the coupled numerical solutions. An extensive analytical study of flow in elastic-walled tubes was carried out by Womersley in the late 1950s. This paper describes the performance of the coupling software for the straight elastic-walled tube, and compares the results with Womersley's analytical solutions. It also presents preliminary results demonstrating the application of the coupled software in the context of a stented vessel.
Hierarchical atom type definitions and extensible all-atom force fields.
Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai
2016-03-15
The extensibility of a force field is key to solving the missing-parameter problem commonly found in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical as the coverage of the force field increases beyond a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make atom type definitions extensible, which ensures that force fields developed from the definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on AMBER and DFF functional forms are parameterized for common organic molecules. The force field parameters are derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies are calculated successfully by introducing a polarization scaling factor to the dispersion term between the solvent and solute molecules. © 2015 Wiley Periodicals, Inc.
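The core mechanism of a hierarchical atom-type definition can be illustrated with a dotted-type fallback: when a parameter is missing at the most specific type, the lookup walks up to an ancestor type, which is what keeps the parameter set extensible. The type names and values below are invented for illustration and are not the paper's actual typing rules.

```python
# Hypothetical parameter table: more specific types override ancestors.
PARAMS = {
    "C": {"epsilon": 0.080, "sigma": 3.5},
    "C.sp3": {"epsilon": 0.066},
    "C.sp3.H3": {},  # defined, but carries no parameters of its own
}

def lookup(atom_type: str, name: str):
    """Walk up the dotted type hierarchy until the parameter is found."""
    parts = atom_type.split(".")
    while parts:
        t = ".".join(parts)
        if t in PARAMS and name in PARAMS[t]:
            return PARAMS[t][name]
        parts.pop()  # fall back to the parent type
    raise KeyError(f"no parameter {name!r} on any ancestor of {atom_type!r}")
```

A new, very specific atom type thus never produces a hard missing-parameter failure as long as some ancestor is parameterized.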
NASA Astrophysics Data System (ADS)
Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.
2010-03-01
Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has previously been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and that the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, and treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species.
The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species with practical computer time.
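The DRGEP reduction underlying both DAC schemes ranks each species by a path-dependent importance coefficient: the maximum, over all paths in the species interaction graph, of the product of direct interaction coefficients linking it to a search-initiating (target) species; species whose coefficient falls below the error tolerance are dropped on the fly. A hedged max-product sketch (graph data and names are made up):

```python
def drgep_r_values(edges, target):
    """edges: dict {(a, b): weight in [0, 1]} of direct interaction coefficients.
    Returns the path-product importance R of every species w.r.t. the target."""
    species = set()
    for a, b in edges:
        species.update((a, b))
    r = {s: 0.0 for s in species}
    r[target] = 1.0
    # Bellman-Ford-style relaxation of max-product path values
    for _ in range(len(species)):
        updated = False
        for (a, b), w in edges.items():
            if r[a] * w > r[b]:
                r[b] = r[a] * w
                updated = True
        if not updated:
            break
    return r
```

In a DAC step, the reduced mechanism at each cell would keep only species with `r[s]` above the chosen error tolerance.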
A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation
Smith, Peter E.
2006-01-01
A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
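The "double-sweep method" used for the small tridiagonal systems is the classic Thomas algorithm: a forward elimination sweep followed by back substitution. A minimal sketch (no pivoting, so it assumes the diagonally dominant systems that vertical-diffusion discretizations produce):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), and right-hand side d."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    # Forward sweep: eliminate the sub-diagonal
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each vertical column yields one such O(n) solve, which is why the implicit vertical diffusion treatment stays cheap.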
A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods
Luo, Guangchun; Qin, Ke
2014-01-01
Searchable encryption technique enables the users to securely store and search their documents over the remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders the functional extensions. We prove that asymmetric searchable structure could be converted to symmetric structure, and functions could be modeled separately apart from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric scheme) and range query (previously only available in asymmetric scheme). PMID:24719565
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
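The flavor of the WENO reconstructions mentioned above can be seen in the compact third-order variant: two candidate linear reconstructions are blended with nonlinear weights driven by smoothness indicators, so smooth data gets the high-order combination while a discontinuity biases the stencil away from the jump. (The paper's schemes are higher order; this is only a minimal illustration.)

```python
def weno3_left(um1, u, up1, eps=1e-6):
    """Third-order WENO reconstruction of the left interface state u_{i+1/2}
    from cell averages u_{i-1}, u_i, u_{i+1}."""
    # Candidate second-order reconstructions on the two sub-stencils
    p0 = -0.5 * um1 + 1.5 * u      # stencil {i-1, i}
    p1 = 0.5 * u + 0.5 * up1       # stencil {i, i+1}
    # Smoothness indicators and nonlinear weights (ideal weights 1/3, 2/3)
    b0 = (u - um1) ** 2
    b1 = (up1 - u) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    return (a0 * p0 + a1 * p1) / (a0 + a1)
```

On smooth data the weights approach the ideal 1/3 and 2/3 and third-order accuracy is recovered; near a jump the crossing stencil's weight collapses, suppressing oscillations.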
From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.
Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry
2015-07-10
Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers a robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate ∼1.6% in the photons detected in the gates. This scheme uses only three-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer may be less challenging than previously thought.
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
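The "automatic subincrementing" idea for integrating constitutive rate equations can be sketched with step-doubling error control: a substep is accepted only when one full explicit step and two half steps agree to within a tolerance; otherwise the substep is halved. This is a generic illustration under stated assumptions, not the report's actual integrator:

```python
def integrate_rate(f, y0, t0, t1, tol=1e-8):
    """Explicit-Euler integration of dy/dt = f(t, y) with automatic
    subincrementing: accept a substep only if a full step and two half
    steps agree to within tol, else halve the substep."""
    t, y, dt = t0, y0, t1 - t0
    while t < t1 - 1e-15:
        dt = min(dt, t1 - t)
        full = y + dt * f(t, y)                       # one full Euler step
        half = y + 0.5 * dt * f(t, y)                 # two half steps
        two = half + 0.5 * dt * f(t + 0.5 * dt, half)
        if abs(two - full) <= tol:
            t, y = t + dt, two
            dt *= 2.0                                  # try a larger substep
        else:
            dt *= 0.5                                  # subincrement
    return y
```

Implicit variants replace the Euler update with a Newton solve per substep; the acceptance/halving logic is the same.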
Potential for protein surface shape analysis using spherical harmonics and 3D Zernike descriptors.
Venkatraman, Vishwesh; Sael, Lee; Kihara, Daisuke
2009-01-01
With structure databases expanding at a rapid rate, the task at hand is to provide reliable clues to their molecular function and to be able to do so on a large scale. This, however, requires suitable encodings of the molecular structure which are amenable to fast screening. To this end, moment-based representations provide a compact and nonredundant description of molecular shape and other associated properties. In this article, we present an overview of some commonly used representations with specific focus on two schemes namely spherical harmonics and their extension, the 3D Zernike descriptors. Key features and differences of the two are reviewed and selected applications are highlighted. We further discuss recent advances covering aspects of shape and property-based comparison at both global and local levels and demonstrate their applicability through some of our studies.
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. 
Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.
Physical oceanography from satellites: Currents and the slope of the sea surface
NASA Technical Reports Server (NTRS)
Sturges, W.
1974-01-01
A global scheme using satellite altimetry in conjunction with thermometry techniques provides for more accurate determinations of first order leveling networks by overcoming discrepancies between ocean leveling and land leveling methods. The high noise content in altimetry signals requires filtering or correction for tides, etc., as well as carefully planned sampling schemes.
NASA Astrophysics Data System (ADS)
Bakoban, Rana A.
2017-08-01
The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches to the estimation of the CV under type-II censored data from the extension of the exponential distribution (EED). Point and interval estimates of the CV are obtained for each of the maximum likelihood and parametric bootstrap techniques. The Bayesian approach, with the help of an MCMC method, is also presented. A real data set is presented and analyzed, and the obtained results are used to assess the theoretical findings.
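As a hedged illustration of a parametric-bootstrap percentile interval for a CV (using a lognormal model for simplicity rather than the paper's censored EED, and with no censoring):

```python
import math
import random

def cv_lognormal(logs):
    """MLE of the coefficient of variation under a lognormal model:
    CV = sqrt(exp(sigma^2) - 1), with sigma^2 estimated from the log data."""
    n = len(logs)
    mu = sum(logs) / n
    s2 = sum((x - mu) ** 2 for x in logs) / n
    return math.sqrt(math.exp(s2) - 1.0)

def cv_bootstrap_ci(data, level=0.95, reps=2000, seed=7):
    """Point estimate and parametric-bootstrap percentile interval for the CV."""
    rng = random.Random(seed)
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    # Resample from the fitted model and recompute the statistic
    boot = sorted(
        cv_lognormal([rng.gauss(mu, sigma) for _ in range(n)])
        for _ in range(reps)
    )
    lo = boot[int((1 - level) / 2 * reps)]
    hi = boot[int((1 + level) / 2 * reps) - 1]
    return cv_lognormal(logs), (lo, hi)
```

The paper's version would fit the EED to the type-II censored sample instead and resample censored data, but the percentile-interval mechanics are the same.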
NASA Technical Reports Server (NTRS)
Miller, Timothy L.; Cohen, Charles; Paxton, Jessica; Robertson, F. R. (Pete)
2009-01-01
Global forecasts were made with the 0.25-degree latitude version of GEOS-5, with the RAS scheme and with the Kain-Fritsch (K-F) scheme. The Katrina (2005) hurricane simulation was examined. Replacement of the RAS convective scheme with the K-F scheme results in a much more vigorous Katrina, closer to reality, though still not as vigorous as the real storm. In terms of the wind maximum, the gap was closed by 50%. The difference appears to be due to the RAS scheme drying out the boundary layer, thus hampering the grid-scale secondary circulation and attendant cyclone development. The RAS case never developed a full warm core, whereas the K-F case did. Not shown here: the K-F scheme also resulted in a more vigorous storm than when GEOS-5 is run with no convective parameterization. Also not shown: an experiment in which the RAS firing level was moved up by three model levels resulted in a stronger, warm-core storm, though not as strong as the K-F case. Effects on storm track were noticed, but not studied.
Matching soil salinization and cropping systems in communally managed irrigation schemes
NASA Astrophysics Data System (ADS)
Malota, Mphatso; Mchenga, Joshua
2018-03-01
Occurrence of soil salinization in irrigation schemes can be a good indicator of the need to introduce highly salt-tolerant crops. This study assessed the level of soil salinization in the communally managed 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. Salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that even for crops with very low salinity tolerance (ECi < 2 dS/m), the irrigation water was suitable for irrigation purposes. However, root-zone soil salinity profiles showed that leaching of salts was not adequate, and that the leaching requirement for the scheme needs to be re-examined and always adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match the prevailing soil and irrigation water salinity levels.
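The leaching requirement the authors recommend re-examining is commonly computed with the FAO rule of thumb LR = ECw / (5·ECe − ECw), where ECw is the irrigation-water salinity and ECe the crop's threshold saturated-extract salinity, both in dS/m. Which exact formula the scheme uses is not stated in the abstract, so this is an assumption:

```python
def leaching_requirement(ec_w, ec_e_threshold):
    """FAO-style leaching requirement fraction:
    LR = ECw / (5 * ECe - ECw),
    with ECw the irrigation-water salinity and ECe the crop's threshold
    saturated-extract salinity (both in dS/m)."""
    return ec_w / (5.0 * ec_e_threshold - ec_w)
```

For example, water at 1 dS/m applied to a crop with a 2 dS/m threshold calls for roughly 11% extra water to pass through the root zone as leaching.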
The a(4) Scheme-A High Order Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2009-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a nondissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high order schemes, in this paper we describe a new high order (4th-5th order) and neutrally stable CESE solver of a 1D advection equation with a constant advection speed a. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and two points at the lower time level. Because it is associated with four independent mesh variables (the numerical analogues of the dependent variable and its first, second, and third-order spatial derivatives) and four equations per mesh point, the new scheme is referred to as the a(4) scheme. As in the case of other similar CESE neutrally stable solvers, the a(4) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. Except for a singular case, these forms are equivalent and satisfy a space-time inversion (STI) invariant property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove that the a(4) scheme must be neutrally stable when it is stable. Numerically, it has been established that the scheme is stable if the value of the Courant number is less than 1/3.
SYSTEMATIZATION OF MASS LEVELS OF PARTICLES AND RESONANCES ON HEURISTIC BASIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takabayasi, T.
1963-12-16
Once more a scheme of simple mass rules and formulas for particles and resonant levels is investigated and organized, based on some general hypotheses. The essential ingredients of the scheme are, on one hand, the equal-interval rule governing the isosinglet meson series, associated with a particularly simple mass ratio between the 2++ level f and the 0++ level ABC, and on the other, a new basic mass formula that unifies some of the meson and baryon levels. The whole set of baryon levels is arranged in a table analogous to the periodic table, and correspondences between different series and the equivalence between spin and hypercharge, when properly applied, fix the whole baryon mass spectrum in good agreement with observations. Connections with the scheme of mass formulas formerly given are also shown. (auth)
1992-01-01
multiversioning scheme for this purpose was presented in [9]. The scheme guarantees that high level methods would read down object states at lower levels that...order given by fork-stamp, and terminated writing versions with timestamp WStamp. Such a history is needed to implement the multiversioning scheme...recovery protocol for multiversion schedulers and show that this protocol is both correct and secure. The behavior of the recovery protocol depends
Crystal field parameters and energy levels scheme of trivalent chromium doped BSO
NASA Astrophysics Data System (ADS)
Petkova, P.; Andreici, E.-L.; Avram, N. M.
2014-11-01
The aim of this paper is to give an analysis of crystal field parameters and energy level schemes for the doped material above, in order to provide a reliable explanation of experimental data. The crystal field parameters have been modeled in the frame of the Exchange Charge Model (ECM) of crystal field theory, taking into account the geometry of the systems and the actual site symmetry of the impurity ions. The effect of the ligand charges and of the covalent bonding between the chromium cation and the oxygen anions, in the cluster approach, was also taken into account. With the obtained values of the crystal field parameters we simulated the energy level scheme of the chromium ions by diagonalizing the Hamiltonian matrix of the doped crystal. The obtained energy levels and estimated Racah parameters B and C were compared with the experimental spectroscopic data and discussed. Comparison with experiment shows that the results are quite satisfactory, which justifies the model and simulation scheme used for the title system.
Seasonal forecasts of groundwater levels in Lanyang Plain in Taiwan
NASA Astrophysics Data System (ADS)
Chang, Ya-Chi; Lin, Yi-Chiu
2017-04-01
Groundwater plays a critical and important role in the world's freshwater resources, and it is also an important part of Taiwan's water supply for domestic, agricultural and industrial use. Prolonged dry climatic conditions can induce groundwater drought and may have a huge impact on water resources. Therefore, this study utilizes seasonal rainfall forecasts from the Model for Prediction Across Scales (MPAS) to simulate groundwater levels in the Lanyang Plain in Taiwan up to three months into the future. The MPAS is set up with a 120 km uniform grid, and the physics schemes used to provide the rainfall forecasts include the WSM6 microphysics scheme, the Kain-Fritsch cumulus scheme, the RRTMG radiation scheme, and the YSU planetary boundary layer scheme. Results of this study can provide a reference for water resources management to ensure the sustainability of groundwater resources in the Lanyang Plain.
Moving overlapping grids with adaptive mesh refinement for high-speed reactive and non-reactive flow
NASA Astrophysics Data System (ADS)
Henshaw, William D.; Schwendeman, Donald W.
2006-08-01
We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows that demonstrate the use and accuracy of the numerical approach.
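A second-order predictor-corrector of the kind used here for the Newton-Euler body equations can be sketched as a Heun step (the paper's exact corrector may differ):

```python
def heun_step(f, t, y, dt):
    """One explicit second-order predictor-corrector (Heun) step for the
    vector ODE dy/dt = f(t, y)."""
    f0 = f(t, y)
    y_pred = [yi + dt * fi for yi, fi in zip(y, f0)]        # predictor (Euler)
    f1 = f(t + dt, y_pred)
    # Corrector: trapezoidal average of the slopes
    return [yi + 0.5 * dt * (a + b) for yi, a, b in zip(y, f0, f1)]
```

For a rigid body, `y` would hold the center-of-mass position/velocity and the angular state, with `f` supplying the surface-stress forces and torques.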
An extrapolation scheme for solid-state NMR chemical shift calculations
NASA Astrophysics Data System (ADS)
Nakajima, Takahito
2017-06-01
Conventional quantum chemical and solid-state physics approaches each face problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme satisfactorily yields solid-state NMR magnetic shielding constants. The estimated values show only a small dependence on the low-level density functional theory calculation used in the extrapolation scheme. Thus, our approach is efficient, because only a rough low-level calculation is required within the extrapolation scheme.
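One plausible reading of the extrapolation idea (an assumption on our part, since the abstract does not give the formula) is an ONIOM-style additive combination: a cheap low-level periodic calculation is corrected by the high-minus-low difference evaluated on a finite cluster, so the expensive method never touches the full periodic system.

```python
def extrapolated_shielding(high_cluster, low_cluster, low_periodic):
    """Additive high/low combination (shielding constants in ppm): the
    low-level treatment of the periodic environment, corrected by the
    method difference evaluated on a finite cluster model."""
    return low_periodic + (high_cluster - low_cluster)
```

If the low-level error is similar in the cluster and the crystal, it cancels in the difference, which is consistent with the reported weak dependence on the low-level method.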
Calculation of the recirculating compressible flow downstream a sudden axisymmetric expansion
NASA Technical Reports Server (NTRS)
Vandromme, D.; Haminh, H.; Brunet, H.
1988-01-01
Significant progress has been made during the last five years in adapting conventional Navier-Stokes solvers to handle nonconservative equations. A primary application is the use of transport-equation turbulence models, but the extension is also possible for describing the transport of non-passive scalars, such as in reactive media. Among others, combustion and gas dissociation phenomena are topics needing a considerable research effort. An implicit two-step scheme based on the well-known MacCormack scheme has been modified to treat compressible turbulent flows on complex geometries. Implicit treatment of nonconservative equations (in the present case, a two-equation turbulence model) opens the way to the coupled solution of thermochemical transport equations.
Navier-Stokes Dynamics by a Discrete Boltzmann Model
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
2010-01-01
This work investigates the possibility of particle-based algorithms for the Navier-Stokes equations and higher order continuum approximations of the Boltzmann equation; such algorithms would generalize the well-known Pullin scheme for the Euler equations. One such method is proposed in the context of a discrete velocity model of the Boltzmann equation. Preliminary results on shock structure are consistent with the expectation that the shock should be much broader than the near discontinuity predicted by the Pullin scheme, yet narrower than the prediction of the Boltzmann equation. We discuss the extension of this essentially deterministic method to a stochastic particle method that, like DSMC, samples the distribution function rather than resolving it completely.
Lattice-Assisted Spectroscopy: A Generalized Scanning Tunneling Microscope for Ultracold Atoms.
Kantian, A; Schollwöck, U; Giamarchi, T
2015-10-16
We propose a scheme to measure the frequency-resolved local particle and hole spectra of any optical lattice-confined system of correlated ultracold atoms that offers single-site addressing and imaging, which is now an experimental reality. Combining perturbation theory and time-dependent density matrix renormalization group simulations, we quantitatively test and validate this approach of lattice-assisted spectroscopy on several one-dimensional example systems, such as the superfluid and Mott insulator, with and without a parabolic trap, and finally on edge states of the bosonic Su-Schrieffer-Heeger model. We highlight extensions of our basic scheme to obtain an even wider variety of interesting and important frequency resolved spectra.
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
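The checkpoint/rollback idea above, where work since the last checkpoint can be discarded so that a transient fault never propagates past a committed state, can be sketched as a toy model. Class and method names are hypothetical; a real scheme operates on cache lines and coherence states, not Python dictionaries.

```python
class CheckpointedCache:
    """Toy model of cache-based checkpointing: writes since the last
    checkpoint are buffered and either committed (checkpoint) or
    discarded (rollback on a detected transient fault)."""
    def __init__(self):
        self.committed = {}   # checkpointed (recoverable) state
        self.dirty = {}       # modifications since the last checkpoint

    def write(self, addr, value):
        self.dirty[addr] = value

    def read(self, addr):
        # Dirty data shadows the committed copy, as in a write-back cache.
        return self.dirty.get(addr, self.committed.get(addr))

    def checkpoint(self):
        # Commit dirty lines; they become part of the recovery state.
        self.committed.update(self.dirty)
        self.dirty.clear()

    def rollback(self):
        # On error detection, discard uncommitted work; execution
        # restarts from the last checkpoint, so no rollback propagation.
        self.dirty.clear()
```

The key property, visible even in this sketch, is that recovery cost is bounded by the work done since the last checkpoint.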
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor
2016-02-24
In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with less PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. Later on, the proposed three-stage incremental cooperation is implemented in a network layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable with less PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.
Elbashir, Ahmed B; Abdelbagi, Azhari O; Hammad, Ahmed M A; Elzorgani, Gafar A; Laing, Mark D
2015-03-01
Ninety-six human blood samples were collected from six locations that represent areas of intensive pesticide use in Sudan, which included irrigated cotton schemes (Wad Medani, Hasaheesa, Elmanagil, and Elfaw) and sugarcane schemes (Kenana and Gunaid). Blood samples were analyzed for organochlorine pesticide residues by gas liquid chromatography (GLC) equipped with an electron capture detector (ECD). Residues of p,p'-dichlorodiphenyldichloroethylene (DDE), heptachlor epoxide, γ-HCH, and dieldrin were detected in blood from all locations surveyed. Aldrin was not detected in any of the samples analyzed, probably due to its conversion to dieldrin. The levels of total organochlorine burden detected were higher in the blood from people in the irrigated cotton schemes (mean 261 ng ml(-1), range 38-641 ng ml(-1)) than in the blood of people from the irrigated sugarcane schemes (mean 204 ng ml(-1), range 59-365 ng ml(-1)). The highest levels of heptachlor epoxide (170 ng ml(-1)) and γ-HCH (92 ng ml(-1)) were observed in blood samples from Hasaheesa, while the highest levels of DDE (618 ng ml(-1)) and dieldrin (82 ng ml(-1)) were observed in blood samples from Wad Medani and Kenana, respectively. The organochlorine levels in blood samples seemed to decrease with increasing distance from the old irrigated cotton schemes (Wad Medani, Hasaheesa, and Elmanagil) where the heavy application of these pesticides took place historically.
NASA Astrophysics Data System (ADS)
Zhou, X.; Beljaars, A.; Wang, Y.; Huang, B.; Lin, C.; Chen, Y.; Wu, H.
2017-09-01
Weather Research and Forecasting (WRF) simulations with different selections of subgrid orographic drag over the Tibetan Plateau have been evaluated with observation and ERA-Interim reanalysis. Results show that the subgrid orographic drag schemes, especially the turbulent orographic form drag (TOFD) scheme, efficiently reduce the 10 m wind speed bias and RMS error with respect to station measurements. With the combination of gravity wave, flow blocking and TOFD schemes, wind speed is simulated more realistically than with the individual schemes only. Improvements are also seen in the 2 m air temperature and surface pressure. The gravity wave drag, flow blocking drag, and TOFD schemes combined have the smallest station mean bias (-2.05°C in 2 m air temperature and 1.27 hPa in surface pressure) and RMS error (3.59°C in 2 m air temperature and 2.37 hPa in surface pressure). Meanwhile, the TOFD scheme contributes more to the improvements than the gravity wave drag and flow blocking schemes. The improvements are more pronounced at low levels of the atmosphere than at high levels due to the stronger drag enhancement on the low-level flow. The reduced near-surface cold bias and high-pressure bias over the Tibetan Plateau are the result of changes in the low-level wind components associated with the geostrophic balance. The enhanced drag directly leads to weakened westerlies but also enhances the ageostrophic flow, in this case reducing (enhancing) the northerlies (southerlies), which bring more warm air across the Himalaya Mountain ranges from South Asia (bring less cold air from the north) to the interior Tibetan Plateau.
Avila-Burgos, Leticia; Cahuana-Hurtado, Lucero; Montañez-Hernandez, Julio; Servan-Mori, Edson; Aracena-Genao, Belkis; Del Río-Zolezzi, Aurora
2016-01-01
To analyze whether the changes observed in the level and distribution of resources for maternal health and family planning (MHFP) programs from 2003 to 2012 were consistent with the financial goals of the related policies. A longitudinal descriptive analysis of the Mexican Reproductive Health Subaccounts 2003-2012 was performed by financing scheme and health function. Financing schemes included social security, government schemes, household out-of-pocket (OOP) payments, and private insurance plans. Functions were preventive care, including family planning, antenatal and puerperium health services, normal and cesarean deliveries, and treatment of complications. Changes in the financial imbalance indicators covered by MHFP policy were tracked: (a) public and OOP expenditures as percentages of total MHFP spending; (b) public expenditure per woman of reproductive age (WoRA, 15-49 years) by financing scheme; (c) public expenditure on treating complications as a percentage of preventive care; and (d) public expenditure on WoRA at state level. Statistical analyses of trends and distributions were performed. Public expenditure on government schemes grew by approximately 300%, and the financial imbalance between populations covered by social security and government schemes decreased. The financial burden on households declined, particularly among households without social security. Expenditure on preventive care grew by 16%, narrowing the financing gap between treatment of complications and preventive care. Finally, public expenditure per WoRA for government schemes nearly doubled at the state level, although considerable disparities persist. Changes in the level and distribution of MHFP funding from 2003 to 2012 were consistent with the relevant policy goals. However, improving efficiency requires further analysis to ascertain the impact of investments on health outcomes. This, in turn, will require better financial data systems as a precondition for improving the monitoring and accountability functions in Mexico.
Li, Houfen; Yu, Hongtao; Quan, Xie; Chen, Shuo; Zhang, Yaobin
2016-01-27
The Z-scheme photocatalytic system shows superiority in the degradation of refractory pollutants and in water splitting due to the high redox capacities caused by its unique charge transfer behaviors. As a key component of the Z-scheme system, the electron mediator plays an important role in charge carrier migration. According to the energy band theory, we believe the interfacial energy band bendings facilitate electron transfer via the Z-scheme mechanism when the Fermi level of the electron mediator is between the Fermi levels of Photosystem II (PS II) and Photosystem I (PS I), whereas charge transfer is inhibited in other cases, as energy band barriers would form at the semiconductor-metal interfaces. Here, this inference was verified by the increased hydroxyl radical generation and improved photocurrent on WO3-Cu-gC3N4 (with the desired Fermi level structure), which were not observed on either WO3-Ag-gC3N4 or WO3-Au-gC3N4. Finally, the photocatalytic degradation rate of 4-nonylphenol on WO3-Cu-gC3N4 was shown to be as high as 11.6 times that of WO3-gC3N4, further demonstrating the necessity of a suitable electron mediator in a Z-scheme system. This study provides a scientific basis for the rational construction of Z-scheme photocatalytic systems.
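The selection rule stated above, that the mediator's Fermi level should lie between those of PS II and PS I, reduces to a simple interval test. The sketch below encodes only that stated condition; the function name and any energy values passed to it are hypothetical.

```python
def suitable_mediator(fermi_ps2, fermi_ps1, fermi_metal):
    """Return True when the electron mediator's Fermi level lies between
    the Fermi levels of PS II and PS I (all energies on the same scale,
    e.g. eV vs. vacuum), the condition the abstract associates with
    favourable Z-scheme electron transfer. Illustrative sketch only."""
    lo, hi = sorted((fermi_ps2, fermi_ps1))
    return lo <= fermi_metal <= hi
```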
Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.
Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin
2012-06-10
The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in fully digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grows dramatically. This paper discusses the ability of digital driving to achieve kilo-gray levels for mega-pixel displays. An optimal scan strategy is proposed for creating ultra-high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay, with 4096 gray levels, is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in a 0.35 μm 3.3 V-6 V dual-voltage, one polysilicon layer, four metal layers (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show the gray level linearity of the correction schemes for the optimal scan strategy is acceptable to the human eye.
NASA Astrophysics Data System (ADS)
Kataev, A. L.; Molokoedov, V. S.
2017-12-01
The analytical $\mathcal{O}(a_s^4)$ perturbative QCD expression for the flavour non-singlet contribution to the Bjorken polarized sum rule in the gauge-dependent miniMOM scheme, which is rather applicable at present, is obtained. For the three values of the gauge parameter considered, namely ξ = 0 (Landau gauge), ξ = -1 (anti-Feynman gauge) and ξ = -3 (Stefanis-Mikhailov gauge), the scheme-dependent coefficients are considerably smaller than the gauge-independent $\overline{\mathrm{MS}}$ results. It is found that the fundamental property of the factorization of the QCD renormalization group β-function in the generalized Crewther relation, which is valid in the gauge-invariant $\overline{\mathrm{MS}}$ scheme up to the $\mathcal{O}(a_s^4)$ level at least, is unexpectedly valid at the same level in the miniMOM scheme for ξ = 0, and for ξ = -1 and ξ = -3 in part.
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism (gradual passage opening, wall friction, and leakage) for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.
Additive schemes for certain operator-differential equations
NASA Astrophysics Data System (ADS)
Vabishchevich, P. N.
2010-12-01
Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
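The central idea of an additive (splitting) scheme, replacing one hard implicit solve per time step with a sequence of simple sub-problems, can be sketched as below. The backward-Euler Lie splitting shown is a minimal illustration of that idea under the stated assumptions (self-adjoint, positive operators A and B), not the paper's specific weighted operator-difference schemes.

```python
import numpy as np

def additive_splitting_step(u, A, B, dt):
    """One Lie (sequential) splitting step for du/dt + (A + B) u = 0
    using implicit (backward Euler) sub-steps, each involving only one
    operator. Unconditionally stable for symmetric positive A, B.
    Illustrative sketch of the splitting idea."""
    n = len(u)
    I = np.eye(n)
    u_half = np.linalg.solve(I + dt * A, u)      # sub-problem with A only
    u_new = np.linalg.solve(I + dt * B, u_half)  # sub-problem with B only
    return u_new
```

In the Navier-Stokes application mentioned above, A and B would correspond to one-dimensional operators in each spatial direction, so every sub-step is cheap.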
The Impact of Microphysics on Intensity and Structure of Hurricanes and Mesoscale Convective Systems
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Shi, Jainn J.; Jou, Ben Jong-Dao; Lee, Wen-Chau; Lin, Pay-Liam; Chang, Mei-Yu
2007-01-01
During the past decade, both research and operational numerical weather prediction models, e.g. the Weather Research and Forecast (WRF) model, have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. WRF is a next-generation mesoscale forecast model and assimilation system that has incorporated a modern software framework, advanced dynamics, numerics and data assimilation techniques, a multiple moveable nesting capability, and improved physical packages. The WRF model can be used for a wide range of applications, from idealized research to operational forecasting, with an emphasis on horizontal grid sizes in the range of 1-10 km. The current WRF includes several different microphysics options such as the Purdue Lin et al. (1983), WSM 6-class and Thompson microphysics schemes. We have recently implemented three sophisticated cloud microphysics schemes into WRF. The cloud microphysics schemes have been extensively tested and applied to different mesoscale systems in different geographical locations. The performance of these schemes has been compared to that of other WRF microphysics options. We are performing sensitivity tests using WRF to examine the impact of six different cloud microphysical schemes on precipitation processes associated with hurricanes and mesoscale convective systems developed at different geographic locations [Oklahoma (IHOP), Louisiana (Hurricane Katrina), Canada (C3VP - snow events), Washington (fire storm), India (Monsoon), Taiwan (TiMREX - terrain)]. We will determine which microphysical schemes best simulate convective systems in these geographic locations. We are also performing inline tracer calculations to comprehend the physical processes (i.e., the boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems.
Local readout enhancement for detuned signal-recycling interferometers
NASA Astrophysics Data System (ADS)
Rehbein, Henning; Müller-Ebhardt, Helge; Somiya, Kentaro; Li, Chao; Schnabel, Roman; Danzmann, Karsten; Chen, Yanbei
2007-09-01
High power detuned signal-recycling interferometers currently planned for second-generation interferometric gravitational-wave detectors (for example Advanced LIGO) are characterized by two resonances in the detection band, an optical resonance and an optomechanical resonance which is upshifted from the suspension pendulum frequency due to the so-called optical-spring effect. The detector’s sensitivity is enhanced around these two resonances. However, at frequencies below the optomechanical resonance frequency, the sensitivity of such interferometers is significantly lower than non-optical-spring configurations with comparable circulating power; such a drawback can also compromise high-frequency sensitivity, when an optimization is performed on the overall sensitivity of the interferometer to a class of sources. In this paper, we clarify the reason for such a low sensitivity, and propose a way to fix this problem. Motivated by the optical-bar scheme of Braginsky, Gorodetsky, and Khalili, we propose to add a local readout scheme which measures the motion of the arm-cavity front mirror, which at low frequencies moves together with the arm-cavity end mirror, under the influence of gravitational waves. This scheme improves the low-frequency quantum-noise-limited sensitivity of optical-spring interferometers significantly and can be considered as an incorporation of the optical-bar scheme into currently planned second-generation interferometers. On the other hand it can be regarded as an extension of the optical-bar scheme. Taking compact binary inspiral signals as an example, we illustrate how this scheme can be used to improve the sensitivity of the planned Advanced LIGO interferometer, in various scenarios, using a realistic classical-noise budget. We also discuss how this scheme can be implemented in Advanced LIGO with relative ease.
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2016-07-01
In this work, an arbitrary order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was used for the first time in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high order extension of the augmented Roe solver and led to the generation of a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independently of the Riemann solver used in it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This novel version of the FS solver allows the solution to be computed without requiring reconstruction of the derivatives of the fluxes, though some drawbacks become evident. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy balanced solutions. As a result of this work, energy balanced flux-ADER schemes that provide the exact solution for steady cases and that converge to the exact solution with arbitrary order for transient cases are constructed.
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
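The adaptive kernel estimation described above, in which the smoothing length grows where neighbours are sparse so low-density regions are not under-resolved, can be sketched in one dimension. The Gaussian kernel, the neighbour count k, and the factor eta below are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np

def sph_density_adaptive(x, m, k=5, eta=1.2):
    """1D SPH density estimate with a per-particle smoothing length set
    from the distance to the k-th nearest neighbour, combining kernel and
    nearest-neighbour estimation as in adaptive kernel schemes.
    Illustrative sketch with a Gaussian kernel."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rho = np.zeros(n)
    for i in range(n):
        d = np.abs(x - x[i])
        # Adaptive smoothing length: larger where neighbours are farther.
        h = eta * np.sort(d)[min(k, n - 1)]
        # Normalized 1D Gaussian kernel.
        w = np.exp(-(d / h) ** 2) / (h * np.sqrt(np.pi))
        rho[i] = np.sum(m * w)
    return rho
```

For uniformly spaced particles of spacing dx and mass dx, the interior estimate recovers a density near 1, while in sparse regions the enlarged h keeps the estimate smooth instead of noisy.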
NASA Astrophysics Data System (ADS)
Okedu, Kenneth Eloghene; Muyeen, S. M.; Takahashi, Rion; Tamura, Junji
Recent wind farm grid codes require wind generators to ride through voltage sags, which means that normal power production should be re-initiated once the nominal grid voltage is recovered. However, a fixed speed wind turbine generator system using an induction generator (IG) has a stability problem similar to the step-out phenomenon of a synchronous generator. On the other hand, a doubly fed induction generator (DFIG) can control its real and reactive powers independently while being operated in variable speed mode. This paper proposes a new control strategy using DFIGs for stabilizing a wind farm composed of DFIGs and IGs, without incorporating additional FACTS devices. A new current controlled voltage source converter (CC-VSC) scheme is proposed to control the converters of the DFIG, and the performance is verified by comparing the results with those of a voltage controlled voltage source converter (VC-VSC) scheme. Another salient feature of this study is the reduction of the number of proportional-integral (PI) controllers used in the rotor side converter without degrading dynamic and transient performance. Moreover, the DC-link protection scheme during grid faults can be omitted in the proposed scheme, which reduces the overall cost of the system. Extensive simulation analyses using PSCAD/EMTDC are carried out to clarify the effectiveness of the proposed CC-VSC based control scheme of DFIGs.
Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second and third order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using the DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
Centrifuge: rapid and sensitive classification of metagenomic sequences.
Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L
2016-12-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space.
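The BWT/FM-index machinery behind such classifiers can be illustrated at small scale: build the transform from sorted rotations, then count pattern occurrences with backward search. This is a textbook sketch of the data structure, not Centrifuge's implementation, which uses suffix-array construction and compressed rank structures rather than the naive scans below.

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations of s + '$'
    (illustrative; production indexes build this from a suffix array)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def fm_count(bwt_str, pattern):
    """Count occurrences of `pattern` with backward search over the BWT,
    the core FM-index query. Constant-time rank structures are replaced
    by naive scans to keep the sketch short."""
    first_col = sorted(bwt_str)
    def c(ch):                       # symbols lexicographically below ch
        return sum(1 for x in first_col if x < ch)
    def rank(ch, i):                 # occurrences of ch in bwt_str[:i]
        return bwt_str[:i].count(ch)
    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):     # extend the match one char leftward
        lo = c(ch) + rank(ch, lo)
        hi = c(ch) + rank(ch, hi)
        if lo >= hi:
            return 0
    return hi - lo
```

Each backward-search step narrows the suffix-array interval [lo, hi) to suffixes prefixed by the pattern processed so far, which is why query time depends on pattern length rather than text length.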
Digitally balanced detection for optical tomography.
Hafiz, Rehan; Ozanyan, Krikor B
2007-10-01
Analog balanced photodetection has found extensive use in sensing a weak absorption signal buried in laser intensity noise. This paper proposes schemes for a compact, affordable, and flexible digital implementation of established analog balanced detection, as part of a multichannel digital tomography system. Variants of digitally balanced detection (DBD) schemes, suitable for weak signals on a strongly varying background or for weakly varying envelopes of high-frequency carrier waves, are introduced analytically and elaborated in terms of algorithmic and hardware flow. The DBD algorithms are implemented on low-cost, general-purpose reconfigurable hardware (a field-programmable gate array), utilizing less than half of its resources. The performance of the DBD schemes compares favorably with their analog counterpart: a common-mode rejection ratio of 50 dB was observed over a bandwidth of 300 kHz, limited mainly by the host digital hardware. The close relationship between the DBD outputs and those of known analog balancing circuits is discussed in principle and shown experimentally in the example case of propane gas detection.
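The core of balanced detection, subtracting a scaled reference channel so the common-mode laser intensity noise cancels while the weak absorption survives, can be sketched digitally in a few lines. The waveforms, noise level, and unit calibration gain below are assumptions for illustration, not the authors' design:

```python
import math
import random

# Digitally balanced detection sketch: a reference channel carries the
# common-mode laser intensity noise; the signal channel is additionally
# attenuated by a weak absorption. Subtracting the scaled reference
# cancels the noise and exposes the absorption.
random.seed(1)
n = 10_000
absorption = 1e-3  # weak absorption to recover (assumed value)
laser_noise = [0.05 * math.sin(0.01 * i) + random.gauss(0, 0.01)
               for i in range(n)]

reference = [1.0 + laser_noise[i] for i in range(n)]
signal = [(1.0 - absorption) * (1.0 + laser_noise[i]) for i in range(n)]

gain = 1.0  # balance gain, calibrated beforehand with no sample (assumption)
balanced = [signal[i] - gain * reference[i] for i in range(n)]

est = -sum(balanced) / n  # recovered absorption; common-mode noise cancels
raw_pp = max(signal) - min(signal)
bal_pp = max(balanced) - min(balanced)
print(f"recovered absorption: {est:.6f}")
print(f"peak-to-peak fluctuation, raw: {raw_pp:.3f}, balanced: {bal_pp:.2e}")
```

The residual fluctuation after subtraction is roughly three orders of magnitude below the raw laser noise here, the digital analogue of the common-mode rejection ratio the paper quantifies in hardware.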
On Performance Analysis of Protective Jamming Schemes in Wireless Sensor Networks.
Li, Xuran; Dai, Hong-Ning; Wang, Hao; Xiao, Hong
2016-11-24
Wireless sensor networks (WSNs) play an important role in Cyber Physical Social Sensing (CPSS) systems. An eavesdropping attack is one of the most serious threats to WSNs since it is a prerequisite for other malicious attacks. In this paper, we propose a novel anti-eavesdropping mechanism that introduces friendly jammers to WSNs. In particular, we establish a theoretical framework to evaluate the eavesdropping risk of WSNs with friendly jammers and that of WSNs without jammers. Our theoretical model takes into account various channel conditions, such as path loss and Rayleigh fading, as well as the placement and power-control schemes of the jammers. Extensive results show that using jammers in WSNs can effectively reduce the eavesdropping risk. In addition, our results show that appropriate placement of the jammers and proper assignment of their emitting power can mitigate the eavesdropping risk without significantly impairing legitimate communications.
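The qualitative claim, that friendly jamming lowers the eavesdropper's success probability under path loss and Rayleigh fading, can be checked with a small Monte-Carlo sketch. The geometry, powers, path-loss exponent, and SINR threshold are invented placeholders, not the paper's model parameters:

```python
import random

# Monte-Carlo sketch of eavesdropping risk with and without a friendly
# jammer. Path loss follows d**-ALPHA; Rayleigh fading makes channel
# power gains exponentially distributed. All numbers are illustrative.
random.seed(7)
ALPHA, THRESHOLD, TRIALS = 3.0, 1.0, 100_000
d_sensor_eve, d_jammer_eve = 5.0, 3.0
p_sensor, p_jammer, noise = 1.0, 1.0, 1e-3

def eavesdrop_prob(jammer_on):
    """Fraction of trials in which the eavesdropper's SINR exceeds THRESHOLD."""
    hits = 0
    for _ in range(TRIALS):
        sig = p_sensor * random.expovariate(1.0) / d_sensor_eve ** ALPHA
        interf = (p_jammer * random.expovariate(1.0) / d_jammer_eve ** ALPHA
                  if jammer_on else 0.0)
        if sig / (interf + noise) > THRESHOLD:
            hits += 1
    return hits / TRIALS

p_no_jam = eavesdrop_prob(False)
p_jam = eavesdrop_prob(True)
print(f"risk without jammer: {p_no_jam:.3f}, with jammer: {p_jam:.3f}")
```

With these placeholder values the jammer cuts the eavesdropping probability by several factors; varying the jammer distance and power reproduces the placement/power trade-off the abstract describes.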
Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten
2016-08-09
The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that, together with an additional Stern layer correction, the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters suggests using the extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.
A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism
NASA Astrophysics Data System (ADS)
Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo
2015-03-01
In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on a permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required to encrypt one plain pixel, one each for the permutation and diffusion stages; the chaotic state variables, which are costly to compute, are therefore not used efficiently. (2) The key stream depends solely on the secret key, and hence the cryptosystem is vulnerable to known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state-variables selection mechanism is proposed to enhance the security and improve the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out, and the results demonstrate the superior security and high efficiency of the scheme.
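The idea of dynamic state-variable selection can be illustrated with a toy logistic-map cipher in which the previous ciphertext byte picks which chaotic state variable keys the next pixel, so the keystream depends on the plaintext as well as the key. The parameters and selection rule are hypothetical; this is not the authors' cryptosystem and is not secure for real use:

```python
# Toy chaos-based stream cipher with dynamic state-variable selection.
# Two logistic-map states (x, y) evolve together; the previous ciphertext
# byte selects which one generates the keystream byte (plaintext-dependent
# keystream), and chaining in prev provides diffusion. ILLUSTRATIVE ONLY.

def _keystream_step(x, y, r, prev):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    state = x if prev % 2 == 0 else y  # dynamic selection
    return x, y, int(state * 256) % 256

def encrypt(pixels, key=(0.3456, 0.7891), r=3.99):
    x, y = key
    prev, cipher = 0, []
    for p in pixels:
        x, y, k = _keystream_step(x, y, r, prev)
        c = (p + k + prev) % 256
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, key=(0.3456, 0.7891), r=3.99):
    x, y = key
    prev, plain = 0, []
    for c in cipher:
        x, y, k = _keystream_step(x, y, r, prev)
        plain.append((c - k - prev) % 256)
        prev = c
    return plain

data = [10, 200, 33, 33, 254]
assert decrypt(encrypt(data)) == data
```

Because the selection depends on ciphertext feedback, two identical plaintext pixels (the two 33s above) generally encrypt to different bytes, which is the property that frustrates known/chosen-plaintext keystream recovery.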
High order finite volume WENO schemes for the Euler equations under gravitational fields
NASA Astrophysics Data System (ADS)
Li, Gang; Xing, Yulong
2016-07-01
Euler equations with gravitational source terms are used to model many astrophysical and atmospheric phenomena. This system admits hydrostatic balance, in which the flux produced by the pressure is exactly canceled by the gravitational source term; two commonly seen equilibria are the isothermal and polytropic hydrostatic solutions. Exact preservation of these equilibria is desirable, as many practical problems are small perturbations of such balance. High order finite difference weighted essentially non-oscillatory (WENO) schemes have been proposed in [22], but only for the isothermal equilibrium state. In this paper, we design high order well-balanced finite volume WENO schemes which preserve not only the isothermal equilibrium but also the polytropic hydrostatic balance state exactly, while maintaining genuine high order accuracy for general solutions. The well-balanced property is obtained by a novel source term reformulation and discretization, combined with well-balanced numerical fluxes. Extensive one- and two-dimensional simulations are performed to verify the well-balanced property and high order accuracy, as well as good resolution for smooth and discontinuous solutions.
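For reference, the two equilibria named in the abstract have standard closed forms (written here assuming an ideal gas with gas constant R, constant temperature T for the isothermal case, and gravitational potential φ):

```latex
% Hydrostatic balance: the pressure gradient exactly cancels gravity,
\nabla p = -\rho \nabla \phi .
% Isothermal equilibrium (p = \rho R T, T = \text{const}):
\rho = \rho_0 \exp\!\left(-\frac{\phi}{R T}\right), \qquad
p = p_0 \exp\!\left(-\frac{\phi}{R T}\right).
% Polytropic equilibrium (p = K \rho^\gamma):
\frac{\gamma}{\gamma - 1}\, K \rho^{\gamma - 1} + \phi = \text{const}.
```

A well-balanced scheme must reproduce either family to machine precision on the discrete level, so that small perturbations are not swamped by truncation error of the background state.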
Entanglement enhancement in multimode integrated circuits
NASA Astrophysics Data System (ADS)
Léger, Zacharie M.; Brodutch, Aharon; Helmy, Amr S.
2018-06-01
The faithful distribution of entanglement in continuous-variable systems is essential to many quantum information protocols. As such, entanglement distillation and enhancement schemes are a cornerstone of many applications. The photon subtraction scheme offers enhancement with a relatively simple setup and has been studied in various scenarios. Motivated by recent advances in integrated optics, particularly the ability to build stable multimode interferometers with squeezed input states, a multimodal extension to the enhancement via photon subtraction protocol is studied. States generated with multiple squeezed input states, rather than a single input source, are shown to be more sensitive to the enhancement protocol, leading to increased entanglement at the output. Numerical results show the gain in entanglement is not monotonic with the number of modes or the degree of squeezing in the additional modes. Consequently, the advantage due to having multiple squeezed input states can be maximized when the number of modes is still relatively small (e.g., four). The requirement for additional squeezing is within the current realm of implementation, making this scheme achievable with present technologies.
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
NASA Astrophysics Data System (ADS)
Garkusha, A. V.; Kataev, A. L.; Molokoedov, V. S.
2018-02-01
The problem of scheme and gauge dependence of the factorization property of the renormalization group β-function in the SU(N_c) QCD generalized Crewther relation (GCR), which connects the flavor non-singlet contributions to the Adler and Bjorken polarized sum rule functions, is investigated at the O(a_s^4) level of perturbation theory. It is known that in the gauge-invariant \overline{MS} scheme this property holds in the QCD GCR at least at this order. To study whether this factorization property is true in all gauge-invariant schemes, we consider the MS-like schemes in QCD and the QED limit of the GCR in the \overline{MS} scheme and in two other gauge-independent subtraction schemes, namely the momentum (MOM) and the on-shell (OS) schemes. In these schemes we confirm the existence of the β-function factorization in the QCD and QED variants of the GCR. The problem of possible β-factorization in gauge-dependent renormalization schemes in QCD is also studied. To investigate this problem we consider the gauge non-invariant mMOM and MOMgggg schemes. We demonstrate that in the mMOM scheme at the O(a_s^3) level the β-factorization is valid for three values of the gauge parameter ξ only, namely ξ = -3, -1 and ξ = 0. At the O(a_s^4) order of PT it remains valid only in the case of the Landau gauge ξ = 0. The consideration of these two gauge-dependent schemes for the QCD GCR allows us to conclude that the factorization of the RG β-function will always hold in any MOM-like renormalization scheme with linear covariant gauge at ξ = 0 and ξ = -3 at the O(a_s^3) approximation. It is demonstrated that if the factorization property for the MS-like schemes is true in all orders of PT, as theoretically indicated in several works on the subject, then the factorization will also occur in an arbitrary MOM-like scheme in the Landau gauge in all orders of perturbation theory as well.
Hall, Miquette; Chattaway, Marie A.; Reuter, Sandra; Savin, Cyril; Strauch, Eckhard; Carniel, Elisabeth; Connor, Thomas; Van Damme, Inge; Rajakaruna, Lakshani; Rajendram, Dunstan; Jenkins, Claire; Thomson, Nicholas R.
2014-01-01
The genus Yersinia is a large and diverse bacterial genus consisting of human-pathogenic species, a fish-pathogenic species, and a large number of environmental species. Recently, the phylogenetic and population structure of the entire genus was elucidated through the genome sequence data of 241 strains encompassing every known species in the genus. Here we report the mining of this enormous data set to create a multilocus sequence typing (MLST)-based scheme that can identify Yersinia strains to the species level with a resolution equal to that of whole-genome sequencing. Our assay is designed to accurately subtype the important human-pathogenic species Yersinia enterocolitica to whole-genome resolution levels. We also report the validation of the scheme on 386 strains from reference laboratory collections across Europe. We propose that the scheme is an important molecular typing system that allows accurate and reproducible identification of Yersinia isolates to the species level, a process often inconsistent in nonspecialist laboratories. Additionally, our assay is the most phylogenetically informative typing scheme available for Y. enterocolitica. PMID:25339391
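The mechanics of an MLST-style assignment, mapping each locus sequence to an allele number and looking the resulting profile up in a table, can be sketched as follows. The loci, allele sequences, and sequence types are invented for illustration and are not the published Yersinia scheme:

```python
# Minimal multilocus sequence typing (MLST) sketch: locus sequence ->
# allele number, allele profile -> sequence type (ST). All entries are
# made-up placeholders; a real scheme uses curated allele databases.

LOCI = ["adk", "argA", "aroA"]

ALLELES = {                 # locus -> {sequence: allele number}
    "adk":  {"ATGGCT": 1, "ATGGCC": 2},
    "argA": {"TTGACA": 1, "TTGACG": 2},
    "aroA": {"GGCTAA": 1, "GGCTAG": 2},
}

PROFILES = {                # allele profile -> sequence type
    (1, 1, 1): "ST-1",
    (2, 1, 2): "ST-2",
}

def sequence_type(locus_seqs):
    """Assign an ST from per-locus sequences; report novel profiles."""
    profile = tuple(ALLELES[locus][locus_seqs[locus]] for locus in LOCI)
    return PROFILES.get(profile, f"novel profile {profile}")

isolate = {"adk": "ATGGCC", "argA": "TTGACA", "aroA": "GGCTAG"}
print(sequence_type(isolate))  # ST-2
```

The appeal of such a scheme is exactly what the abstract claims: the lookup is cheap and reproducible across laboratories, while the loci can be chosen so the profile carries near whole-genome phylogenetic signal.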
Build Your Own Particle Smasher: The Royal Society Partnership Grants Scheme
ERIC Educational Resources Information Center
Education in Science, 2012
2012-01-01
This article features the project, "Build Your Own Particle Smasher" and shares how to build a particle smasher project. A-level and AS-level students from Trinity Catholic School have built their own particle smashers, in collaboration with Nottingham Trent University, as part of The Royal Society's Partnership Grants Scheme. The…
TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)
A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...
Using Student Performance to Judge the Difficulty of Examinations
ERIC Educational Resources Information Center
Roegner, Katherine
2015-01-01
This contribution focuses on a scheme developed to characterize the level of difficulty of an examination in the course "Linear Algebra for Engineers" and on the transfer of the underlying idea to a similar scheme for examinations in the course "Analysis I for Engineers". Using these schemes, it is possible to define standards…
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when used independently. To improve the performance, a nonlocal supervised coding strategy-based NLT for image restoration is proposed in this paper. The novel method makes three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
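Iterative shrinkage thresholding of the kind the abstract invokes can be sketched on a tiny l1-regularized least-squares problem. The dictionary, step size, and regularization weight below are illustrative assumptions, not the paper's setup:

```python
# Minimal ISTA (iterative shrinkage-thresholding) sketch for
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
# Gradient step on the smooth term, then the soft-thresholding proximal
# step for the l1 penalty. Pure Python, tiny illustrative dictionary.

def soft(v, t):
    """Soft-thresholding: proximal operator of t * |.|."""
    return max(v - t, 0.0) + min(v + t, 0.0)

def ista(A, b, lam=0.1, step=0.1, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
b = [1.0, 0.0]        # consistent with the sparse code [1, 0, 0]
x = ista(A, b)
print([round(v, 2) for v in x])  # → [0.9, 0.0, 0.0]
```

The recovered code is sparse (only the first atom active, slightly shrunk by the l1 penalty), which is the behavior the restoration scheme relies on when coding nonlocal patches.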
Nuclear Data Sheets for A = 192
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baglin, Coral M.
2012-08-15
Experimental structure and decay data for all nuclei with mass A=192 (Ta, W, Re, Os, Ir, Pt, Au, Hg, Tl, Pb, Bi, Po, At) have been evaluated. This evaluation, covering data received by 15 June 2012, supersedes the 1998 evaluation by C. M. Baglin (Nuclear Data Sheets 84, 717 (1998), literature cutoff August 1998) and the subsequent inclusion in the ENSDF database of the new nuclide 192At (C. M. Baglin, literature cutoff 16 May 2006). It also incorporates the current evaluation of superdeformed-band information by B. Singh. Since the last publication, 192Ta, 192W and 192At have been observed, and an isomeric state has been identified in 192Re. The ε decay of 192Au has been studied using a multidetector array, resulting in an extensively revised level scheme for 192Pt.
The Rat Model in Microsurgery Education: Classical Exercises and New Horizons
Shurey, Sandra; Akelina, Yelena; Legagneux, Josette; Malzone, Gerardo; Jiga, Lucian
2014-01-01
Microsurgery is a precise surgical skill that requires an extensive training period and the supervision of expert instructors. Classical training schemes in microsurgery have started with multiday experimental courses on the rat model. These courses offer a low-threat, supervised, high-fidelity laboratory setting in which students can steadily and rapidly progress. This simulated environment allows students to make and recognise mistakes in microsurgery techniques and thus shifts the risks of the early training period from the operating room to the lab. To achieve a high level of skill acquisition before beginning clinical practice, students are trained on a comprehensive set of exercises that the rat model can uniquely provide, with progressive complexity as competency improves. This paper presents the utility of the classical rat model in three of the earliest microsurgery training centres and the new prospects that this versatile and expansive training model offers. PMID:24883268
Hardware/software codesign for embedded RISC core
NASA Astrophysics Data System (ADS)
Liu, Peng
2001-12-01
This paper describes a hardware/software codesign method for the extensible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language; it has a five-stage pipeline with a shared 32-bit cache/memory interface and is controlled by a distributed control scheme. Every pipeline stage has a small controller, which manages the stage's status and the cooperation among pipeline phases. Since the description uses a high-level language and the control structure is distributed, the VIRGO core is highly extensible and can meet the requirements of the application. Taking the high-definition television MPEG2 MPHL decoder chip as an example, we constructed a hardware/software codesign virtual prototyping machine for studying the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, etc. The virtual prototyping platform also allows evaluation of the system-on-chip design and the RISC instruction set.
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Wang, Yuqing
2018-01-01
The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted-moisture space. By transporting moist static energy from dry to moist regions, the low-level circulation is important to convective self-aggregation, which is believed to be related to genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions. In this case, the diabatic cooling can still drive the low-level circulation, but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of the cumulus momentum transport (CMT) in a CP scheme can considerably suppress genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and the horizontal distribution of precipitation are negligible, indicating that the CMT modulates TCLV/TC activity in the model by mechanisms other than the horizontal transport of moist static energy.
Parker, Matthew D; Jones, Lynette A; Hunter, Ian W; Taberner, A J; Nash, M P; Nielsen, P M F
2017-01-01
A triaxial force-sensitive microrobot was developed to dynamically perturb skin in multiple deformation modes, in vivo. Wiener static nonlinear identification was used to extract the linear dynamics and static nonlinearity of the force-displacement behavior of skin. Stochastic input forces were applied to the volar forearm and thenar eminence of the hand, producing probe-tip perturbations in indentation and tangential extension. Wiener static nonlinear approaches reproduced the resulting displacements with variances accounted for (VAF) ranging from 94% to 97%, indicating a good fit to the data, and provided VAF improvements of 0.1-3.4% over linear models. Thenar eminence stiffness measures were approximately twice those measured on the forearm. Damping was significantly higher on the palm, whereas the perturbed mass was typically lower. Coefficients of variation (CVs) for the nonlinear parameters were assessed within and across individuals; individual CVs ranged from 2% to 11% for indentation and from 2% to 19% for extension. Stochastic perturbations with incrementally increasing mean amplitudes were applied to the same test areas, and differences between full-scale and incremental reduced-scale perturbations were investigated. No significant difference in parameters was found between the different incremental preloading schemes investigated. The incremental schemes provided depth-dependent estimates of stiffness and damping, ranging from 300 N/m and 2 Ns/m, respectively, at the surface to 5 kN/m and 50 Ns/m at greater depths. The device and techniques used in this research have potential applications in areas such as evaluating skincare products, assessing skin hydration, or analyzing wound healing.
ERIC Educational Resources Information Center
Chauhan, U.; Kontopantelis, E.; Campbell, S.; Jarrett, H.; Lester, H.
2010-01-01
Background: Routine health checks have gained prominence as a way of detecting unmet need in primary care for adults with intellectual disabilities (ID) and general practitioners are being incentivised in the UK to carry out health checks for many conditions through an incentivisation scheme known as the Quality and Outcomes Framework (QOF).…
ERIC Educational Resources Information Center
Kirton, Stewart B.; Al-Ahmad, Abdullah; Fergus, Suzanne
2014-01-01
Increase in tuition fees means there will be renewed pressure on universities to provide "value for money" courses that provide extensive training in both subject-specific and generic skills. For graduates of chemistry this includes embedding the generic, practical, and laboratory-based skills associated with industrial research as an…
ERIC Educational Resources Information Center
Wabwoba, Franklin; Mwakondo, Fullgence M.
2011-01-01
Every year, the Joint Admission Board (JAB) is tasked to determine those students who are expected to join various Kenyan public universities under the government sponsorship scheme. This exercise is usually extensive because of the large number of qualified students compared to the very limited number of slots at various institutions and the…
ERIC Educational Resources Information Center
Tregear, Angela
2011-01-01
In the now extensive literature on alternative food networks (AFNs) (e.g. farmers' markets, community supported agriculture, box schemes), a body of work has pointed to socio-economic problems with such systems, which run counter to headline claims in the literature. This paper argues that rather than being a reflection of inherent complexities in…
Convergence acceleration of viscous flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1982-01-01
A multiple-grid convergence acceleration technique introduced for application to the solution of the Euler equations by means of Lax-Wendroff algorithms is extended to treat compressible viscous flow. Computational results are presented for the solution of the thin-layer version of the Navier-Stokes equations using the explicit MacCormack algorithm, accelerated by a convective coarse-grid scheme. Extensions and generalizations are mentioned.
Mathematical modeling of high-pH chemical flooding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhuyan, D.; Lake, L.W.; Pope, G.A.
1990-05-01
This paper describes a generalized compositional reservoir simulator for high-pH chemical flooding processes. This simulator combines the reaction chemistry associated with these processes with the extensive physical- and flow-property modeling schemes of an existing micellar/polymer flood simulator, UTCHEM. Application of the model is illustrated for cases from a simple alkaline preflush to surfactant-enhanced alkaline-polymer flooding.
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Zhang, L L; Yang, H; Xiao, H P; Lu, J M; Sha, W; Zhang, Q
2016-06-01
In order to detect the in vitro synergistic effect on multidrug-resistant Mycobacterium tuberculosis (MDR-MTB) and extensively drug-resistant M. tuberculosis (XDR-MTB) of four drugs, pasiniazid (PA), moxifloxacin, rifabutin and rifapentine, which are core drugs of "The program of retreatment research of tuberculosis", the checkerboard method was used to determine the minimum inhibitory concentrations (MICs) of the antituberculosis drug combination schemes (moxifloxacin-PA, moxifloxacin-PA-rifabutin and moxifloxacin-PA-rifapentine) against 40 clinical drug-resistant MTB strains (20 MDR-MTB and 20 XDR-MTB) and the standard strain H37Rv. The fractional inhibitory concentration index (FICI) of each joint action in vitro was calculated to judge the combined effect, with FICI ≤ 0.5 for two drugs and FICI ≤ 0.75 for three drugs taken as the criteria for synergy. The FICI of the moxifloxacin-PA scheme for DR-MTB ranged from 0.125 to 1.000; only 5 strains had a FICI ≤ 0.5, showing a synergistic effect. The FICI of the moxifloxacin-PA-rifabutin scheme for the 20 MDR-MTB strains ranged from 0.310 to 1.260; 10 strains had a FICI ≤ 0.75, showing a synergistic effect. For the 20 XDR-MTB strains, the FICI of the moxifloxacin-PA-rifabutin scheme ranged from 0.215 to 1.250; 11 strains had a FICI ≤ 0.75. The FICI of the moxifloxacin-PA-rifapentine scheme for the 20 MDR-MTB strains ranged from 0.150 to 0.780; 19 strains had a FICI ≤ 0.75. For the 20 XDR-MTB strains, the FICI of the moxifloxacin-PA-rifapentine scheme ranged from 0.200 to 1.280; 16 strains had a FICI ≤ 0.75. The synergistic effect of the moxifloxacin-PA scheme alone was poor, but synergy improved when the scheme was further combined with rifabutin or rifapentine. Rifabutin showed a better effect than rifapentine, but the synergistic effect of the moxifloxacin-PA-rifabutin combination scheme was poorer than that of the moxifloxacin-PA-rifapentine combination scheme.
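The FICI statistic used throughout the study is a simple ratio sum over the checkerboard MIC data. For two drugs, FICI = MIC_A(combo)/MIC_A(alone) + MIC_B(combo)/MIC_B(alone), extended additively for a third drug; the study's synergy cut-offs are FICI ≤ 0.5 (two drugs) and FICI ≤ 0.75 (three drugs). The MIC values below are made up for illustration:

```python
# Fractional inhibitory concentration index (FICI) from checkerboard MICs.
# Synergy cut-offs follow the study: <= 0.5 for 2 drugs, <= 0.75 for 3.

def fici(mic_alone, mic_combo):
    """Sum of fractional inhibitory concentrations over all drugs."""
    return sum(mic_combo[d] / mic_alone[d] for d in mic_alone)

def synergistic(mic_alone, mic_combo):
    cutoff = 0.5 if len(mic_alone) == 2 else 0.75
    return fici(mic_alone, mic_combo) <= cutoff

# Illustrative MICs (mg/L) for a hypothetical strain:
alone = {"moxifloxacin": 2.0, "PA": 8.0, "rifapentine": 1.0}
combo = {"moxifloxacin": 0.25, "PA": 2.0, "rifapentine": 0.25}

print(round(fici(alone, combo), 3), synergistic(alone, combo))
```

Here 0.125 + 0.25 + 0.25 = 0.625 ≤ 0.75, so this hypothetical three-drug combination would be scored as synergistic under the study's criterion.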
Distance learning in discriminative vector quantization.
Schneider, Petra; Biehl, Michael; Hammer, Barbara
2009-10-01
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
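The adaptive-metric idea behind matrix LVQ is that distances are measured as d(x, w) = (x - w)^T L^T L (x - w), so a learned matrix L can stretch or suppress feature directions. In the sketch below, prototypes follow a plain LVQ1-style update under a fixed, hand-picked L; learning L itself, as in GMLVQ or matrix robust soft LVQ, is omitted for brevity and the data are invented:

```python
# LVQ with a matrix-form distance d(x, w) = ||L (x - w)||^2.
# Prototypes are nudged toward same-class samples and away from
# other-class samples (LVQ1 rule); L here is fixed, not learned.

def d_matrix(x, w, L):
    diff = [xi - wi for xi, wi in zip(x, w)]
    proj = [sum(L[i][j] * diff[j] for j in range(len(diff)))
            for i in range(len(L))]
    return sum(p * p for p in proj)

def lvq1_step(protos, labels, x, y, L, lr=0.1):
    # nearest prototype moves toward x if labels agree, away otherwise
    k = min(range(len(protos)), key=lambda i: d_matrix(x, protos[i], L))
    sign = 1.0 if labels[k] == y else -1.0
    protos[k] = [w + sign * lr * (xi - w) for xi, w in zip(x, protos[k])]

L = [[1.0, 0.0], [0.0, 0.2]]     # second feature down-weighted (assumption)
protos = [[0.0, 0.0], [1.0, 0.0]]
labels = [0, 1]
data = [([0.1, 0.9], 0), ([0.9, 0.1], 1), ([0.2, 0.5], 0), ([0.8, 0.6], 1)]
for x, y in data * 20:
    lvq1_step(protos, labels, x, y, L)

pred = min(range(2), key=lambda i: d_matrix([0.15, 0.7], protos[i], L))
print(labels[pred])  # class 0
```

In the full matrix-learning schemes the letter studies, L is updated by gradient descent on the classification cost alongside the prototypes, so the metric itself becomes data-driven.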
Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch
2014-05-01
The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progenies (decay chain with many α, β and complex β(-)/γ transitions). © 2013 Published by Elsevier Ltd.
Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots
NASA Astrophysics Data System (ADS)
Chellal, R.; Cuvillon, L.; Laroche, E.
2017-04-01
This paper presents methodologies for the identification and control of 6-degrees of freedom (6-DoF) cable-driven parallel robots (CDPRs). First, a two-step identification methodology is proposed to accurately estimate the kinematic parameters independently and prior to the dynamic parameters of a physics-based model of CDPRs. Second, an original control scheme is developed, including a vision-based position controller tuned with the H∞ methodology and a cable tension distribution algorithm. The position is controlled in the operational space, making use of the end-effector pose measured by a motion-tracking system. A four-block H∞ design scheme with adjusted weighting filters ensures good trajectory tracking and disturbance rejection properties for the CDPR system, which is a nonlinear-coupled MIMO system with constrained states. The tension management algorithm generates control signals that maintain the cables under feasible tensions. The paper provides an extensive review of the available methods and presents an extension of one of them. The presented methodologies are evaluated by simulations and experimentally on a redundant 6-DoF INCA 6D CDPR with eight cables, equipped with a motion-tracking system.
Ozone formation during an episode over Europe: A 3-D chemical/transport model simulation
NASA Technical Reports Server (NTRS)
Berntsen, Terje; Isaksen, Ivar S. A.
1994-01-01
A 3-D regional photochemical tracer/transport model for Europe and the Eastern Atlantic has been developed based on the NASA/GISS CTM. The model resolution is 4x5 degrees latitude and longitude with 9 layers in the vertical (7 in the troposphere). Advective winds, convection statistics and other meteorological data from the NASA/GISS GCM are used. An extensive gas-phase chemical scheme based on the scheme used in our global 2D model has been incorporated in the 3D model. In this work ozone formation in the troposphere is studied with the 3D model during a 5 day period starting June 30. Extensive local ozone production is found and the relationship between the source regions and the downwind areas is discussed. Variations in local ozone formation as a function of total emission rate, as well as the composition of the emissions (HC/NO(x) ratio and isoprene emissions), are elucidated. An important vertical transport process in the troposphere is by convective clouds. The 3D model includes an explicit parameterization of this process. It is shown that this process has significant influence on the calculated surface ozone concentrations.
Matrix crack extension at a frictionally constrained fiber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selvadurai, A.P.S.
1994-07-01
The paper presents the application of a boundary element scheme to the study of the behavior of a penny-shaped matrix crack which occurs at an isolated fiber which is frictionally constrained. An incremental technique is used to examine the progression of self-similar extension of the matrix crack due to the axial straining of the composite region. The extension of the crack occurs at the attainment of the critical stress intensity factor in the crack opening mode. Iterative techniques are used to determine the extent of crack enlargement and the occurrence of slip and locked regions in the frictional fiber-matrix interface. The studies illustrate the role of fiber-matrix interface friction on the development of stable cracks in such frictionally constrained zones. The methodologies are applied to typical isolated fiber configurations of interest to fragmentation tests.
Improved configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Colbaugh, R.
1990-01-01
This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
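The idea of suppressing large joint velocities near singularities at the cost of small task errors can be illustrated with the standard damped-least-squares inverse, dq = Jᵀ(JJᵀ + λ²I)⁻¹dx. This is a generic sketch of the singularity-robust principle, not the article's exact configuration-control formulation; the Jacobian and step values are toys:

```python
import numpy as np

def dls_velocity(J, dx, damping=0.1):
    # Damped least-squares: bounded joint velocities near singular J,
    # at the expense of a small task-space tracking error.
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), dx)

# Near-singular Jacobian: rows are almost linearly dependent
J = np.array([[1.0, 1.0],
              [1.0, 1.001]])
dx = np.array([0.1, 0.2])           # commanded task-space step

dq_exact = np.linalg.solve(J, dx)   # exact inverse: huge joint velocities
dq_damped = dls_velocity(J, dx)     # damped: bounded, small task error
print(np.linalg.norm(dq_exact) > 10 * np.linalg.norm(dq_damped))  # True
```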
Distributed Efficient Similarity Search Mechanism in Wireless Sensor Networks
Ahmed, Khandakar; Gregory, Mark A.
2015-01-01
The Wireless Sensor Network similarity search problem has received considerable research attention due to sensor hardware imprecision and environmental parameter variations. Most state-of-the-art distributed data centric storage (DCS) schemes lack optimization for similarity queries of events. In this paper, a DCS scheme with metric-based similarity searching (DCSMSS) is proposed. DCSMSS takes its motivation from a vector distance index, called iDistance, in order to transform the issue of similarity searching into the problem of an interval search in one dimension. In addition, a sector-based distance routing algorithm is used to efficiently route messages. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries.
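The iDistance-style transform the scheme builds on maps each point to a single key, i·c + dist(p, refᵢ), where refᵢ is the point's nearest reference point and c exceeds any possible distance, so a similarity query becomes interval searches on a one-dimensional axis. The reference points and the constant c below are illustrative choices, not values from the paper:

```python
import math

def idistance_key(point, refs, c):
    # Assign the point to its nearest reference point (partition i),
    # then encode it as a 1-D key: i * c + distance-to-reference.
    dists = [math.dist(point, r) for r in refs]
    i = min(range(len(refs)), key=lambda k: dists[k])
    return i * c + dists[i]

refs = [(0.0, 0.0), (10.0, 10.0)]
c = 100.0   # must exceed any distance occurring in the data
print(idistance_key((1.0, 0.0), refs, c))   # 1.0   (partition 0)
print(idistance_key((10.0, 8.0), refs, c))  # 102.0 (partition 1)
```

A range query around a point then translates into at most one key interval per partition, which a 1-D index (or, in the DCS setting, a ring of storage nodes) can answer efficiently.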
A Gas-Kinetic Scheme for Multimaterial Flows and Its Application in Chemical Reaction
NASA Technical Reports Server (NTRS)
Lian, Yongsheng; Xu, Kun
1999-01-01
This paper concerns the extension of the multicomponent gas-kinetic BGK-type scheme to multidimensional chemical reactive flow calculations. In the kinetic model, each component satisfies its individual gas-kinetic BGK equation and the equilibrium states of both components are coupled in space and time due to the momentum and energy exchange in the course of particle collisions. At the same time, according to the chemical reaction rule one component can be changed into another component with the release of energy, where the reactant and product could have different gamma. Many numerical test cases are included in this paper, which show the robustness and accuracy of kinetic approach in the description of multicomponent reactive flows.
Flow solution on a dual-block grid around an airplane
NASA Technical Reports Server (NTRS)
Eriksson, Lars-Erik
1987-01-01
The compressible flow around a complex fighter-aircraft configuration (fuselage, cranked delta wing, canard, and inlet) is simulated numerically using a novel grid scheme and a finite-volume Euler solver. The patched dual-block grid is generated by an algebraic procedure based on transfinite interpolation, and the explicit Runge-Kutta time-stepping Euler solver is implemented with a high degree of vectorization on a Cyber 205 processor. Results are presented in extensive graphs and diagrams and characterized in detail. The concentration of grid points near the wing apex in the present scheme is shown to facilitate capture of the vortex generated by the leading edge at high angles of attack and modeling of its interaction with the canard wake.
Hybrid and Constrained Resolution-of-Identity Techniques for Coulomb Integrals.
Duchemin, Ivan; Li, Jing; Blase, Xavier
2017-03-14
The introduction of auxiliary bases to approximate molecular orbital products has paved the way to significant savings in the evaluation of four-center two-electron Coulomb integrals. We present a generalized dual space strategy that sheds a new light on variants over the standard density and Coulomb-fitting schemes, including the possibility of introducing minimization constraints. We improve in particular the charge- or multipole-preserving strategies introduced respectively by Baerends and Van Alsenoy that we compare to a simple scheme where the Coulomb metric is used for lowest angular momentum auxiliary orbitals only. We explore the merits of these approaches on the basis of extensive Hartree-Fock and MP2 calculations over a standard set of medium size molecules.
Critical study of higher order numerical methods for solving the boundary-layer equations
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1978-01-01
A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.
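The Richardson-extrapolation idea mentioned above can be shown in its generic form: combining a second-order result at step h with one at h/2 cancels the leading error term via (4·A(h/2) − A(h))/3. The demonstration below uses a second-order central difference of sin, not the boundary-layer equations themselves:

```python
import math

def central_diff(f, x, h):
    # Second-order accurate derivative approximation, error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # Eliminate the O(h^2) term: result is O(h^4) accurate
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

x, h = 0.5, 0.1
exact = math.cos(x)
err_basic = abs(central_diff(math.sin, x, h) - exact)
err_rich = abs(richardson(math.sin, x, h) - exact)
print(err_rich < err_basic / 100)   # True: roughly two orders gained
```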
ANALYSIS OF SEEING-INDUCED POLARIZATION CROSS-TALK AND MODULATION SCHEME PERFORMANCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casini, R.; De Wijn, A. G.; Judge, P. G.
2012-09-20
We analyze the generation of polarization cross-talk in Stokes polarimeters by atmospheric seeing, and its effects on the noise statistics of spectropolarimetric measurements for both single-beam and dual-beam instruments. We investigate the time evolution of seeing-induced correlations between different states of one modulation cycle and compare the response to these correlations of two popular polarization modulation schemes in a dual-beam system. Extension of the formalism to encompass an arbitrary number of modulation cycles enables us to compare our results with earlier work. Even though we discuss examples pertinent to solar physics, the general treatment of the subject and its fundamental results might be useful to a wider community.
[Enhanced Recovery after Surgery from Theory to Practice: What do We Need to Do?]
Che, Guowei; Liu, Lunxu; Zhou, Qinghua
2017-04-20
Enhanced recovery after surgery (ERAS) is a paradigm shift in perioperative care, resulting in substantial improvements in clinical outcomes, shorter length of hospital stay and cost savings. Yet current ERAS practice falls short in both breadth and depth of application. Why? The main reason is the lack of an "operable, evaluable, repeatable" ERAS protocol suitable for extensive clinical application. What makes a protocol clinically usable? Operable means the clinical scheme is simple and feasible, with good protocol compliance; evaluable means the methods used before, during and after surgery follow objective evaluation criteria and plans; repeatable means the clinical scheme can be reproduced in single-center or multi-center practice.
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
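The pivoting criterion described above, choosing the column whose inclusion most reduces the least-squares residual, can be illustrated with a greedy forward-selection toy. This sketches the criterion only, not the MRQR implementation; the data are random with a planted dominant column:

```python
import numpy as np

def forward_select(A, b, k):
    # Greedily pick k columns, each time taking the candidate column
    # that yields the smallest least-squares residual when added.
    chosen = []
    for _ in range(k):
        best, best_res = None, np.inf
        for j in range(A.shape[1]):
            if j in chosen:
                continue
            _, res, *_ = np.linalg.lstsq(A[:, chosen + [j]], b, rcond=None)
            r = res[0] if res.size else 0.0
            if r < best_res:
                best, best_res = j, r
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
b = 2.0 * A[:, 2] + 0.01 * rng.standard_normal(20)  # column 2 dominates
print(forward_select(A, b, 1))   # [2]
```

An actual MRQR-style implementation would fold this selection into the QR factorization itself rather than re-solving from scratch at each step.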
NASA Astrophysics Data System (ADS)
Douglass, D. H.; Kalnay, E.; Li, H.; Cai, M.
2005-05-01
Carbon monoxide (CO) is present in the troposphere as a product of fossil fuel combustion, biomass burning and the oxidation of volatile hydrocarbons. It is the principal sink of the hydroxyl radical (OH), thereby affecting the concentrations of greenhouse gases such as CH4 and O3. In addition, CO has a lifetime of 1-3 months, making it a good tracer for studying the long range transport of pollution. Satellite observations present a valuable tool in the investigation of tropospheric CO. The Atmospheric InfraRed Sounder (AIRS), onboard the Aqua satellite, is sensitive to tropospheric CO in a number of its 2378 channels. This sensitivity to CO, combined with the daily global coverage provided by AIRS, makes AIRS a potentially useful instrument for observing CO sources and transport. A maximum a posteriori (MAP) retrieval scheme (Rodgers 2000) has been developed for AIRS, to provide CO profiles from near-surface altitudes to around 150 hPa. An extensive validation data set, consisting of over 50 in-situ aircraft CO profiles, has been constructed. This data set combines CO data from a number of independent aircraft campaigns. Results from this validation study and comparisons with the AIRS level 2 CO product will be presented. Rodgers, C. D. (2000), Inverse Methods for Atmospheric Sounding : Theory and Practice, World Scientific, Singapore.
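The MAP retrieval in the Rodgers (2000) sense reduces, for a linear forward model, to x̂ = xₐ + (KᵀSₑ⁻¹K + Sₐ⁻¹)⁻¹KᵀSₑ⁻¹(y − Kxₐ). The sketch below uses toy matrices, not AIRS channel values:

```python
import numpy as np

def map_retrieval(y, K, x_a, S_a, S_e):
    # Linear Gaussian maximum a posteriori estimate:
    # prior (x_a, S_a), measurement noise covariance S_e, Jacobian K.
    Se_inv = np.linalg.inv(S_e)
    G = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a)) @ K.T @ Se_inv
    return x_a + G @ (y - K @ x_a)

K = np.array([[1.0, 0.5],
              [0.2, 1.0]])
x_true = np.array([2.0, 1.0])
x_a = np.array([0.0, 0.0])    # prior mean
S_a = 10.0 * np.eye(2)        # loose prior
S_e = 0.01 * np.eye(2)        # accurate measurements
y = K @ x_true                # noise-free observation for illustration

x_hat = map_retrieval(y, K, x_a, S_a, S_e)
print(np.allclose(x_hat, x_true, atol=0.05))  # True: data dominate the prior
```

With tighter S_e or looser S_a the estimate moves toward the data; the reverse pulls it toward the prior, which is exactly the trade-off the averaging kernels of a real AIRS retrieval quantify.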
Muntaner, Carles; Borrell, Carme; Solà, Judit; Marí-Dell'Olmo, Marc; Chung, Haejoo; Rodríguez-Sanz, Maica; Benach, Joan; Rocha, Kátia B; Ng, Edwin
2011-01-01
The aim of this study is to test the effects of neo-Marxian social class and potential mediators such as labor market position, work organization, material deprivation, and health behaviors on all-cause mortality. The authors use longitudinal data from the Barcelona 2000 Health Interview Survey (N=7526), with follow-up interviews through the municipal census in 2008 (95.97% response rate). Using data on relations of property, organizational power, and education, the study groups social classes according to Wright's scheme: capitalists, petit bourgeoisie, managers, supervisors, and skilled, semi-skilled, and unskilled workers. Findings indicate that social class, measured as relations of control over productive assets, is an important predictor of mortality among working-class men but not women. Workers (hazard ratio = 1.60; 95% confidence interval, 1.10-2.35) but also managers and small employers had a higher risk of death compared with capitalists. The extensive use of conventional gradient measures of social stratification has neglected sociological measures of social class conceptualized as relations of control over productive assets. This concept is capable of explaining how social inequalities are generated. To confirm the protective effect of the capitalist class position and the "contradictory class location hypothesis," additional efforts are needed to properly measure class among low-level supervisors, capitalists, managers, and small employers.
Robust PBPK/PD-Based Model Predictive Control of Blood Glucose.
Schaller, Stephan; Lippert, Jorg; Schaupp, Lukas; Pieber, Thomas R; Schuppert, Andreas; Eissing, Thomas
2016-07-01
Automated glucose control (AGC) has not yet reached the point where it can be applied clinically [3]. Challenges are accuracy of subcutaneous (SC) glucose sensors, physiological lag times, and both inter- and intraindividual variability. To address the above issues, we developed a novel scheme for MPC that can be applied to AGC. An individualizable generic whole-body physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) model of the glucose, insulin, and glucagon metabolism has been used as the predictive kernel. The high level of mechanistic detail represented by the model takes full advantage of the potential of MPC and may make long-term prediction possible as it captures at least some relevant sources of variability [4]. Robustness against uncertainties was increased by a control cascade relying on proportional-integral-derivative-based offset control. The performance of this AGC scheme was evaluated in silico and retrospectively using data from clinical trials. This analysis revealed that our approach handles sensor noise with a MARD of 10%-14%, and model uncertainties and disturbances. The results suggest that PBPK/PD models are well suited for MPC in a glucose control setting, and that their predictive power in combination with the integrated database-driven (a priori individualizable) model framework will help overcome current challenges in the development of AGC systems. This study provides a new, generic, and robust mechanistic approach to AGC using a PBPK platform with extensive a priori (database) knowledge for individualization.
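The PID-based offset control the cascade relies on can be sketched as a toy. The gains, the time step, and the one-line linear "plant" below are invented for illustration only; the real plant is the PBPK/PD model, and a real controller would also constrain the dose to be non-negative:

```python
def make_pid(kp, ki, kd, dt):
    # Discrete PID controller with internal integral/derivative state
    acc = {"i": 0.0, "prev": 0.0}
    def step(error):
        acc["i"] += error * dt
        deriv = (error - acc["prev"]) / dt
        acc["prev"] = error
        return kp * error + ki * acc["i"] + kd * deriv
    return step

target = 100.0                    # mg/dL setpoint (illustrative)
glucose = 180.0
pid = make_pid(kp=0.5, ki=0.02, kd=0.0, dt=1.0)
for _ in range(200):
    u = pid(glucose - target)     # insulin command (sign-unconstrained toy)
    glucose -= u                  # toy linear plant response
print(abs(glucose - target) < 1.0)   # True: error driven to the setpoint
```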
One lens optical correlation: application to face recognition.
Jridi, Maher; Napoléon, Thibault; Alfalou, Ayman
2018-03-20
Despite its extensive use, the traditional 4f Vander Lugt Correlator optical setup can be further simplified. We propose a lightweight correlation scheme where the decision is taken in the Fourier plane. For this purpose, the Fourier plane is adapted and used as a decision plane. Then, the offline phase and the decision metric are re-examined in order to keep a reasonable recognition rate. The benefits of the proposed approach are numerous: (1) it overcomes the constraints related to the use of a second lens; (2) the optical correlation setup is simplified; (3) the multiplication with the correlation filter can be done digitally, which offers a higher adaptability according to the application. Moreover, the digital counterpart of the correlation scheme is lightened since with the proposed scheme we get rid of the inverse Fourier transform (IFT) calculation (i.e., decision directly in the Fourier domain without resorting to IFT). To assess the performance of the proposed approach, an insight into digital hardware resources saving is provided. The proposed method involves nearly 100 times fewer arithmetic operators. Moreover, from experimental results in the context of face verification-based correlation, we demonstrate that the proposed scheme provides comparable or better accuracy than the traditional method. One interesting feature of the proposed scheme is that it could greatly outperform the traditional scheme for face identification application in terms of sensitivity to face orientation. The proposed method is found to be digital/optical implementation-friendly, which facilitates its integration on a very broad range of scenarios.
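One way to see how a decision can be taken in the Fourier plane without the inverse FFT is Parseval's relation: the zero-lag correlation ⟨f, g⟩ equals sum(F·conj(G))/N, so a matched-filter score is available directly from the spectra. This illustrates the principle only, not the authors' adapted decision metric; the images are random toys:

```python
import numpy as np

def fourier_plane_score(img, ref):
    # Zero-lag correlation computed entirely in the Fourier plane
    # (no inverse transform), via Parseval's relation.
    F, G = np.fft.fft2(img), np.fft.fft2(ref)
    return np.real(np.sum(F * np.conj(G))) / img.size

rng = np.random.default_rng(1)
ref = rng.standard_normal((8, 8))
other = rng.standard_normal((8, 8))

# Same value as the spatial inner product, without any IFT:
assert np.isclose(fourier_plane_score(ref, ref), np.sum(ref * ref))
print(fourier_plane_score(ref, ref) > fourier_plane_score(other, ref))  # True
```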
Final Report for''Numerical Methods and Studies of High-Speed Reactive and Non-Reactive Flows''
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwendeman, D W
2002-11-20
The work carried out under this subcontract involved the development and use of an adaptive numerical method for the accurate calculation of high-speed reactive flows on overlapping grids. The flow is modeled by the reactive Euler equations with an assumed equation of state and with various reaction rate models. A numerical method has been developed to solve the nonlinear hyperbolic partial differential equations in the model. The method uses an unsplit, shock-capturing scheme, with a Godunov-type scheme to compute fluxes and a Runge-Kutta error control scheme to compute the source term modeling the chemical reactions. An adaptive mesh refinement (AMR) scheme has been implemented in order to locally increase grid resolution. The numerical method uses composite overlapping grids to handle complex flow geometries. The code is part of the "Overture-OverBlown" framework of object-oriented codes [1, 2], and the development has occurred in close collaboration with Bill Henshaw and David Brown, and other members of the Overture team within CASC. During the period of this subcontract, a number of tasks were accomplished, including: (1) an extension of the numerical method to handle "ignition and growth" reaction models and a JWL equation of state; (2) an improvement in the efficiency of the AMR scheme and the error estimator; (3) the addition of a numerical dissipation scheme designed to suppress numerical oscillations/instabilities near expanding detonations and along grid overlaps; and (4) an exploration of the evolution to detonation in an annulus and of detonation failure in an expanding channel.
NASA Astrophysics Data System (ADS)
Temimi, Marouane; Chaouch, Naira; Weston, Michael; Ghedira, Hosni
2017-04-01
This study covers five fog events reported in 2014 at Abu Dhabi International Airport in the United Arab Emirates (UAE). We assess the performance of the WRF-ARW model during fog conditions and intercompare seven different PBL schemes, assessing their impact on the performance of the simulations. The seven PBL schemes are Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada-Nakanishi-Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM) and MYNN level 3. Radiosonde data from the Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles were used to assess the performance of the model. All PBL schemes showed comparable skills, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and BIAS for all PBLs were 15.75 % and -9.07 %, respectively, whereas the RMSE and BIAS obtained with QNSE were 14.65 % and -6.3 %, respectively. Comparable skills were obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were higher at the surface level than at higher altitudes. The sensitivity to lead time showed that the best simulation performances were obtained when the lead time varied between 12 and 18 hours. In addition, the results of the simulations show that better performance is obtained when the starting condition is dry.
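The skill scores quoted above (RMSE and bias of simulated against observed values) can be computed as below; the observed and simulated RH values here are made-up numbers, not data from the study:

```python
import math

def rmse(sim, obs):
    # Root mean square error between simulated and observed series
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(sim))

def bias(sim, obs):
    # Mean error; negative values mean the model under-predicts
    return sum(s - o for s, o in zip(sim, obs)) / len(sim)

obs = [80.0, 75.0, 90.0, 85.0]   # observed RH (%), hypothetical
sim = [70.0, 68.0, 84.0, 78.0]   # simulated RH (%): model too dry

print(round(rmse(sim, obs), 2))  # 7.65
print(round(bias(sim, obs), 2))  # -7.5
```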
Vogl, Matthias
2012-08-30
The purpose of this paper is to analyze the German diagnosis related groups (G-DRG) cost accounting scheme by assessing its resource allocation at hospital level and its tariff calculation at national level. First, the paper reviews and assesses the three steps in the G-DRG resource allocation scheme at hospital level: (1) the groundwork; (2) cost-center accounting; and (3) patient-level costing. Second, the paper reviews and assesses the three steps in G-DRG national tariff calculation: (1) plausibility checks; (2) inlier calculation; and (3) the "one hospital" approach. The assessment is based on the two main goals of G-DRG introduction: improving transparency and efficiency. A further empirical assessment attests high costing quality. The G-DRG cost accounting scheme shows high system quality in resource allocation at hospital level, with limitations concerning a managerially relevant full cost approach and limitations in terms of advanced activity-based costing at patient-level. However, the scheme has serious flaws in national tariff calculation: inlier calculation is normative, and the "one hospital" model causes cost bias, adjustment and representativeness issues. The G-DRG system was designed for reimbursement calculation, but developed to a standard with strategic management implications, generalized by the idea of adapting a hospital's cost structures to DRG revenues. This combination causes problems in actual hospital financing, although resource allocation is advanced at hospital level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scime, Earl E.
The magnitude and spatial dependence of neutral density in magnetic confinement fusion experiments is a key physical parameter, particularly in the plasma edge. Modeling codes require precise measurements of the neutral density to calculate charge-exchange power losses and drag forces on rotating plasmas. However, direct measurements of the neutral density are problematic. In this work, we proposed to construct a laser-based diagnostic capable of providing spatially resolved measurements of the neutral density in the edge of plasma in the DIII-D tokamak. The diagnostic concept is based on two-photon absorption laser induced fluorescence (TALIF). By injecting two beams of 205 nm light (co- or counter-propagating), ground state hydrogen (or deuterium or tritium) can be excited from the n = 1 level to the n = 3 level at the location where the two beams intersect. Individually, the beams experience no absorption, and therefore have no difficulty penetrating even dense plasmas. After excitation, a fraction of the hydrogen atoms decay from the n = 3 level to the n = 2 level and emit photons at 656 nm (the Hα line). Calculations based on the results of previous TALIF experiments in magnetic fusion devices indicated that a laser pulse energy of approximately 3 mJ delivered in 5 ns would provide sufficient signal-to-noise for detection of the fluorescence. In collaboration with the DIII-D engineering staff and experts in plasma edge diagnostics for DIII-D from Oak Ridge National Laboratory (ORNL), WVU researchers designed a TALIF system capable of providing spatially resolved measurements of neutral deuterium densities in the DIII-D edge plasma. The laser systems were specified, purchased, and assembled at WVU. The TALIF system was tested on a low-power hydrogen discharge at WVU and the plan was to move the instrument to DIII-D for installation in collaboration with ORNL researchers.
After budget cuts at DIII-D, the DIII-D facility declined to support installation on their tokamak. Instead, after a no-cost extension, the apparatus was moved to the University of Washington-Seattle and successfully tested on the HIT-SI3 spheromak experiment. As a result of this project, TALIF measurements of the absolutely calibrated neutral density of hydrogen and deuterium were obtained in a helicon source and in a spheromak, designs were developed for installation of a TALIF system on a tokamak, and a new, xenon-based calibration scheme was proposed and demonstrated. The xenon-calibration scheme eliminates significant problems that were identified with the standard krypton calibration scheme.
Exact density functional and wave function embedding schemes based on orbital localization
NASA Astrophysics Data System (ADS)
Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály
2016-08-01
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
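The level-shift projector that the Huzinaga-equation scheme is contrasted with can be illustrated on a toy model: adding μ·P_env to a model Fock matrix pushes the environment orbitals up in energy by μ, effectively excluding them from the active space. The 4×4 symmetric "Fock" matrix below is random, not a chemical system:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
F = (A + A.T) / 2                 # toy symmetric "Fock" matrix
eps, C = np.linalg.eigh(F)        # orbital energies (ascending) and orbitals

env = C[:, :2]                    # pretend the two lowest orbitals are "environment"
P_env = env @ env.T               # projector onto the environment space
mu = 1.0e3                        # level-shift parameter (illustrative)
F_shift = F + mu * P_env

# Environment levels are shifted up by exactly mu; the rest are untouched.
eps_shift = np.linalg.eigh(F_shift)[0]
expected = np.sort(np.concatenate([eps[2:], eps[:2] + mu]))
print(np.allclose(expected, eps_shift))   # True
```

The somewhat arbitrary choice of μ is precisely what solving the Huzinaga equation avoids, since the latter enforces the Pauli exclusion principle exactly rather than approximately.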
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qing; Berg, Larry K.; Pekour, Mikhail
The WRF model version 3.3 is used to simulate near hub-height winds and power ramps utilizing three commonly used planetary boundary-layer (PBL) schemes: Mellor-Yamada-Janjic (MYJ), University of Washington (UW), and Yonsei University (YSU). The predicted winds have small mean biases compared with observations. Power ramps and step changes (changes within an hour) consistently show that the UW scheme performed better in predicting up-ramps under stable conditions, with higher prediction accuracy and capture rates. Both the YSU and UW schemes show good performance predicting up- and down-ramps under unstable conditions, with YSU being slightly better for ramp durations longer than an hour. MYJ is the most successful simulating down-ramps under stable conditions. High wind speeds and the large shear associated with low-level jets frequently accompany power ramps, and the biases in the predicted low-level jet explain some of the shown differences in ramp predictions among the PBL schemes. Low-level jets were observed as low as ~200 m in altitude over the Columbia Basin Wind Energy Study (CBWES) site, located in an area of complex terrain. The shear, low-level peak wind speeds, as well as the height of maximum wind speed are not well predicted. Model simulations with the three PBL schemes show the largest variability among them under stable conditions.
Biodiversity of Environmental Leptospira: Improving Identification and Revisiting the Diagnosis.
Thibeaux, Roman; Girault, Dominique; Bierque, Emilie; Soupé-Gilbert, Marie-Estelle; Rettinger, Anna; Douyère, Anthony; Meyer, Michael; Iraola, Gregorio; Picardeau, Mathieu; Goarant, Cyrille
2018-01-01
Leptospirosis is an important environmental disease and a major threat to human health causing at least 1 million clinical infections annually. There has recently been a growing interest in understanding the environmental lifestyle of Leptospira. However, Leptospira isolation from complex environmental samples is difficult and time-consuming and few tools are available to identify Leptospira isolates at the species level. Here, we propose a polyphasic isolation and identification scheme, which might prove useful to recover and identify environmental isolates and select those to be submitted to whole-genome sequencing. Using this approach, we recently described 12 novel Leptospira species for which we propose names. We also show that MALDI-ToF MS allows rapid and reliable identification and provide an extensive database of Leptospira MALDI-ToF mass spectra, which will be valuable to researchers in the leptospirosis community for species identification. Lastly, we also re-evaluate some of the current techniques for the molecular diagnosis of leptospirosis taking into account the extensive and recently revealed biodiversity of Leptospira in the environment. In conclusion, we describe our method for isolating Leptospira from the environment, confirm the usefulness of mass spectrometry for species identification and propose names for 12 novel species. This also offers the opportunity to refine current molecular diagnostic tools.
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.
2018-03-01
Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Efficient compression is therefore needed to transmit and store 3DV + D content within practical resource bounds while preserving acceptable reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyright content. This paper proposes an efficient hybrid watermarking scheme for securing 3DV + D transmission, based on homomorphic-transform Singular Value Decomposition (SVD) in the Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks through embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves the transmission bit rate and subsequently enhances the channel bandwidth-efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both the subjective visual results and the objective results: the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks. It achieves not only very good perceptual quality with appreciated PSNR values and saving in the transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.
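As a toy illustration of SVD-domain embedding of the kind described above, the sketch below additively perturbs the singular values of a host block and recovers the watermark non-blindly. The scaling factor alpha, the 4×4 block size, and the omission of the DWT and homomorphic stages are simplifying assumptions, not the authors' pipeline.

```python
import numpy as np

def embed_svd(host, watermark, alpha=0.05):
    """Additively embed a watermark vector into the singular values of a host block."""
    U, S, Vt = np.linalg.svd(host, full_matrices=False)
    S_marked = S + alpha * watermark      # perturb the singular spectrum
    return U @ np.diag(S_marked) @ Vt, S  # watermarked block + original spectrum

def extract_svd(marked, orig_S, alpha=0.05):
    """Non-blind extraction: compare spectra of the marked and original blocks."""
    _, S_m, _ = np.linalg.svd(marked, full_matrices=False)
    return (S_m - orig_S) / alpha

# Host block with well-separated singular values so the small perturbation
# does not reorder the spectrum.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
host = Q1 @ np.diag([9.0, 7.0, 5.0, 3.0]) @ Q2.T
wm = np.array([0.8, 0.2, 0.6, 0.4])

marked, S0 = embed_svd(host, wm)
wm_hat = extract_svd(marked, S0)
```

Embedding in singular values rather than raw pixels is what gives such schemes their robustness: small geometric or noise attacks perturb U and V more than the spectrum.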
Exercise Programming for Cardiacs--A New Direction for Physical Therapists.
ERIC Educational Resources Information Center
Gutin, Bernard
This speech begins with the presentation of a conceptual scheme of the physical working capacity of a person starting a training program. The scheme shows that after exercise, when recovery begins and sufficient time elapses, the individual recovers and adapts to a level of physical working capacity which is higher than his starting level. From…
Multigrid method for the equilibrium equations of elasticity using a compact scheme
NASA Technical Reports Server (NTRS)
Taasan, S.
1986-01-01
A compact difference scheme is derived for treating the equilibrium equations of elasticity. The scheme is inconsistent and unstable. A multigrid method which takes into account these properties is described. The solution of the discrete equations, up to the level of discretization errors, is obtained by this method in just two multigrid cycles.
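The abstract does not reproduce the compact elasticity scheme itself; as a generic illustration of the two-grid correction underlying any such multigrid method, here is a minimal cycle for the 1D Poisson equation (standard second-order stencil, weighted-Jacobi smoothing, full-weighting restriction, linear interpolation). All of these choices are illustrative stand-ins.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    # weighted-Jacobi smoother for -u'' = f with homogeneous Dirichlet BCs
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(rc, H):
    # exact tridiagonal solve on the coarse grid
    m = len(rc) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (H * H)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                                           # pre-smooth
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2
    rc = np.zeros(nc + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]   # full weighting
    ec = coarse_solve(rc, 2 * h)                                     # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])                               # linear interpolation
    return jacobi(u + e, f, h, 3)                                    # post-smooth

n, h = 64, 1.0 / 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)   # exact solution is sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

A few cycles suffice to drive the algebraic error below the discretization error, which is the "two multigrid cycles" behavior the abstract highlights.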
The Effect of a Monitoring Scheme on Tutorial Attendance and Assignment Submission
ERIC Educational Resources Information Center
Burke, Grainne; Mac an Bhaird, Ciaran; O'Shea, Ann
2013-01-01
We report on the implementation of a monitoring scheme by the Department of Mathematics and Statistics at the National University of Ireland Maynooth. The scheme was introduced in an attempt to increase the level and quality of students' engagement with certain aspects of their undergraduate course. It is well documented that students with higher…
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian
2018-06-01
In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting a quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, a quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness and unpredictability than low-dimensional chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.
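As a loose classical analogue of the bit-level chaotic diffusion described above, the sketch below XORs pixel bytes with a keystream driven by a 1D logistic map. The logistic map is an illustrative stand-in for the 5D hyper-chaotic system; the quantum cross-exchange and channel-swapping operations are not modeled.

```python
def logistic_keystream(x0, r, n):
    """Byte keystream from the classical logistic map x -> r*x*(1-x)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) & 0xFF)
    return ks

def xor_cipher(pixels, x0=0.3141, r=3.99):
    """Bit-level diffusion: XOR each pixel byte with one keystream byte.
    Applying it twice with the same key recovers the plaintext."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

img = [12, 200, 45, 99, 0, 255]   # a made-up row of gray values
cipher = xor_cipher(img)
plain = xor_cipher(cipher)        # decryption = re-encryption (XOR involution)
```

The key-space argument in the abstract corresponds here to the sensitivity of the keystream to (x0, r): a tiny change in either produces an unrelated byte sequence.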
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. 
For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
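A hypothetical simulation of the 'basic' two-stage idea described above: score half the sample, stop early when the prevalence estimate is clearly above or below the pass/fail threshold, and otherwise score the second half. The threshold and stopping margin below are illustrative values, not Welfare Quality parameters.

```python
import random

def two_stage_classify(herd, full_n, threshold=0.10, margin=0.05, seed=1):
    """'Basic' two-stage sequential check of lameness prevalence against a
    pass/fail threshold. herd is a list of 0/1 lameness indicators."""
    rng = random.Random(seed)
    first = rng.sample(herd, full_n // 2)
    p1 = sum(first) / len(first)
    if abs(p1 - threshold) > margin:            # clear-cut: stop early
        return ("fail" if p1 > threshold else "pass"), len(first)
    second = rng.sample(herd, full_n // 2)      # borderline: sample again
    p = (sum(first) + sum(second)) / (len(first) + len(second))
    return ("fail" if p > threshold else "pass"), len(first) + len(second)

herd = [1] * 30 + [0] * 170     # a made-up 200-cow herd with 15% lameness
verdict, n_used = two_stage_classify(herd, full_n=60)
```

Averaged over many herds, the early-stopping branch is what drives down the expected sample size relative to the fixed-size scheme, at a small cost in classification accuracy.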
Méndez-López, María Elena; García-Frapolli, Eduardo; Pritchard, Diana J; Sánchez González, María Consuelo; Ruiz-Mallén, Isabel; Porter-Bolland, Luciana; Reyes-Garcia, Victoria
2014-12-01
In Mexico, biodiversity conservation is primarily implemented through three schemes: 1) protected areas, 2) payment-based schemes for environmental services, and 3) community-based conservation, officially recognized in some cases as Indigenous and Community Conserved Areas. In this paper we compare levels of local participation across conservation schemes. Through a survey applied to 670 households across six communities in Southeast Mexico, we document local participation during the creation, design, and implementation of the management plan of different conservation schemes. To analyze the data, we first calculated the frequency of participation at the three different stages mentioned, then created a participation index that characterizes the presence and relative intensity of local participation for each conservation scheme. Results showed that there is a low level of local participation across all the conservation schemes explored in this study. Nonetheless, the payment for environmental services had the highest local participation while the protected areas had the least. Our findings suggest that local participation in biodiversity conservation schemes is not a predictable outcome of a specific (community-based) model, thus implying that other factors might be important in determining local participation. This has implications on future strategies that seek to encourage local involvement in conservation. Copyright © 2014 Elsevier Ltd. All rights reserved.
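The abstract does not specify how the participation index is constructed; a toy construction consistent with the description (a weighted share of the three stages - creation, design, implementation - in which a household reports participating) might look like:

```python
def participation_index(household, weights=(1, 1, 1)):
    """Hypothetical index in [0, 1]: weighted fraction of conservation-scheme
    stages (creation, design, implementation) with reported participation."""
    return sum(w * int(p) for w, p in zip(weights, household)) / sum(weights)

# Made-up survey responses: (creation, design, implementation)
surveyed = [
    (True, False, False),   # took part only during creation
    (True, True, True),     # took part at every stage
    (False, False, True),   # took part only during implementation
]
scores = [participation_index(h) for h in surveyed]
avg = sum(scores) / len(scores)   # community-level participation
```

Unequal weights would let the index emphasize, say, involvement in designing the management plan over mere presence at creation.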
NASA Astrophysics Data System (ADS)
Viswanath, Anjitha; Kumar Jain, Virander; Kar, Subrat
2017-12-01
We investigate the error performance of an earth-to-satellite free space optical uplink using transmitter spatial diversity in the presence of turbulence and weather conditions, modeled by the gamma-gamma distribution and the Beer-Lambert law, respectively, for on-off keying (OOK), M-ary pulse position modulation (M-PPM) and M-ary differential PPM (M-DPPM) schemes. Weather conditions such as moderate, light and thin fog cause additional degradation, while dense or thick fog and clouds may lead to link failure. The bit error rate decreases as the number of transmitters increases for all the schemes. However, beyond a certain number of transmitters, the reduction becomes marginal. Diversity gain remains almost constant across weather conditions but increases with increasing ground-level turbulence or zenith angle. Further, the number of transmitters required to improve the performance to a desired level is smaller for the M-PPM scheme than for M-DPPM and OOK.
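A Monte Carlo sketch of the diversity effect described above: averaging gamma-gamma distributed irradiance over several transmit apertures thins the deep-fade tail. The gamma-gamma variate is generated as a product of two unit-mean gamma variates; the (a, b) parameters and fade threshold below are illustrative, not the paper's link budget.

```python
import random

def gg_sample(rng, a=4.0, b=2.0):
    """Gamma-gamma irradiance sample: product of two unit-mean gamma variates,
    modeling large- and small-scale turbulence eddies."""
    return rng.gammavariate(a, 1 / a) * rng.gammavariate(b, 1 / b)

def outage_prob(m_tx, thresh=0.3, trials=20000, seed=7):
    """Fraction of trials where the aperture-averaged irradiance over m_tx
    transmitters falls below a (hypothetical) fade threshold."""
    rng = random.Random(seed)
    out = 0
    for _ in range(trials):
        i_avg = sum(gg_sample(rng) for _ in range(m_tx)) / m_tx
        out += i_avg < thresh
    return out / trials

p1, p4 = outage_prob(1), outage_prob(4)   # single aperture vs 4-fold diversity
```

The diminishing return the abstract notes appears here too: going from 4 to 8 apertures shrinks the outage probability far less, in relative terms, than going from 1 to 4.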
Energy efficient cooperation in underlay RFID cognitive networks for a water smart home.
Nasir, Adnan; Hussain, Syed Imtiaz; Soong, Boon-Hee; Qaraqe, Khalid
2014-09-30
Shrinking water resources all over the world and increasing costs of water consumption have prompted water users and distribution companies to come up with water conserving strategies. We have proposed an energy-efficient smart water monitoring application in [1], using low power RFIDs. In the home environment, there exist many primary interferences within a room, such as cell-phones, Bluetooth devices, TV signals, cordless phones and WiFi devices. In order to reduce the interference from our proposed RFID network for these primary devices, we have proposed a cooperating underlay RFID cognitive network for our smart application on water. These underlay RFIDs should strictly adhere to the interference thresholds to work in parallel with the primary wireless devices [2]. This work is an extension of our previous ventures proposed in [2,3], and we enhanced the previous efforts by introducing a new system model and RFIDs. Our proposed scheme is mutually energy efficient and maximizes the signal-to-noise ratio (SNR) for the RFID link, while keeping the interference levels for the primary network below a certain threshold. A closed form expression for the probability density function (pdf) of the SNR at the destination reader/writer and outage probability are derived. Analytical results are verified through simulations. It is also shown that in comparison to non-cognitive selective cooperation, this scheme performs better in the low SNR region for cognitive networks. Moreover, the hidden Markov model's (HMM) multi-level variant hierarchical hidden Markov model (HHMM) approach is used for pattern recognition and event detection for the data received for this system [4]. Using this model, a feedback and decision algorithm is also developed. This approach has been applied to simulated water pressure data from RFID motes, which were embedded in metallic water pipes.
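The underlay constraint described above can be sketched as a power cap: the RFID transmits at full power unless doing so would push the interference seen at a primary device over the threshold. The gains, powers and noise figure below are illustrative numbers, not values from the paper.

```python
import random

def underlay_power(p_max, i_thresh, g_rfid_primary):
    """Underlay rule: cap transmit power so interference at the primary
    receiver (p_tx * channel gain) stays below the threshold."""
    return min(p_max, i_thresh / g_rfid_primary)

def snr_at_reader(p_tx, g_rfid_link, noise=1e-9):
    """Received SNR on the RFID-to-reader link."""
    return p_tx * g_rfid_link / noise

rng = random.Random(3)
g_p = rng.expovariate(1.0)   # Rayleigh-fading power gain toward a primary device
g_r = rng.expovariate(1.0)   # power gain of the RFID link itself
p = underlay_power(p_max=1e-3, i_thresh=1e-6, g_rfid_primary=g_p)
snr = snr_at_reader(p, g_r)
```

Cooperative relay selection then picks, among the RFIDs satisfying this cap, the one whose capped power yields the best destination SNR, which is why the scheme helps most in the low-SNR regime.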
Hologram representation of design data in an expert system knowledge base
NASA Technical Reports Server (NTRS)
Shiva, S. G.; Klon, Peter F.
1988-01-01
A novel representational scheme for design object descriptions is presented. An abstract notion of modules and signals is developed as a conceptual foundation for the scheme. This abstraction relates the objects to the meaning of system descriptions. Anchored on this abstraction, a representational model which incorporates dynamic semantics for these objects is presented. This representational model is called a hologram scheme since it represents dual level information, namely, structural and semantic. The benefits of this scheme are presented.
EMPIRE: Nuclear Reaction Model Code System for Data Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, M.; Capote, R.; Carlson, B.V.
EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions.
The results can be converted into ENDF-6 formatted files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.
Review of the national external quality assessment (EQA) scheme for breast pathology in the UK.
Rakha, Emad A; Bennett, Rachel L; Coleman, Derek; Pinder, Sarah E; Ellis, Ian O
2017-01-01
The National Health Service Breast Screening Programme (NHSBSP; pathology) external quality assurance (EQA) scheme aims to provide a mechanism for examination and monitoring of concordance of pathology reporting within the UK. This study aims to review the breast EQA scheme performance data collected over a 24-year period following its introduction. Data on circulations, number of cases and diagnosis were collected. Detailed analyses with and without combinations of certain diagnostic entities, and over different time periods, were performed. Overall, of 576 cases (172 benign, 11 atypical hyperplasia, 98 ductal carcinoma in situ/microinvasive and 295 invasive disease), consistency of assessment of diagnostic parameters was very high (overall k=0.80; k for benign diagnosis=0.79; k for invasive disease=0.91). For distinguishing benign versus malignant lesions, no further improvement is considered possible in view of the limitations of the scheme methodology. Although diagnostic consistency of atypical hyperplasia remains at a low level, combining it with the benign category results in a high level of agreement (k=0.93). The level of consistency of reporting prognostic information is variable, and some items such as lymphovascular invasion and tumour size measurement may need further intervention to improve their reporting consistency. Although the level of consistency of reporting of histological grade remained moderate overall (k=0.48), it was variable among cases and appears to have levelled off; no further significant improvement is expected and no significant impact of the previous publication of guidelines is observed. These results provide further evidence to indicate the value of the breast EQA scheme in monitoring performance and the identification of specific areas where improvement or new approaches are required. For most parameters, the concordance of reporting reached a plateau a few years after the introduction of the EQA scheme.
It is important to maintain this high level and also to tackle specific low-performance areas innovatively. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
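The concordance figures quoted above are Cohen's kappa values (observed agreement corrected for chance agreement). For reference, kappa for two raters over the same cases can be computed as:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)            # chance agreement
    return (po - pe) / (1 - pe)

# Made-up diagnoses from two pathologists over six cases
a = ["benign", "benign", "invasive", "invasive", "benign", "dcis"]
b = ["benign", "invasive", "invasive", "invasive", "benign", "dcis"]
k = cohen_kappa(a, b)
```

The EQA circulation applies the same idea across many pathologists, so values near 0.9 (as reported for invasive disease) indicate almost-perfect agreement beyond chance.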
Aracena-Genao, Belkis; del Río-Zolezzi, Aurora
2016-01-01
Objective To analyze whether the changes observed in the level and distribution of resources for maternal health and family planning (MHFP) programs from 2003 to 2012 were consistent with the financial goals of the related policies. Materials and Methods A longitudinal descriptive analysis of the Mexican Reproductive Health Subaccounts 2003–2012 was performed by financing scheme and health function. Financing schemes included social security, government schemes, household out-of-pocket (OOP) payments, and private insurance plans. Functions were preventive care, including family planning, antenatal and puerperium health services, normal and cesarean deliveries, and treatment of complications. Changes in the financial imbalance indicators covered by MHFP policy were tracked: (a) public and OOP expenditures as percentages of total MHFP spending; (b) public expenditure per woman of reproductive age (WoRA, 15–49 years) by financing scheme; (c) public expenditure on treating complications as a percentage of preventive care; and (d) public expenditure on WoRA at state level. Statistical analyses of trends and distributions were performed. Results Public expenditure on government schemes grew by approximately 300%, and the financial imbalance between populations covered by social security and government schemes decreased. The financial burden on households declined, particularly among households without social security. Expenditure on preventive care grew by 16%, narrowing the financing gap between treatment of complications and preventive care. Finally, public expenditure per WoRA for government schemes nearly doubled at the state level, although considerable disparities persist. Conclusions Changes in the level and distribution of MHFP funding from 2003 to 2012 were consistent with the relevant policy goals. However, improving efficiency requires further analysis to ascertain the impact of investments on health outcomes. 
This, in turn, will require better financial data systems as a precondition for improving the monitoring and accountability functions in Mexico. PMID:26812646
Almanaseer, Naser; Sankarasubramanian, A.; Bales, Jerad
2014-01-01
Recent studies have found a significant association between climatic variability and basin hydroclimatology, particularly groundwater levels, over the southeast United States. The research reported in this paper evaluates the potential in developing 6-month-ahead groundwater-level forecasts based on the precipitation forecasts from the ECHAM 4.5 General Circulation Model forced with sea surface temperature forecasts. Ten groundwater wells and nine streamgauges from the USGS Groundwater Climate Response Network and Hydro-Climatic Data Network were selected to represent groundwater and surface water flows, respectively, having minimal anthropogenic influences within the Flint River Basin in Georgia, United States. The writers employ two low-dimensional models [principal component regression (PCR) and canonical correlation analysis (CCA)] for predicting groundwater and streamflow at both seasonal and monthly timescales. Three modeling schemes are considered at the beginning of January to predict winter (January, February, and March) and spring (April, May, and June) streamflow and groundwater for the selected sites within the Flint River Basin. The first scheme (model 1) is a null model and is developed using PCR for every streamflow and groundwater site using the previous 3-month observations (October, November, and December) available at that particular site as predictors. Modeling schemes 2 and 3 are developed using PCR and CCA, respectively, to evaluate the role of precipitation forecasts in improving monthly and seasonal groundwater predictions. Modeling scheme 3, which employs a CCA approach, is developed for each site by considering observed groundwater levels from nearby sites as predictands. The performance of these three schemes is evaluated using two metrics (correlation coefficient and relative RMS error) by developing groundwater-level forecasts based on leave-five-out cross-validation.
Results from the research reported in this paper show that using precipitation forecasts in climate models improves the ability to predict the interannual variability of winter and spring streamflow and groundwater levels over the basin. However, significant conditional bias exists in all the three modeling schemes, which indicates the need to consider improved modeling schemes as well as the availability of longer time-series of observed hydroclimatic information over the basin.
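A minimal sketch of principal component regression as used in the modeling schemes above: standardize the predictors, project onto the leading principal components, and fit ordinary least squares in the reduced space. The synthetic "climate factor" data below are illustrative, not the Flint River Basin records.

```python
import numpy as np

def pcr_fit(X, y, n_comp=2):
    """Principal-component regression: SVD of the standardized predictor
    matrix, then OLS on the leading component scores."""
    mu, sd = X.mean(0), X.std(0)
    Xc = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_comp]                       # leading right singular vectors
    Z = Xc @ V.T                          # component scores
    beta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    return V, beta, mu, sd, y.mean()

def pcr_predict(model, Xnew):
    V, beta, mu, sd, ym = model
    return ((Xnew - mu) / sd) @ V.T @ beta + ym

# Synthetic data: two latent "climate" factors drive five noisy predictors
# (e.g. precipitation indices) and the groundwater-level response.
rng = np.random.default_rng(2)
f = rng.standard_normal((120, 2))
X = f @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((120, 5))
y = f[:, 0] - 0.5 * f[:, 1] + 0.1 * rng.standard_normal(120)

model = pcr_fit(X, y, n_comp=2)
rho = np.corrcoef(pcr_predict(model, X), y)[0, 1]
```

Keeping only the leading components is what makes the model "low-dimensional": it regularizes the regression when predictors (precipitation forecasts at many grid cells) are strongly correlated.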
NASA Astrophysics Data System (ADS)
Kaminski, J. W.; Semeniuk, K.; McConnell, J. C.; Lupu, A.; Mamun, A.
2012-12-01
The Global Environmental Multiscale model for Air Quality and climate change (GEM-AC) is a global general circulation model based on the GEM model developed by the Meteorological Service of Canada for operational weather forecasting. It can be run with a global uniform (GU) grid or a global variable (GV) grid where the core has uniform grid spacing and the exterior grid expands. With a GV grid, high resolution regional runs can be accomplished without concern for boundary conditions. The work described here uses GEM version 3.3.2. The gas-phase chemistry consists of detailed reactions of Ox, NOx, HOx, CO, CH4, NMVOCs, halocarbons, ClOx and BrO. We have recently added elements of the Global Modal-aerosol eXtension (GMXe) scheme to address aerosol microphysics and gas-aerosol partitioning. The evaluation of the MESSY GMXe aerosol scheme is addressed in another poster. The Canadian aerosol module (CAM) is also available. Tracers are advected using the semi-Lagrangian scheme native to GEM. The vertical transport includes parameterized subgrid scale turbulence and large scale convection. Dry deposition is implemented as a flux boundary condition in the vertical diffusion equation. For climate runs the GHGs CO2, CH4, N2O and CFCs in the radiation scheme are adjusted to the scenario considered. In GV regional mode at high resolutions a lake model, FLAKE, is also included. Wet removal comprises both in-cloud and below-cloud scavenging. With the gas phase chemistry the model has been run for a series of ten year time slices on a 3°×3° global grid with 77 hybrid levels from the surface to 0.15 hPa. The tropospheric and stratospheric gas phase results are compared with satellite measurements including ACE, MIPAS, MOPITT, and OSIRIS. Current evaluations of the ozone field and other stratospheric fields are encouraging, and tropospheric lifetimes for CH4 and CH3CCl3 are in reasonable accord with tropospheric models.
We will present results for current and future climate conditions forced by SST for 2050.
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can also serve as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization, or avoiding the renormalization of ɛ-scalars in dimensional reduction.
Program summary
Program title: MSSMdreg2dred.mod
Catalogue identifier: AEKR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: LGPL-License [1]
No. of lines in distributed program, including test data, etc.: 7600
No. of bytes in distributed program, including test data, etc.: 197 629
Distribution format: tar.gz
Programming language: Mathematica, FeynArts
Computer: Any, capable of running Mathematica and FeynArts
Operating system: Any, with running Mathematica, FeynArts installation
Classification: 4.4, 5, 11.1
Subprograms used: ADOW_v1_0 (FeynArts), CPC 140 (2001) 418
Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file.
Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular, the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
Running time: A few seconds to generate typical Feynman graphs with FeynArts.
Tangcharoensathien, Viroj; Pitayarangsarit, Siriwan; Patcharanarumol, Walaiporn; Prakongsai, Phusit; Sumalee, Hathaichanok; Tosanguan, Jiraboon; Mills, Anne
2013-08-06
Empirical evidence demonstrates that the Thai Universal Coverage Scheme (UCS) has improved equity of health financing and provided a relatively high level of financial risk protection. Several UCS design features contribute to these outcomes: a tax-financed scheme, a comprehensive benefit package and gradual extension of coverage to illnesses that can lead to catastrophic household costs, and capacity of the National Health Security Office (NHSO) to mobilise adequate resources. This study assesses the policy processes related to making decisions on these features. The study employs qualitative methods including reviews of relevant documents, in-depth interviews of 25 key informants, and triangulation amongst information sources. Continued political and financial commitments to the UCS, despite political rivalry, played a key role. The Thai Rak Thai (TRT)-led coalition government introduced the UCS; staying in power for 8 of the 11 years between 2001 and 2011 was long enough to nurture and strengthen the UCS and overcome resistance from various opponents. Prime Minister Surayud's government, replacing the ousted TRT government, introduced universal renal replacement therapy, which deepened financial risk protection. Commitment to their manifesto and fiscal capacity pushed the TRT to adopt a general tax-financed universal scheme; collecting premiums from people engaged in the informal sector was neither politically palatable nor technically feasible. The relatively stable tenure of NHSO Secretary Generals and the chairs of the Financing and the Benefit Package subcommittees provided a platform for continued deepening of financial risk protection.
NHSO exerted monopsonistic purchasing power to control prices, resulting in greater patient access and better system efficiency than might have been the case with a different design. The approach of proposing an annual per capita budget changed the conventional line-item programme budgeting system by basing negotiations between the Bureau of Budget, the NHSO and other stakeholders on evidence of service utilization and unit costs. Future success of the Thai UCS requires coverage of effective interventions that address primary and secondary prevention of non-communicable diseases and long-term care policies in view of epidemiologic and demographic transitions. Lessons for other countries include the importance of continued political support, evidence-informed decisions, and a capable purchaser organization.
Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation
NASA Astrophysics Data System (ADS)
Liu, S.; Liang, X.
2011-12-01
Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL heights based on turbulent kinetic energy (TKE) profiles, while the others (YSU, ACM, GFS, CAM, TEMF) derive them from bulk Richardson criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, BouLac) substantially underestimate convective or residual PBL heights from noon toward evening, while the others (ACM, CAM, YSU) capture the observed diurnal cycle well, except for the GFS with its systematic overestimation. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. Lower PBL heights simulated by the QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfall, while higher PBL tops by the GFS correspond to warmer surface temperatures. Effects of PBL parameterizations on CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil than the other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, having substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S. 
(Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes are still visible (±2°C), with the CAM generally more realistic. The QNSE, MYJ, GFS and BouLac PBL parameterizations are identified as obvious outliers in overall performance in representing precipitation, surface air temperature or PBL height variations. Their poor performance may result from deficiencies in physical formulations, dependences on applicable scales, or troublesome numerical implementations, requiring future detailed investigation to isolate the actual cause.
The alpha(3) Scheme - A Fourth-Order Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2007-01-01
The conservation element and solution element (CESE) development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high order schemes, in this paper we describe a new 4th-order neutrally stable CESE solver of the advection equation ∂u/∂t + α ∂u/∂x = 0. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables u_j^n, (u_x)_j^n, and (u_xx)_j^n (the numerical analogues of u, ∂u/∂x, and ∂²u/∂x², respectively) and four equations per mesh point, the new scheme is referred to as the alpha(3) scheme. As in the case of other similar CESE neutrally stable solvers, the alpha(3) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. These forms are equivalent and satisfy a space-time inversion (STI) invariant property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove that the alpha(3) scheme must be neutrally stable when it is stable. Moreover it is proved rigorously that all three amplification factors of the alpha(3) scheme are of unit magnitude for all phase angles if |ν| ≤ 1/2 (ν = αΔt/Δx). This theoretical result is consistent with the numerical stability condition |ν| ≤ 1/2. 
Through numerical experiments, it is established that the alpha(3) scheme generally is (i) 4th-order accurate for the mesh variables u_j^n and (u_x)_j^n, and (ii) 2nd-order accurate for (u_xx)_j^n. However, in some exceptional cases, the scheme can achieve perfect accuracy aside from round-off errors.
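The unit-magnitude amplification property claimed above can be checked concretely for a simpler neutrally stable advection solver. The sketch below runs a von Neumann analysis of the classical leapfrog scheme rather than the alpha(3) scheme itself (whose three coupled mesh variables make the analysis longer): for leapfrog the amplification factors are the roots of g² + 2iν sin(θ)g − 1 = 0 and have unit magnitude for all phase angles whenever |ν| ≤ 1, the leapfrog analogue of the alpha(3) bound |ν| ≤ 1/2.

```python
import cmath
import math

def leapfrog_amplification(nu, theta):
    """Amplification factors of the leapfrog advection scheme
    u_j^{n+1} = u_j^{n-1} - nu*(u_{j+1}^n - u_{j-1}^n):
    the two roots of g**2 + 2j*nu*sin(theta)*g - 1 = 0."""
    s = nu * math.sin(theta)
    disc = cmath.sqrt(1 - s * s)
    return (-1j * s + disc, -1j * s - disc)

# Neutral stability check: every amplification factor has unit magnitude
# for every phase angle when |nu| <= 1.
for nu in (0.25, 0.5, 0.9):
    for k in range(1, 32):
        theta = k * math.pi / 16
        for g in leapfrog_amplification(nu, theta):
            assert abs(abs(g) - 1.0) < 1e-12
```

When |ν| > 1 the discriminant turns negative for some phase angles and one root acquires magnitude greater than one, which is exactly the loss of neutral stability that the abstract's algebraic argument rules out for the alpha(3) scheme under |ν| ≤ 1/2.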
Comparison of different pairing fluctuation approaches to BCS-BEC crossover
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Kathryn; Chen Qijin; Zhejiang Institute of Modern Physics and Department of Physics, Zhejiang University, Hangzhou, Zhejiang 310027
2010-02-15
The subject of BCS-Bose-Einstein condensation (BEC) crossover is particularly exciting because of its realization in ultracold atomic Fermi gases and its possible relevance to high temperature superconductors. In this paper we review the body of theoretical work on this subject, which represents a natural extension of the seminal papers by Leggett and by Nozieres and Schmitt-Rink (NSR). The former addressed only the ground state, now known as the 'BCS-Leggett' wave-function, and the key contributions of the latter pertain to calculations of the superfluid transition temperature T_c. These two papers have given rise to two main and, importantly, distinct, theoretical schools in the BCS-BEC crossover literature. The first of these extends the BCS-Leggett ground state to finite temperature and the second extends the NSR scheme away from T_c both in the superfluid and normal phases. It is now rather widely accepted that these extensions of NSR produce a different ground state than that first introduced by Leggett. This observation provides a central motivation for the present paper which seeks to clarify the distinctions in the two approaches. Our analysis shows how the NSR-based approach views the bosonic contributions more completely but treats the fermions as 'quasi-free'. By contrast, the BCS-Leggett based approach treats the fermionic contributions more completely but treats the bosons as 'quasi-free'. In a related fashion, the NSR-based schemes approach the crossover between BCS and BEC by starting from the BEC limit and the BCS-Leggett based scheme approaches this crossover by starting from the BCS limit. Ultimately, one would like to combine these two schemes. There are, however, many difficult problems to surmount in any attempt to bridge the gap in the two theory classes. In this paper we review the strengths and weaknesses of both approaches. 
The flexibility of the BCS-Leggett based approach and its ease of handling make it widely used in T = 0 applications, although the NSR-based schemes tend to be widely used at T ≠ 0. To reach a full understanding, it is important in the future to invest effort in investigating in more detail the T = 0 aspects of NSR-based theory and at the same time the T ≠ 0 aspects of BCS-Leggett theory.
NASA Astrophysics Data System (ADS)
Davies, J. S.; Guillaumont, B.; Tempera, F.; Vertino, A.; Beuck, L.; Ólafsdóttir, S. H.; Smith, C. J.; Fosså, J. H.; van den Beld, I. M. J.; Savini, A.; Rengstorf, A.; Bayle, C.; Bourillet, J.-F.; Arnaud-Haond, S.; Grehan, A.
2017-11-01
Cold-water corals (CWC) can form complex structures which provide refuge, nursery grounds and physical support for a diversity of other living organisms. However, irrespective of such ecological significance, CWCs are still vulnerable to human pressures such as fishing, pollution, ocean acidification and global warming. Providing coherent and representative conservation of vulnerable marine ecosystems including CWCs is one of the aims of the Marine Protected Areas networks being implemented across European seas and oceans under the EC Habitats Directive, the Marine Strategy Framework Directive and the OSPAR Convention. In order to adequately represent ecosystem diversity, these initiatives require a standardised habitat classification that organises the variety of biological assemblages and provides consistent and functional criteria to map them across European Seas. One such classification system, EUNIS, enables a broad level classification of the deep sea based on abiotic and geomorphological features. More detailed lower biotope-related levels are currently under-developed, particularly with regards to deep-water habitats (>200 m depth). This paper proposes a hierarchical CWC biotope classification scheme that could be incorporated by existing classification schemes such as EUNIS. The scheme was developed within the EU FP7 project CoralFISH to capture the variability of CWC habitats identified using a wealth of seafloor imagery datasets from across the Northeast Atlantic and Mediterranean. Depending on the resolution of the imagery being interpreted, this hierarchical scheme allows data to be recorded from broad CWC biotope categories down to detailed taxonomy-based levels, thereby providing a flexible yet valuable information level for management. 
The CWC biotope classification scheme identifies 81 biotopes and highlights the limitations of the classification framework and guidance provided by EUNIS, the EC Habitats Directive, OSPAR and FAO; which largely underrepresent CWC habitats.
The social security scheme in Thailand: what lessons can be drawn?
Tangcharoensathien, V; Supachutikul, A; Lertiendumrong, J
1999-04-01
The Social Security Scheme was launched in 1990, covering formal sector private employees for non-work related sickness, maternity and invalidity including cash benefits and funeral grants. The scheme is financed by tripartite contributions from government, employers and employees, each of 1.5% of payroll (total of 4.5%). The scheme decided to pay health care providers, whether public or private, on a flat rate capitation basis to cover both ambulatory and inpatient care. Registration of the insured with a contractor hospital was a necessary consequence of the chosen capitation payment system. The aim of this paper is to review the operation of the scheme, and to explore the implications of capitation payment and registration for utilisation levels and provider behaviour. A key weakness of the scheme's design is suggested to be the initial decision to give employers not employees the responsibility for choosing the registered hospitals. This was done for administrative reasons, but it contributed to low levels of use of the contractor hospitals. In addition, low levels of use were also probably the result of the potential for cream skimming, cost shifting from inpatient to ambulatory care and under-provision of patient care, though since monitoring mechanisms by the Social Security Office were weak, these effects are difficult to detect conclusively. Mechanisms to improve utilisation levels were gradually introduced, such as employee choice of registered hospitals and the formation of sub-contractor networks to improve access to care. A beneficial effect of the capitation payment system was that the Social Security Fund generated substantial reserves and expenditures on sickness benefits were well stabilised. The paper ends by recommending that future policy amendments should be guided by research and empirical findings and that tougher monitoring and enforcement of quality of care standards are required.
1997-09-30
research is multiscale, interdisciplinary and generic. The methods are applicable to an arbitrary region of the coastal and/or deep ocean and across the...dynamics. OBJECTIVES General objectives are: (I) To determine for the coastal and/or coupled deep ocean the multiscale processes which occur: i) in...Straits and the eastern basin; iii) extension and application of our balance of terms scheme (EVA) to multiscale, interdisciplinary fields with data
Brian J. Clough; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall; Philip J. Radtke
2016-01-01
Estimation of live tree biomass is an important task for both forest carbon accounting and studies of nutrient dynamics in forest ecosystems. In this study, we took advantage of an extensive felled-tree database (with 2885 foliage biomass observations) to compare different models and grouping schemes based on phylogenetic and geographic variation for predicting foliage...
A non-axisymmetric linearized supersonic wave drag analysis: Mathematical theory
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1996-01-01
A mathematical theory is developed to perform the calculations necessary to determine the wave drag for slender bodies of non-circular cross section. The derivations presented in this report are based on extensions to supersonic linearized small perturbation theory. A numerical scheme is presented utilizing Fourier decomposition to compute the pressure coefficient on and about a slender body of arbitrary cross section.
A winning formula for new laboratories.
Robinson, Tim
2012-11-01
High quality architecture and extensive stakeholder consultations have transformed the delivery of Laboratory Medicine within Sheffield Teaching Hospitals NHS Foundation Trust (NHSFT), explains Tim Robinson, senior architect at Race Cottam Associates, the architects on a scheme that has seen laboratory services from across the city consolidated within one modern, spacious, well-lit, and well-equipped building that should provide an extremely positive, 'future-proofed' working environment for staff.
Investigation of nonlinear motion simulator washout schemes
NASA Technical Reports Server (NTRS)
Riedel, S. A.; Hofmann, L. G.
1978-01-01
An overview is presented of some of the promising washout schemes which have been devised. The four schemes presented fall into two basic configurations: crossfeed and crossproduct. Various nonlinear modifications further differentiate the four schemes. One nonlinear scheme is discussed in detail. This washout scheme takes advantage of subliminal motions to speed up simulator cab centering. It exploits so-called perceptual indifference thresholds to center the simulator cab at a faster rate whenever the input to the simulator is below the perceptual indifference level. The effect is to reduce the angular and translational simulation motion by comparison with that for the linear washout case. Finally, conclusions and implications for further research in the area of nonlinear washout filters are presented.
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Teleporting entanglements of cavity-field states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pires, Geisa; Baseia, B.; Almeida, N.G. de
2004-08-01
We present a scheme to teleport an entanglement of zero- and one-photon states from one cavity to another. The scheme, which has 100% success probability, relies on two perfect and identical bimodal cavities, a collection of two kinds of two-level atoms, a three-level atom in a ladder configuration driven by a classical field, Ramsey zones, and selective atomic-state detectors.
Energy levels scheme simulation of divalent cobalt doped bismuth germanate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreici, Emiliana-Laura, E-mail: andreicilaura@yahoo.com; Petkova, Petya; Avram, Nicolae M.
The aim of this paper is to simulate the energy levels scheme for Bismuth Germanate (BGO) doped with divalent cobalt, in order to give a reliable explanation for spectral experimental data. Within semiempirical crystal field theory we first modeled the Crystal Field Parameters (CFPs) of the BGO:Co²⁺ system, in the frame of the Exchange Charge Model (ECM), with the actual site symmetry of the impurity ions after doping. The values of the CFPs depend on the geometry of the doped host matrix and on the parameter G of the ECM. First, we optimized the geometry of the undoped BGO host matrix and afterwards that of BGO doped with divalent cobalt. The charge effects of the ligands and the covalent bonding between cobalt cations and oxygen anions, in the cluster approach, were also taken into account. With the obtained values of the CFPs we simulated the energy levels scheme of the cobalt ions by diagonalizing the matrix of the doped crystal Hamiltonian. The resulting energy levels and estimated Racah parameters B and C were compared with the experimental spectroscopic data and discussed. The comparison of the obtained results with the experimental data is quite satisfactory, which justifies the model and simulation schemes used for the title system.
Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs
NASA Astrophysics Data System (ADS)
Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen
2012-03-01
The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. To improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In an environment with multiple contending nodes, the proposed scheme can effectively mitigate the effect of frame collisions on the rate adaptation decision by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme can achieve significantly higher throughput than other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme can effectively respond to varying channel conditions.
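The adaptive-threshold idea can be sketched in a few lines. The class below is an illustrative reconstruction, not the article's algorithm: the 802.11a rate set is real, but the constants and the exact update rules (doubling the rate-up threshold after a failed probe, raising the rate-down threshold when a loss looks like a collision) are assumptions chosen to show the mechanism.

```python
class AdaptiveARF:
    """Sketch of auto-rate fallback with adaptive rate-up/rate-down thresholds.
    Threshold-update rules and constants are illustrative assumptions."""

    def __init__(self, rates=(6, 12, 24, 36, 48, 54)):
        self.rates = rates       # Mbit/s
        self.idx = 0             # index of the current rate
        self.successes = 0       # consecutive successful transmissions
        self.failures = 0        # consecutive failed transmissions
        self.up_th = 10          # successes required before a rate-up probe
        self.down_th = 2         # failures required before a rate-down
        self.probing = False     # True immediately after a rate increase

    def rate(self):
        return self.rates[self.idx]

    def on_success(self):
        self.successes += 1
        self.failures = 0
        self.probing = False
        if self.successes >= self.up_th and self.idx < len(self.rates) - 1:
            self.idx += 1        # probe the next higher rate
            self.successes = 0
            self.probing = True

    def on_failure(self, collision_suspected=False):
        self.successes = 0
        self.failures += 1
        if self.probing:
            # Failed probe: fall back and make the next probe harder.
            self.idx -= 1
            self.up_th = min(self.up_th * 2, 50)
        elif collision_suspected:
            # Loss likely caused by a collision, not a bad channel:
            # keep the rate but demand more evidence before a rate-down.
            self.down_th = min(self.down_th + 1, 10)
        elif self.failures >= self.down_th and self.idx > 0:
            self.idx -= 1
            self.failures = 0
        self.probing = False
```

The key point, as in the article's scheme, is that a loss attributed to a collision leaves the rate untouched instead of triggering a spurious rate-down.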
Talebi, H A; Khorasani, K; Tafazoli, S
2009-01-01
This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on availability of full state measurements. The stability of the overall FDI scheme in presence of unknown sensor and actuator faults as well as plant and sensor noise and uncertainties is shown by using the Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.
Laser cooling of molecules by zero-velocity selection and single spontaneous emission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ooi, C. H. Raymond
2010-11-15
A laser-cooling scheme for molecules is presented based on repeated cycles of zero-velocity selection, deceleration, and irreversible accumulation. Although this scheme also employs a single spontaneous emission as in [Raymond Ooi, Marzlin, and Audretsch, Eur. Phys. J. D 22, 259 (2003)], in order to circumvent the difficulty of maintaining closed pumping cycles in molecules, there are two distinct features which make the cooling process of this scheme faster and more practical. First, the zero-velocity selection creates a narrow velocity-width population with zero mean velocity, such that no further deceleration (with many stimulated Raman adiabatic passage (STIRAP) pulses) is required. Second, only two STIRAP processes are required to decelerate the remaining hot molecular ensemble to create a finite population around zero velocity for the next cycle. We present a setup to realize the cooling process in one dimension with trapping in the other two dimensions using a Stark barrel. Numerical estimates of the cooling parameters and simulations with density matrix equations using OH molecules show the applicability of the cooling scheme. For a gas at temperature T = 1 K, the estimated cooling time is only 2 ms, with the phase-space density increased by about 30 times. The possibility of extension to three-dimensional cooling via thermalization is also discussed.
Truthful Channel Sharing for Self Coexistence of Overlapping Medical Body Area Networks
Dutkiewicz, Eryk; Zheng, Guanglou
2016-01-01
As defined by the IEEE 802.15.6 standard, channel sharing is a potential method to coordinate inter-network interference among Medical Body Area Networks (MBANs) that are close to one another. However, channel sharing opens up new vulnerabilities, as selfish MBANs may manipulate their online channel requests to gain unfair advantage over others. In this paper, we address this issue by proposing a truthful online channel sharing algorithm and a companion protocol that allocates channels efficiently and truthfully by punishing MBANs for misreporting their channel request parameters such as time, duration and bid for the channel. We first present an online channel sharing scheme for unit-length channel requests and prove that it is truthful. We then generalize our model to settings with variable-length channel requests, where we propose a critical-value based channel pricing and preemption scheme. A bid adjustment procedure prevents unbeneficial preemption by artificially raising the ongoing winner's bid, controlled by a penalty factor λ. Our scheme can efficiently detect selfish behavior by monitoring a trust parameter α of each MBAN and deter MBANs from cheating by suspending their requests. Our extensive simulation results show that our scheme can achieve a total profit that is more than 85% of the offline optimum method in typical MBAN settings. PMID:26844888
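A minimal sketch of the preemption and trust mechanics described above. The class name, the (1 + λ) bid-adjustment rule, and the trust bookkeeping are illustrative assumptions; the paper's actual protocol also computes critical-value payments, which are omitted here.

```python
class ChannelArbiter:
    """Sketch of preemptive channel allocation with a penalty factor lam and
    per-MBAN trust scores. Illustrative, not the paper's full protocol."""

    def __init__(self, lam=0.5, trust_threshold=0.3):
        self.lam = lam
        self.trust_threshold = trust_threshold
        self.trust = {}     # MBAN id -> trust score alpha, starts at 1.0
        self.holder = None  # (mban_id, bid) currently holding the channel

    def request(self, mban_id, bid):
        """Grant, preempt, or reject an online channel request."""
        alpha = self.trust.setdefault(mban_id, 1.0)
        if alpha < self.trust_threshold:
            return False    # suspended for repeated misreporting
        if self.holder is None:
            self.holder = (mban_id, bid)
            return True
        _, ongoing_bid = self.holder
        # Bid adjustment: the ongoing winner's bid is artificially raised by
        # (1 + lam), making marginal (unbeneficial) preemption unprofitable.
        if bid > (1 + self.lam) * ongoing_bid:
            self.holder = (mban_id, bid)
            return True
        return False

    def report_misbehavior(self, mban_id, penalty=0.4):
        """Lower the trust score when misreporting is detected."""
        self.trust[mban_id] = self.trust.get(mban_id, 1.0) - penalty
```

Raising the effective preemption price by (1 + λ) is what removes the incentive to overstate a bid by a small margin, and the trust score turns repeated misreporting into suspension.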
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
An accurate front capturing scheme for tumor growth models with a free boundary limit
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Tang, Min; Wang, Li; Zhou, Zhennan
2018-07-01
We consider a class of tumor growth models under the combined effects of density-dependent pressure and cell multiplication, with a free boundary model as its singular limit when the pressure-density relationship becomes highly nonlinear. In particular, the constitutive law connecting pressure p and density ρ is p(ρ) = (m/(m−1)) ρ^(m−1), and when m ≫ 1, the cell density ρ may evolve its support according to a pressure-driven geometric motion with a sharp interface along its boundary. The nonlinearity and degeneracy in the diffusion bring great challenges in numerical simulations. Prior to the present paper, there was no standard mechanism to numerically capture the front propagation speed as m ≫ 1. In this paper, we develop a numerical scheme based on a novel prediction-correction reformulation that can accurately approximate the front propagation even when the nonlinearity is extremely strong. We show that the semi-discrete scheme naturally connects to the free boundary limit equation as m → ∞. With proper spatial discretization, the fully discrete scheme has improved stability, preserves positivity, and can be implemented without nonlinear solvers. Finally, extensive numerical examples in both one and two dimensions are provided to verify the claimed properties in various applications.
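For contrast with the prediction-correction scheme proposed in the paper, a naive explicit baseline is easy to write down: substituting the constitutive law into ρ_t = ∇·(ρ∇p) gives the porous medium equation ρ_t = Δ(ρ^m), which the sketch below discretizes in 1-D with no-flux boundaries. This is not the paper's scheme; it conserves mass and preserves positivity only under a time-step restriction that deteriorates rapidly as m grows, which is precisely the difficulty the paper's reformulation addresses.

```python
def step_porous_medium(rho, m, dx, dt):
    """One explicit finite-difference step of rho_t = (rho^m)_xx in 1-D with
    reflecting (no-flux) boundaries. A naive baseline: stable only for
    small dt, with the restriction worsening as m grows."""
    n = len(rho)
    u = [r ** m for r in rho]
    new = rho[:]
    for j in range(n):
        left = u[j - 1] if j > 0 else u[j]       # zero flux at the left wall
        right = u[j + 1] if j < n - 1 else u[j]  # zero flux at the right wall
        new[j] = rho[j] + dt / dx ** 2 * (left - 2 * u[j] + right)
    return new
```

Because the flux differences telescope, total mass is conserved to round-off, and for small dt the compactly supported density spreads with a finite front speed, the behavior the paper's scheme captures accurately even for very large m.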
Development of Three-Dimensional DRAGON Grid Technology
NASA Technical Reports Server (NTRS)
Zheng, Yao; Liou, Meng-Sing; Civinskas, Kestutis C.
1999-01-01
For a typical three dimensional flow in a practical engineering device, the time spent in grid generation can take 70 percent of the total analysis effort, resulting in a serious bottleneck in the design/analysis cycle. The present research attempts to develop a procedure that can considerably reduce the grid generation effort. The DRAGON grid, as a hybrid grid, is created by means of a Direct Replacement of Arbitrary Grid Overlapping by Nonstructured grid. The DRAGON grid scheme is an adaptation of the Chimera thinking. The Chimera grid is a composite structured grid, composed of a set of overlapped structured grids which are independently generated and body-fitted. The grid is of high quality and amenable to efficient solution schemes. However, the interpolation used in the overlapped region between grids introduces error, especially when a sharp-gradient region is encountered. The DRAGON grid scheme is capable of completely eliminating the interpolation and preserving the conservation property. It maximizes the advantages of the Chimera scheme and adopts the strengths of the unstructured grid while keeping its weaknesses minimal. In the present paper, we describe the progress towards extending the DRAGON grid technology into three dimensions. Essential and programming aspects of the extension, and new challenges for the three-dimensional cases, are addressed.
Reducing the PAPR in FBMC-OQAM systems with low-latency trellis-based SLM technique
NASA Astrophysics Data System (ADS)
Bulusu, S. S. Krishna Chaitanya; Shaiek, Hmaied; Roviras, Daniel
2016-12-01
Filter-bank multi-carrier (FBMC) modulations, and more specifically FBMC-offset quadrature amplitude modulation (OQAM), are seen as an interesting alternative to orthogonal frequency division multiplexing (OFDM) for the 5th generation radio access technology. In this paper, we investigate the problem of peak-to-average power ratio (PAPR) reduction for FBMC-OQAM signals. Recently, it has been shown that FBMC-OQAM with the trellis-based selected mapping (TSLM) scheme not only is superior to any scheme based on a symbol-by-symbol approach but also outperforms OFDM with the classical SLM scheme. This paper is an extension of that work, in which we analyze TSLM in terms of computational complexity, required hardware memory, and latency. We propose an improvement to TSLM that requires far less hardware memory than the originally proposed TSLM and also has lower latency. Additionally, the impact of the time duration of the partial PAPR on the performance of TSLM is studied, and its lower bound is identified by proposing a suitable time duration. A thorough and fair performance comparison is also made with an existing trellis-based scheme proposed in the literature. The simulation results show that the proposed low-latency TSLM yields better PAPR reduction performance with relatively low hardware memory requirements.
Direct adaptive control of a PUMA 560 industrial robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1989-01-01
The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
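The per-joint control law described above (a fixed-gain PID auxiliary signal plus a PD feedback term with adjustable gains) can be sketched as follows. The gain-update law shown is a generic gradient-style rule and all numerical values are assumptions for illustration; they are not the gains or adaptation laws used in the PUMA 560 experiments.

```python
class AdaptiveJointController:
    """Per-joint sketch: fixed-gain PID auxiliary signal plus an adaptive
    position-velocity (PD) feedback term. Update rule and values are
    illustrative assumptions, not the paper's."""

    def __init__(self, kp=50.0, ki=5.0, kd=14.0, gamma=0.01, dt=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd  # fixed PID gains
        self.kp_ad = 0.0                        # adaptive position gain
        self.kv_ad = 0.0                        # adaptive velocity gain
        self.gamma = gamma                      # adaptation rate
        self.dt = dt
        self.integral = 0.0
        self.prev_e = None

    def torque(self, q_des, qd_des, q, qd):
        e, ed = q_des - q, qd_des - qd
        self.integral += e * self.dt
        de = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        # Adapt the feedback gains from a weighted tracking-error signal.
        r = e + 0.1 * ed
        self.kp_ad += self.gamma * r * e * self.dt
        self.kv_ad += self.gamma * r * ed * self.dt
        pid = self.kp * e + self.ki * self.integral + self.kd * de
        return pid + self.kp_ad * e + self.kv_ad * ed
```

Driving a unit-mass joint model with this controller in a simple Euler loop tracks a step setpoint; the adaptive gains stay small on such a benign plant, but on a coupled multi-joint robot they are what absorbs the unmodeled inter-joint interactions.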
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
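The single-shrinkage-threshold quantization can be illustrated with a 1-D sketch. The greedy sorted-merge below is an assumption about the mechanics (the paper's DQS operates on multivariate data), but it shows the essential behavior: dense regions collapse to a few weighted representatives, so the quantized subset handed to the Nyström feature approximation is much smaller than the original sample.

```python
def density_quantize(data, threshold):
    """Sketch of density-dependent quantization: sort the 1-D input and
    greedily merge samples closer than a single shrinkage threshold,
    keeping a running mean and a count per cell. Illustrative only."""
    reps = []   # list of (representative value, sample count)
    for x in sorted(data):
        if reps and abs(x - reps[-1][0]) < threshold:
            v, c = reps[-1]
            reps[-1] = ((v * c + x) / (c + 1), c + 1)  # merge into the cell
        else:
            reps.append((x, 1))
    return reps
```

The counts record how much input mass each representative carries, which is the sense in which the output adapts to the input data density.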
APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study
NASA Astrophysics Data System (ADS)
Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak
2017-04-01
In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors on the channel. However, an erroneous packet contains both erroneous bits and correct bits, and hence may still carry useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in transmitted packets, in which two received copies are XORed to locate the erroneous bits. Thereafter, the packet is corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC cannot correct double-bit errors that occur in the same bit location of both erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC on the Gilbert two-state channel model has been studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
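The PC step described above (XOR two copies to locate the disagreeing bits, then invert candidate bits until an integrity check passes) can be sketched as follows; the 8-bit packets and the stand-in integrity check are illustrative assumptions, since a real receiver would use a CRC.

```python
def locate_errors(copy1, copy2, width=8):
    """Bit positions where the two received copies disagree."""
    diff = copy1 ^ copy2
    return [i for i in range(width) if diff & (1 << i)]

def correct(copy1, copy2, check):
    """Try inverting bits of copy1 at the disagreeing positions (brute force
    over subsets, feasible because the disagreement set is small)."""
    positions = locate_errors(copy1, copy2)
    for mask in range(1 << len(positions)):
        candidate = copy1
        for j, pos in enumerate(positions):
            if mask & (1 << j):
                candidate ^= 1 << pos
        if check(candidate):
            return candidate
    return None

original = 0b10110010
copy1 = original ^ 0b00000100   # bit 2 corrupted in copy 1
copy2 = original ^ 0b00010000   # bit 4 corrupted in copy 2
# stand-in integrity check; a real receiver would use a CRC
fixed = correct(copy1, copy2, check=lambda p: p == original)
print(fixed == original)
```

The sketch also exposes the limitation the abstract mentions: if both copies are corrupted in the same bit position, the XOR is zero there and PC cannot locate the error.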
Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1990-01-01
An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.
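The core ENO idea, choosing the interpolation stencil adaptively so it stays on the smooth side of a discontinuity, can be illustrated with a minimal second-order sketch. The paper's schemes are fourth-order and built on primitive-function reconstruction; the version below is a deliberately simplified stand-in.

```python
def eno2_interface_values(u):
    """Second-order ENO value of u at each right cell interface i+1/2."""
    n = len(u)
    faces = []
    for i in range(1, n - 1):
        left_slope = u[i] - u[i - 1]
        right_slope = u[i + 1] - u[i]
        # extend the stencil toward the smoother side: pick the one-sided
        # slope with the smaller magnitude so it does not cross the jump
        slope = left_slope if abs(left_slope) < abs(right_slope) else right_slope
        faces.append(u[i] + 0.5 * slope)
    return faces

# Step data: near the jump the stencil choice falls back to the smooth side,
# so reconstructed interface values stay within [0, 1] (no over/undershoot)
u = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
vals = eno2_interface_values(u)
print(vals)
print(all(0.0 <= v <= 1.0 for v in vals))
```

A fixed central stencil of the same order would overshoot at the jump; the adaptive choice is what lets ENO keep higher-order accuracy without the local drop to first order that TVD schemes suffer at extrema.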
Multiply scaled constrained nonlinear equation solvers. [for nonlinear heat conduction problems]
NASA Technical Reports Server (NTRS)
Padovan, Joe; Krishna, Lala
1986-01-01
To improve the numerical stability of nonlinear equation solvers, a partitioned multiply scaled constraint scheme is developed. This scheme enables hierarchical levels of control for nonlinear equation solvers. To complement the procedure, partitioned convergence checks are established along with self-adaptive partitioning schemes. Overall, such procedures greatly enhance the numerical stability of the original solvers. To demonstrate and motivate the development of the scheme, the problem of nonlinear heat conduction is considered. In this context the main emphasis is given to successive substitution-type schemes. To verify the improved numerical characteristics associated with partitioned multiply scaled solvers, results are presented for several benchmark examples.
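A toy sketch of the partitioned idea: successive substitution in which convergence is checked and a relaxation (scaling) factor is applied per partition rather than globally, so a poorly behaved partition can be damped without slowing the others. The two-equation system and the specific scaling rule are illustrative assumptions, not the paper's formulation.

```python
import math

def solve_partitioned(g, x0, partitions, relax, tol=1e-12, max_iter=500):
    """Successive substitution with per-partition scaling and convergence checks."""
    x = list(x0)
    for _ in range(max_iter):
        x_new = g(x)                     # one substitution sweep on the old iterate
        converged = True
        for part, w in zip(partitions, relax):
            for i in part:
                step = x_new[i] - x[i]
                if abs(step) > tol:
                    converged = False    # this partition has not settled yet
                x[i] += w * step         # partition-specific scaling factor
        if converged:
            return x
    return x

# Two-partition toy system: x0 = cos(x1)/2,  x1 = exp(-x0)
g = lambda x: [math.cos(x[1]) / 2.0, math.exp(-x[0])]
x = solve_partitioned(g, [0.0, 0.0], partitions=[[0], [1]], relax=[1.0, 0.5])
print([round(v, 6) for v in x])
```

A self-adaptive variant in the spirit of the paper would adjust each partition's scaling factor from its own convergence history instead of fixing it in advance.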
NASA Astrophysics Data System (ADS)
Wu, Xing-Gang; Shen, Jian-Ming; Du, Bo-Lun; Brodsky, Stanley J.
2018-05-01
As a basic requirement of renormalization group invariance, any physical observable must be independent of the choice of both the renormalization scheme and the initial renormalization scale. In this paper, we show that by using the newly suggested C-scheme coupling, one can demonstrate that the principle of maximum conformality prediction is scheme-independent to all orders for any renormalization scheme, thus satisfying all of the conditions of renormalization group invariance. We illustrate these features for the nonsinglet Adler function and for τ decay to ν + hadrons at the four-loop level.
One-dimensional high-order compact method for solving Euler's equations
NASA Astrophysics Data System (ADS)
Mohamad, M. A. H.; Basri, S.; Basuno, B.
2012-06-01
In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most prominent are those based on flux-vector splitting and Godunov-type schemes. This system was previously developed through computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the receding-flow and shock-wave problems in shock tubes, were not investigated in that work. Thus, the objective of this study is to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting. Discretization of the convective flux terms uses a hybrid flux-vector splitting, known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting with the robustness of flux-vector splitting. The AUSM scheme, applied with the third-order compact approximation of the finite difference equations, was then analyzed in detail. For the first-order schemes in the one-dimensional problem, an explicit time integration method is adopted. In addition, the developed and modified source code for one-dimensional flow is validated against four test cases, namely, the unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow and shock waves in shock tubes. These results also confirm that the solver correctly captures the structure of the Riemann problem.
Further analysis compared the characteristics of the AUSM scheme against experimental results obtained from previous works, and against computational results generated by the van Leer, KFVS and AUSMPW schemes. The extension of the AUSM scheme from first-order to third-order accuracy yields a remarkable improvement in the resolution of shocks, contact discontinuities and rarefaction waves.
NASA Astrophysics Data System (ADS)
Chaouch, Naira; Temimi, Marouane; Weston, Michael; Ghedira, Hosni
2017-05-01
In this study, we intercompare seven different PBL schemes in WRF over the United Arab Emirates (UAE) and assess their impact on the performance of the simulations. The study covered five fog events reported in 2014 at Abu Dhabi International Airport. The analysis of synoptic conditions indicated that during all examined events, the UAE was under high geopotential pressure and light wind not exceeding 7 m/s at 850 hPa (~1.5 km). Seven PBL schemes, namely, Yonsei University (YSU), Mellor-Yamada-Janjic (MYJ), Mellor-Yamada Nakanishi and Niino (MYNN) level 2.5, Quasi-Normal Scale Elimination (QNSE-EDMF), Asymmetric Convective Model (ACM2), Grenier-Bretherton-McCaa (GBM) and MYNN level 3, were tested. In situ observations used in the model assessment included radiosonde data from Abu Dhabi International Airport and surface measurements of relative humidity (RH), dew point temperature, wind speed, and temperature profiles. Overall, all the tested PBL schemes showed comparable skill, with relatively higher performance for the QNSE scheme. The average RH Root Mean Square Error (RMSE) and bias for all PBL schemes were 15.75% and -9.07%, respectively, whereas the RMSE and bias obtained when QNSE was used were 14.65% and -6.3%, respectively. Comparable skill was obtained for the rest of the variables. Local PBL schemes showed better performance than non-local schemes. Discrepancies between simulated and observed values were higher at the surface than at higher altitudes. The sensitivity to lead time showed that the best simulation performance was obtained when the lead time varied between 12 and 18 h. In addition, the simulations performed better when the starting conditions were dry.
A hybrid deep learning approach to predict malignancy of breast lesions using mammograms
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Heidari, Morteza; Mirniaharikandehei, Seyedehnafiseh; Gong, Jing; Qian, Wei; Qiu, Yuchen; Zheng, Bin
2018-03-01
Applying deep learning technology to the medical imaging informatics field has recently attracted extensive research interest. However, limited medical image dataset sizes often reduce the performance and robustness of deep learning based computer-aided detection and/or diagnosis (CAD) schemes. In an attempt to address this technical challenge, this study aims to develop and evaluate a new hybrid deep learning based CAD approach to predict the likelihood that a breast lesion detected on a mammogram is malignant. In this approach, a deep Convolutional Neural Network (CNN) was first pre-trained using the ImageNet dataset and served as a feature extractor. A pseudo-color Region of Interest (ROI) method was used to generate ROIs with RGB channels from the mammographic images as the input to the pre-trained deep network. The transferred CNN features from different layers of the CNN were then obtained, and a linear support vector machine (SVM) was trained for the prediction task. Applied to a dataset of 301 suspicious breast lesions with a leave-one-case-out validation method, the areas under the ROC curve (AUC) were 0.762 and 0.792 for the traditional CAD scheme and the proposed deep learning based CAD scheme, respectively. An ensemble classifier that combines the classification scores generated by the two schemes yielded an improved AUC of 0.813. The study results demonstrate the feasibility and potentially improved performance of applying this new hybrid deep learning approach to develop a CAD scheme using a relatively small dataset of medical images.
Just Noticeable Distortion Model and Its Application in Color Image Watermarking
NASA Astrophysics Data System (ADS)
Liu, Kuo-Cheng
In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and texture of grayscale images, the crossed masking effect given by the interaction between luminance and chrominance components, and the effect given by the variance within the local region of the target coefficient, are investigated so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency are obtained by embedding the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, inserting watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining watermark transparency.
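The embedding principle, perturbing each coefficient by no more than its JND threshold so the watermark has maximum strength while remaining invisible, can be sketched generically. The sketch below uses plain additive embedding on a handful of made-up coefficients, not the paper's wavelet-domain scheme.

```python
coeffs = [12.0, -7.5, 3.2, 40.0]   # made-up transform coefficients
jnd = [0.8, 0.5, 0.2, 2.5]         # assumed per-coefficient JND thresholds
wm = [1, -1, 1, -1]                # spread-spectrum watermark bits

# Embed at maximum imperceptible strength: perturb each coefficient by
# exactly its visibility threshold, signed by the watermark bit
watermarked = [c + t * b for c, t, b in zip(coeffs, jnd, wm)]

# Sanity check: correlating the embedding distortion with the watermark
# recovers a strong positive detection statistic
corr = sum((w - c) * b for w, c, b in zip(watermarked, coeffs, wm)) / len(wm)
print(watermarked)
print(corr)
```

Coefficients with large JND thresholds (busy, highly masked regions) absorb a stronger watermark, which is where the robustness gain over a uniform-strength embedding comes from.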
Crystal collimator systems for high energy frontier
NASA Astrophysics Data System (ADS)
Sytov, A. I.; Tikhomirov, V. V.; Lobko, A. S.
2017-07-01
Crystal collimators can potentially improve considerably the cleaning performance of the presently used collimation systems based on amorphous collimators. A crystal-based collimation scheme, which relies on channeling deflection of particles in bent crystals, has been proposed and extensively studied both theoretically and experimentally. However, since the efficiency of particle capture into the channeling regime does not exceed ninety percent, this collimation scheme partly suffers from the same leakage problems as schemes using amorphous collimators. To further improve the cleaning efficiency of the crystal-based collimation system and meet the requirements of the FCC, we suggest here a double crystal-based collimation scheme, in which a second crystal is introduced to enhance the deflection of particles that escape capture into the channeling regime in the first crystal. The effect of multiple volume reflection in one bent crystal, and the same effect in a sequence of crystals, is simulated and compared for different crystal numbers and materials at an energy of 50 TeV. To enhance the efficiency of use of the first crystal of the suggested double crystal-based scheme, we propose two methods: increasing the probability of particle capture into the channeling regime at the first crystal passage by fabricating a crystal cut, and amplifying the deflection of nonchanneled particles through multiple volume reflection in one bent crystal, accompanying the particle channeling by a skew plane. We simulate both methods at the 50 TeV FCC energy.
Quality of Recovery Evaluation of the Protection Schemes for Fiber-Wireless Access Networks
NASA Astrophysics Data System (ADS)
Fu, Minglei; Chai, Zhicheng; Le, Zichun
2016-03-01
With the rapid development of fiber-wireless (FiWi) access networks, protection schemes have received increasing attention due to the risk of huge data loss when failures occur. However, there are few studies that evaluate the performance of FiWi protection schemes under a unified evaluation criterion. In this paper, the quality of recovery (QoR) method was adopted to evaluate the performance of three typical protection schemes (the MPMC, OBOF and RPMF schemes) against segment-level failures in FiWi access networks. The QoR models of the three schemes were derived in terms of availability, quality of the backup path, recovery time and redundancy. To compare the performance of the three protection schemes comprehensively, five different classes of network services, such as emergency service, prioritized elastic service and conversational service, were evaluated by assigning different QoR weights. Simulation results showed that, for most service cases, the RPMF scheme proved to be the best solution for enhancing survivability when planning a FiWi access network.
NASA Astrophysics Data System (ADS)
Korobov, A. E.; Golovastov, S. V.
2015-11-01
The influence of an ejector nozzle extension on the gas flow in a pulse detonation engine was investigated numerically and experimentally. Detonation was initiated in a stoichiometric hydrogen-oxygen mixture in a cylindrical detonation tube. A cylindrical ejector was constructed and mounted at the open end of the tube. Thrust, air consumption and detonation parameters were measured in single and multiple regimes of operation. An axisymmetric model was used in the numerical investigation. The Navier-Stokes equations were solved using a second-order accurate Roe finite-difference scheme. Initial conditions were estimated on the basis of experimental data. Numerical results were validated against experimental data.
He, Alex Jingwei; Wu, Shaolong
2017-12-01
China's remarkable progress in building a comprehensive social health insurance (SHI) system was swift and impressive. Yet the country's decentralized and incremental approach towards universal coverage has created a fragmented SHI system under which a series of structural deficiencies have emerged with negative impacts. First, contingent on local conditions and financing capacity, benefit packages vary considerably across schemes, leading to systematic inequity. Second, the existence of multiple schemes, complicated by massive migration, has resulted in weak portability of SHI, creating further barriers to access. Third, many individuals are enrolled in multiple schemes, which causes inefficient use of government subsidies. Moral hazard and adverse selection are not effectively managed. The Chinese government announced its blueprint for integrating the urban and rural resident schemes in early 2016, paving the way for the ultimate consolidation of all SHI schemes and equal benefits for all. This article proposes three policy alternatives to inform the consolidation: (1) a single-pool system at the prefectural level with significant government subsidies, (2) a dual-pool system at the prefectural level with risk-equalization mechanisms, and (3) a household approach without merging existing pools. Vertical integration to the provincial level is unlikely to happen in the near future. Two caveats are raised to inform this transition towards universal health coverage.
NASA Astrophysics Data System (ADS)
Chen, Dechao; Zhang, Yunong
2017-10-01
Dual-arm redundant robot systems are usually required to handle primary tasks repetitively and synchronously in practical applications. In this paper, a jerk-level synchronous repetitive motion scheme is proposed to remedy the joint-angle drift phenomenon and achieve synchronous control of a dual-arm redundant robot system. The proposed scheme is resolved at the jerk level, which makes the joint variables, i.e. joint angles, joint velocities and joint accelerations, smooth and bounded. In addition, two types of dynamics algorithms, i.e. gradient-type (G-type) and zeroing-type (Z-type) dynamics algorithms, for the design of repetitive motion variable vectors, are presented in detail with the corresponding circuit schematics. Subsequently, the proposed scheme is reformulated as two dynamical quadratic programs (DQPs) and further integrated into a unified DQP (UDQP) for the synchronous control of a dual-arm robot system. The optimal solution of the UDQP is found by the piecewise-linear projection equation neural network. Moreover, simulations and comparisons based on a six-degrees-of-freedom planar dual-arm redundant robot system substantiate the operation effectiveness and tracking accuracy of the robot system with the proposed scheme for repetitive motion and synchronous control.
Mang, Andreas; Biros, George
2017-01-01
We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraint is a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.
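The semi-Lagrangian time stepping used for the transport equation can be illustrated in one dimension: each new grid value is the old field interpolated at the departure point x − v·dt, which keeps the step stable even for CFL numbers above one. The sketch below uses linear interpolation on a periodic grid and is only a first-order stand-in for the paper's second-order scheme.

```python
def semi_lagrangian_step(u, v, dt, dx):
    """One semi-Lagrangian step for u_t + v u_x = 0 on a periodic grid."""
    n = len(u)
    out = []
    for i in range(n):
        x_dep = i - v * dt / dx          # departure point in index units
        j = int(x_dep // 1) % n          # cell containing the departure point
        frac = x_dep - (x_dep // 1)
        # linear interpolation of the old field at the departure point
        out.append((1 - frac) * u[j] + frac * u[(j + 1) % n])
    return out

# Advect a spike one full period around a periodic grid; with an integer
# CFL number the linear interpolation is exact, so the spike returns intact
n, v, dx = 16, 1.0, 1.0
u = [0.0] * n
u[3] = 1.0
dt = 2.0                                 # CFL = 2: beyond any explicit limit
for _ in range(n // 2):
    u = semi_lagrangian_step(u, v, dt, dx)
print(u.index(1.0))
```

The unconditional stability shown here is the reason the semi-Lagrangian scheme beats the explicit Runge-Kutta baseline: it allows far larger time steps for the same spatial resolution.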
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
An implicit finite difference method of fourth order accuracy in space and time is introduced for the numerical solution of one-dimensional systems of hyperbolic conservation laws. The basic form of the method is a two-level scheme which is unconditionally stable and nondissipative. The scheme uses only three mesh points at level t and three mesh points at level t + delta t. The dissipative version of the basic method given is conditionally stable under the CFL (Courant-Friedrichs-Lewy) condition. This version is particularly useful for the numerical solution of problems with strong but nonstiff dynamic features, where the CFL restriction is reasonable on accuracy grounds. Numerical results are provided to illustrate properties of the proposed method.
Decentralising Zimbabwe’s water management: The case of Guyu-Chelesa irrigation scheme
NASA Astrophysics Data System (ADS)
Tambudzai, Rashirayi; Everisto, Mapedza; Gideon, Zhou
Smallholder irrigation schemes are largely supply driven, such that they exclude the beneficiaries from management decisions and from the choice of the irrigation schemes that would best suit their local needs. It is against this background that the decentralisation framework and the Dublin Principles on Integrated Water Resource Management (IWRM) emphasise the need for a participatory approach to water management. The Zimbabwean government has gone a step further in decentralising the management of irrigation schemes, that is, promoting farmer-managed irrigation schemes so as to ensure effective management of scarce community-based land and water resources. The study set out to investigate the way in which the Guyu-Chelesa irrigation scheme is managed, with specific emphasis on the role of the Irrigation Management Committee (IMC), the level of accountability and the powers devolved to the IMC. Merrey’s 2008 critique of IWRM also informs this study, which views irrigation as going beyond infrastructure by looking at how institutions and decision-making processes play out at various levels, including at the irrigation scheme level. The study was positioned on the hypothesis that ‘decentralised or autonomous irrigation management enhances the sustainability and effectiveness of irrigation schemes’. To validate or falsify this hypothesis, data were gathered through desk research, in the form of reviews of articles and documents from within the scheme, and field research, in the form of questionnaire surveys, key informant interviews and field observation. The Statistical Package for Social Sciences was used to analyse data quantitatively, whilst content analysis was utilised to analyse qualitative data thematically. Comparative analysis was carried out, as the Guyu-Chelesa irrigation scheme was compared with the experiences of other smallholder irrigation schemes within Zimbabwe and the Sub-Saharan African region at large.
The findings were that whilst the scheme is a model of a decentralised entity whose importance lies in improving food security and creating employment within the community, it falls short of representing a downwardly accountable decentralised irrigation scheme. The scheme faces various challenges, which include operation below capacity, the absence of specialised technical personnel to address infrastructure breakdowns, uneven distribution of water pressure, an incapacitated Irrigation Management Committee (IMC), the absence of a locally legitimate constitution, compromised beneficiary participation and unclear lines of communication between the various institutions involved in water management. Understanding decentralisation is important since one of the key tenets of IWRM is stakeholder participation, which the decentralisation framework interrogates.
Ecrh on Asdex Upgrade - System Extension, New Modes of Operation, Plasma Physics Results
NASA Astrophysics Data System (ADS)
Stober, J.; Wagner, D.; Giannone, L.; Leuterer, F.; Marascheck, M.; Mlynek, A.; Monaco, F.; Münich, M.; Poli, E.; Reich, M.; Schmid-Lorch, D.; Schütz, H.; Schweinzer, J.; Treutterer, W.; Zohm, H.; Meier, A.; Scherer, Th.; Flamm, J.; Thumm, M.; Höhnle, H.; Kasparek, W.; Stroth, U.; Chirkov, A. V.; Denisov, G. G.; Litvak, A.; Malygin, S. A.; Myasnikov, V. E.; Nichiporenko, V. O.; Popov, L. G.; Soluyanova, E. A.; Tai, E. M.
2011-02-01
The ECRH system at ASDEX Upgrade is currently being extended from 1.6 MW to 5 MW. The extension so far consists of 2-frequency units, which use single diamond-disk vacuum windows to transmit power at the natural resonances of these disks (105 & 140 GHz). For the last unit of this extension, two additional intermediate non-resonant frequencies are foreseen, requiring new window concepts. For the torus, a polarisation-independent double-disk window has been developed. For the gyrotron, a grooved diamond disk is currently favoured, whose grooved surfaces act as an anti-reflective coating. Since ASDEX Upgrade operates with completely W-covered plasma-facing components, central ECRH is often applied to suppress W-accumulation in the plasma center. In order to extend the operational range for central ECRH, X3- and O2-heating schemes were developed. Both are characterized by incomplete single-path absorption. For X3 heating, the X2 resonance at the pedestal on the high-field side is used as a 'beam-dump'; for the O2 scheme, a specific reflector tile on the inner heat shield enforces a second path through the plasma center. The geometry for NTM control had to be modified to allow simultaneous central heating. In real time, the ECRH position can be determined either by ray-tracing based on real-time equilibria and density profiles, or from ECE for modulated ECRH power. Fast real-time ECE also allows the NTM position to be determined. Further major physics applications of the system are summarized.
Efficient cooling of quantized vibrations using a four-level configuration
NASA Astrophysics Data System (ADS)
Yan, Lei-Lei; Zhang, Jian-Qi; Zhang, Shuo; Feng, Mang
2016-12-01
Cooling vibrational degrees of freedom down to ground states is essential to observation of quantum properties of systems with mechanical vibration. We propose two cooling schemes employing four internal levels of the systems, which achieve the ground-state cooling in an efficient fashion by completely deleting the carrier and first-order blue-sideband transitions. The schemes, based on quantum interference and Stark-shift gates, are robust to fluctuations of laser intensity and frequency. The feasibility of the schemes is justified using current laboratory technology. In practice, our proposal readily applies to a nanodiamond nitrogen-vacancy center levitated in an optical trap or attached to a cantilever.
Validation of Microphysical Schemes in a CRM Using TRMM Satellite
NASA Astrophysics Data System (ADS)
Li, X.; Tao, W.; Matsui, T.; Liu, C.; Masunaga, H.
2007-12-01
The microphysical scheme in the Goddard Cumulus Ensemble (GCE) model has been its most heavily developed component over the past decade. The cloud-resolving model now has microphysical schemes ranging from the original Lin-type bulk scheme, to improved bulk schemes, to a two-moment scheme, to a detailed bin spectral scheme. Even with the most sophisticated bin scheme, many uncertainties still exist, especially in the ice-phase microphysics. In this study, we take advantage of the long-term TRMM observations, especially the cloud profiles observed by the precipitation radar (PR), to validate microphysical schemes in simulations of Mesoscale Convective Systems (MCSs). Two contrasting cases are studied: a midlatitude summertime continental MCS with leading convection and a trailing stratiform region, and an oceanic MCS in the tropical western Pacific. The simulated cloud structures and particle sizes are fed into a forward radiative transfer model to simulate the TRMM satellite sensors, i.e., the PR, the TRMM microwave imager (TMI) and the visible and infrared scanner (VIRS). MCS cases that match the structure and strength of the simulated systems over the 10-year period are used to construct statistics for the different sensors. These statistics are then compared with the synthetic satellite data obtained from the forward radiative transfer calculations. It is found that the GCE model simulates the contrast between the continental and oceanic cases reasonably well, with less ice scattering in the oceanic case compared with the continental case. However, the simulated ice-scattering signals for both PR and TMI are generally stronger than the observations, especially for the bulk scheme and at the upper levels of the stratiform region. This indicates larger, denser snow/graupel particles at these levels. Adjusting the microphysical schemes in the GCE model according to the observations, especially the 3D cloud structure observed by the TRMM PR, results in much better agreement.
A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique
NASA Technical Reports Server (NTRS)
Barth, T. J.; Steger, J. L.
1985-01-01
An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or the generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
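To make the polling-switch-selection side of the problem concrete, here is a toy greedy set-cover heuristic: given fixed flow routes, repeatedly poll the switch that observes the most still-uncovered flows. This is an illustrative stand-in, not the paper's ILP model or its multi-rooted-tree algorithm, and the example routes are invented.

```python
def greedy_polling(routes):
    """routes: {flow_id: set of switches on its path}; return switches to poll."""
    switches = sorted({s for path in routes.values() for s in path})
    uncovered = set(routes)
    polled = set()
    while uncovered:
        # poll the switch that covers the most still-uncovered flows
        best = max(switches, key=lambda s: sum(1 for f in uncovered if s in routes[f]))
        polled.add(best)
        uncovered = {f for f in uncovered if best not in routes[f]}
    return polled

routes = {                       # invented example: four flows over four switches
    "f1": {"s1", "s2"},
    "f2": {"s2", "s3"},
    "f3": {"s3", "s4"},
    "f4": {"s2", "s4"},
}
polled = greedy_polling(routes)
print(sorted(polled))
```

The joint optimization in the paper goes further: because the controller also chooses the routes, it can steer flows through a small set of polled switches instead of taking the routes as given.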
The Osher scheme for non-equilibrium reacting flows
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zeldovich-Neumann-Doring) detonation problem for which spurious numerical solutions which propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.
A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows
NASA Astrophysics Data System (ADS)
Lei, Xin; Li, Jiequan
2018-04-01
This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with the isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. We then construct a conservative Godunov-based scheme with a high-order accurate extension using the generalized Riemann problem solver, through a detailed analysis of kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for shock-interface and shock-bubble interaction problems, which display the excellent performance of this type of scheme and demonstrate that nonphysical oscillations around material interfaces are substantially suppressed.
The Adaptive Biasing Force Method: Everything You Always Wanted To Know but Were Afraid To Ask
2014-01-01
In the host of numerical schemes devised to calculate free energy differences by way of geometric transformations, the adaptive biasing force algorithm has emerged as a promising route to map complex free-energy landscapes. It relies upon the simple concept that as a simulation progresses, a continuously updated biasing force is added to the equations of motion, such that in the long-time limit it yields a Hamiltonian devoid of an average force acting along the transition coordinate of interest. This means that sampling proceeds uniformly on a flat free-energy surface, thus providing reliable free-energy estimates. Much of the appeal of the algorithm to the practitioner is in its physically intuitive underlying ideas and the absence of any requirements for prior knowledge about free-energy landscapes. Since its inception in 2001, the adaptive biasing force scheme has been the subject of considerable attention, from in-depth mathematical analysis of convergence properties to novel developments and extensions. The method has also been successfully applied to many challenging problems in chemistry and biology. In this contribution, the method is presented in a comprehensive, self-contained fashion, discussing with a critical eye its properties, applicability, and inherent limitations, as well as introducing novel extensions. Through free-energy calculations of prototypical molecular systems, many methodological aspects are examined, from stratification strategies to overcoming the so-called hidden barriers in orthogonal space, relevant not only to the adaptive biasing force algorithm but also to other importance-sampling schemes. On the basis of the discussions in this paper, a number of good practices for improving the efficiency and reliability of the computed free-energy differences are proposed. PMID:25247823
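The core update the abstract describes, a running average of the force along the transition coordinate that is subtracted back as a bias, can be sketched in one dimension. This is a minimal overdamped-Langevin sketch on a double-well potential U(x) = (x² − 1)²; the bin layout, ramp-up threshold of 50 samples, and all step sizes are illustrative choices, not parameters from the paper.

```python
import math
import random

def force(x):
    """Deterministic force -dU/dx for the double well U(x) = (x^2 - 1)^2."""
    return -4.0 * x * (x * x - 1.0)

def run_abf(steps=200000, dt=1e-3, kT=0.5, nbins=40, lo=-1.5, hi=1.5):
    width = (hi - lo) / nbins
    f_sum = [0.0] * nbins          # running sum of sampled forces per bin
    count = [0] * nbins            # samples per bin
    x = -1.0                       # start in the left well
    rng = random.Random(0)
    for _ in range(steps):
        b = min(nbins - 1, max(0, int((x - lo) / width)))
        f = force(x)
        f_sum[b] += f
        count[b] += 1
        # The bias opposes the running mean force once the bin is sampled
        # enough, flattening the free-energy surface in the long-time limit.
        bias = -f_sum[b] / count[b] if count[b] > 50 else 0.0
        noise = math.sqrt(2.0 * kT * dt) * rng.gauss(0.0, 1.0)
        x += (f + bias) * dt + noise
        x = min(hi, max(lo, x))    # crude reflecting boundaries
    mean_force = [s / c if c else 0.0 for s, c in zip(f_sum, count)]
    return count, mean_force

counts, mean_force = run_abf(steps=20000)
```

Integrating the converged mean force across bins then yields the free-energy profile along the coordinate, which is how the estimator delivers free-energy differences.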
Megavoltage irradiation of neoplasms of the nasal and paranasal cavities in 77 dogs.
Théon, A P; Madewell, B R; Harb, M F; Dungworth, D L
1993-05-01
Seventy-seven dogs with malignant tumors of the nasal and paranasal cavities were treated by use of radiotherapy. The tumors included carcinomas (58) and sarcomas (19). Radiographic findings, including site of involvement and tumor extension, were the basis of clinical staging. Staging was performed according to the tumor, node, metastasis staging of the World Health Organization, and a modified staging scheme based on prognostic factors that seemed to correlate best with response to treatment. All irradiations were done with a telecobalt 60 unit. Fifty-six dogs were treated with irradiation alone, and 21 had partial tumor resection prior to radiotherapy. Treatment dose was 48 Gy (minimal tumor dose) administered on a Monday-Wednesday-Friday basis at 4 Gy/fraction over 4 weeks. The irradiation technique emphasized a rostral field with a generous treatment volume. Duration of follow-up after irradiation ranged from 1 month to 61 months. The 1- and 2-year overall survival rates were 60.3% and 25%, respectively, and the 1- and 2-year relapse-free survival rates were 38.2% and 17.6%, respectively. Results of histologic examination and our modified staging scheme were significant (P = 0.02 and P = 0.04, respectively) prognostic factors of relapse-free survival. Conversely, tumor site, tumor extension, World Health Organization clinical stage, and cytoreductive surgery prior to irradiation did not affect the outcome of treatment. According to our modified staging scheme, dogs with stage-2 disease have a poorer prognosis than dogs with stage-1 disease, with a relative risk of relapse 2.3-fold higher. Dogs with carcinoma had a poorer prognosis than dogs with sarcoma (predominantly chondrosarcoma), with a relative risk of relapse 3.3-fold higher.(ABSTRACT TRUNCATED AT 250 WORDS)
Wang, Jiajun; Li, Xiaoting; You, Ya; Xintong, Yang; Wang, Ying; Li, Qunxiang
2018-06-21
Mimicking natural photosynthesis in green plants, artificial Z-scheme photocatalysis enables more efficient utilization of solar energy for photocatalytic water splitting. Most currently designed g-C3N4-based Z-scheme heterojunctions rely on metal-containing semiconductor photocatalysts, so exploiting metal-free photocatalysts for Z-scheme water splitting is of huge interest. Herein, we propose two metal-free C3N/g-C3N4 heterojunctions with a C3N monolayer covering a g-C3N4 sheet (monolayer or bilayer) and systematically explore their electronic structures, charge distributions, and photocatalytic properties by performing extensive hybrid density functional calculations. We clearly reveal that the relatively strong built-in electric fields around their respective interface regions, caused by charge transfer from the C3N monolayer to the g-C3N4 monolayer or bilayer, result in band bending, which makes the transfer of photogenerated carriers in these two heterojunctions follow the Z-scheme rather than the type-II pathway. Moreover, the photogenerated electrons and holes in these two C3N/g-C3N4 heterojunctions not only can be efficiently separated but also have strong redox abilities for water oxidation and reduction. Compared with isolated g-C3N4 sheets, the light absorption in the visible to near-infrared region is significantly enhanced in the proposed heterojunctions. These theoretical findings suggest that the proposed metal-free C3N/g-C3N4 heterojunctions are promising direct Z-scheme photocatalysts for solar water splitting. © 2018 IOP Publishing Ltd.
A Stereo Music Preprocessing Scheme for Cochlear Implant Users.
Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc
2015-10-01
Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
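The abstract's separation of harmonic and percussive components from a spectrogram is commonly done with median filtering: sustained tones persist along time, while drum hits spread across frequency. Below is a minimal sketch of that standard technique on a toy magnitude spectrogram (a list of time frames, each a list of bin magnitudes); the filter length, soft-mask form, and test signal are illustrative assumptions, not details of the paper's scheme.

```python
import statistics

def median_filter(seq, k=3):
    """Sliding median of odd window k, shrinking at the edges."""
    h = k // 2
    return [statistics.median(seq[max(0, i - h):i + h + 1])
            for i in range(len(seq))]

def hp_separate(spec):
    """Split a magnitude spectrogram into harmonic and percussive parts."""
    n_t, n_f = len(spec), len(spec[0])
    # Harmonic estimate: median over time for each frequency bin.
    harm = [[0.0] * n_f for _ in range(n_t)]
    for f in range(n_f):
        col = median_filter([spec[t][f] for t in range(n_t)])
        for t in range(n_t):
            harm[t][f] = col[t]
    # Percussive estimate: median over frequency within each frame.
    perc = [median_filter(frame) for frame in spec]
    # Soft mask: split each bin in proportion to the two estimates.
    out_h, out_p = [], []
    for t in range(n_t):
        row_h, row_p = [], []
        for f in range(n_f):
            tot = (harm[t][f] + perc[t][f]) or 1.0
            row_h.append(spec[t][f] * harm[t][f] / tot)
            row_p.append(spec[t][f] * perc[t][f] / tot)
        out_h.append(row_h)
        out_p.append(row_p)
    return out_h, out_p

spec = [[0.0] * 4 for _ in range(5)]
for t in range(5):
    spec[t][1] = 1.0               # sustained tone -> harmonic
for f in range(4):
    spec[2][f] += 1.0              # broadband click at frame 2 -> percussive
out_h, out_p = hp_separate(spec)
```

Once the components are separated, per-component gains (e.g. boosting the percussive part carrying drums) can remix the signal, which is the kind of relative instrument-level adjustment the preprocessing scheme exposes to the listener.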
Gonioscopy in the dog: inter-examiner variability and the search for a grading scheme.
Oliver, J A C; Cottrell, B C; Newton, J R; Mellersh, C S
2017-11-01
To investigate inter-examiner variability in gonioscopic evaluation of pectinate ligament abnormality in dogs and to assess level of inter-examiner agreement for four different gonioscopy grading schemes. Two examiners performed gonioscopy in 98 eyes of 49 Welsh springer spaniel dogs and estimated the percentage circumference of iridocorneal angle affected by pectinate ligament abnormality to the nearest 5%. Percentage scores assigned to each eye by the two examiners were compared. Inter-examiner agreement was assessed following assignment of the percentage scores to each of four grading schemes by Cohen's kappa statistic. There was a strong positive correlation between the results of the two examiners (R=0·91). In general, Examiner 1 scored individual eyes higher than Examiner 2, especially for eyes in which both examiners diagnosed pectinate ligament abnormality. A "good" level of agreement could only be achieved with a gonioscopy grading scheme of no more than three categories and with a relatively large intermediate bandwidth (κ=0·68). A three-tiered grading scheme might represent an improvement on hereditary eye disease schemes which simply classify dogs to be either "affected" or "unaffected" for pectinate ligament abnormality. However, the large intermediate bandwidth of this scheme would only allow for the additional detection of those dogs with marked progression of pectinate ligament abnormality which would be considered most at risk of primary closed-angle glaucoma. © 2017 British Small Animal Veterinary Association.
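The inter-examiner agreement statistic used in the abstract, Cohen's kappa, compares observed agreement against the agreement expected by chance from each rater's category frequencies. A minimal sketch follows; the example labels are invented for illustration, not data from the study.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels over the same items."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = set(r1) | set(r2)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: product of each rater's marginal frequencies.
    expected = sum(c1[c] * c2[c] for c in cats) / (n * n)
    return (observed - expected) / (1.0 - expected)

rater1 = ["unaffected", "unaffected", "affected", "affected"]
rater2 = ["unaffected", "unaffected", "affected", "unaffected"]
kappa = cohens_kappa(rater1, rater2)
```

Because kappa discounts chance agreement, a coarser grading scheme (fewer, wider categories) tends to raise it, which matches the abstract's finding that "good" agreement required no more than three tiers.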
Method for generating maximally entangled states of multiple three-level atoms in cavity QED
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Guangsheng; Li Shushen; Feng Songlin
2004-03-01
We propose a scheme to generate maximally entangled states (MESs) of multiple three-level atoms in microwave cavity QED based on the resonant atom-cavity interaction. In the scheme, multiple three-level atoms initially in their ground states are sequentially sent through two suitably prepared cavities. After a process of appropriate atom-cavity interaction, a subsequent measurement on the second cavity field projects the atoms onto the MESs. The practical feasibility of this method is also discussed.
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision that is based on the fuzzy properties of image regions and effectively reduces the computational burden of the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge-Coupled Device (CCD) TV cameras.