Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
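A minimal sketch of the kind of Bayesian parameter constraint described above: a random-walk Metropolis sampler fitting a power-law process rate to synthetic observations. All names, priors, and values are illustrative assumptions, not the BOSS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations" of a power-law process rate dM/dt = a * M**b
a_true, b_true, sigma = 1e-3, 2.0, 1e-5
M = np.linspace(0.1, 1.0, 20)                  # a prognostic DSD moment
obs = a_true * M**b_true + rng.normal(0, sigma, M.size)

def log_posterior(theta):
    log_a, b = theta
    if not (-10.0 < log_a < 0.0 and 0.0 < b < 4.0):  # flat prior with bounds
        return -np.inf
    model = np.exp(log_a) * M**b
    return -0.5 * np.sum(((obs - model) / sigma) ** 2)

# Random-walk Metropolis sampler
theta = np.array([np.log(1e-2), 1.0])
lp = log_posterior(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                       # discard burn-in
print("posterior means a, b:", np.exp(chain[:, 0]).mean(), chain[:, 1].mean())
```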
Double dissociation of value computations in orbitofrontal and anterior cingulate neurons
Kennerley, Steven W.; Behrens, Timothy E. J.; Wallis, Jonathan D.
2011-01-01
Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplexed chosen value across decision parameters using a unified encoding scheme, and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters. PMID:22037498
Cryptanalysis of SFLASH with Slightly Modified Parameters
NASA Astrophysics Data System (ADS)
Dubois, Vivien; Fouque, Pierre-Alain; Stern, Jacques
SFLASH is a signature scheme which belongs to a family of multivariate schemes proposed by Patarin et al. in 1998 [9]. The SFLASH scheme itself was designed in 2001 [8] and was selected in 2003 by the NESSIE European Consortium [6] as the best known solution for implementation on low-cost smart cards. In this paper, we show that slight modifications of the parameters of SFLASH within the general family initially proposed render the scheme insecure. The attack uses simple linear algebra and allows one to forge a signature for an arbitrary message in a matter of minutes for practical parameters, using only the public key. Although SFLASH itself is not amenable to our attack, it is worrying to observe that no rationale was ever offered for this "lucky" choice of parameters.
Provably secure identity-based identification and signature schemes from code assumptions
Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security. PMID:28809940
Provably secure identity-based identification and signature schemes from code assumptions.
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security.
A perturbative solution to metadynamics ordinary differential equation
NASA Astrophysics Data System (ADS)
Tiwary, Pratyush; Dama, James F.; Parrinello, Michele
2015-12-01
Metadynamics is a popular enhanced sampling scheme wherein by periodic application of a repulsive bias, one can surmount high free energy barriers and explore complex landscapes. Recently, metadynamics was shown to be mathematically well founded, in the sense that the biasing procedure is guaranteed to converge to the true free energy surface in the long time limit irrespective of the precise choice of biasing parameters. A differential equation governing the post-transient convergence behavior of metadynamics was also derived. In this short communication, we revisit this differential equation, expressing it in a convenient and elegant Riccati-like form. A perturbative solution scheme is then developed for solving this differential equation, which is valid for any generic biasing kernel. The solution clearly demonstrates the robustness of metadynamics to choice of biasing parameters and gives further confidence in the widely used method.
A perturbative solution to metadynamics ordinary differential equation.
Tiwary, Pratyush; Dama, James F; Parrinello, Michele
2015-12-21
Metadynamics is a popular enhanced sampling scheme wherein by periodic application of a repulsive bias, one can surmount high free energy barriers and explore complex landscapes. Recently, metadynamics was shown to be mathematically well founded, in the sense that the biasing procedure is guaranteed to converge to the true free energy surface in the long time limit irrespective of the precise choice of biasing parameters. A differential equation governing the post-transient convergence behavior of metadynamics was also derived. In this short communication, we revisit this differential equation, expressing it in a convenient and elegant Riccati-like form. A perturbative solution scheme is then developed for solving this differential equation, which is valid for any generic biasing kernel. The solution clearly demonstrates the robustness of metadynamics to choice of biasing parameters and gives further confidence in the widely used method.
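For readers wanting to experiment, a generic Riccati-type ODE of the form discussed can be integrated numerically in a few lines. The coefficients below are placeholders; the paper's coefficients depend on the biasing kernel and are not reproduced here.

```python
# Illustrative numerical integration of a generic Riccati-type ODE,
# dy/dt = q0 + q1*y + q2*y**2, with placeholder constant coefficients.
import numpy as np
from scipy.integrate import solve_ivp

def riccati(t, y, q0=1.0, q1=-0.5, q2=-0.1):
    return q0 + q1 * y + q2 * y**2

sol = solve_ivp(riccati, (0.0, 50.0), [0.0])
print("long-time value:", sol.y[0, -1])  # approaches the stable fixed point
```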
An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro
2013-09-01
In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve the Ohmic dissipation part of the induction equation with the super-time-stepping method (STS), which allows us to take a longer time step than the Courant-Friedrichs-Lewy (CFL) stability condition permits. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that the optimal choice of the STS parameters for Ohmic dissipation in SPMHD is νsts ≈ 0.01 and Nsts ≈ 5.
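The super-time-stepping substeps referred to here have a standard closed form (Alexiades et al. 1996); a sketch with the abstract's reported parameter choice ν ≈ 0.01, N ≈ 5 follows. The explicit time-step value is illustrative.

```python
# Super-time-stepping (STS) substeps after Alexiades et al. (1996).
import numpy as np

def sts_substeps(dt_expl, nu, N):
    """Substep sizes whose sum exceeds N explicit (CFL-limited) steps."""
    j = np.arange(1, N + 1)
    return dt_expl / ((nu - 1) * np.cos((2 * j - 1) * np.pi / (2 * N)) + 1 + nu)

dt_expl = 1.0e-3                      # explicit diffusion stability limit (illustrative)
taus = sts_substeps(dt_expl, nu=0.01, N=5)
# With nu = 0.01, N = 5, the composite step covers ~19 explicit steps
# using only 5 substeps, i.e., a ~4x speedup for the dissipation update.
print("composite STS step / explicit step:", taus.sum() / dt_expl)
```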
TCP performance in ATM networks: ABR parameter tuning and ABR/UBR comparisons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien Fang; Lin, A.
1996-02-27
This paper explores two issues in TCP performance over ATM networks: ABR parameter tuning and performance comparison of binary-mode ABR with enhanced UBR services. Of the fifteen parameters defined for ABR, two dominate binary-mode ABR performance: the Rate Increase Factor (RIF) and the Rate Decrease Factor (RDF). Using simulations, we study the effects of these two parameters on TCP-over-ABR performance. We compare TCP performance with different ABR parameter settings in terms of throughputs and fairness. The effects of different buffer sizes and LAN/WAN distances are also examined. We then compare TCP performance with the best ABR parameter setting against the corresponding UBR service enhanced with Early Packet Discard and also with a fair buffer allocation scheme. The results show that TCP performance over binary-mode ABR is very sensitive to parameter value settings, and that a poor choice of parameters can result in ABR performance worse than that of the much less expensive UBR-EPD scheme.
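As a toy illustration of how RIF and RDF drive a binary-mode ABR source's allowed cell rate: a simplified additive-increase/multiplicative-decrease update. The rule and all values are assumptions for the sketch, not the paper's simulator.

```python
# Simplified binary-mode ABR rate adaptation driven by RIF/RDF.
def update_acr(acr, congested, pcr, mcr, rif, rdf):
    if congested:
        acr *= (1.0 - rdf)          # multiplicative decrease by RDF
    else:
        acr += rif * pcr            # additive increase by RIF * PCR
    return max(mcr, min(acr, pcr))  # keep the rate within [MCR, PCR]

acr = 10.0                          # current allowed cell rate (Mb/s)
for congested in [False, False, True, False, True, True, False]:
    acr = update_acr(acr, congested, pcr=155.0, mcr=0.1, rif=1 / 64, rdf=1 / 16)
    print(round(acr, 2))
```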
Quadratic trigonometric B-spline for image interpolation using GA
Abbas, Samreen; Irshad, Misbah
2017-01-01
In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems in two-dimensional digital image interpolation. The newly constructed spline is used to design an image interpolation scheme together with a soft computing technique, the Genetic Algorithm (GA), which optimizes the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for the problems associated with image interpolation. PMID:28640906
Quadratic trigonometric B-spline for image interpolation using GA.
Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah
2017-01-01
In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems in two-dimensional digital image interpolation. The newly constructed spline is used to design an image interpolation scheme together with a soft computing technique, the Genetic Algorithm (GA), which optimizes the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for the problems associated with image interpolation.
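A minimal sketch of the GA optimization loop described (selection, crossover, mutation against a PSNR fitness). The spline interpolator is a placeholder function here, and all population sizes and rates are illustrative.

```python
# GA loop sketch for tuning two spline control parameters against PSNR.
import numpy as np

rng = np.random.default_rng(1)

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse) if mse > 0 else np.inf

def interpolate(image, params):
    # Placeholder for the quadratic trigonometric B-spline upscaler.
    return np.clip(image + 0.01 * (params[0] - params[1]), 0, 1)

reference = rng.random((16, 16))
test_input = reference  # stand-in; a real test would decimate, then upscale

pop = rng.uniform(0, 1, (20, 2))  # population of (alpha, beta) parameter pairs
for generation in range(50):
    fitness = np.array([psnr(interpolate(test_input, p), reference) for p in pop])
    best = pop[np.argsort(fitness)[-10:]]                   # selection
    children = best[rng.integers(0, 10, (10, 2)), [0, 1]]   # per-gene crossover
    children += rng.normal(0, 0.05, children.shape)         # mutation
    pop = np.vstack([best, np.clip(children, 0, 1)])

scores = [psnr(interpolate(test_input, p), reference) for p in pop]
print("best params:", pop[np.argmax(scores)])
```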
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
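A compact sketch of the adaptive-sampling decision described above: a simple kriging (Gaussian-process) estimate with predictive variance, falling back to a full micro-scale simulation when the uncertainty is too large. The kernel, threshold, and data are illustrative assumptions, not the D²KAS implementation.

```python
import numpy as np

def kriging_estimate(x, X, y, length=0.1, noise=1e-8):
    """Simple Gaussian-process (kriging) mean/variance at query point x."""
    k = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * length**2))
    K = k(X[:, None, :], X[None, :, :]) + noise * np.eye(len(X))
    ks = k(X, x)
    w = np.linalg.solve(K, ks)          # kriging weights
    return w @ y, 1.0 - ks @ w          # predictive mean and variance

X = np.random.rand(30, 2)               # cached inputs (e.g., strain states)
y = np.sin(X[:, 0]) + X[:, 1] ** 2      # cached micro-scale flux evaluations
mean, var = kriging_estimate(np.array([0.5, 0.5]), X, y)
if var < 1e-3:
    flux = mean                          # cheap database/kriging prediction
else:
    flux = None                          # would launch the MD simulation here
print(mean, var)
```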
Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f
NASA Astrophysics Data System (ADS)
Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi
2018-03-01
We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h of various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the MS-bar scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the MS-bar renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h → 4f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and are ready for application.
Scheme Variations of the QCD Coupling and Hadronic τ Decays
NASA Astrophysics Data System (ADS)
Boito, Diogo; Jamin, Matthias; Miravitllas, Ramon
2016-10-01
The quantum chromodynamics (QCD) coupling α_s is not a physical observable of the theory, since it depends on conventions related to the renormalization procedure. We introduce a definition of the QCD coupling, denoted by α̂_s, whose running is explicitly renormalization-scheme invariant. The scheme dependence of the new coupling α̂_s is parametrized by a single parameter C, related to transformations of the QCD scale Λ. It is demonstrated that appropriate choices of C can lead to substantial improvements in the perturbative prediction of physical observables. As phenomenological applications, we study e+e- scattering and decays of the τ lepton into hadrons, both being governed by the QCD Adler function.
Avoided-Level-Crossing Spectroscopy with Dressed Matter Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckardt, Andre; Holthaus, Martin
2008-12-12
We devise a method for probing resonances of macroscopic matter waves in shaken optical lattices by monitoring their response to slow parameter changes, and show that such resonances can be disabled by particular choices of the driving amplitude. The theoretical analysis of this scheme reveals far-reaching analogies between dressed atoms and time periodically forced matter waves.
Avoided-Level-Crossing Spectroscopy with Dressed Matter Waves
NASA Astrophysics Data System (ADS)
Eckardt, André; Holthaus, Martin
2008-12-01
We devise a method for probing resonances of macroscopic matter waves in shaken optical lattices by monitoring their response to slow parameter changes, and show that such resonances can be disabled by particular choices of the driving amplitude. The theoretical analysis of this scheme reveals far-reaching analogies between dressed atoms and time periodically forced matter waves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolov, I M
2015-10-31
The formation of a coherent population trapping (CPT) resonance is studied in the interaction of a beam of ⁸⁷Rb atoms with two spatially separated domains of a dichromatic field. Various resonance excitation schemes are compared depending on the choice of operation transitions and the type of polarisation scheme. In the case of a single-velocity atomic beam, the CPT resonance profile is studied as a function of the principal parameters of the system: beam velocity, distance between the optical fields, laser beam dimensions and intensities, and applied permanent magnetic field. The influence of the atomic beam angular divergence and residual beam velocity spread on the resonance quality parameter is estimated.
NASA Astrophysics Data System (ADS)
Manzanero, Juan; Rueda-Ramírez, Andrés M.; Rubio, Gonzalo; Ferrer, Esteban
2018-06-01
In the discontinuous Galerkin (DG) community, several formulations have been proposed to solve PDEs involving second-order spatial derivatives (e.g. elliptic problems). In this paper, we show that, when the discretisation is restricted to the usage of Gauss-Lobatto points, there are important similarities between two common choices: the Bassi-Rebay 1 (BR1) method, and the Symmetric Interior Penalty (SIP) formulation. This equivalence enables the extrapolation of properties from one scheme to the other: a sharper estimation of the minimum penalty parameter for the SIP stability (compared to the more general estimate proposed by Shahbazi [1]), more efficient implementations of the BR1 scheme, and the compactness of the BR1 method for straight quadrilateral and hexahedral meshes.
The FODA-TDMA satellite access scheme - Presentation, study of the system, and results
NASA Astrophysics Data System (ADS)
Celandroni, Nedo; Ferro, Erina
1991-12-01
A description is given of FODA-TDMA, a satellite access scheme designed for mixed traffic. The study of the system is presented and the choice of some parameters is justified. A simplified analytic solution is found, describing the steady-state behavior of the system. Some results of the simulation tests for an already existing hardware environment are also presented for the channel speeds of 2 and 8 Mb/s, considering both the stationary and the transient cases. The results of the experimentation at 2 Mb/s on the satellite Eutelsat-F2 are also presented and compared with the results of the simulation.
The minimal number of parameters in triclinic crystal-field potentials
NASA Astrophysics Data System (ADS)
Mulak, J.
2003-09-01
The optimal parametrization schemes for the crystal-field (CF) potential in fitting procedures are those based on the smallest number of parameters. Surplus parametrizations usually lead to artificial and non-physical solutions. Therefore, symmetry-adapted reference systems are commonly used. Instead of these, however, coordinate systems with the z-axis directed along the principal axes of the CF multipoles (2k-poles) can be applied successfully, particularly for triclinic CF potentials. Due to the irreducibility of the D(k) representations, such a choice can reduce the number of k-order parameters by 2k: from 2k+1 (in the most general case) to only 1 (the axial one). Unfortunately, in general, the numbers of the other-order CF parameters then remain unrestricted. In this way, the number of parameters for the k-even triclinic CF potentials can be reduced by 4, 8 or 12, for k=2, 4 or 6, respectively. Hence, parametrization schemes based on at most 14 parameters can be used. For higher point symmetries this number is usually greater than that for the symmetry-adapted systems. Nonetheless, many instructive correlations between the multipole contributions to the CF interaction are attainable in this way.
Decoy-state quantum key distribution with biased basis choice
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From the simulation results, taking statistical fluctuations into account, we find that in a typical experimental setup the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of the signal states. PMID:23948999
Decoy-state quantum key distribution with biased basis choice.
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From the simulation results, taking statistical fluctuations into account, we find that in a typical experimental setup the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of the signal states.
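A back-of-envelope illustration of why a biased basis choice helps: the sifted fraction for independent biased choices versus the standard 50/50 case. This ignores the decoy statistics and finite-key effects analyzed in the paper.

```python
# Sifted fraction when both sides independently pick the Z basis with
# probability p_z and the X basis otherwise (simplified BB84-style model).
def sifted_fraction(p_z):
    return p_z**2 + (1 - p_z) ** 2

print("standard 50/50 :", sifted_fraction(0.5))   # 0.5
print("biased p_z=0.9 :", sifted_fraction(0.9))   # 0.82
```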
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-08-30
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass properties. In this paper, gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem with unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique to identify the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-01-01
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass properties. In this paper, gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem with unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique to identify the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748
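A minimal recursive least-squares sketch of the kind of online estimator described; the regressor and measurement below are synthetic stand-ins for quantities built from gyro and camera data, and the "true" parameters are invented for the example.

```python
# Recursive least-squares (RLS) identification of unknown parameters.
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([5.0, 0.3])        # e.g., mass and offset (illustrative)

theta = np.zeros(2)                      # parameter estimate
P = 1e3 * np.eye(2)                      # estimate covariance
for _ in range(200):
    phi = rng.normal(size=2)             # regressor built from sensor data
    y = phi @ theta_true + rng.normal(0, 0.01)   # noisy measurement
    K = P @ phi / (1.0 + phi @ P @ phi)  # RLS gain
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)         # covariance update
print("identified parameters:", theta)
```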
Constraint damping for the Z4c formulation of general relativity
NASA Astrophysics Data System (ADS)
Weyhausen, Andreas; Bernuzzi, Sebastiano; Hilditch, David
2012-01-01
One possibility for avoiding constraint violation in numerical relativity simulations adopting free-evolution schemes is to modify the continuum evolution equations so that constraint violations are damped away. Gundlach et al. demonstrated that such a scheme damps low-amplitude, high-frequency constraint-violating modes exponentially for the Z4 formulation of general relativity. Here we analyze the effect of the damping scheme in numerical applications on a conformal decomposition of Z4. After reproducing the theoretically predicted damping rates of constraint violations in the linear regime, we explore numerical solutions not covered by the theoretical analysis. In particular, we examine the effect of the damping scheme on low-frequency and on high-amplitude perturbations of flat spacetime, as well as on the long-term dynamics of puncture and compact-star initial data in the context of spherical symmetry. We find that the damping scheme is effective provided that the constraint violation is resolved on the numerical grid. On grid noise the combination of artificial dissipation and damping helps to suppress constraint violations. We find that care must be taken in choosing the damping parameter in simulations of puncture black holes. Otherwise the damping scheme can cause undesirable growth of the constraints, and even qualitatively incorrect evolutions. In the numerical evolution of a compact static star we find that the choice of the damping parameter is even more delicate, but may lead to a small decrease of constraint violation. For a large range of values it results in unphysical behavior.
Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimmel, R.; Malladi, R.; Sochen, N.
A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two-dimensional surface in three-dimensional space for gray-level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a "master" geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three-dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., the smoothness parameter(s) and the energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
NASA Astrophysics Data System (ADS)
Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng
2018-04-01
One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a game-changing approach to addressing the problem of community detection in complex networks.
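A small sketch of the principle: identical Kuramoto oscillators on a two-community graph synchronize within communities before synchronizing globally, so intermediate-time phase coherence reveals the partition. The choice of dynamics, coupling value, and run length are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = (rng.random((n, n)) < 0.05).astype(float)  # sparse inter-community links
A[:10, :10] = rng.random((10, 10)) < 0.9       # dense community 1
A[10:, 10:] = rng.random((10, 10)) < 0.9       # dense community 2
A = np.triu(A, 1)
A = A + A.T                                    # undirected, no self-loops

theta = rng.uniform(0, 2 * np.pi, n)
K, dt = 0.5, 0.05                              # coupling below global-sync regime
for _ in range(300):                           # stop at an intermediate time
    theta += dt * K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

C = np.cos(theta[:, None] - theta[None, :])    # pairwise phase coherence
print("within community 1:", C[:10, :10].mean())
print("within community 2:", C[10:, 10:].mean())
print("between communities:", C[:10, 10:].mean())
```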
Consistent parameter fixing in the quark-meson model with vacuum fluctuations
NASA Astrophysics Data System (ADS)
Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael
2016-08-01
We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.
Truncation effect on Taylor-Aris dispersion in lattice Boltzmann schemes: Accuracy towards stability
NASA Astrophysics Data System (ADS)
Ginzburg, Irina; Roux, Laetitia
2015-10-01
The Taylor dispersion in a parabolic velocity field provides a well-known benchmark for advection-diffusion (ADE) schemes and serves as a first step towards accurate modeling of the high-order non-Gaussian effects in heterogeneous flow. When applying the lattice Boltzmann ADE two-relaxation-times (TRT) scheme to transport with a given Péclet number (Pe), one should select six freely tunable parameters, namely, (i) the molecular-diffusion-scale equilibrium parameter; (ii) three families of equilibrium weights, assigned to the terms of mass, velocity and numerical-diffusion-correction; and (iii) two relaxation rates. We analytically and numerically investigate the respective roles of all these degrees of freedom in the accuracy and stability of the evolution of a Gaussian plume. For this purpose, the third- and fourth-order transient multi-dimensional analysis of the recurrence equations of the TRT ADE scheme is extended to a spatially variable velocity field. The key point is the coupling of the truncation and Taylor dispersion analyses, which allows us to identify the second-order numerical correction δkT to the Taylor dispersivity coefficient kT. The procedure is exemplified for a straight Poiseuille flow, where δkT is given in closed analytical form in the equilibrium and relaxation parameter spaces. The predicted longitudinal dispersivity is in excellent agreement with the numerical experiments over a wide parameter range. In the relatively small Pe range, the relative dispersion error increases with Péclet number. This deficiency diminishes in the intermediate and high Pe range, where it becomes Pe-independent and velocity-amplitude independent. Eliminating δkT by a proper parameter choice and employing specular reflection for the zero-flux condition on solid boundaries, the d2Q9 TRT ADE scheme can reproduce the Taylor-Aris result quasi-exactly, from very coarse to fine grids, and from very small to arbitrarily high Péclet numbers. Since the freely tunable product of two eigenfunctions also controls the stability of the model, the validity of the analytically established von Neumann stability diagram is examined in the Poiseuille profile. The simplest coordinate-stencil subclass, the d2Q5 TRT bounce-back scheme, demonstrates the best performance and achieves the maximum accuracy for the most stable relaxation parameters.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
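To make the comparison concrete, here is a sketch of fixed-step explicit Euler versus an adaptive Heun scheme with a simple embedded error controller, applied to a toy nonlinear reservoir. The model and tolerances are illustrative, not the paper's hydrological models.

```python
# Fixed-step explicit Euler vs. adaptive Heun on dS/dt = P - k*S**2.
def f(S, P=2.0, k=0.1):
    return P - k * S * S

def euler_fixed(S, T, dt):
    t = 0.0
    while t < T:
        S += dt * f(S)
        t += dt
    return S

def heun_adaptive(S, T, tol=1e-6):
    t, dt = 0.0, 0.1
    while t < T:
        dt = min(dt, T - t)
        k1 = f(S)
        k2 = f(S + dt * k1)
        err = 0.5 * dt * abs(k2 - k1)            # embedded error estimate
        if err <= tol:                           # accept the Heun step
            S += 0.5 * dt * (k1 + k2)
            t += dt
        dt *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-15)) ** 0.5))
    return S

exact = (2.0 / 0.1) ** 0.5                       # steady state sqrt(P/k)
print(euler_fixed(1.0, 100.0, dt=1.0), heun_adaptive(1.0, 100.0), exact)
```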
Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobylev, A.V., E-mail: alexander.bobylev@kau.se; Potapenko, I.F., E-mail: firena@yandex.ru
2013-08-01
Highlights: • A general approach to Monte Carlo methods for multicomponent plasmas is proposed. • We show numerical tests for the two-component (electrons and ions) case. • An optimal choice of parameters for speeding up the computations is discussed. • A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind, meaning that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and of Nanbu.
Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Narayanaswamy, S.
1997-01-01
Integrators are included in the feedback loop of a control system to eliminate steady-state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady-state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady-state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed, and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
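A minimal sketch of conventional anti-windup logic of the kind tuned in the study: a PI loop that freezes integration while the actuator is saturated. The plant, gains, and limits are illustrative assumptions.

```python
# PI control with conditional-integration anti-windup on a first-order plant.
def simulate(kp=2.0, ki=1.0, u_max=1.0, dt=0.01, T=10.0, setpoint=1.0):
    y, integ = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = setpoint - y
        u_unsat = kp * e + ki * integ
        u = max(-u_max, min(u_max, u_unsat))   # actuator limit
        if u == u_unsat:                       # integrate only when unsaturated
            integ += e * dt
        y += dt * (-y + u)                     # plant: dy/dt = -y + u
    return y

print(simulate())  # settles near the setpoint without windup overshoot
```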
Coordinated interaction of two hydraulic cylinders when moving large-sized objects
NASA Astrophysics Data System (ADS)
Kreinin, G. V.; Misyurin, S. Yu; Lunev, A. V.
2017-12-01
The problem of choosing the parameters and control scheme of a dynamic system for the coordinated displacement of a large-mass object by two piston-type hydraulic engines is considered. As a first stage, the problem is solved for a system in which a heavy load of relatively large geometric dimensions is raised or lowered in translational motion by two unidirectional hydraulic cylinders while maintaining the plane of the lifted object in a strictly horizontal position.
Educational Choice and Marketization in Hong Kong: The Case of Direct Subsidy Scheme Schools
ERIC Educational Resources Information Center
Zhou, Yisu; Wong, Yi-Lee; Li, Wei
2015-01-01
Direct subsidy scheme (DSS) schools are a product of Hong Kong's market-oriented educational reform, mirroring global reform that champions parental choice and school marketization. Such schools have greater autonomy in matters of curricula, staffing, and student admission. Although advocates of the DSS credit it with increasing educational…
Operator mixing in the ɛ-expansion: Scheme and evanescent-operator independence
NASA Astrophysics Data System (ADS)
Di Pietro, Lorenzo; Stamou, Emmanuel
2018-03-01
We consider theories with fermionic degrees of freedom that have a fixed point of Wilson-Fisher type in noninteger dimension d = 4 - 2ɛ. Due to the presence of evanescent operators, i.e., operators that vanish in integer dimensions, these theories contain families of infinitely many operators that can mix with each other under renormalization. We clarify the dependence of the corresponding anomalous-dimension matrix on the choice of renormalization scheme beyond leading order in the ɛ-expansion. In standard choices of scheme, we find that eigenvalues at the fixed point cannot be extracted from a finite-dimensional block. We illustrate in examples a truncation approach to compute the eigenvalues. These are observable scaling dimensions, and, indeed, we find that the dependence on the choice of scheme cancels. As an application, we obtain the IR scaling dimension of four-fermion operators in QED in d = 4 - 2ɛ at order O(ɛ²).
Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data
NASA Technical Reports Server (NTRS)
Deschamps, P.-Y.; Frouin, R.
1997-01-01
The investigation focuses on two key issues in satellite ocean color remote sensing, namely the presence of whitecaps on the sea surface and the validity of the aerosol models selected for the atmospheric correction of SeaWiFS data. Experiments were designed and conducted at the Scripps Institution of Oceanography to measure the optical properties of whitecaps and to study the aerosol optical properties in a typical mid-latitude coastal environment. CIMEL Electronique sunphotometers, now integrated in the AERONET network, were also deployed permanently in Bermuda and in Lanai, calibration/validation sites for SeaWiFS and MODIS. Original results were obtained on the spectral reflectance of whitecaps and on the choice of aerosol models for atmospheric correction schemes and the type of measurements that should be made to verify those schemes. Bio-optical algorithms to remotely sense primary productivity from space were also evaluated, as well as current algorithms to estimate PAR at the earth's surface.
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
De la Salle, Barbara
2017-02-15
The complete blood count (CBC) is one of the most frequently requested tests in laboratory medicine, performed in a range of healthcare situations. The provision of an ideal assay material for external quality assessment is confounded by the fragility of the cellular components of blood, the lack of commutability of stabilised whole blood material and the lack of certified reference materials and methods to which CBC results can be traced. The choice of assay material between fresh blood, extended life assay material and fully stabilised, commercially prepared, whole blood material depends upon the scope and objectives of the EQA scheme. The introduction of new technologies in blood counting and the wider clinical application of parameters from the extended CBC will bring additional challenges for the EQA provider.
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
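A sketch of generating one localized random sampling mask as described: pick a random center pixel, then include nearby pixels with a distance-dependent probability. A Gaussian falloff is assumed here; the paper's exact probability profile and parameters may differ.

```python
import numpy as np

def localized_mask(shape, sigma=2.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(4)
    h, w = shape
    cy, cx = rng.integers(0, h), rng.integers(0, w)       # random center pixel
    yy, xx = np.mgrid[0:h, 0:w]
    p = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))
    return rng.random(shape) < p        # True where the pixel is measured

mask = localized_mask((32, 32))
# One CS measurement is then a (weighted) sum over the selected pixels;
# stacking many such masks as rows builds the sampling matrix.
print(mask.sum(), "pixels measured around a random center")
```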
School Choice with Education Vouchers: An Empirical Case Study from Hong Kong
ERIC Educational Resources Information Center
Lee, Amelia N. Y.; Bagley, Carl
2016-01-01
This paper seeks to question what impact education vouchers have on the process of school choice. The context examined in the paper is the Pre-primary Education Voucher Scheme ("Voucher Scheme") introduced in 2007 in Hong Kong. Using a Straussian grounded theory method, data collected from 40 parent interviews are coded, analysed and…
Anokye, Nana; de Bekker-Grob, Esther W.; Higgins, Ailish; Relton, Clare; Strong, Mark; Fox-Rushby, Julia
2018-01-01
Background: Increasing breastfeeding rates have been associated with reductions in disease in babies and mothers as well as in related costs. 'Nourishing Start for Health (NoSH)', a financial incentive scheme, has been proposed as a potentially effective way to increase both the number of mothers breastfeeding and the duration of breastfeeding. Aims: To establish women's relative preferences for different aspects of a financial incentive scheme for breastfeeding and to identify the importance of scheme characteristics for the probability of participation in an incentive scheme. Methods: A discrete choice experiment (DCE) obtained information on alternative specifications of the NoSH scheme designed to promote continued breastfeeding until at least 6 weeks after birth. Four attributes framed alternative scheme designs: value of the incentive; minimum breastfeeding duration required to receive the incentive; method of verifying breastfeeding; and type of incentive. Three versions of the DCE questionnaire, each containing 8 different choice sets, provided 24 choice sets for analysis. The questionnaire was mailed to 2,531 women in the South Yorkshire Cohort (SYC) aged 16-45 years in IMD quintiles 3-5. The analytic approach considered conditional and mixed-effects logistic models to account for preference heterogeneity that may be associated with variation in effects mediated by respondents' characteristics. Results: 564 women completed the questionnaire, a response rate of 22%. Most of the included attributes were found to affect utility and therefore the probability of participating in the incentive scheme. Higher rewards were preferred, although the type of incentive significantly affected women's preferences on average. We found evidence for preference heterogeneity based on individual characteristics that mediated preferences for an incentive scheme. Conclusions: Although participants' opinion in our sample was mixed, financial incentives for breastfeeding may be an acceptable and effective instrument to change behaviour. However, individual characteristics could mediate the effect and should therefore be considered when developing and targeting future interventions. PMID:29649245
New Approaches to Coding Information using Inverse Scattering Transform
NASA Astrophysics Data System (ADS)
Frumin, L. L.; Gelash, A. A.; Turitsyn, S. K.
2017-06-01
Remarkable mathematical properties of the integrable nonlinear Schrödinger equation (NLSE) can offer advanced solutions for the mitigation of nonlinear signal distortions in optical fiber links. The fundamental optical soliton, and continuous and discrete eigenvalues of the nonlinear spectrum, have already been considered for the transmission of information in fiber-optic channels. Here, we propose to apply signal modulation to the kernel of the Gelfand-Levitan-Marchenko equations, which offers the advantage of a relatively simple decoder design. First, we describe an approach based on exploiting the general N-soliton solution of the NLSE for simultaneous coding of N symbols involving 4×N coding parameters. As a specific elegant subclass of the general schemes, we introduce a soliton orthogonal frequency division multiplexing (SOFDM) method. This method is based on the choice of identical imaginary parts of the N-soliton solution eigenvalues, corresponding to equidistant soliton frequencies, making it similar to the conventional OFDM scheme and thus allowing for the use of the efficient fast Fourier transform algorithm to recover the data. Then, we demonstrate how to use this new approach to control signal parameters in the case of the continuous spectrum.
Study of the s-s̄ asymmetry in the proton
NASA Astrophysics Data System (ADS)
Goharipour, Muhammad
2018-05-01
The study of the s-s̄ asymmetry is essential for a better understanding of the structure of the nucleon and of the perturbative and nonperturbative mechanisms for sea quark generation. Indeed, the nature and dynamical origins of this asymmetry have always been an interesting subject of research, both experimentally and theoretically. One of the most powerful models that can lead to an s-s̄ asymmetry is the meson-baryon model (MBM). In this work, using a simplified configuration of this model suggested by Pumplin, we calculate the s-s̄ asymmetry for different values of the cutoff parameter Λ, to study the dependence of the model on this parameter and also to estimate the theoretical uncertainty imposed on the results by its uncertainty. We then study the evolution of the distributions obtained, both at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO), using different evolution schemes. It is shown that the evolution of the intrinsic quark distributions from a low initial scale, as suggested by Chang and Peng, is not a good choice at NNLO using a variable flavor number scheme.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
NASA Astrophysics Data System (ADS)
Lipson, Mathew J.; Hart, Melissa A.; Thatcher, Marcus
2017-03-01
Intercomparison studies of models simulating the partitioning of energy over urban land surfaces have shown that the heat storage term is often poorly represented. In this study, two implicit discrete schemes representing heat conduction through urban materials are compared. We show that a well-established method of representing conduction systematically underestimates the magnitude of heat storage compared with exact solutions of one-dimensional heat transfer. We propose an alternative method of similar complexity that is better able to match exact solutions at typically employed resolutions. The proposed interface conduction scheme is implemented in an urban land surface model and its impact assessed over a 15-month observation period for a site in Melbourne, Australia, resulting in improved overall model performance for a variety of common material parameter choices and aerodynamic heat transfer parameterisations. The proposed scheme has the potential to benefit land surface models where computational constraints require a high level of discretisation in time and space, for example at neighbourhood/city scales, and where realistic material properties are preferred, for example in studies investigating impacts of urban planning changes.
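For orientation, a minimal sketch of the kind of implicit (backward-Euler) conduction step that such schemes are built on; the grid, material, and boundary values are illustrative, and this is not the authors' interface scheme.

```python
import numpy as np

def implicit_conduction_step(T, alpha, dx, dt):
    """One backward-Euler step of 1-D heat conduction with fixed-temperature ends."""
    n, r = T.size, alpha * dt / dx**2
    A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0            # Dirichlet boundaries: ends held fixed
    return np.linalg.solve(A, T)

T = np.linspace(300.0, 290.0, 11)        # initial wall temperature profile (K)
T_new = implicit_conduction_step(T, alpha=7e-7, dx=0.02, dt=300.0)
```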
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
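A sketch of the commonly used density-matrix regularization that the abstract contrasts with the new scheme, assuming the standard form ρ → ρ + ε·exp(−ρ/ε) applied to the eigenvalues; the authors' coefficient-tensor regularization is not reproduced here.

```python
import numpy as np

def regularized_inverse(rho, eps=1e-8):
    """Invert a (Hermitian) density matrix after lifting near-zero occupations
    via rho -> rho + eps * exp(-rho / eps), applied eigenvalue-wise."""
    w, V = np.linalg.eigh(rho)
    w_reg = w + eps * np.exp(-w / eps)   # unoccupied SPFs no longer give 1/0
    return (V / w_reg) @ V.conj().T      # V diag(1/w_reg) V^dagger
```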
Helfer, Peter; Shultz, Thomas R
2014-12-01
The widespread availability of calorie-dense food is believed to be a contributing cause of an epidemic of obesity and associated diseases throughout the world. One possible countermeasure is to empower consumers to make healthier food choices with useful nutrition labeling. An important part of this endeavor is to determine the usability of existing and proposed labeling schemes. Here, we report an experiment on how four different labeling schemes affect the speed and nutritional value of food choices. We then apply decision field theory, a leading computational model of human decision making, to simulate the experimental results. The psychology experiment shows that quantitative, single-attribute labeling schemes have greater usability than multiattribute and binary ones, and that they remain effective under moderate time pressure. The computational model simulates these psychological results and provides explanatory insights into them. This work shows how experimental psychology and computational modeling can contribute to the evaluation and improvement of nutrition-labeling schemes. © 2014 New York Academy of Sciences.
The renormalization scale-setting problem in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin
2013-09-01
A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of the scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both the renormalization scheme and scale-parameter transformations. The RG equation provides a convenient way for estimating the scheme- and scale-dependence of a physical process. We then discuss self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale-setting methods suggested in the literature, i.e., the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky-Lepage-Mackenzie method (BLM), and the Principle of Maximum Conformality (PMC), are introduced. Basic properties and their applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance. Using the PMC, all non-conformal terms associated with the β-function in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC provides the principle underlying the BLM method, since it gives the general rule for extending BLM up to any perturbative order; in fact, they are equivalent to each other through the PMC-BLM correspondence principle. Thus, all the features previously observed in the BLM literature are also adaptable to the PMC. The PMC scales and the resulting finite-order PMC predictions are to high accuracy independent of the choice of the initial renormalization scale, and thus consistent with RG invariance. The PMC is also consistent with the renormalization scale-setting procedure for QED in the zero-color limit. The use of the PMC thus eliminates a serious systematic scale error in perturbative QCD predictions, greatly improving the precision of empirical tests of the Standard Model and their sensitivity to new physics.
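For reference, the starting equations of the review can be written compactly; this is one common convention (a is the reduced coupling, the β_i are scheme-dependent beyond two loops, and ρ stands for any physical observable).

```latex
% Running of the coupling and RG invariance of a physical observable rho:
\mu^2 \frac{\mathrm{d}a}{\mathrm{d}\mu^2}
  = \beta(a)
  = -a^2 \left( \beta_0 + \beta_1 a + \beta_2 a^2 + \cdots \right),
\qquad
\mu^2 \frac{\mathrm{d}}{\mathrm{d}\mu^2}\,
  \rho\!\left( Q^2/\mu^2,\, a(\mu^2) \right) = 0 .
```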
Choices for Whom? The Rhetoric and Reality of the Direct Subsidy Scheme in Hong Kong (1988-2006)
ERIC Educational Resources Information Center
Tse, Thomas Kwan-choi
2008-01-01
School choice programs have proliferated around the world since the 1980s. Following this international trend, the Direct Subsidy Scheme (DSS) was launched in 1991 to revitalize Hong Kong's private school sector. DSS schools receive a similar subsidy per student to that received by aided schools, but they may charge fees and have greater control…
Optimal Bayesian Adaptive Design for Test-Item Calibration.
van der Linden, Wim J; Ren, Hao
2015-06-01
An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
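A small sketch of how a D-optimality criterion can be evaluated for one field-test item under a 2PL model, averaging the item-parameter information matrix over posterior draws of an examinee's ability; all numbers, and the prior information matrix, are hypothetical.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def item_info_matrix(theta, a, b):
    """2PL Fisher information matrix for item parameters (a, b) at ability theta."""
    p = logistic(a * (theta - b))
    g = np.array([theta - b, -a])        # gradient of the logit w.r.t. (a, b)
    return p * (1 - p) * np.outer(g, g)

def d_criterion(theta_samples, a, b, prior_info):
    """D-optimality: log-determinant of the expected posterior information."""
    info = prior_info + np.mean([item_info_matrix(t, a, b) for t in theta_samples], axis=0)
    return np.linalg.slogdet(info)[1]

rng = np.random.default_rng(1)
theta_post = rng.normal(0.3, 0.4, size=500)      # MCMC draws of one examinee's ability
items = [(1.2, -0.5), (0.8, 0.4), (1.5, 1.8)]     # provisional (a, b) estimates
best = max(items, key=lambda ab: d_criterion(theta_post, *ab, np.eye(2)))
```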
Cruza, Norberto Sotelo; Fierros, Luis E
2006-01-01
The present study was done at the internal medicine service of the Hospital Infantil in the State of Sonora, Mexico. We tried to address the question of the use of conceptual schemes and mind maps and their impact on the teaching-learning-evaluation process among medical residents. Our aim was to analyze the effects of conceptual schemes and mind maps as teaching and evaluation tools and to compare them with multiple-choice exams among pediatric residents. Twenty-two residents (RI, RII, RIII) on service rotation during six months were assessed initially, followed by a lecture on a medical subject. Conceptual schemes and mind maps were then introduced as a teaching-learning-evaluation instrument. Their impact on comprehension was measured and compared with a standard multiple-choice evaluation. The statistical package JMP (version 5, SAS Institute, 2004) was used. We noted that when we used conceptual schemes and mind mapping, learning improved noticeably among the three groups of residents (P < 0.001), and these tools constituted a better evaluation instrument than multiple-choice exams (P < 0.0005). Based on our experience, we recommend the use of this educational technique for medical residents in training.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batiy, V.G.; Stojanov, A.I.; Schmieman, E.
2007-07-01
A methodological approach to optimizing the schemes of solid radwaste management at the Object Shelter (Shelter) and the ChNPP industrial site during their transformation into an ecologically safe system was developed. On the basis of the model studies conducted, an ALARA analysis was carried out to choose the optimum variant of solid radwaste management schemes and technologies. Criteria for choosing the optimum schemes, aimed at optimizing doses and financial expenses, minimizing the amount of radwaste formed, etc., were developed for this ALARA analysis. (authors)
Sensitivity of Age-of-Air Calculations to the Choice of Advection Scheme
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Hemler, Richard S.; Mahlman, Jerry D.; Bruhwiler, Lori; Takacs, Lawrence L.
2000-01-01
The age of air has recently emerged as a diagnostic of atmospheric transport unaffected by chemical parameterizations, and the features in the age distributions computed in models have been interpreted in terms of the models' large-scale circulation field. This study shows, however, that in addition to the simulated large-scale circulation, three-dimensional age calculations can also be affected by the choice of advection scheme employed in solving the tracer continuity equation. Specifically, using the 3.0° latitude × 3.6° longitude, 40-vertical-level version of the Geophysical Fluid Dynamics Laboratory SKYHI GCM and six online transport schemes ranging from Eulerian through semi-Lagrangian to fully Lagrangian, it is demonstrated that the oldest ages are obtained using the nondiffusive centered-difference schemes while the youngest ages are computed with a semi-Lagrangian transport (SLT) scheme. The centered-difference schemes are capable of producing ages older than 10 years in the mesosphere, thus eliminating the "young bias" found in previous age-of-air calculations. At this stage, only limited intuitive explanations can be advanced for this sensitivity of age-of-air calculations to the choice of advection scheme. In particular, age distributions computed online with the National Center for Atmospheric Research Community Climate Model (MACCM3) using different varieties of the SLT scheme are substantially older than the SKYHI SLT distribution. The different varieties, including a noninterpolating-in-the-vertical version (which is essentially centered-difference in the vertical), also produce a narrower range of age distributions than the suite of advection schemes employed in the SKYHI model. While additional MACCM3 experiments with a wider range of schemes would be necessary to provide more definitive insights, the older and less variable MACCM3 age distributions can plausibly be interpreted as being due to the semi-implicit semi-Lagrangian dynamics employed in the MACCM3. This type of dynamical core (employed with a 60-min time step) is likely to reduce SLT's interpolation errors that are compounded by the short-term variability characteristic of the explicit centered-difference dynamics employed in the SKYHI model (time step of 3 min). In the extreme case of a very slowly varying circulation, the choice of advection scheme has no effect on two-dimensional (latitude-height) age-of-air calculations, owing to the smooth nature of the transport circulation in 2D models. These results suggest that nondiffusive schemes may be the preferred choice for multiyear simulations of tracers not overly sensitive to the requirement of monotonicity (this category includes many greenhouse gases). At the same time, age-of-air calculations offer a simple quantitative diagnostic of a scheme's long-term diffusive properties and may help in the evaluation of dynamical cores in multiyear integrations. On the other hand, the sensitivity of the computed ages to the model numerics calls for caution in using age of air as a diagnostic of a GCM's large-scale circulation field.
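The sensitivity to scheme diffusivity is easy to reproduce in one dimension: a first-order upwind scheme flattens a sharp tracer pulse that a second-order scheme largely preserves. This is only a toy illustration of numerical diffusion, not the SKYHI or MACCM3 numerics.

```python
import numpy as np

def advect(q, c, scheme, n_steps):
    """Periodic 1-D linear advection with Courant number c (0 < c <= 1)."""
    for _ in range(n_steps):
        qm, qp = np.roll(q, 1), np.roll(q, -1)
        if scheme == "upwind":           # first order: strongly diffusive
            q = q - c * (q - qm)
        elif scheme == "lax_wendroff":   # second order: much less diffusive
            q = q - 0.5 * c * (qp - qm) + 0.5 * c**2 * (qp - 2 * q + qm)
    return q

q0 = np.zeros(200)
q0[90:110] = 1.0                          # sharp tracer pulse
print(advect(q0.copy(), 0.5, "upwind", 400).max())        # visibly flattened
print(advect(q0.copy(), 0.5, "lax_wendroff", 400).max())  # nearly preserved
```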
Landry, C; Garant, D; Duchesne, P; Bernatchez, L
2001-06-22
According to the theory of mate choice based on heterozygosity, mates should choose each other in order to increase the heterozygosity of their offspring. In this study, we tested the 'good genes as heterozygosity' hypothesis of mate choice by documenting the mating patterns of wild Atlantic salmon (Salmo salar) using both major histocompatibility complex (MHC) and microsatellite loci. Specifically, we tested the null hypotheses that mate choice in Atlantic salmon is not dependent on the relatedness between potential partners or on the MHC similarity between mates. Three parameters were assessed: (i) the number of shared alleles between partners (x and y) at the MHC (M(xy)), (ii) the MHC amino-acid genotypic distance between mates' genotypes (AA(xy)), and (iii) genetic relatedness between mates (r(xy)). We found that Atlantic salmon choose their mates in order to increase the heterozygosity of their offspring at the MHC and, more specifically, at the peptide-binding region, presumably in order to provide them with better defence against parasites and pathogens. This was supported by a significant difference between the observed and expected AA(xy) (p = 0.0486). Furthermore, mate choice was not a mechanism of overall inbreeding avoidance as genetic relatedness supported a random mating scheme (p = 0.445). This study provides the first evidence that MHC genes influence mate choice in fish.
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George
2017-03-01
Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
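A minimal sketch of the Box-Cox residual-error schemes evaluated above (λ = 0 is the log scheme): residuals are formed in transformed space, where the heteroscedasticity with respect to simulated streamflow is approximately removed.

```python
import numpy as np

def boxcox(q, lam):
    """Box-Cox transform; lam = 0.2 and 0.5 were the Pareto-optimal choices here."""
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

def transformed_residuals(q_obs, q_sim, lam=0.2):
    """Residuals in transformed space; a constant-variance error model is then
    fitted to these rather than to the raw, heteroscedastic residuals."""
    return boxcox(np.asarray(q_obs), lam) - boxcox(np.asarray(q_sim), lam)
```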
2.1 THz quantum-cascade laser operating up to 144 K based on a scattering-assisted injection design.
Khanal, Sudeep; Reno, John L; Kumar, Sushil
2015-07-27
A 2.1 THz quantum cascade laser (QCL) based on a scattering-assisted injection and resonant-phonon depopulation design scheme is demonstrated. The QCL is based on a four-well period implemented in the GaAs/Al0.15Ga0.85As material system. The QCL operates up to a heat-sink temperature of 144 K in pulsed mode, which is considerably higher than that achieved for previously reported THz QCLs operating around the frequency of 2 THz. At 46 K, the threshold current density was measured as ∼745 A/cm² with a peak power output of ∼10 mW. Electrically stable operation in a positive differential-resistance regime is achieved by a careful choice of design parameters. The results validate the robustness of scattering-assisted injection schemes for the development of low-frequency (ν < 2.5 THz) QCLs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoustrup, Jakob; Pommer, Christian; Kliem, Wolfhard
2015-10-31
This paper deals with two stability aspects of linear systems of the form I ẍ + B ẋ + C x = 0 given by the triple (I; B; C). A general transformation scheme is given for a structure- and Jordan-form-preserving transformation of the triple. We investigate how a system can be transformed, by suitable choices of the transformation parameters, into a new system (I; B1; C1) with a symmetrizable matrix C1. This procedure facilitates stability investigations. We also consider systems with a Hamiltonian spectrum, which discloses marginal stability after a Jordan-form-preserving transformation.
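After linearization to first order, the stability of such a triple reduces to an eigenvalue computation on the companion matrix; a minimal sketch with an illustrative damped system.

```python
import numpy as np

def companion_eigs(B, C):
    """Eigenvalues of I x'' + B x' + C x = 0 via the first-order companion form."""
    n = B.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-C,               -B        ]])
    return np.linalg.eigvals(A)

B = np.array([[0.2, 0.1], [0.1, 0.3]])    # damping matrix (illustrative)
C = np.array([[2.0, 0.4], [0.4, 1.0]])    # stiffness matrix; symmetric here
stable = np.all(companion_eigs(B, C).real < 0)
```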
Bouchez, A; Goffinet, B
1990-02-01
Selection indices can be used to predict one trait from information available on several traits in order to improve the prediction accuracy. Plant or animal breeders are interested in selecting only the best individuals, and need to compare the efficiency of different trait combinations in order to choose the index ensuring the best prediction quality for individual values. As the usual tools for index evaluation do not remain unbiased in all cases, we propose a robust way of evaluation by means of an estimator of the mean-square error of prediction (EMSEP). This estimator remains valid even when parameters are not known, as usually assumed, but are estimated. EMSEP is applied to the choice of an indirect multitrait selection index at the F5 generation of a classical breeding scheme for soybeans. Best predictions for precocity are obtained by means of indices using only part of the available information.
Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.
Yuan, Haidong
2016-10-14
Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.
Mao, Hongwei; Yuan, Yuan; Si, Jennie
2015-01-01
Animals learn to choose a proper action among alternatives to improve their odds of success in food foraging and other activities critical for survival. Through trial-and-error, they learn correct associations between their choices and external stimuli. While a neural network that underlies such learning process has been identified at a high level, it is still unclear how individual neurons and a neural ensemble adapt as learning progresses. In this study, we monitored the activity of single units in the rat medial and lateral agranular (AGm and AGl, respectively) areas as rats learned to make a left or right side lever press in response to a left or right side light cue. We noticed that rat movement parameters during the performance of the directional choice task quickly became stereotyped during the first 2–3 days or sessions. But learning the directional choice problem took weeks to occur. Accompanying rats' behavioral performance adaptation, we observed neural modulation by directional choice in recorded single units. Our analysis shows that ensemble mean firing rates in the cue-on period did not change significantly as learning progressed, and the ensemble mean rate difference between left and right side choices did not show a clear trend of change either. However, the spatiotemporal firing patterns of the neural ensemble exhibited improved discriminability between the two directional choices through learning. These results suggest a spatiotemporal neural coding scheme in a motor cortical neural ensemble that may be responsible for and contributing to learning the directional choice task. PMID:25798093
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2017-04-01
This study provides guidance that enables hydrological researchers to produce probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g., high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e., there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
NASA Astrophysics Data System (ADS)
Bloembergen, Pieter; Dong, Wei; Bai, Cheng-Yu; Wang, Tie-Jun
2011-12-01
In this paper, impurity parameters m_i and k_i have been calculated for a range of impurities I as detected in the eutectics Co-C and Pt-C, by means of the software package Thermo-Calc within the ternary phase spaces Co-C-I and Pt-C-I. The choice of the impurities is based upon a selection out of the results of impurity analyses performed for a representative set of samples for each of the eutectics in study. The analyses in question are glow discharge mass spectrometry (GDMS) or inductively coupled plasma mass spectrometry (ICP-MS). Tables and plots of the impurity parameters against the atomic number Z_i of the impurities will be presented, as well as plots demonstrating the validity of van't Hoff's law, the cornerstone to this study, for both eutectics. For the eutectics in question, the uncertainty u(T_E - T_liq) in the correction T_E - T_liq will be derived, where T_E and T_liq refer to the transition temperature of the pure system and to the liquidus temperature in the limit of zero growth rate of the solid phase during solidification of the actual system, respectively. Uncertainty estimates based upon the current scheme SIE-OME, combining the sum of individual estimates (SIE) and the overall maximum estimate (OME), are compared with two alternative schemes proposed in this paper, designated as IE-IRE, combining individual estimates (IE) and individual random estimates (IRE), and the hybrid scheme SIE-IE-IRE, combining SIE, IE, and IRE.
Method of gas emission control for safe working of flat gassy coal seams
NASA Astrophysics Data System (ADS)
Vinogradov, E. A.; Yaroshenko, V. V.; Kislicyn, M. S.
2017-10-01
The main problems in intensive longwall mining of flat gassy coal seams are considered. The example of the Kotinskaja mine of JSC "SUEK-Kuzbass" shows that, when working gassy coal seams, methane emission control by means of ventilation, degassing, and insulated drainage of the methane-air mixture is not sufficiently effective and stable. It is not always possible to remove the restrictions on coal production imposed by the gas factor, which leads to financial losses because of incomplete use of the longwall equipment and a reduction in the technical and economic indicators of mining. To solve these problems, the authors used a complex method that includes the compilation and analysis of the theory and practice of intensive longwall mining of flat gassy coal seams. Based on the results of field and numerical research into the effect of the parameters of technological schemes on the efficiency of methane emission control in longwall panels, a non-linear dependence of the longwall productivity permissible according to the gas factor on the parameters of the technological schemes, ventilation, and degassing during intensive mining of flat gassy coal seams was established. A number of recommendations on the choice of the location and size of the intermediate section of the coal heading to control gas emission in the extracted area, and guidelines for choosing the parameters of ventilation of the extracted area with two air-supply entries and removal of an isolated methane-air mixture, are presented in the paper. A technological scheme using an intermediate entry for fresh air intake was developed; it ensures effective management of gas emission and makes it possible to dispense with drilling wells from the surface to the mined-out space when mining gas-bearing coal seams.
The generalized scheme-independent Crewther relation in QCD
NASA Astrophysics Data System (ADS)
Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.
2017-07-01
The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively calculable QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton-nucleon scattering times the Adler function, defined from the cross section for electron-positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (D_ns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (C_Bjp) at leading twist. A scheme-dependent Δ_CSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both D_ns and the inverse coefficient C_Bjp^{-1} have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, α̂_d(Q) = Σ_{i≥1} α̂_{g1}^i(Q_i), at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Similar scale-fixed commensurate scale relations also connect other physical observables at their physical momentum scales, thus providing convention-independent, fundamental precision tests of QCD.
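The two statements at the heart of the abstract can be summarized compactly (notation as above; the second line is the PMC form of the generalized relation).

```latex
% Conformal limit (beta = 0): the product of the two observables has no
% pQCD radiative corrections,
D_{\mathrm{ns}}(Q)\, C_{\mathrm{Bjp}}(Q) = 1 ,
% while PMC scale-setting yields the scheme-independent relation between
% the two effective charges at their respective physical scales:
\hat{\alpha}_d(Q) = \sum_{i \ge 1} \hat{\alpha}_{g_1}^{\,i}(Q_i) .
```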
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large volumes of data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on the variation of quantization parameter (QP) values for different video contents, to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameter and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score (MOS).
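Since the comparison above is reported in PSNR, a small reference implementation of that metric for 8-bit frames:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit frames."""
    err = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(err**2)
    return np.inf if mse == 0.0 else 10.0 * np.log10(peak**2 / mse)
```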
Watson, Wendy L; Kelly, Bridget; Hector, Debra; Hughes, Clare; King, Lesley; Crawford, Jennifer; Sergeant, John; Chapman, Kathy
2014-01-01
There is evidence that easily accessible, comprehensible and consistent nutrient information on the front of packaged foods could assist shoppers to make healthier food choices. This study used an online questionnaire of 4357 grocery shoppers to examine Australian shoppers' ability to use a range of front-of-pack labels to identify healthier food products. Seven different front-of-pack labelling schemes comprising variants of the Traffic Light labelling scheme and the Percentage Daily Intake scheme, and a star rating scheme, were applied to nine pairs of commonly purchased food products. Participants could also access a nutrition information panel for each product. Participants were able to identify the healthier product in each comparison over 80% of the time using any of the five schemes that provided information on multiple nutrients. No individual scheme performed significantly better in terms of shoppers' ability to determine the healthier product, shopper reliance on the 'back-of-pack' nutrition information panel, and speed of use. The scheme that provided information about energy only and a scheme with limited numerical information of nutrient type or content performed poorly, as did the nutrition information panel alone (control). Further consumer testing is necessary to determine the optimal format and content of an interpretive front-of-pack nutrition labelling scheme. Copyright © 2013 Elsevier Ltd. All rights reserved.
mRM - multiscale Routing Model for Land Surface and Hydrologic Models
NASA Astrophysics Data System (ADS)
Cuntz, M.; Thober, S.; Mai, J.; Samaniego, L. E.; Gochis, D. J.; Kumar, R.
2015-12-01
Routing streamflow through a river network is a basic step within any distributed hydrologic model. It integrates the generated runoff and allows comparison with observed discharge at the outlet of a catchment. The Muskingum routing is a textbook river routing scheme that has been implemented in Earth System Models (e.g., WRF-HYDRO), stand-alone routing schemes (e.g., RAPID), and hydrologic models (e.g., the mesoscale Hydrologic Model). Most implementations suffer from a high computational demand because the spatial routing resolution is fixed to that of the elevation model irrespective of the hydrologic modeling resolution. This is because the model parameters are scale-dependent and cannot be used at other resolutions without re-estimation. Here, we present the multiscale Routing Model (mRM), which allows for a flexible choice of the routing resolution. mRM exploits the Multiscale Parameter Regionalization (MPR) included in the open-source mesoscale Hydrologic Model (mHM, www.ufz.de/mhm), which relates model parameters to physiographic properties and allows the estimation of scale-independent model parameters. mRM is currently coupled to mHM and is presented here as stand-alone Free and Open Source Software (FOSS). The mRM source code is highly modular and provides a subroutine for internal re-use in any land surface scheme. mRM is coupled in this work to the state-of-the-art land surface model Noah-MP. Simulation results using mRM are compared with those available in WRF-HYDRO for the Red River during the period 1990-2000. mRM allows the routing resolution to be increased from 100 m to more than 10 km without deteriorating the model performance. It thereby speeds up model calculation by reducing the contribution of routing to total runtime from over 80% to less than 5% in the case of WRF-HYDRO. mRM thus makes discharge data available to land surface modeling with only little extra calculation.
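For reference, the textbook Muskingum recursion mentioned above, in minimal form; K, X, and the time step are illustrative (the weights c0 + c1 + c2 sum to one).

```python
import numpy as np

def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    """Textbook Muskingum channel routing: K storage constant [h],
    X weighting factor [-], dt time step [h] (with 2*K*X <= dt <= K)."""
    D = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / D
    c1 = (dt + 2.0 * K * X) / D
    c2 = (2.0 * K * (1.0 - X) - dt) / D
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]
    for t in range(1, len(inflow)):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow
```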
Parametrization and Optimization of Gaussian Non-Markovian Unravelings for Open Quantum Dynamics
NASA Astrophysics Data System (ADS)
Megier, Nina; Strunz, Walter T.; Viviescas, Carlos; Luoma, Kimmo
2018-04-01
We derive a family of Gaussian non-Markovian stochastic Schrödinger equations for the dynamics of open quantum systems. The different unravelings correspond to different choices of squeezed coherent states, reflecting different measurement schemes on the environment. Consequently, we are able to give a single shot measurement interpretation for the stochastic states and microscopic expressions for the noise correlations of the Gaussian process. By construction, the reduced dynamics of the open system does not depend on the squeezing parameters. They determine the non-Hermitian Gaussian correlation, a wide range of which are compatible with the Markov limit. We demonstrate the versatility of our results for quantum information tasks in the non-Markovian regime. In particular, by optimizing the squeezing parameters, we can tailor unravelings for improving entanglement bounds or for environment-assisted entanglement protection.
A simplified satellite navigation system for an autonomous Mars roving vehicle.
NASA Technical Reports Server (NTRS)
Janosko, R. E.; Shen, C. N.
1972-01-01
The use of a retroreflecting satellite and a laser rangefinder to navigate a Martian roving vehicle is considered in this paper. It is shown that a simple system can be employed to perform this task. An error analysis is performed on the navigation equations, and it is shown that the error inherent in the proposed scheme can be minimized by the proper choice of measurement geometry. A nonlinear programming approach is used to minimize the navigation error subject to constraints due to geometric and laser requirements. The problem is solved for a particular set of laser parameters and the optimal solution is presented.
Dodging Democracy: The Educator's Flight from the Specter of Choice
ERIC Educational Resources Information Center
Coons, John E.
2005-01-01
Jerry Paquette, in his article "Public Funding for "Private" Education: The Equity Challenge of Enhanced Choice" (in this issue, 568), properly urges everyone to ponder, then to reject, various hypothetical schemes for school choice whose design might worsen the plight of the poor. Among these devices, his chief bugbear is a largely unregulated…
Radiation reaction effect on laser driven auto-resonant particle acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagar, Vikram; Sengupta, Sudip; Kaw, P. K.
2015-12-15
The effects of the radiation reaction force on the laser-driven auto-resonant particle acceleration scheme are studied using the Landau-Lifshitz equation of motion. These studies are carried out for both linearly and circularly polarized laser fields in the presence of a static axial magnetic field. From the parametric study, a radiation-reaction-dominated region has been identified in which the particle dynamics is greatly affected by this force. In the radiation-reaction-dominated region, two significant effects on the particle dynamics are seen, viz., (1) saturation in the energy gain by an initially resonant particle and (2) net energy gain by an initially non-resonant particle, caused by resonance broadening. It has been further shown that with the relaxation of the resonance condition and with an optimum choice of parameters, this scheme may become competitive with other present-day laser-driven particle acceleration schemes. The quantum corrections to the Landau-Lifshitz equation of motion have also been taken into account. The difference in the energy gain estimates of the particle between the quantum-corrected and classical Landau-Lifshitz equations is found to be insignificant for present-day as well as upcoming laser facilities.
Swords, Kelly; Wallen, Eric M; Pruthi, Raj S
2010-01-01
African American men have a higher rate of prostate cancer mortality than their Caucasian American counterparts. However, it remains unclear whether such differences are due to biologic or socioeconomic influences. This study sought to determine if there are differences in demographic and clinical characteristics between African American and Caucasian American men in a modern cohort undergoing an extended biopsy approach, and evaluated the subsequent choice of therapy in patients diagnosed with prostate cancer. A retrospective review was performed on a consecutive series of 500 men undergoing prostate biopsy at our institution between 2003 and 2005. All patients underwent a contemporary 10-12-core biopsy scheme. Demographic, clinical, and pathologic variables as well as treatment choice (in those with positive biopsy) were stratified and evaluated with regard to race: African American, Caucasian American, and other (Hispanic, Asian, American Indian). Overall, 65% were Caucasian American, 29% African American, and 7% other. The overall positive biopsy rate was 44%. African American men were significantly younger than Caucasian American men but not younger than "other" men (61.6 vs. 64.3 vs. 61.5 years). No differences were observed with regard to prostate specific antigen density (PSAD), prostate volume, or rate of abnormal digital rectal exam (DRE). The positive biopsy rate was not different between Caucasian American and African American men (46% vs. 46%), but was significantly lower in other men (16%). These differences were maintained on odds ratio modeling, including age-adjusted and multivariate models. Of the 223 men with positive biopsies, information on treatment choice demonstrated that African American men had a significantly higher rate of choosing XRT (OR = 2.12) and of avoiding surgery (OR = 0.35) than Caucasian American men. In men undergoing prostate biopsy using an extended (10-12 core) biopsy scheme, no differences were observed with regard to positive biopsy rate or other clinical or biochemical parameters [except for age and prostate specific antigen (PSA) level] between African American and Caucasian American men. Of those with a positive biopsy, African American men were more likely to avoid surgery and choose XRT in our population. Copyright (c) 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bonacker, Esther; Gibali, Aviv; Küfer, Karl-Heinz; Süss, Philipp
2017-04-01
Multicriteria optimization problems occur in many real-life applications, for example in cancer radiotherapy treatment, and in particular in intensity modulated radiation therapy (IMRT). In this work we focus on optimization problems with multiple objectives that are ranked according to their importance. We solve these problems numerically by combining lexicographic optimization with our recently proposed level set scheme, which yields a sequence of auxiliary convex feasibility problems, solved here via projection methods. The projection enables us to combine the newly introduced superiorization methodology with multicriteria optimization methods to speed up computation while guaranteeing convergence of the optimization. We demonstrate our scheme with a simple 2D academic example (used in the literature) and also present results from calculations on four real head-and-neck cases in IMRT (Radiation Oncology of the Ludwig-Maximilians University, Munich, Germany) for two different choices of superiorization parameter sets, suited to yield fast convergence for each case individually or robust behavior for all four cases.
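A minimal sketch of the feasibility-seeking-plus-superiorization pattern described above: alternating projections onto convex sets, interleaved with bounded, summable perturbations that steer the iterates toward a lower secondary objective. The constraints, objective, and step sizes are toy choices, not the IMRT implementation.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the half-space {y : <a, y> <= b}."""
    viol = a @ x - b
    return x if viol <= 0.0 else x - viol * a / (a @ a)

def superiorized_pocs(x, constraints, grad_phi, n_iter=100, beta=1.0):
    """Projections onto convex sets with diminishing perturbations of phi."""
    for k in range(n_iter):
        x = x - beta * 0.5**k * grad_phi(x)   # superiorization step (summable)
        for a, b in constraints:              # feasibility-seeking sweep
            x = project_halfspace(x, a, b)
    return x

# Toy run: reduce ||x||^2 while satisfying two half-space constraints.
cons = [(np.array([1.0, 1.0]), 4.0), (np.array([-1.0, 2.0]), 2.0)]
x_star = superiorized_pocs(np.array([5.0, 5.0]), cons, grad_phi=lambda x: 2 * x)
```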
NASA Astrophysics Data System (ADS)
Nguyen, Manh Cuong; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming; Antropov, Vladimir P.
2018-05-01
The dependence of the magnetocrystalline anisotropy energy (MAE) in MCo5 (M = Y, La, Ce, Gd) and CoPt on the Coulomb correlations and the strength of spin-orbit (SO) interaction within the GGA + U scheme is investigated. A range of parameters suitable for a satisfactory description of key magnetic properties is determined. We show that for a large variation of the SO interaction the MAE in these materials can be well described by traditional second-order perturbation theory. We also show that in these materials the MAE can be both proportional and negatively proportional to the orbital moment anisotropy (OMA) of the Co atoms. The dependence of relativistic effects on Coulomb correlations, the applicability of second-order perturbation theory for the description of the MAE, and the effective screening of the SO interaction in these systems are discussed using a generalized virial theorem. The sets of Coulomb correlation parameters determined in this way can be used in much-needed large-scale atomistic simulations.
Information criteria for quantifying loss of reversibility in parallelized KMC
NASA Astrophysics Data System (ADS)
Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc
2017-01-01
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single-capillary BTEX model. A facilitated transporter model is used to describe mass transfer between the plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange, and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
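Of the schemes compared, the MacCormack predictor-corrector is the simplest to state; a minimal sketch for 1-D linear advection with periodic boundaries (the BTEX equations themselves are more involved).

```python
import numpy as np

def maccormack_advection(q, c, n_steps):
    """MacCormack scheme for q_t + u q_x = 0 with Courant number c = u*dt/dx."""
    for _ in range(n_steps):
        q_pred = q - c * (np.roll(q, -1) - q)         # predictor: forward difference
        q = 0.5 * (q + q_pred - c * (q_pred - np.roll(q_pred, 1)))  # corrector: backward
    return q
```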
Cecchini, M; Warin, L
2016-03-01
Food labels are considered a crucial component of strategies tackling unhealthy diets and obesity. This study aims at assessing the effectiveness of food labelling in increasing the selection of healthier products and in reducing calorie intake. In addition, this study compares the relative effectiveness of traffic light schemes, Guideline Daily Amount and other food labelling schemes. A comprehensive set of databases were searched to identify randomized studies. Studies reporting homogeneous outcomes were pooled together and analysed through meta-analyses. Publication bias was evaluated with a funnel plot. Food labelling would increase the amount of people selecting a healthier food product by about 17.95% (confidence interval: +11.24% to +24.66%). Food labelling would also decrease calorie intake/choice by about 3.59% (confidence interval: -8.90% to +1.72%), but results are not statistically significant. Traffic light schemes are marginally more effective in increasing the selection of healthier options. Other food labels and Guideline Daily Amount follow. The available evidence did not allow studying the effects of single labelling schemes on calorie intake/choice. Findings of this study suggest that nutrition labelling may be an effective approach to empowering consumers in choosing healthier products. Interpretive labels, as traffic light labels, may be more effective. © 2015 World Obesity.
A unifying Bayesian account of contextual effects in value-based choice
Friston, Karl J.; Dolan, Raymond J.
2017-01-01
Empirical evidence suggests the incentive value of an option is affected by other options available during choice and by options presented in the past. These contextual effects are hard to reconcile with classical theories and have inspired accounts where contextual influences play a crucial role. However, each account only addresses one or the other of the empirical findings and a unifying perspective has been elusive. Here, we offer a unifying theory of context effects on incentive value attribution and choice based on normative Bayesian principles. This formulation assumes that incentive value corresponds to a precision-weighted prediction error, where predictions are based upon expectations about reward. We show that this scheme explains a wide range of contextual effects, such as those elicited by other options available during choice (or within-choice context effects). These include both conditions in which choice requires an integration of multiple attributes and conditions where a multi-attribute integration is not necessary. Moreover, the same scheme explains context effects elicited by options presented in the past or between-choice context effects. Our formulation encompasses a wide range of contextual influences (comprising both within- and between-choice effects) by calling on Bayesian principles, without invoking ad-hoc assumptions. This helps clarify the contextual nature of incentive value and choice behaviour and may offer insights into psychopathologies characterized by dysfunctional decision-making, such as addiction and pathological gambling. PMID:28981514
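The core quantity of the account, incentive value as a precision-weighted reward prediction error, fits in a few lines; the numbers are purely illustrative.

```python
def incentive_value(reward, expectation, precision):
    """Incentive value as a precision-weighted reward prediction error."""
    return precision * (reward - expectation)

# The same outcome is valued differently depending on contextual expectations:
print(incentive_value(5.0, expectation=2.0, precision=1.5))   #  4.5: positive surprise
print(incentive_value(5.0, expectation=6.0, precision=1.5))   # -1.5: disappointment
```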
The generalized scheme-independent Crewther relation in QCD
Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; ...
2017-05-10
The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbative QCD calculable processes. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton–nucleon scattering times the Adler function, defined from the cross section for electron–positron annihilation into hadrons, has no pQCD radiative corrections. The “Generalized Crewther Relation” relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (D ns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (C Bjp) at leading twist. A scheme-dependent Δ CSB-term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence in both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both D ns and the inverse coefficient Cmore » $$-1\\atop{Bjp}$$ have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, $$\\hat{α}$$ d(Q)=Σ i≥1$$\\hat{α}^i\\atop{g1}$$(Qi), at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Lastly, similar scale-fixed commensurate scale relations also connect other physical observables at their physical momentum scales, thus providing convention-independent, fundamental precision tests of QCD.« less
NASA Technical Reports Server (NTRS)
Wang, Shugong; Liang, Xu
2013-01-01
A new approach is presented in this paper to effectively obtain parameter estimates for the Multiscale Kalman Smoother (MKS) algorithm. This new approach has demonstrated promising potential for deriving better data products based on data of different spatial scales and precisions. Our new approach employs a multi-objective (MO) parameter estimation scheme (called the MO scheme hereafter), rather than the conventional maximum likelihood scheme (called the ML scheme), to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not built solely on strict statistical assumptions about prediction and observation errors; rather, it directly associates the fused data of multiple scales with multiple objective functions in searching for the best parameter estimates for MKS through optimization. In the MO scheme, objective functions are defined to facilitate consistency among the fused data at multiple scales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS-fused precipitation performs better with the MO scheme than with the ML scheme. In particular, improvements over the ML scheme are significant for fused precipitation at fine spatial resolutions. This is mainly due to the MO scheme involving more criteria and constraints than the ML scheme. The weakness of the original ML scheme, which blindly puts more weight onto the data associated with finer resolutions, is overcome in our new approach.
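As a concrete reading of the MO scheme, the sketch below scores a candidate MKS parameter vector by both magnitude and spatial-pattern consistency between the fused field and each input dataset; the callable names, the two objectives and their combination are our assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import differential_evolution

def mo_cost(params, fuse, originals, weights=(1.0, 1.0)):
    """Illustrative multi-objective cost for MKS parameter estimation.
    `fuse(params)` returns the fused field; `originals` holds the input
    fields regridded to the fused resolution (a simplification)."""
    fused_field = fuse(params)
    mag_err = pat_err = 0.0
    for obs in originals:
        mag_err += np.mean((fused_field - obs) ** 2)            # magnitudes
        pat_err += 1.0 - np.corrcoef(fused_field.ravel(),
                                     obs.ravel())[0, 1]         # patterns
    return weights[0] * mag_err + weights[1] * pat_err

# A global optimizer stands in for whatever search the authors used:
# best = differential_evolution(mo_cost, bounds, args=(fuse, originals))
```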
NASA Astrophysics Data System (ADS)
Demuzere, Matthias; Harshan, Suraj; Järvi, Leena; Roth, Matthias; Betham Grimmond, Christine Susan; Masson, Valéry; Oleson, Keith; Velasco Saldana, Hector Erik; Wouters, Hendrik
2017-04-01
This paper provides the first comparative evaluation of four urban land surface models for a tropical residential neighbourhood in Singapore. The simulations are performed offline, for an 11-month period, using the bulk scheme TERRA_URB and three models of intermediate complexity (CLM, SURFEX and SUEWS). In addition, information from three different parameter lists is added to quantify the impact (interaction) of (between) external parameter settings and model formulations on the modelled urban energy balance components. Overall, the models' performance using the reference parameters aligns well with previous findings for mid- and high-latitude sites against (for) which the models are generally optimised (evaluated). The various combinations of models and different parameter values suggest that error statistics tend to be dominated more by the choice of the latter than by the choice of model. Stratifying the observation period into dry/wet periods and hours since selected precipitation events reveals that the models' skill generally deteriorates during dry periods, while e.g. CLM/SURFEX has a positive bias in the latent heat flux directly after a precipitation event. It is shown that the latter is due to the simple representation of water intercepted on the impervious surfaces. In addition, the positive bias in modelled outgoing longwave radiation is attributed to neglecting the interactions between water vapor and radiation between the surface and the tower sensor. These findings suggest that future developments in urban climate research should continue the integration of more physically-based processes in urban canopy models, ensure the consistency between the observed and modelled atmospheric properties, and focus on the correct representation of urban morphology and thermal and radiative characteristics.
Building a Smart Portal for Astronomy
NASA Astrophysics Data System (ADS)
Derriere, S.; Boch, T.
2011-07-01
The development of a portal for accessing astronomical resources is not an easy task. The ever-increasing complexity of the data products can result in very complex user interfaces, requiring a lot of effort and learning from the user in order to perform searches. This is often a design choice, where the user must explicitly set many constraints, while the portal search logic remains simple. We investigated a different approach, where the query interface is kept as simple as possible (ideally, a simple text field, like for Google search), and the search logic is made much more complex to interpret the query in a relevant manner. We will present the implications of this approach in terms of interpretation and categorization of the query parameters (related to astronomical vocabularies), translation (mapping) of these concepts into the portal components metadata, identification of query schemes and use cases matching the input parameters, and delivery of query results to the user.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
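The core idea, updating the design while the flow solution is still converging, can be written as a single loop; the callables and the fixed step rule below are illustrative assumptions rather than either of the paper's two schemes:

```python
import numpy as np

def simultaneous_design(flow_step, sensitivity, q0, p0,
                        n_iter=200, step=0.05):
    """Sketch of simultaneous updating: instead of fully converging the
    flow for every trial design (a costly inner-outer iteration), nudge
    the design parameters p after each flow iteration. `flow_step`
    advances the flow variables q one iteration for the current design;
    `sensitivity` estimates d(objective)/dp from the current,
    unconverged flow state."""
    q, p = np.asarray(q0, float), np.asarray(p0, float)
    for _ in range(n_iter):
        q = flow_step(q, p)               # one iteration of the flow solve
        p = p - step * sensitivity(q, p)  # gradient-style design update
    return q, p
```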
ERIC Educational Resources Information Center
Reindl, Marie-Sol; Waltz, Mitzi; Schippers, Alice
2016-01-01
This study focused on parent-initiated supported living schemes in the South of the Netherlands and the ability of these living schemes to enhance participation, choice, autonomy and self-advocacy for people with intellectual or developmental disabilities through personalized planning, support and care. Based on in-depth interviews with tenants,…
ff14ipq: A Self-Consistent Force Field for Condensed-Phase Simulations of Proteins
2015-01-01
We present the ff14ipq force field, implementing the previously published IPolQ charge set for simulations of complete proteins. Minor modifications to the charge derivation scheme and van der Waals interactions between polar atoms are introduced. Torsion parameters are developed through a generational learning approach, based on gas-phase MP2/cc-pVTZ single-point energies computed for structures optimized by the force field itself rather than the quantum benchmark. In this manner, we sacrifice information about the true quantum minima in order to ensure that the force field maintains optimal agreement with the MP2/cc-pVTZ benchmark for the ensembles it will actually produce in simulations. A means of making the gas-phase torsion parameters compatible with solution-phase IPolQ charges is presented. The ff14ipq model is an alternative to ff99SB and other Amber force fields for protein simulations in programs that accommodate pair-specific Lennard–Jones combining rules. The force field gives strong performance on α-helical and β-sheet oligopeptides as well as globular proteins over microsecond time scale simulations, although it has not yet been tested in conjunction with lipid and nucleic acid models. We show how our choices in parameter development influence the resulting force field and how other choices that may have appeared reasonable would actually have led to poorer results. The tools we developed may also aid in the development of future fixed-charge and even polarizable biomolecular force fields. PMID:25328495
The Testing Methods and Gender Differences in Multiple-Choice Assessment
NASA Astrophysics Data System (ADS)
Ng, Annie W. Y.; Chan, Alan H. S.
2009-10-01
This paper provides a comprehensive review of multiple-choice assessment over the past two decades to help people conduct effective testing in various subject areas. It was revealed that a variety of multiple-choice test methods, viz. conventional multiple-choice, liberal multiple-choice, elimination testing, confidence marking, probability testing, and order-of-preference schemes, are available for assessing subjects' knowledge and decision ability. However, the best multiple-choice test method has not yet been identified. The review also indicated that gender differences in multiple-choice task performance might be due to the test area, instruction/scoring condition, and item difficulty.
Revisiting low-fidelity two-fluid models for gas-solids transport
NASA Astrophysics Data System (ADS)
Adeleke, Najeem; Adewumi, Michael; Ityokumbul, Thaddeus
2016-08-01
Two-phase gas-solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors and food processing units. Systems automation and real time process optimization stand to benefit a great deal from the availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas-solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for modulus of elasticity. A new set of Roe-Pike averages are presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux-limiter. The model is evaluated by the prediction of experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately even under flow regimes involving counter-current flow.
ERIC Educational Resources Information Center
Campbell, Mark L.
2015-01-01
Multiple-choice exams, while widely used, are necessarily imprecise due to the contribution of the final student score due to guessing. This past year at the United States Naval Academy the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…
Pricing schemes for new drugs: a welfare analysis.
Levaggi, Rosella
2014-02-01
Drug price regulation is acquiring increasing significance in the investment choices of the pharmaceutical sector. The overall objective is to determine an optimal trade-off between the incentives for innovation, consumer protection, and value for money. However, price regulation is itself a source of distortion. In this study, we examine the welfare properties of listing through a bargaining process and value-based pricing schemes. The latter are superior instruments to uncertain listing processes for maximising total welfare, but the distribution of the benefits between consumers and the industry depends on the rate of rebate chosen by the regulator. However, through an appropriate choice, it is always possible to define a value-based pricing scheme with risk sharing, which both consumers and the industry prefer to an uncertain bargaining process. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Wang, Yuqing
2018-01-01
The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted moisture space. By transporting moist static energy from dry to moist region, the low-level circulation is important to convective self-aggregation which is believed to be related to genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions. In this case, the diabatic cooling can still drive the low-level circulation but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of the cumulus momentum transport (CMT) in a CP scheme can considerably suppress genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and horizontal distribution of precipitation are trivial, indicating that the CMT modulates the TCLVs/TCs activities in the model by mechanisms other than the horizontal transport of moist static energy.
Welfare Reform when Recipients Are Forward-Looking
ERIC Educational Resources Information Center
Swann, Christopher A.
2005-01-01
By studying recipients of aid under the Temporary Assistance for Needy Families (TANF) welfare scheme, the effect of time limits of welfare schemes on forward looking recipients is assessed using a discrete-choice dynamic programming framework model. The policy simulations for the preferred specification of utility reveal that two year time limits…
NASA Astrophysics Data System (ADS)
Zhai, Guoqing; Li, Xiaofan
2015-04-01
The Bergeron-Findeisen process has, in past decades, been simulated using parameterization schemes for the depositional growth of ice crystals with temperature-dependent, theoretically predicted parameters. Recently, Westbrook and Heymsfield (2011) calculated these parameters using the laboratory data from Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. Three schemes parameterize the depositional growth of ice crystals: Hsie et al. (1980), Krueger et al. (1995) and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments using these three parameterization schemes and the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated rainfall case. The analysis of the root-mean-squared difference and the correlation coefficient between the simulated and observed surface rain rates shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observed rain rate. The calculations of 5-day and model-domain mean rain rates reveal that the three schemes with Takahashi laboratory-derived parameters tend to reduce the mean rain rate. The Krueger scheme together with the Takahashi laboratory-derived parameters generates the mean rain rate closest to the observed mean. The decrease in the mean rain rate caused by the Takahashi laboratory-derived parameters in the experiment with the Krueger scheme is associated with reductions in the mean net condensation and the mean hydrometeor loss. These reductions correspond to suppressed mean infrared radiative cooling due to the enhanced cloud ice and snow in the upper troposphere.
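The two skill measures used to rank the scheme/parameter combinations are straightforward to compute; a minimal sketch (our variable names):

```python
import numpy as np

def rain_rate_skill(simulated, observed):
    """Root-mean-squared difference and correlation coefficient between
    simulated and observed surface rain-rate series, the two measures
    the study uses to compare experiments."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    rmsd = np.sqrt(np.mean((simulated - observed) ** 2))
    corr = np.corrcoef(simulated, observed)[0, 1]
    return rmsd, corr
```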
NASA Astrophysics Data System (ADS)
Wu, Xing-Gang; Shen, Jian-Ming; Du, Bo-Lun; Brodsky, Stanley J.
2018-05-01
As a basic requirement of renormalization group invariance, any physical observable must be independent of the choice of both the renormalization scheme and the initial renormalization scale. In this paper, we show that by using the newly suggested C-scheme coupling, one can demonstrate that the Principle of Maximum Conformality prediction is scheme-independent to all orders for any renormalization scheme, thus satisfying all of the conditions of renormalization group invariance. We illustrate these features for the nonsinglet Adler function and for τ decay to ν + hadrons at the four-loop level.
High brightness fully coherent x-ray amplifier seeded by a free-electron laser oscillator
NASA Astrophysics Data System (ADS)
Li, Kai; Yan, Jiawei; Feng, Chao; Zhang, Meng; Deng, Haixiao
2018-04-01
The x-ray free-electron laser oscillator (XFELO) is expected to be a cutting-edge tool for generating fully coherent x-ray laser pulses, and the undulator taper technique is well known for considerably increasing the efficiency of free-electron lasers (FELs). In order to combine the advantages of these two schemes, an FEL amplifier seeded by an XFELO is proposed, simply by using a chirped electron beam. With the right choice of beam parameters, the bunch tail lies within the gain bandwidth of the XFELO and lases to saturation, serving as the seed for further amplification. Meanwhile, the bunch head, which is outside the gain bandwidth of the XFELO, is preserved and used in the following FEL amplifier. It is found that the natural "double-horn" beam current, as well as the residual energy chirp from the chicane compressor, are quite suitable for the new scheme. Inheriting the advantages of XFELO seeding and undulator tapering, it is feasible to generate nearly terawatt-level, fully coherent x-ray pulses with unprecedented shot-to-shot stability, which might open up new scientific opportunities in various research fields.
Evaluating the accuracy performance of Lucas-Kanade algorithm in the circumstance of PIV application
NASA Astrophysics Data System (ADS)
Pan, Chong; Xue, Dong; Xu, Yang; Wang, JinJun; Wei, RunJie
2015-10-01
The Lucas-Kanade (LK) algorithm, usually used in the optical flow field, has recently received increasing attention from the PIV community due to its superior calculation efficiency under GPU acceleration. Although applications of this algorithm are continuously emerging, a systematic performance evaluation is still lacking. This forms the primary aim of the present work. Three warping schemes in the LK family (forward, inverse and symmetric warping) are evaluated in a prototype flow of a hierarchy of multiple two-dimensional vortices. Second-order Newton descent is also considered. The accuracy and efficiency of all these LK variants are investigated over a large domain of influential parameters. It is found that the constant-displacement constraint, which is a necessary building block for GPU acceleration, is the most critical issue affecting the LK algorithm's accuracy, and it can be somewhat ameliorated by using second-order Newton descent. Moreover, symmetric warping outperforms the other two warping schemes in accuracy, robustness to noise, convergence speed and tolerance to displacement gradient, and might be the first choice when applying the LK algorithm to PIV measurement.
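For reference, the constant-displacement constraint identified above as the accuracy bottleneck is visible in the basic LK step, which fits one displacement to a whole interrogation window by least squares. This is a textbook single-level, single-window sketch, not the warped, iterated variants evaluated in the paper:

```python
import numpy as np

def lk_displacement(img0, img1):
    """Single-window Lucas-Kanade step: assume one constant displacement
    over the whole window and solve the optical-flow constraint
    Ix*u + Iy*v + It = 0 in the least-squares sense."""
    Iy, Ix = np.gradient(img0.astype(float))      # spatial gradients
    It = img1.astype(float) - img0.astype(float)  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v  # displacement estimate in pixels
```

Forward, inverse and symmetric warping differ in which of the two images (or both, symmetrically) is resampled before this step is re-applied iteratively.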
Further Education and Training: A Comparison of Policy Models in Britain and Norway.
ERIC Educational Resources Information Center
Skinningsrud, Tone
1995-01-01
Compares public intervention schemes in Britain and Norway supporting participation of public educational institutions in the delivery of continuing labor force development and training. These schemes demonstrate that British policy is based on belief in free market principles, while Norwegian policy combines elements of consumer choice and legal…
Time as a Tool for Policy Analysis in Aging.
ERIC Educational Resources Information Center
Pastorello, Thomas
National policy makers have put forth different life cycle planning proposals for the more satisfying integration of education, work and leisure over the life course. This speech describes a decision-making scheme, the Time Paradigm, for research-based choice among various proposals. The scheme is defined in terms of a typology of time-related…
Kaufmann, Cornel; Schmid, Christian; Boes, Stefan
2017-09-01
The extent to which premium subsidies can influence health insurance choices is an open question. In this paper, we explore the regional variation in subsidy schemes in Switzerland, designed as either in-kind or cash transfers, to study their impact on the choice of health insurance deductibles. Using health survey data and a difference-in-differences methodology, we find that in-kind transfers increase the likelihood of choosing a low deductible plan by approximately 4 percentage points (or 7%). Our results indicate that the response to in-kind transfers is strongest among women, middle-aged and unmarried individuals, which we explain by differences in risk-taking behavior, health status, financial constraints, health insurance and financial literacy. We discuss our results in the light of potential extra-marginal effects on the demand for health care services, which are however not supported by our data. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deur, Alexandre; Shen, Jian -Ming; Wu, Xing -Gang
The Principle of Maximum Conformality (PMC) provides scale-fixed perturbative QCD predictions which are independent of the choice of the renormalization scheme, as well as the choice of the initial renormalization scale. In this article, we will test the PMC by comparing its predictions for the strong coupling $\alpha^s_{g_1}(Q)$, defined from the Bjorken sum rule, with predictions using conventional pQCD scale-setting. The two results are found to be compatible with each other and with the available experimental data. However, the PMC provides a significantly more precise determination, although its domain of applicability ($Q \gtrsim 1.5$ GeV) does not extend to as small values of momentum transfer as that of a conventional pQCD analysis ($Q \gtrsim 1$ GeV). In conclusion, we suggest that the PMC range of applicability could be improved by a modified intermediate scheme choice or using a single effective PMC scale.
Modelling and simulation of a dynamical system with the Atangana-Baleanu fractional derivative
NASA Astrophysics Data System (ADS)
Owolabi, Kolade M.
2018-01-01
In this paper, we model an ecological system consisting of a predator and two prey with the newly derived two-step fractional Adams-Bashforth method via the Atangana-Baleanu derivative in the Caputo sense. We analyze the dynamical system for the correct choice of parameter values that are biologically meaningful. The local analysis of the main model is based on the application of qualitative theory for ordinary differential equations. Using the fixed point theorem, we establish the existence and uniqueness of the solutions. Convergence results of the new scheme are verified in both space and time. Dynamical wave phenomena of solutions are verified via numerical results obtained for different values of the fractional index, which have some interesting ecological implications.
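To make the time-stepping concrete, here is the classical two-step Adams-Bashforth scheme applied to an illustrative predator/two-prey system; the interaction terms and parameters are our assumptions, and the paper's fractional (Atangana-Baleanu) variant additionally weights the history through a Mittag-Leffler memory kernel, which is omitted here:

```python
import numpy as np

def predator_two_prey(t, y, a=1.0, b=0.5, c=0.8, d=0.3, e=0.2):
    """Illustrative predator (z) and two-prey (x, w) interaction; the
    functional forms and parameter values are placeholders, not the
    model analyzed in the paper."""
    x, w, z = y
    return np.array([x * (1 - x) - a * x * z,
                     w * (1 - w) - b * w * z,
                     z * (c * x + d * w - e)])

def adams_bashforth2(f, y0, t0, t1, n):
    """Classical two-step Adams-Bashforth integrator with an Euler
    start-up step."""
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, float)
    f_prev = f(t, y)
    y = y + h * f_prev                     # start-up: forward Euler
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev, t = f_curr, t + h
    return y

# y_final = adams_bashforth2(predator_two_prey, [0.5, 0.5, 0.2], 0.0, 50.0, 5000)
```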
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, B., E-mail: badrul.alam@uniroma1.it; Veroli, A.; Benedetti, A.
2016-08-28
A structure featuring vertical directional coupling of long-range surface plasmon polaritons between strip waveguides at λ = 1.55 μm is investigated with the aim of producing efficient elements that enable optical multilayer routing for 3D photonics. We have introduced a practical computational method to calculate the interaction on the bent part. This method allows us both to assess the importance of the interaction in the bent part and to control it by a suitable choice of the fabrication parameters, which also helps to restrain effects due to fabrication issues. The scheme adopted here reduces the insertion losses compared with other planar and multilayer devices.
Hassan, Ahnaf Rashik; Bhuiyan, Mohammed Imamul Hassan
2016-09-15
Automatic sleep scoring is essential because conventionally a large volume of data has to be analyzed visually by physicians, which is onerous, time-consuming and error-prone. There is therefore a dire need for an automated sleep staging scheme. In this work, we decompose sleep-EEG signal segments using the tunable-Q factor wavelet transform (TQWT). Various spectral features are then computed from the TQWT sub-bands. The performance of spectral features in the TQWT domain has been determined by intuitive and graphical analyses, statistical validation, and Fisher criteria. A random forest is used to perform classification. Optimal choices and the effects of TQWT and random forest parameters have been determined and expounded. Experimental outcomes manifest the efficacy of our feature generation scheme in terms of p-values of ANOVA analysis and Fisher criteria. The proposed scheme yields 90.38%, 91.50%, 92.11%, 94.80% and 97.50% accuracy for 6-stage to 2-stage classification of sleep states on the benchmark Sleep-EDF data-set. In addition, its performance on the DREAMS Subjects data-set is also promising. The performance of the proposed method is significantly better than that of existing ones in terms of accuracy and Cohen's kappa coefficient. Additionally, the proposed scheme gives high detection accuracy for sleep stages non-REM 1 and REM. Spectral features in the TQWT domain can efficaciously discriminate sleep-EEG signals corresponding to various sleep states. The proposed scheme will alleviate the burden on physicians, speed up sleep disorder diagnosis, and expedite sleep research. Copyright © 2016 Elsevier B.V. All rights reserved.
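The pipeline structure (TQWT sub-bands, spectral features per band, random forest) can be sketched compactly. NumPy/SciPy ship no TQWT, so the decomposition below is a placeholder to be supplied, and the feature set is illustrative rather than the paper's exact list:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_features(sub_bands):
    """Simple spectral descriptors per sub-band (mean, spread and
    spectral entropy of the normalized power spectrum)."""
    feats = []
    for sb in sub_bands:
        psd = np.abs(np.fft.rfft(sb)) ** 2
        psd = psd / psd.sum()
        feats += [psd.mean(), psd.std(),
                  -(psd * np.log(psd + 1e-12)).sum()]
    return feats

def stage_sleep(segments, labels, tqwt_decompose):
    """`tqwt_decompose(segment)` must return the TQWT sub-bands; any
    TQWT implementation can be plugged in here."""
    X = np.array([spectral_features(tqwt_decompose(s)) for s in segments])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, labels)
```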
Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.
Glöckner, Andreas; Pachur, Thorsten
2012-04-01
In the behavioral sciences, a popular approach to describing and predicting behavior is cognitive modeling with adjustable parameters (i.e., parameters that can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework for modeling decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
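A minimal sketch of fitting CPT to one participant's choices follows. It uses the familiar power value function and a one-parameter weighting function, applies the weights to two-outcome gambles with a common rank-dependence simplification, and assumes a fixed softmax temperature; none of these choices is necessarily the implementation used in the study:

```python
import numpy as np
from scipy.optimize import minimize

def value(x, alpha, lam):
    """Power value function with loss aversion lam."""
    x = np.asarray(x, float)
    return np.where(x >= 0, x ** alpha, -lam * (-x) ** alpha)

def weight(p, gamma):
    """One-parameter probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def neg_log_lik(theta, gambles, choices, tau=1.0):
    """Softmax likelihood over pairs of two-outcome gambles, each coded
    as (x1, p, x2); using 1 - w(p) for the second outcome is a common
    simplification of rank-dependent weighting."""
    alpha, lam, gamma = theta
    ll = 0.0
    for (a, b), chose_a in zip(gambles, choices):
        va, vb = [weight(p, gamma) * value(x1, alpha, lam)
                  + (1 - weight(p, gamma)) * value(x2, alpha, lam)
                  for (x1, p, x2) in (a, b)]
        p_a = 1.0 / (1.0 + np.exp(-tau * (va - vb)))
        ll += np.log(p_a if chose_a else 1.0 - p_a)
    return -ll

# fit = minimize(neg_log_lik, x0=[0.8, 2.0, 0.7],
#                args=(gambles, choices), method="Nelder-Mead")
```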
NASA Astrophysics Data System (ADS)
Ginzburg, Irina
2016-02-01
In this Comment on the recent work (Zhu and Ma, 2013) [11] by Zhu and Ma (ZM) we first show that all three local gray Lattice Boltzmann (GLB) schemes in the form (Zhu and Ma, 2013) [11]: GS (Chen and Zhu, 2008; Gao and Sharma, 1994) [1,4], WBS (Walsh et al., 2009) [12] and ZM, fail to get constant Darcy's velocity in a series of porous blocks. This inconsistency is because of their incorrect definition of the macroscopic velocity in the presence of the heterogeneous momentum exchange, while the original WBS model (Walsh et al., 2009) [12] does this properly. We improve the GS and ZM schemes for this and other related deficiencies. Second, we show that the "discontinuous velocity" they recover on the stratified interfaces with their WBS scheme is inherent, in different degrees, to all LBE Brinkman schemes, including the ZM scheme. None of them guarantees the stress and the velocity continuity by their implicit interface conditions, even in the frame of the two-relaxation-times (TRT) collision operator where these two properties are assured in stratified Stokes flow, Ginzburg (2007) [5]. Third, the GLB schemes are presented in work (Zhu and Ma, 2013) [11] as alternatives to the direct, Brinkman-force based (BF) schemes (Freed, 1998; Nie and Martys, 2007) [3,8]. Yet, we show that the BF-TRT scheme (Ginzburg, 2008) [6] gets the solutions of any of the improved GLB schemes for a specific, viscosity-dependent choice of its one or two local relaxation rates. This provides the principal difference between the GLB and BF: while the BF may respect the linearity of the Stokes-Brinkman equation rigorously, the GLB-TRT cannot, unless it reduces to the BF via the inverse transform of the relaxation rates. Furthermore, we show that, in a limited parameter space, "gray" schemes may run one another. From the practical point of view, permeability values obtained with the GLB are viscosity-dependent, unlike with the BF. Finally, the GLB shares with the BF a so-called anisotropy (Ginzburg, 2008; Nie and Martys, 2007) [6,8], that is, a flow-direction dependency in their effective viscosity corrections, related to the discretized spatial variation of the resistance forcing.
Selected List of Low Energy Beam Transport Facilities for Light-Ion, High-Intensity Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prost, L. R.
This paper presents a list of Low Energy Beam Transport (LEBT) facilities for light-ion, high-intensity accelerators. It was put together to facilitate comparisons with the PXIE LEBT design choices. A short discussion regarding the importance of the beam perveance in the choice of the transport scheme follows.
An adaptive control scheme for a flexible manipulator
NASA Technical Reports Server (NTRS)
Yang, T. C.; Yang, J. C. S.; Kudva, P.
1987-01-01
The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
Long-range analysis of density fitting in extended systems
NASA Astrophysics Data System (ADS)
Varga, Štefan
The density fitting scheme is analyzed for the Coulomb problem in extended systems from the point of view of correct long-range behavior. We show that for the correct cancellation of divergent long-range Coulomb terms it is crucial for the density fitting scheme to reproduce the overlap matrix exactly. It is demonstrated that of all possible fitting metric choices, the Coulomb metric is the only one which inherently preserves the overlap matrix for infinite systems with translational periodicity. Moreover, we show that with a small additional effort any non-Coulomb metric fit can be made overlap-preserving as well. The problem is analyzed for both ordinary and Poisson basis set choices.
Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.
Duarte, Belmiro P M; Wong, Weng Kee
2015-08-01
This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
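The quadrature step, replacing the expectation over the prior in the Bayesian D-criterion with Gauss-Hermite nodes, is easy to isolate. The sketch below does this for a two-parameter logistic model with a normal prior on the slope only; the SDP formulation of the design search itself is not reproduced, and all names are ours:

```python
import numpy as np

def info_matrix(design, weights, beta0, beta1):
    """Fisher information of a two-parameter logistic model evaluated
    at the design points with the given design weights."""
    M = np.zeros((2, 2))
    for x, w in zip(design, weights):
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        f = np.array([1.0, x])
        M += w * p * (1 - p) * np.outer(f, f)
    return M

def bayesian_d_criterion(design, weights, mu, sigma, n_nodes=10):
    """Expected log-det information under a normal prior on the slope,
    via probabilists' Gauss-Hermite quadrature (a GQF); restricting the
    prior to one dimension is a deliberate simplification."""
    nodes, gh_w = np.polynomial.hermite_e.hermegauss(n_nodes)
    betas = mu + sigma * nodes
    vals = [np.linalg.slogdet(info_matrix(design, weights, 0.0, b))[1]
            for b in betas]
    return np.dot(gh_w, vals) / np.sqrt(2 * np.pi)
```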
Approximated adjusted fractional Bayes factors: A general method for testing informative hypotheses.
Gu, Xin; Mulder, Joris; Hoijtink, Herbert
2018-05-01
Informative hypotheses are increasingly being used in psychological sciences because they adequately capture researchers' theories and expectations. In the Bayesian framework, the evaluation of informative hypotheses often makes use of default Bayes factors such as the fractional Bayes factor. This paper approximates and adjusts the fractional Bayes factor such that it can be used to evaluate informative hypotheses in general statistical models. In the fractional Bayes factor a fraction parameter must be specified which controls the amount of information in the data used for specifying an implicit prior. The remaining fraction is used for testing the informative hypotheses. We discuss different choices of this parameter and present a scheme for setting it. Furthermore, a software package is described which computes the approximated adjusted fractional Bayes factor. Using this software package, psychological researchers can evaluate informative hypotheses by means of Bayes factors in an easy manner. Two empirical examples are used to illustrate the procedure. © 2017 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Chen, Wei-Guo; Wan, Xia; Wang, You-Kai
2018-05-01
A top quark mass measurement scheme near the $t\bar{t}$ production threshold in future $e^+e^-$ colliders, e.g. the Circular Electron Positron Collider (CEPC), is simulated. A $\chi^2$ fitting method is adopted to determine the number of energy points to be taken and their locations. Our results show that the optimal energy point is located near the largest slope of the cross section vs. beam energy plot, and the most efficient scheme is to concentrate all luminosity on this single energy point in the case of one-parameter top mass fitting. This suggests that the so-called data-driven method could be the best choice for future real experimental measurements. Conveniently, the top mass statistical uncertainty can also be calculated directly by the error matrix even without any sampling and fitting. The agreement of the above two optimization methods has been checked. Our conclusion is that by taking 50 fb$^{-1}$ of total effective integrated luminosity data, the statistical uncertainty of the top potential-subtracted mass can be suppressed to about 7 MeV and the total uncertainty is about 30 MeV. This precision will help to identify the stability of the electroweak vacuum at the Planck scale. Supported by National Science Foundation of China (11405102) and the Fundamental Research Funds for the Central Universities of China (GK201603027, GK201803019)
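The one-parameter $\chi^2$ fit is simple enough to sketch end-to-end; `xsec` stands in for the full threshold cross-section calculation, and the mass scan plus the Δχ² = 1 interval below are the generic recipe, not necessarily the authors' exact procedure:

```python
import numpy as np

def chi2(mt, energies, measured, errors, xsec):
    """chi^2 between measured threshold cross sections and a theory
    curve xsec(E, mt) with one free parameter, the top mass mt."""
    theory = np.array([xsec(E, mt) for E in energies])
    return np.sum(((measured - theory) / errors) ** 2)

def fit_mass(energies, measured, errors, xsec, scan):
    """Scan chi^2 over candidate masses; the statistical uncertainty is
    read off the chi^2 <= chi^2_min + 1 interval."""
    chi = np.array([chi2(m, energies, measured, errors, xsec) for m in scan])
    best = scan[np.argmin(chi)]
    within = scan[chi <= chi.min() + 1.0]
    return best, (within.min(), within.max())
```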
Computational Aeroacoustics by the Space-time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2001-01-01
In recent years, a new numerical methodology for conservation laws, the Space-Time Conservation Element and Solution Element Method (CE/SE), was developed by Dr. Chang of NASA Glenn Research Center and collaborators. By nature, the new method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its rigorous treatment of the fluxes and geometry, it is different from existing schemes. The CE/SE scheme features: (1) space and time treated on the same footing, with the integral equations of the conservation laws solved with second-order accuracy; (2) high resolution, low dispersion and low dissipation; (3) a novel, truly multi-dimensional, simple but effective non-reflecting boundary condition; (4) effortless implementation, with no numerical fix or parameter choice needed; and (5) robustness covering a wide spectrum of compressible flows, from weak linear acoustic waves to strong, discontinuous waves (shocks), appropriate for linear and nonlinear aeroacoustics. Currently, the CE/SE scheme has been developed to such a stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen as a prototype and is sketched in Section 2. Applications of the CE/SE scheme to linear aeroacoustics, nonlinear aeroacoustics and airframe noise are then depicted in Sections 3, 4, and 5 respectively to demonstrate its robustness and capability.
Lord, Anton; Ehrlich, Stefan; Borchardt, Viola; Geisler, Daniel; Seidel, Maria; Huber, Stefanie; Murr, Julia; Walter, Martin
2016-03-30
Network-based analyses of deviant brain function have become extremely popular in psychiatric neuroimaging. Underpinning brain network analyses is the selection of appropriate regions of interest (ROIs). Although ROI selection is fundamental in network analysis, its impact on detecting disease effects remains unclear. We investigated the impact of parcellation choice when comparing results from different studies. We investigated the effects of anatomical (AAL) and literature-based (Dosenbach) parcellation schemes on comparability of group differences in 35 female patients with anorexia nervosa and 35 age- and sex-matched healthy controls. Global and local network properties, including network-based statistics (NBS), were assessed on resting state functional magnetic resonance imaging data obtained at 3T. Parcellation schemes were comparably consistent on global network properties, while NBS and local metrics differed in location, but not metric type. Location of local metric alterations varied for AAL (parietal and cingulate cortices) versus Dosenbach (insula, thalamus) parcellation approaches. However, consistency was observed for the occipital cortex. Patient-specific global network properties can be robustly observed using different parcellation schemes, while graph metrics characterizing impairments of individual nodes vary considerably. Therefore, the impact of parcellation choice on specific group differences varies depending on the level of network organization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Güttler, I.
2012-04-01
Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave (SNS) energy fluxes are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and the other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of errors in albedo is primarily confined to north Africa, where e.g. the underestimation of albedo in JJA is consistent with the associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and to various parameters in PBL schemes is examined using an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs with mixed success over Europe when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over the whole domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of the mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work in customizing the PBL scheme are suggested.
Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.
2017-12-01
Hydraulic conductivity (K) is an important parameter affecting the flow of water through sediments under streams, and it can vary by orders of magnitude within a stream reach. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast sand (K=0.1 cm/s) as slow sand (K=0.01 cm/s) and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these lists to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error. Increasing the number of sampling points reduced error for the arithmetic mean with diminishing returns, but did not substantially reduce the error associated with the geometric and harmonic means.
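The comparison logic (sample under a weighted scheme, average three ways, measure deviation from Keq) can be sketched as follows; the sampler interface and names are our assumptions:

```python
import numpy as np
from scipy import stats

def scheme_error(grids, keq, sampler, n_samples=20, rng=None):
    """RMS deviation between three averages of sampled K values and the
    equivalent homogeneous Keq, over a set of heterogeneous grids.
    `sampler(grid, n, rng)` draws K values at locations chosen under
    one of the weighted probability distributions in the study."""
    rng = rng if rng is not None else np.random.default_rng(0)
    errs = {"arithmetic": [], "geometric": [], "harmonic": []}
    for grid, k in zip(grids, keq):
        s = sampler(grid, n_samples, rng)
        errs["arithmetic"].append(np.mean(s) - k)
        errs["geometric"].append(stats.gmean(s) - k)
        errs["harmonic"].append(stats.hmean(s) - k)
    return {m: float(np.sqrt(np.mean(np.square(e)))) for m, e in errs.items()}
```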
ERIC Educational Resources Information Center
Hassan, Nedim
2017-01-01
Background: Person-centred planning, which commonly becomes formalised within services for people with learning disabilities through an Essential Lifestyle Plan (ELP), was intended to help place the choices of individuals at the forefront of service provision. However, beyond UK government policy rhetoric, scholars have raised issues regarding the…
Consumer choice among Mutual Healthcare Purchasers: a feasible option for China?
Xu, Weiwei; van de Ven, Wynand P M M
2013-11-01
In its 2009 blueprint of healthcare reform, the Chinese government aimed to create a competitive health insurance market in order to increase efficiency in the health insurance sector. A major advantage of a competitive health insurance market is that insurers are stimulated to act as well-motivated prudent purchasers of healthcare on behalf of their enrolees, and that consumers can choose among these purchasers. To emphasize the insurers' role as purchasers of care we denote them, as well as other entities that can fulfil this role (e.g. fundholding community health centres), as 'Mutual Healthcare Purchasers' (MHPs). As feasible proposals for creating competition in China's health insurance sector have yet to be made, we suggest two potential approaches to create competition among MHPs: (1) separating finance and operation of social health insurance and allowing consumer choice among operators of social health insurance schemes; (2) allowing consumer choice among fund-holding community health centres. Although the benefits of competition are widely accepted in China, the problematic consequences of a free competitive health insurance market, especially in relation to affordability and accessibility, are generally neglected. To solve the problems of lack of affordability and inaccessibility that would occur in the case of unregulated competition among MHPs, at least the following regulations are proposed to the Chinese policy makers: a 'standard benefit package' for basic health insurance, a 'risk-equalization scheme', and 'open enrolment'. Potential obstacles for implementing a risk equalization scheme are examined based on theoretical arguments and international experiences. We conclude that allowing consumer choice among MHPs and implementing a risk equalization scheme in China is politically and technically complex. Therefore, the Chinese government should prepare carefully for a market-oriented reform in its healthcare sector and adopt a strategic approach in the implementation procedure. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.
Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem
2018-01-01
In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. log-sigmoid and Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared and found to be in close agreement with both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which attests to the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
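A minimal sketch of the trial-solution idea with a log-sigmoid basis follows; the boundary treatment, basis size, and the use of differential_evolution as a stand-in for the paper's GA + interior-point hybrid are all our assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def trial_solution(x, theta, y0=0.0):
    """Trial solution built from log-sigmoid basis functions with
    unknown parameters theta = (w_i, a_i, b_i); multiplying by x
    enforces the boundary value y(0) = y0 by construction."""
    w, a, b = np.split(np.asarray(theta, float), 3)
    basis = 1.0 / (1.0 + np.exp(-(np.outer(x, a) + b)))
    return y0 + x * (basis @ w)

def global_error(theta, x, residual):
    """Fitness: squared ODE residual summed over collocation points;
    `residual(x, y, dy)` encodes the heat-transfer equation. The
    derivative is approximated by central differences for brevity."""
    h = 1e-5
    y = trial_solution(x, theta)
    dy = (trial_solution(x + h, theta) - trial_solution(x - h, theta)) / (2 * h)
    return np.sum(residual(x, y, dy) ** 2)

# With 3 basis functions (9 parameters) on 25 collocation points:
# best = differential_evolution(global_error, [(-5, 5)] * 9,
#                               args=(np.linspace(0, 1, 25), residual))
```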
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors leading to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties in a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than partial atomic charge parameters in these systems although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any types of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
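The weighting idea can be sketched in a few lines: Boltzmann-like factors normalized by their sum (a partition function) concentrate the fit on low-energy configurations. The additional damping the authors introduce to curb overfitting is not reproduced here:

```python
import numpy as np

def fit_weights(target_energies, beta=1.0):
    """Boltzmann-style weights over ab initio PES points, normalized by
    a partition-function-like sum so weights are comparable across
    configuration sets; beta controls the discrimination power."""
    e = np.asarray(target_energies, float)
    boltz = np.exp(-beta * (e - e.min()))   # shift for numerical safety
    return boltz / boltz.sum()              # normalize by Z = sum of factors

def weighted_rmse(model_energies, target_energies, weights):
    """Objective for vdW parameter fitting under the above weights."""
    d = np.asarray(model_energies) - np.asarray(target_energies)
    return float(np.sqrt(np.sum(weights * d ** 2)))
```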
Building fast well-balanced two-stage numerical schemes for a model of two-phase flows
NASA Astrophysics Data System (ADS)
Thanh, Mai Duc
2014-06-01
We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an attractive property: they are fast and stable. Tests show that the method works for values of the combination parameter up to the CFL number, so any value of the parameter between zero and the CFL number is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special parameter values 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give a desirable accuracy. The errors and the CPU time of these schemes and of the Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
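The second-stage flux is a convex combination of two classical building blocks; the sketch below writes it for a scalar conservation law, with the parameter mapping taken from the abstract (a value of 0 recovers the Lax-Friedrichs-type flux) and the rest of the notation being illustrative, since the exact weighting convention in the paper may differ:

```python
def two_stage_flux(uL, uR, flux, dx, dt, theta):
    """Convex combination of a diffusive Lax-Friedrichs-type flux and a
    higher-order Richtmyer-type flux; theta in [0, CFL] selects the
    family member (theta = 0: Lax-Friedrichs-type, theta = CFL: FAST3)."""
    # Lax-Friedrichs-type interface flux
    lf = 0.5 * (flux(uL) + flux(uR)) - 0.5 * (dx / dt) * (uR - uL)
    # Richtmyer-type flux: predictor state, then flux of the predictor
    u_half = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (flux(uR) - flux(uL))
    rich = flux(u_half)
    return theta * rich + (1.0 - theta) * lf
```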
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
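A bare-bones SGP iteration looks as follows; the diagonal scaling and the simple diminishing steplength stand in for the paper's bounded scaling matrices and variable steplength rule:

```python
import numpy as np

def sgp(grad, project, scale, x0, n_iter=500, step0=1.0):
    """Scaled gradient projection sketch:
    x_{k+1} = P(x_k - a_k * D_k * grad f(x_k)), with P a projection onto
    the feasible set and D_k a (here diagonal) scaling matrix. The fixed
    diminishing steplength replaces the paper's steplength selection."""
    x = np.asarray(x0, float)
    for k in range(n_iter):
        g = grad(x)
        d = scale(x)                              # diagonal of D_k
        x = project(x - step0 / (k + 1) * d * g)  # scaled, projected step
    return x

# Example feasible set: the nonnegative orthant.
# project = lambda z: np.clip(z, 0.0, None)
```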
New Exoskeleton Arm Concept Design And Actuation For Haptic Interaction With Virtual Objects
NASA Astrophysics Data System (ADS)
Chakarov, D.; Veneva, I.; Tsveov, M.; Tiankov, T.
2014-12-01
This paper presents the conceptual design and actuation of a new upper-limb exoskeleton. The device is designed for applications where both motion tracking and force feedback are required, such as human interaction with virtual environments or rehabilitation tasks. A mechanical structure kinematically equivalent to the structure of the human arm is chosen. An actuation system based on braided pneumatic muscle actuators is selected. An antagonistic drive system for each joint is shown, using pulley and cable transmissions. Force/displacement diagrams of two antagonistically acting muscles are presented. Kinematic and dynamic estimations are performed for the combined exoskeleton and upper-limb system. The selected parameters ensure joint torque regulation in the antagonistic scheme and coverage of the human arm's range of motion.
Variability metrics in Josephson Junction fabrication for Quantum Computing circuits
NASA Astrophysics Data System (ADS)
Rosenblatt, Sami; Hertzberg, Jared; Brink, Markus; Chow, Jerry; Gambetta, Jay; Leng, Zhaoqi; Houck, Andrew; Nelson, J. J.; Plourde, Britton; Wu, Xian; Lake, Russell; Shainline, Jeff; Pappas, David; Patel, Umeshkumar; McDermott, Robert
Multi-qubit gates depend on the relative frequencies of the qubits. To reliably build multi-qubit devices therefore requires careful fabrication of Josephson junctions in order to precisely set their critical currents. The Ambegaokar-Baratoff relation between tunnel conductance and critical current implies a correlation between qubit frequency spread and tunnel junction resistance spread. Here we discuss measurement of large numbers of tunnel junctions to assess these resistance spreads, which can exceed 5% of mean resistance. With the goal of minimizing these spreads, we investigate process parameters such as lithographic junction area, evaporation and masking scheme, oxidation conditions, and substrate choice, as well as test environment, design and setup. In addition, trends of junction resistance with temperature are compared with theoretical models for further insights into process and test variability.
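For context, the Ambegaokar-Baratoff relation referred to above has the standard textbook form (stated here from general superconductivity theory, not taken from this abstract):

```latex
I_c R_n \;=\; \frac{\pi\,\Delta(T)}{2e}\,
        \tanh\!\left(\frac{\Delta(T)}{2 k_B T}\right)
\;\xrightarrow{\;T \to 0\;}\; \frac{\pi\,\Delta(0)}{2e}
```

Since the transmon frequency scales roughly as $f_{01} \propto \sqrt{E_J} \propto \sqrt{I_c}$, a fractional spread in room-temperature junction resistance $R_n$ maps onto roughly half that fractional spread in qubit frequency, which is why resistance spreads exceeding 5% matter for multi-qubit devices.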
Constraining the loop quantum gravity parameter space from phenomenology
NASA Astrophysics Data System (ADS)
Brahma, Suddhasattwa; Ronco, Michele
2018-03-01
Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.
Kontopantelis, Evangelos; Buchan, Iain; Reeves, David; Checkland, Kath; Doran, Tim
2013-08-02
Objectives: To investigate the relationship between performance on the UK Quality and Outcomes Framework pay-for-performance scheme and choice of clinical computer system. Design: Retrospective longitudinal study. Setting: Data for 2007-2008 to 2010-2011, extracted from the clinical computer systems of general practices in England. Participants: All English practices participating in the pay-for-performance scheme: average 8257 each year, covering over 99% of the English population registered with a general practice. Main outcome measures: Levels of achievement on 62 quality-of-care indicators, measured as: reported achievement (levels of care after excluding inappropriate patients); population achievement (levels of care for all patients with the relevant condition) and percentage of available quality points attained. Multilevel mixed effects multiple linear regression models were used to identify population, practice and clinical computing system predictors of achievement. Results: Seven clinical computer systems were consistently active in the study period, collectively holding approximately 99% of the market share. Of all population and practice characteristics assessed, choice of clinical computing system was the strongest predictor of performance across all three outcome measures. Differences between systems were greatest for intermediate outcomes indicators (eg, control of cholesterol levels). Conclusions: Under the UK's pay-for-performance scheme, differences in practice performance were associated with the choice of clinical computing system. This raises the question of whether particular system characteristics facilitate higher quality of care, better data recording or both. Inconsistencies across systems need to be understood and addressed, and researchers need to be cautious when generalising findings from samples of providers using a single computing system.
Effective Fragment Potential Method for H-Bonding: How To Obtain Parameters for Nonrigid Fragments.
Dubinets, Nikita; Slipchenko, Lyudmila V
2017-07-20
Accuracy of the effective fragment potential (EFP) method was explored for describing intermolecular interaction energies in three dimers with strong H-bonded interactions, formic acid, formamide, and formamidine dimers, which are a part of HBC6 database of noncovalent interactions. Monomer geometries in these dimers change significantly as a function of intermonomer separation. Several EFP schemes were considered, in which fragment parameters were prepared for a fragment in its gas-phase geometry or recomputed for each unique fragment geometry. Additionally, a scheme in which gas-phase fragment parameters are shifted according to relaxed fragment geometries is introduced and tested. EFP data are compared against the coupled cluster with single, double, and perturbative triple excitations (CCSD(T)) method in a complete basis set (CBS) and the symmetry adapted perturbation theory (SAPT). All considered EFP schemes provide a good agreement with CCSD(T)/CBS for binding energies at equilibrium separations, with discrepancies not exceeding 2 kcal/mol. However, only the schemes that utilize relaxed fragment geometries remain qualitatively correct at shorter than equilibrium intermolecular distances. The EFP scheme with shifted parameters behaves quantitatively similar to the scheme in which parameters are recomputed for each monomer geometry and thus is recommended as a computationally efficient approach for large-scale EFP simulations of flexible systems.
Optimizing the choice of spin-squeezed states for detecting and characterizing quantum processes
Rozema, Lee A.; Mahler, Dylan H.; Blume-Kohout, Robin; ...
2014-11-07
Quantum metrology uses quantum states with no classical counterpart to measure a physical quantity with extraordinary sensitivity or precision. Most such schemes characterize a dynamical process by probing it with a specially designed quantum state. The success of such a scheme usually relies on the process belonging to a particular one-parameter family. If this assumption is violated, or if the goal is to measure more than one parameter, a different quantum state may perform better. In the most extreme case, we know nothing about the process and wish to learn everything. This requires quantum process tomography, which demands an informationally complete set of probe states. It is very convenient if this set is group covariant—i.e., each element is generated by applying an element of the quantum system's natural symmetry group to a single fixed fiducial state. In this paper, we consider metrology with 2-photon ("biphoton") states and report experimental studies of different states' sensitivity to small, unknown collective SU(2) rotations ["SU(2) jitter"]. Maximally entangled N00N states are the most sensitive detectors of such a rotation, yet they are also among the worst at fully characterizing an a priori unknown process. We identify (and confirm experimentally) the best SU(2)-covariant set for process tomography; these states are all less entangled than the N00N state, and are characterized by the fact that they form a 2-design.
NASA Astrophysics Data System (ADS)
dos Santos, A. F.; Freitas, S. R.; de Mattos, J. G. Z.; de Campos Velho, H. F.; Gan, M. A.; da Luz, E. F. P.; Grell, G. A.
2013-09-01
In this paper we consider an optimization problem applying the metaheuristic Firefly algorithm (FY) to weight an ensemble of rainfall forecasts from daily precipitation simulations with the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) over South America during January 2006. The method is addressed as a parameter estimation problem to weight the ensemble of precipitation forecasts carried out using different options of the convective parameterization scheme. Ensemble simulations were performed using different choices of closures, representing different formulations of dynamic control (the modulation of convection by the environment) in a deep convection scheme. The optimization problem is solved as an inverse problem of parameter estimation. The application and validation of the methodology is carried out using daily precipitation fields, defined over South America and obtained by merging remote sensing estimations with rain gauge observations. The quadratic difference between the model and observed data was used as the objective function to determine the best combination of the ensemble members to reproduce the observations. To reduce the model rainfall biases, the set of weights determined by the algorithm is used to weight members of an ensemble of model simulations in order to compute a new precipitation field that represents the observed precipitation as closely as possible. The validation of the methodology is carried out using classical statistical scores. The algorithm has produced the best combination of the weights, resulting in a new precipitation field closest to the observations.
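At its core, the objective described above is a quadratic misfit between a weighted ensemble of precipitation fields and the observed field. A minimal sketch follows; the array shapes and the normalization to a convex combination are illustrative assumptions, and the firefly search itself is not reproduced.

```python
import numpy as np

# P: (n_members, n_gridpoints) forecast precipitation from the ensemble members;
# obs: merged satellite/rain-gauge precipitation on the same grid.
def objective(weights, P, obs):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # keep the blend a convex combination
    blended = w @ P                       # weighted ensemble precipitation field
    return np.sum((blended - obs) ** 2)   # quadratic model-observation misfit
```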
Implications of the principle of maximum conformality for the QCD strong coupling
Deur, Alexandre; Shen, Jian-Ming; Wu, Xing-Gang; ...
2017-08-14
The Principle of Maximum Conformality (PMC) provides scale-fixed perturbative QCD predictions which are independent of the choice of the renormalization scheme, as well as the choice of the initial renormalization scale. In this article, we will test the PMC by comparing its predictions for the strong coupling $\alpha^s_{g_1}(Q)$, defined from the Bjorken sum rule, with predictions using conventional pQCD scale-setting. The two results are found to be compatible with each other and with the available experimental data. However, the PMC provides a significantly more precise determination, although its domain of applicability ($Q \gtrsim 1.5$ GeV) does not extend to as small values of momentum transfer as that of a conventional pQCD analysis ($Q \gtrsim 1$ GeV). In conclusion, we suggest that the PMC range of applicability could be improved by a modified intermediate scheme choice or using a single effective PMC scale.
Incentives and children's dietary choices: A field experiment in primary schools.
Belot, Michèle; James, Jonathan; Nolen, Patrick
2016-12-01
We conduct a field experiment in 31 primary schools in England to test the effectiveness of different temporary incentives on increasing choice and consumption of fruit and vegetables at lunchtime. In each treatment, pupils received a sticker for choosing a fruit or vegetable at lunch. They were eligible for an additional reward at the end of the week depending on the number of stickers accumulated, either individually (individual scheme) or in comparison to others (competition). Overall, we find no significant effect of the individual scheme, but positive effects of competition. For children who had margin to increase their consumption, competition increases choice of fruit and vegetables by 33% and consumption by 48%. These positive effects generally carry over to the week immediately following the treatment, but are not sustained six months later. We also find large differences in effectiveness across demographic characteristics such as age and gender. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Rizk, Magdi H.
1988-01-01
A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.
Externalities and School Enrollment Policy: A Supply-Side Analysis of School Choice in New Zealand
ERIC Educational Resources Information Center
Thomson, Kat Sonia
2010-01-01
This article is an in-progress examination of the current landscape of school choice in a well-known case of universal decentralization: New Zealand's public school system. Using a supply-side analysis of the implications of a specific policy--school enrollment schemes--this author seeks to test hypotheses about zoning and self-preservation using…
A Secure ECC-based RFID Mutual Authentication Protocol to Enhance Patient Medication Safety.
Jin, Chunhua; Xu, Chunxiang; Zhang, Xiaojun; Li, Fagen
2016-01-01
Patient medication safety is an important issue in patient medication systems. In order to prevent medication errors, integrating Radio Frequency Identification (RFID) technology into automated patient medication systems is required in hospitals. Based on RFID technology, such systems can provide medical evidence for patients' prescriptions and medicine doses, etc. Because it provides mutual authentication between the medication server and the tag, an RFID authentication scheme is the best choice for automated patient medication systems. In this paper, we present an RFID mutual authentication scheme based on elliptic curve cryptography (ECC) to enhance patient medication safety. Our scheme achieves the required security properties and overcomes various attacks present in other schemes. In addition, our scheme has better performance in terms of computational cost and communication overhead. Therefore, the proposed scheme is well suited for patient medication systems.
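The paper's protocol is not reproduced here, but the toy sketch below shows the kind of ECC primitive such schemes build on: double-and-add scalar multiplication on a tiny textbook curve (y^2 = x^3 + 2x + 2 over F_17, group order 19) plus a Schnorr-style challenge-response in which the tag proves knowledge of its secret key. All key values are hypothetical, and a curve this small offers no real security.

```python
import secrets

P_MOD, A, N = 17, 2, 19        # field prime, curve coefficient a, group order
G = (5, 1)                     # generator point on y^2 = x^3 + 2x + 2 (mod 17)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % P_MOD == 0:
        return None            # point at infinity
    if P == Q:
        lam = (3 * P[0] ** 2 + A) * pow(2 * P[1], -1, P_MOD) % P_MOD
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, P_MOD) % P_MOD
    x = (lam ** 2 - P[0] - Q[0]) % P_MOD
    return (x, (lam * (P[0] - x) - P[1]) % P_MOD)

def mul(k, P):
    R = None
    while k:                   # double-and-add scalar multiplication
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

k = 7; K = mul(k, G)                             # tag key pair (hypothetical)
r = secrets.randbelow(N - 1) + 1; R = mul(r, G)  # tag commitment R = rG
c = secrets.randbelow(N - 1) + 1                 # server challenge
s = (r + c * k) % N                              # tag response
assert mul(s, G) == add(R, mul(c, K))            # server check: sG = R + cK
```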
Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements
NASA Technical Reports Server (NTRS)
Lyatsky, W.; Khazanov, G. V.
2007-01-01
Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.
Counterfactual distribution of Schrödinger cat states
NASA Astrophysics Data System (ADS)
Shenoy-Hejamadi, Akshata; Srikanth, R.
2015-12-01
In the counterfactual cryptography scheme proposed by Noh, the sender Alice probabilistically transmits classical information to the receiver Bob without the physical travel of a particle. Here we generalize this idea to the distribution of quantum entanglement. The key insight is to replace their classical input choices with quantum superpositions. We further show that the scheme can be generalized to counterfactually distribute multipartite cat states.
Mirocha, Jeffrey D.; Churchfield, Matthew J.; Munoz-Esparza, Domingo; ...
2017-08-28
Here, the sensitivities of idealized Large-Eddy Simulations (LES) to variations of model configuration and forcing parameters on quantities of interest to wind power applications are examined. Simulated wind speed, turbulent fluxes, spectra and cospectra are assessed in relation to variations of two physical factors, geostrophic wind speed and surface roughness length, and several model configuration choices, including mesh size and grid aspect ratio, turbulence model, and numerical discretization schemes, in three different code bases. Two case studies representing nearly steady neutral and convective atmospheric boundary layer (ABL) flow conditions over nearly flat and homogeneous terrain were used to force and assess idealized LES, using periodic lateral boundary conditions. Comparison with fast-response velocity measurements at five heights within the lowest 50 m indicates that most model configurations performed similarly overall, with differences between observed and predicted wind speed generally smaller than measurement variability. Simulations of convective conditions produced turbulence quantities and spectra that matched the observations well, while those of neutral simulations produced good predictions of stress, but smaller than observed magnitudes of turbulence kinetic energy, likely due to tower wakes influencing the measurements. While sensitivities to model configuration choices and variability in forcing can be considerable, idealized LES are shown to reliably reproduce quantities of interest to wind energy applications within the lower ABL during quasi-ideal, nearly steady neutral and convective conditions over nearly flat and homogeneous terrain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson-Sellers, A.
Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.
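A minimal sketch of a two-level full factorial sensitivity experiment of this kind is shown below; `model` stands in for an off-line run of the land-surface scheme, and the +/-1 coding of factors follows the usual factorial-design convention.

```python
import numpy as np
from itertools import product

# Run the model at every +/- combination of the factors and estimate the main
# effect of each factor and one two-factor interaction (requires >= 2 factors).
def factorial_effects(model, n_factors):
    designs = np.array(list(product([-1.0, 1.0], repeat=n_factors)))
    y = np.array([model(d) for d in designs])
    main = 2.0 * designs.T @ y / len(y)                            # main effects
    inter_01 = 2.0 * (designs[:, 0] * designs[:, 1]) @ y / len(y)  # factors 0 x 1
    return main, inter_01
```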
Organizing and Typing Persistent Objects Within an Object-Oriented Framework
NASA Technical Reports Server (NTRS)
Madany, Peter W.; Campbell, Roy H.
1991-01-01
Conventional operating systems provide little or no direct support for the services required for an efficient persistent object system implementation. We have built a persistent object scheme using a customization and extension of an object-oriented operating system called Choices. Choices includes a framework for the storage of persistent data that is suited to the construction of both conventional file systems and persistent object systems. In this paper we describe three areas in which persistent object support differs from file system support: storage organization, storage management, and typing. Persistent object systems must support various sizes of objects efficiently. Customizable containers, which are themselves persistent objects and can be nested, support a wide range of object sizes in Choices. Collections of persistent objects that are accessed as an aggregate, and collections of light-weight persistent objects, can be clustered in containers that are nested within containers for larger objects. Automated garbage collection schemes are added to storage management and have a major impact on persistent object applications. The Choices persistent object store provides extensible sets of persistent object types. The store contains not only the data for persistent objects but also the names of the classes to which they belong and the code for the operations of the classes. Besides presenting persistent object storage organization, storage management, and typing, this paper discusses how persistent objects are named and used within the Choices persistent data/file system framework.
Study on the design schemes of the air-conditioning system in a gymnasium
NASA Astrophysics Data System (ADS)
Zhang, Yujin; Wu, Xinwei; Zhang, Jing; Pan, Zhixin
2017-08-01
To design the air-conditioning system of a gymnasium successfully, the cooling and heating source schemes are studied in depth by analyzing the surrounding environment and energy conditions of the project, together with the initial investment and operating costs; this analysis indicates that an air-source heat pump air-conditioning system is the best choice for the project. The indoor air-conditioning schemes are also studied systematically, and the air-conditioning scheme of each area is optimized. Year-round operating conditions are taken into account, and indoor air quality and energy savings are ensured by the optimized design schemes, which provide a reference for air-conditioning system design in similar buildings.
Embedded WENO: A design strategy to improve existing WENO schemes
NASA Astrophysics Data System (ADS)
van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.
2017-02-01
Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be genuine improvements over their standard counterparts by several numerical examples. All the embedded methods presented incur no added computational effort compared to their standard counterparts.
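As a point of reference for what the embedded schemes improve upon, here is a minimal sketch of the classical fifth-order WENO-JS reconstruction of Jiang and Shu [1] at the interface x_{i+1/2} (standard formulas; the embedded variants alter how the substencil weights are combined).

```python
import numpy as np

def weno5(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    # third-order candidate reconstructions on the three substencils
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
    p1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
    p2 = (2*f0 + 5*fp1 - fp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    # nonlinear weights from the ideal weights d = (1/10, 6/10, 3/10)
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```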
Validation of a new reference depletion calculation for thermal reactors
NASA Astrophysics Data System (ADS)
Canbakan, Axel
Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Although their aim is to correct the cross-section deviations, they introduce a non-negligible error in evaluated parameters such as the flux. Until now, French studies for light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase of computing capacities, this method is starting to show its limits in precision and can be replaced by a subgroup method. Originally used for fast neutron reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to provide a validation of the subgroup method that is as precise as possible, first without burnup and then with an isotopic depletion study. In the end, users interested in implementing a subgroup method in their scheme for Pressurized Water Reactors can rely on this thesis to justify their modeling choices. Moreover, other parameters are validated in order to propose a new reference scheme with fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a Method Of Characteristics (MOC) flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, this thesis suggests the most suitable energy discretization for a subgroup method. Finally, other techniques, such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation, are studied. A DRAGON5 scheme is also validated, as it offers interesting elements: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first is to offer a new reference lattice scheme for Pressurized Water Reactors to DRAGON5 users. The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and the use of a MOC-based flux calculation in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in each geometrical region gives better results than MOC with a constant source model, and a discretization of the moderator is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2D full-core calculations; (4) SHEM295 is viable with a Subgroup Projection Method for DRAGON5.
ERIC Educational Resources Information Center
Perreault, Jean M., Ed.
Several factors are involved in the decision to reclassify library collections and several problems and choices must be faced. The discussion of four classification schemes (Dewey Decimal, Library of Congress, Library of Congress subject-headings and Universal Decimal Classification) involved in the choices concerns their structure, currency,…
Crystal field parameters and energy levels scheme of trivalent chromium doped BSO
NASA Astrophysics Data System (ADS)
Petkova, P.; Andreici, E.-L.; Avram, N. M.
2014-11-01
The aim of this paper is to give an analysis of the crystal field parameters and energy level schemes for the above doped material, in order to give a reliable explanation of the experimental data. The crystal field parameters have been modeled in the framework of the Exchange Charge Model (ECM) of crystal field theory, taking into account the geometry of the systems, with the actual site symmetry of the impurity ions. The effects of the ligand charges and of the covalent bonding between the chromium cation and the oxygen anions, in the cluster approach, were also taken into account. With the obtained values of the crystal field parameters we simulated the scheme of energy levels of the chromium ions by diagonalizing the matrix of the Hamiltonian of the doped crystal. The obtained energy levels and the estimated Racah parameters B and C were compared with the experimental spectroscopic data and discussed. Comparison with experiment shows that the results are quite satisfactory, which justifies the model and the simulation scheme used for the title system.
A New Paradigm for Satellite Retrieval of Hydrologic Variables: The CDRD Methodology
NASA Astrophysics Data System (ADS)
Smith, E. A.; Mugnai, A.; Tripoli, G. J.
2009-09-01
Historically, retrieval of thermodynamically active geophysical variables in the atmosphere (e.g., temperature, moisture, precipitation) involved some type of inversion scheme - embedded within the retrieval algorithm - to transform radiometric observations (a vector) to the desired geophysical parameter(s) (either a scalar or a vector). Inversion is fundamentally a mathematical operation involving some type of integral-differential radiative transfer equation - often resisting a straightforward algebraic solution - in which the integral side of the equation (typically the right-hand side) contains the desired geophysical vector, while the left-hand side contains the radiative measurement vector often free of operators. Inversion was considered more desirable than forward modeling because the forward model solution had to be selected from a generally unmanageable set of parameter-observation relationships. However, in the classical inversion problem for retrieval of temperature using multiple radiative frequencies along the wing of an absorption band (or line) of a well-mixed radiatively active gas, in either the infrared or microwave spectrums, the inversion equation to be solved consists of a Fredholm integral equation of the 2nd kind - a specific type of transform problem in which there are an infinite number of solutions. This meant that special treatment of the transform process was required in order to obtain a single solution. Inversion had become the method of choice for retrieval in the 1950s because it appealed to mathematical elegance, and because the numerical approaches used to solve the problems (typically some type of relaxation or perturbation scheme) were computationally fast in an age when computer speeds were slow. Like many solution schemes, inversion has lingered on regardless of the fact that computer speeds have increased many orders of magnitude and forward modeling itself has become far more elegant in combination with Bayesian averaging procedures given that the a priori probabilities of occurrence in the true environment of the parameter(s) in question can be approximated (or are actually known). In this presentation, the theory of the more modern retrieval approach using a combination of cloud, radiation and other specialized forward models in conjunction with Bayesian weighted averaging will be reviewed in light of a brief history of inversion. The application of the theory will be cast in the framework of what we call the Cloud-Dynamics-Radiation-Database (CDRD) methodology - which we now use for the retrieval of precipitation from spaceborne passive microwave radiometers. In a companion presentation, we will specifically describe the CDRD methodology and present results for its application within the Mediterranean basin.
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the parameters obtained are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
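A minimal sketch of the scheme V back-fitting loop, with `calibrate_hydro`, `simulate` and `fit_ar` as hypothetical stand-ins for the actual calibration, hydrological-model and AR-fitting routines (the returned AR model is assumed to expose a `predict` method):

```python
# Alternately recalibrate the hydrological model on AR-corrected targets and
# refit the AR error model on the current hydrological residuals.
def backfit(calibrate_hydro, simulate, fit_ar, q_obs, n_rounds=10):
    ar_correction = 0.0                  # start with no error-model correction
    theta, ar_model = None, None
    for _ in range(n_rounds):
        theta = calibrate_hydro(q_obs - ar_correction)  # hydrological parameters
        residuals = q_obs - simulate(theta)             # current model errors
        ar_model = fit_ar(residuals)                    # AR error-model parameters
        ar_correction = ar_model.predict(residuals)     # updated error estimate
    return theta, ar_model
```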
Race, Alan M; Bunch, Josephine
2015-03-01
The choice of colour scheme used to present data can have a dramatic effect on the perceived structure present within the data. This is of particular significance in mass spectrometry imaging (MSI), where ion images that provide 2D distributions of a wide range of analytes are used to draw conclusions about the observed system. Commonly employed colour schemes are generally suboptimal for providing an accurate representation of the maximum amount of data. Rainbow-based colour schemes are extremely popular within the community, but they introduce well-documented artefacts which can be actively misleading in the interpretation of the data. In this article, we consider the suitability of colour schemes and composite image formation found in MSI literature in the context of human colour perception. We also discuss recommendations of rules for colour scheme selection for ion composites and multivariate analysis techniques such as principal component analysis (PCA).
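The practical upshot for single-ion images can be shown in a few lines of matplotlib, contrasting a rainbow map with a perceptually uniform one (the random array is only a stand-in for a real ion image):

```python
import numpy as np
import matplotlib.pyplot as plt

ion_image = np.random.rand(64, 64)   # placeholder for an MSI ion intensity image
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(ion_image, cmap='jet')        # rainbow map: artefact-prone
ax1.set_title('rainbow (jet)')
ax2.imshow(ion_image, cmap='viridis')    # monotonic-luminance alternative
ax2.set_title('perceptually uniform (viridis)')
plt.show()
```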
The best prostate biopsy scheme is dictated by the gland volume: a monocentric study.
Dell'Atti, L
2015-08-01
The accuracy of a biopsy scheme depends on several parameters. Prostate-specific antigen (PSA) level and digital rectal examination (DRE) influence the detection rate and suggest the biopsy scheme with which to approach each patient. Another parameter is the prostate volume: sampling accuracy tends to decrease progressively with increasing prostate volume. We prospectively observed the cancer detection rate in patients with suspected prostate cancer (PCa) and improved it by applying a biopsy protocol according to prostate volume (PV). Clinical data and pathological features of 1356 patients were analysed and included in this study. The protocol is a combined scheme comprising transrectal (TR) 12-core PBx (TR12PBx) for PV ≤ 30 cc, TR 14-core PBx (TR14PBx) for PV > 30 cc but < 60 cc, and TR 18-core PBx (TR18PBx) for PV ≥ 60 cc. Out of a total of 1356 patients, PCa was identified in 111 (8.2%) through the TR12PBx scheme, in 198 (14.6%) through the TR14PBx scheme and in 253 (18.6%) through the TR18PBx scheme. The PCa detection rate was increased by 44% by adding two TZ cores (TR14PBx scheme). The TR18PBx scheme increased this rate by 21.7% vs. the TR14PBx scheme. The diagnostic yield offered by TR18PBx was statistically significant compared to the detection rate offered by the TR14PBx scheme (p < 0.003). The biopsy Gleason score and the percentage of core involvement were comparable between PCa detected by the TR14PBx scheme and those detected by the TR18PBx scheme (p = 0.362). In our opinion, the PV parameter alone can be decisive in choosing the best biopsy scheme for a first setting of biopsies, increasing the PCa detection rate.
An efficient and provable secure revocable identity-based encryption scheme.
Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia
2014-01-01
Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameters size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
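The complete subtree method used above admits a compact illustration. In the sketch below, users sit at the leaves of a complete binary tree (heap indexing, root = 1), and the published key updates correspond to the subtree roots hanging off the Steiner tree of the revoked leaves; this is the generic cover computation, not the paper's full RIBE construction.

```python
def cs_cover(n_leaves, revoked):
    # n_leaves must be a power of two; leaves occupy indices n_leaves..2*n_leaves-1
    steiner = set()
    for leaf in revoked:
        node = n_leaves + leaf
        while node >= 1 and node not in steiner:
            steiner.add(node)         # walk up to the root, marking the path
            node //= 2
    if not steiner:
        return [1]                    # nobody revoked: the root covers everyone
    cover = []
    for node in steiner:
        if 2 * node < 2 * n_leaves:   # internal node: inspect its two children
            for child in (2 * node, 2 * node + 1):
                if child not in steiner:
                    cover.append(child)
    return cover

# Example: 8 users, users 2 and 5 revoked -> cover subtrees {4, 7, 11, 12}.
print(sorted(cs_cover(8, [2, 5])))
```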
Design of algorithms for a dispersive hyperbolic problem
NASA Technical Reports Server (NTRS)
Roe, Philip L.; Arora, Mohit
1991-01-01
In order to develop numerical schemes for stiff problems, a model of relaxing heat flow is studied. To isolate those errors unavoidably associated with discretization, a method of characteristics is developed, containing three free parameters depending on the stiffness ratio. It is shown that such 'decoupled' schemes do not take into account the interaction between the wave families, and hence result in incorrect wavespeeds. Schemes can differ by up to two orders of magnitude in their rms errors, even while maintaining second-order accuracy. 'Coupled' schemes which account for the interactions are developed to obtain two additional free parameters. Numerical results are given for several decoupled and coupled schemes.
NASA Astrophysics Data System (ADS)
Han, Y.; Misra, S.
2018-04-01
Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to three orders of magnitude of the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
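A minimal sketch of a bounded Levenberg iteration with adaptive damping and simple jump-out/jump-back-in safeguards follows; the damping thresholds and the random-restart rule are illustrative assumptions rather than the authors' exact automated criteria.

```python
import numpy as np

# residual(m) -> data misfit vector; jac(m) -> its Jacobian; lb, ub are bounds.
def bounded_levenberg(residual, jac, m0, lb, ub, lam=1e-2, nu=5.0, n_iter=100):
    m = np.clip(m0, lb, ub)
    for _ in range(n_iter):
        r, J = residual(m), jac(m)
        step = np.linalg.solve(J.T @ J + lam * np.eye(m.size), -J.T @ r)
        trial = np.clip(m + step, lb, ub)            # honor physical bounds
        if np.linalg.norm(residual(trial)) < np.linalg.norm(r):
            m, lam = trial, lam / nu                 # accept step, relax damping
        else:
            lam *= nu                                # reject step, damp harder
            if lam > 1e8:                            # stuck near a local minimum:
                m = lb + np.random.rand(m.size) * (ub - lb)  # jump out...
                lam = 1e-2                           # ...and jump back in
    return m
```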
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of least perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of MATLAB implementation of the introduced algorithm.
NASA Astrophysics Data System (ADS)
Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard
2015-04-01
Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Mark S.; Son Wonmin; Heaney, Libby
Recently, it was demonstrated by Son et al., Phys. Rev. Lett. 102, 110404 (2009), that a separable bipartite continuous-variable quantum system can violate the Clauser-Horne-Shimony-Holt (CHSH) inequality via operationally local transformations. Operationally local transformations are parametrized only by local variables; however, in order to allow violation of the CHSH inequality, a maximally entangled ancilla was necessary. The use of the entangled ancilla in this scheme caused the state under test to become dependent on the measurement choice one uses to calculate the CHSH inequality, thus violating one of the assumptions used in deriving a Bell inequality, namely, the free will or statistical independence assumption. The novelty in this scheme however is that the measurement settings can be external free parameters. In this paper, we generalize these operationally local transformations for multipartite Bell inequalities (with dichotomic observables) and provide necessary and sufficient conditions for violation within this scheme. Namely, a violation of a multipartite Bell inequality in this setting is contingent on whether an ancillary system admits any realistic local hidden variable model (i.e., whether the ancilla violates the given Bell inequality). These results indicate that violation of a Bell inequality performed on a system does not necessarily imply that the system is nonlocal. In fact, the system under test may be completely classical. However, nonlocality must have resided somewhere; this may have been in the environment, the physical variables used to manipulate the system or the detectors themselves, provided the measurement settings are external free variables.
Rothenberg, Daniel; Wang, Chien
2017-04-27
We describe an emulator of a detailed cloud parcel model which has been trained to assess droplet nucleation from a complex, multimodal aerosol size distribution simulated by a global aerosol–climate model. The emulator is constructed using a sensitivity analysis approach (polynomial chaos expansion) which reproduces the behavior of the targeted parcel model across the full range of aerosol properties and meteorology simulated by the parent climate model. An iterative technique using aerosol fields sampled from a global model is used to identify the critical aerosol size distribution parameters necessary for accurately predicting activation. Across the large parameter space used to train them, the emulators estimate cloud droplet number concentration (CDNC) with a mean relative error of 9.2% for aerosol populations without giant cloud condensation nuclei (CCN) and 6.9% when including them. Versus a parcel model driven by those same aerosol fields, the best-performing emulator has a mean relative error of 4.6%, which is comparable with two commonly used activation schemes also evaluated here (which have mean relative errors of 2.9 and 6.7%, respectively). We identify the potential for regional biases in modeled CDNC, particularly in oceanic regimes, where our best-performing emulator tends to overpredict by 7%, whereas the reference activation schemes range in mean relative error from -3 to 7%. The emulators which include the effects of giant CCN are more accurate in continental regimes (mean relative error of 0.3%) but strongly overestimate CDNC in oceanic regimes by up to 22%, particularly in the Southern Ocean. Finally, the biases in CDNC resulting from the subjective choice of activation scheme could potentially influence the magnitude of the indirect effect diagnosed from the model incorporating it.
Acoustic and elastic waveform inversion best practices
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because rotation angle parameters describing fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing experimental results are given.
Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina
2016-10-21
In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculating free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
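For readers unfamiliar with EDS, the reference state usually takes the log-sum-exp form below; the smoothness parameter s and the energy offsets E_i^R are exactly the reference-state parameters whose choice RE-EDS simplifies. A minimal sketch assuming this standard functional form:

```python
import numpy as np

def eds_reference_energy(V, E_offsets, beta, s):
    """EDS reference potential
    V_R = -1/(beta*s) * ln sum_i exp(-beta*s*(V_i - E_i^R)).
    V: end-state potential energies at the current configuration."""
    x = -beta * s * (np.asarray(V) - np.asarray(E_offsets))
    xmax = x.max()                                  # log-sum-exp for stability
    return -(xmax + np.log(np.exp(x - xmax).sum())) / (beta * s)
```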
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to road, roof and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.
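The screening step can be illustrated with a generic Saltelli-style Monte Carlo estimator of first-order Sobol' indices; this sketch is the textbook estimator, not the improved variant cited in the abstract, and the model interface is an assumption of the example.

```python
import numpy as np

def sobol_first_order(model, bounds, n=10000, seed=0):
    """First-order Sobol' indices S_i = V[E(Y|X_i)] / V(Y), estimated with
    two independent sample matrices A, B and the hybrid matrices AB_i.
    model: vectorized f(X) over rows; bounds: (d, 2) parameter ranges."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # vary only parameter i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S                           # rank parameters by S_i
```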
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice of model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
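To make the Bayesian setting concrete: without the reversibility or sparsity constraints, the posterior over each transition-matrix row factorizes and can be sampled directly, as in the sketch below; handling the constrained cases is what the paper's Gibbs sampler contributes. Variable names are illustrative.

```python
import numpy as np

def sample_transition_matrices(counts, n_samples=1000, prior=1.0, seed=0):
    """Posterior samples of a transition matrix given transition counts
    counts[i, j], with independent Dirichlet(prior) rows (unconstrained case)."""
    rng = np.random.default_rng(seed)
    n = counts.shape[0]
    samples = np.empty((n_samples, n, n))
    for k in range(n_samples):
        for i in range(n):
            samples[k, i] = rng.dirichlet(counts[i] + prior)
    return samples  # push through any observable to get its uncertainty
```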
NASA Astrophysics Data System (ADS)
Sanjeevi, Sathish K. P.; Zarghami, Ahad; Padding, Johan T.
2018-04-01
Various curved no-slip boundary conditions available in the literature improve the accuracy of lattice Boltzmann simulations compared to the traditional staircase approximation of curved geometries. Usually, the required unknown distribution functions emerging from the solid nodes are computed from the known distribution functions using interpolation or extrapolation schemes. When such curved boundary schemes are used, mass is lost or gained at each time step of the simulation, an effect called mass leakage, which is especially apparent at high Reynolds numbers. The issue becomes severe in periodic flows, where accumulated mass leakage affects the computed flow fields over time. In this paper, we examine mass leakage of the most well-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.
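A forced mass-conservation correction can be as crude as rescaling all distribution functions back to the initial mass after each streaming step; the toy sketch below shows that idea only and is not one of the specific schemes evaluated in the paper.

```python
import numpy as np

def enforce_global_mass(f, target_mass):
    """Uniformly rescale lattice Boltzmann distributions f (shape: q x nx x ny)
    so the total mass equals the initial mass, cancelling accumulated leakage."""
    return f * (target_mass / f.sum())

# usage: record m0 = f.sum() at initialization, then once per time step:
# f = enforce_global_mass(f, m0)
```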
Ponzo, V; Rosato, R; Tarsia, E; Goitre, I; De Michieli, F; Fadda, M; Monge, T; Pezzana, A; Broglio, F; Bo, S
2017-07-01
Few studies have evaluated the attitudes of patients with type 2 diabetes mellitus (T2DM) towards the given dietary plans. In this study, we aimed to evaluate: i) the self-reported adherence of T2DM patients to the prescribed diets; ii) the use of other types of diet schemes; iii) the patients' preferences towards the type of meal plans. A questionnaire with 16 multiple-choice items was administered to 500 T2DM patients; 71.2% (356/500) of them had the perception of having received a dietary plan; only 163/356 declared themselves fully adherent. The latter had lower BMI (25.8 ± 4.5 vs 29.1 ± 4.5 kg/m², p < 0.001) than patients who were partly adherent. Among patients not following the given diet, 61.8% were eating according to a self-made diet and 20.9% did not follow any diet. Only a few patients (2.4%) had tried a popular diet/commercial program. Most patients preferred either a "sufficiently free" (201/500) or a "free" (218/500) scheme. The use of supplements attracted younger, obese individuals, with higher education, and most managers. In a multinomial regression model, age and diabetes duration were inversely associated with the choice of a "rigid" scheme, diabetes duration and glycated hemoglobin levels were inversely correlated with a "free" diet choice, obesity was associated with a "strategic" scheme choice, while lower education (inversely) and obesity (directly) correlated with the preference for "supplement use". Socio-cultural and individual factors could affect attitudes and preferences of T2DM patients towards diet. These factors should be considered in order to draw up an individually tailored dietary plan.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets, in which each comparison set of a given cardinality occurs the same number of times in expectation, the mean squared error for a broad class of Thurstone choice models decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
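For the Bradley-Terry special case mentioned above, the maximum-likelihood strengths have a classical minorize-maximize iteration; a compact sketch with hypothetical inputs:

```python
import numpy as np

def bradley_terry_mle(wins, n_iter=200):
    """MM iteration for Bradley-Terry: p_i <- W_i / sum_j N_ij / (p_i + p_j),
    where wins[i, j] counts victories of item i over item j."""
    n = wins.shape[0]
    games = wins + wins.T                 # N_ij: total comparisons of i and j
    w = wins.sum(axis=1)                  # W_i: total wins of item i
    p = np.ones(n)
    for _ in range(n_iter):
        denom = games / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p = w / denom.sum(axis=1)
        p /= p.sum()                      # strengths are scale-invariant
    return p
```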
Frozen soil parameterization in a distributed biosphere hydrological model
NASA Astrophysics Data System (ADS)
Wang, L.; Koike, T.; Yang, K.; Jin, R.; Li, H.
2010-03-01
In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold-region catchment, the Binngou watershed, against in-situ observations from WATER (Watershed Allied Telemetry Experimental Research). First, by using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moistures at the upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, by using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by calibration against the discharges at the basin outlet in July and August, a period that covers the largest annual flood peak in 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme gave much better performance than the WEB-DHM without it, in simulating the soil moisture profile of the cold-region catchment and the discharges at the basin outlet over the yearlong simulation.
Algorithms for adaptive stochastic control for a class of linear systems
NASA Technical Reports Server (NTRS)
Toda, M.; Patel, R. V.
1977-01-01
Control of linear, discrete-time, stochastic systems with unknown control gain parameters is discussed. Two suboptimal adaptive control schemes are derived: one is based on underestimating future control and the other is based on overestimating future control. Both schemes require little on-line computation and incorporate in their control laws some information on estimation errors. The performance of these laws is studied by Monte Carlo simulations on a computer. Two single-input, third-order systems are considered, one stable and the other unstable, and the performance of the two adaptive control schemes is compared with that of the scheme based on enforced certainty equivalence and the scheme where the control gain parameters are known.
The Grid File: A Data Structure Designed to Support Proximity Queries on Spatial Objects.
1983-06-01
dimensional space. The technique to be presented for storing spatial objects works for any choice of parameters by which simple objects can be represented. However, depending on characteristics of the data to be processed, some choices of parameters are better than others. Let us discuss some considerations that may determine the choice of parameters. 1) Distinction between location parameters and extension parameters: for some classes of simple objects it ...
A Regev-type fully homomorphic encryption scheme using modulus switching.
Chen, Zhigang; Wang, Jian; Chen, Liqun; Song, Xinxia
2014-01-01
A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using the modulus switching technique to design and implement an FHE scheme, how to choose concrete parameters is an important step, but to the best of our knowledge, this step has drawn very little attention in the existing FHE research literature. The contributions of this paper are twofold. On one hand, we propose a function for the lower bound of the dimension value in the switching techniques, depending on the LWE-specific security levels. On the other hand, as a case study, we modify the Brakerski FHE scheme (in Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide security analysis. Our result shows that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level.
High-order asynchrony-tolerant finite difference schemes for partial differential equations
NASA Astrophysics Data System (ADS)
Aditya, Konduri; Donzis, Diego A.
2017-12-01
Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
Trujillo, Antonio J; Ruiz, Fernando; Bridges, John F P; Amaya, Jeannette L; Buttorff, Christine; Quiroga, Angélica M
2012-03-01
In many countries, health insurance coverage is the primary way for individuals to access care. Governments can support access through social insurance programmes; however, after a certain period, governments struggle to achieve universal coverage. Evidence suggests that complex individual behaviour may play a role. Using a choice experiment, this research explored consumer preferences for health insurance in Colombia. We also evaluated whether preferences differed across consumers with differing demographic and health status factors. A household field experiment was conducted in Bogotá in 2010. The sample consisted of 109 uninsured and 133 low-income insured individuals. Each individual evaluated 12 pair-wise comparisons of hypothetical health plans. We focused on six characteristics of health insurance: premium, out-of-pocket expenditure, chronic condition coverage, quality of care, family coverage and sick leave. A main-effects orthogonal design was used to derive the 72 scenarios used in the choice experiment. Parameters were estimated using conditional logit models. Since price data were included, we estimated respondents' willingness to pay for the characteristics. Consumers valued health benefits and family coverage more than other attributes. Additionally, differences in preferences can be exploited to increase coverage. The willingness to pay for benefits may partially cover the average cost of providing them. Policy makers might be able to encourage those insured via the subsidized system to enrol in the next level of the social health insurance scheme through expanding benefits to family members and expanding the level of chronic condition coverage.
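Since the conditional logit includes the premium as an attribute, willingness to pay falls out as a ratio of coefficients; a tiny sketch with hypothetical coefficient values (not estimates from this study):

```python
def willingness_to_pay(beta_attribute, beta_premium):
    """With utility U = ... + beta_attribute*x + beta_premium*premium + eps,
    WTP for a one-unit change in x is -beta_attribute / beta_premium."""
    return -beta_attribute / beta_premium

# e.g. beta_chronic = 0.8, beta_premium = -0.02 (hypothetical values)
# willingness_to_pay(0.8, -0.02) -> 40.0 monetary units per unit of coverage
```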
ERIC Educational Resources Information Center
Abad, Francisco J.; Olea, Julio; Ponsoda, Vicente
2009-01-01
This article deals with some of the problems that have hindered the application of Samejima's and Thissen and Steinberg's multiple-choice models: (a) parameter estimation difficulties owing to the large number of parameters involved, (b) parameter identifiability problems in the Thissen and Steinberg model, and (c) their treatment of omitted…
Reindl, Marie-Sol; Waltz, Mitzi; Schippers, Alice
2016-06-01
This study focused on parent-initiated supported living schemes in the South of the Netherlands and the ability of these living schemes to enhance participation, choice, autonomy and self-advocacy for people with intellectual or developmental disabilities through personalized planning, support and care. Based on in-depth interviews with tenants, parents and caregivers, findings included that parent-initiated supported housing schemes made steps towards stimulating self-advocacy and autonomy for tenants. However, overprotective and paternalistic attitudes expressed by a significant number of parents, as well as structural constraints affecting the living schemes, created obstacles to tenants' personal development. The study calls for consideration of interdependence as a model for the relationship of parents and adult offspring with disabilities. The benefits and tensions inherent within this relationship must be taken into consideration during inclusive community building.
Multiswitching compound antisynchronization of four chaotic systems
NASA Astrophysics Data System (ADS)
Khan, Ayub; Khattar, Dinesh; Prajapati, Nitish
2017-12-01
Based on a three-drive, one-response system model, the authors investigate a novel synchronization scheme for a class of chaotic systems. The new scheme, multiswitching compound antisynchronization (MSCoAS), is a notable extension of earlier multiswitching schemes concerning only a one-drive, one-response system model. The concept of multiswitching synchronization is extended to the compound synchronization scheme such that the state variables of three drive systems antisynchronize with different state variables of the response system, simultaneously. A study involving multiswitching of three drive systems and one response system is the first of its kind. Various switched modified function projective antisynchronization schemes are obtained as special cases of MSCoAS, for a suitable choice of scaling factors. Using suitable controllers and Lyapunov stability theory, a sufficient condition is obtained to achieve MSCoAS between four chaotic systems, and the corresponding theoretical proof is given. Numerical simulations are performed using the Lorenz system in MATLAB to demonstrate the validity of the presented method.
Diffusion of Zonal Variables Using Node-Centered Diffusion Solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, T B
2007-08-06
Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient could be very delicate, it is desirable to use a scheme that does not require the artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.
NASA Astrophysics Data System (ADS)
Tian, Jiyang; Liu, Jia; Wang, Jianhua; Li, Chuanzhe; Yu, Fuliang; Chu, Zhigang
2017-07-01
Mesoscale Numerical Weather Prediction systems can provide rainfall products at high resolutions in space and time, playing an increasingly important role in water management and flood forecasting. The Weather Research and Forecasting (WRF) model is one of the most popular mesoscale systems and has been extensively used in research and practice. However, for hydrologists, an unsolved question must be addressed before each model application in a different target area: how should the most appropriate combinations of physical parameterisations be selected from the vast WRF library to provide the best downscaled rainfall? In this study, the WRF model was applied with 12 designed parameterisation schemes with different combinations of physical parameterisations, including microphysics, radiation, planetary boundary layer (PBL), land-surface model (LSM) and cumulus parameterisations. The selected study areas are two semi-humid and semi-arid catchments located in the Daqinghe River basin, Northern China. The performance of WRF with the different parameterisation schemes is tested by simulating eight typical 24-h storm events with different evenness in space and time. In addition to the cumulative rainfall amount, the spatial and temporal patterns of the simulated rainfall are evaluated based on a two-dimensional composed verification statistic. Among the 12 parameterisation schemes, Scheme 4 outperforms the other schemes with the best average performance in simulating rainfall totals and temporal patterns; in contrast, Scheme 6 is generally a good choice for simulations of spatial rainfall distributions. Regarding the individual parameterisations, the WRF Single-Moment 6-class (WSM6), Yonsei University (YSU), and Kain-Fritsch (KF) or Grell-Devenyi (GD) schemes are better choices for the microphysics, PBL and cumulus parameterisations, respectively, in the study area. These findings provide helpful information for WRF rainfall downscaling in semi-humid and semi-arid areas. The methodology used to design and test the combination schemes of parameterisations can also serve as a reference for generating ensembles in numerical rainfall predictions using the WRF model.
Probabilistic teleportation via multi-parameter measurements and partially entangled states
NASA Astrophysics Data System (ADS)
Wei, Jiahua; Shi, Lei; Han, Chen; Xu, Zhiyan; Zhu, Yu; Wang, Gang; Wu, Hao
2018-04-01
In this paper, a novel scheme for probabilistic teleportation is presented with multi-parameter measurements via a non-maximally entangled state. This contrasts with most previous schemes, in which the measurement types for quantum teleportation are fixed. The detailed implementation procedures for our proposal are given using appropriate local unitary operations. Moreover, the total success probability and classical information cost of this proposal are calculated. It is demonstrated that the success probability and classical cost vary with the measurement parameters and the entanglement factor of the quantum channel. Our scheme could broaden the scope of research on probabilistic teleportation.
Evaluation of hardware costs of implementing PSK signal detection circuit based on "system on chip"
NASA Astrophysics Data System (ADS)
Sokolovskiy, A. V.; Dmitriev, D. D.; Veisov, E. A.; Gladyshev, A. B.
2018-05-01
The article deals with the choice of architecture for the digital signal processing units implementing a PSK signal detection scheme. The effectiveness of each architecture is assessed by the number of shift registers and the computational resources required for a system-on-chip implementation. A statistical estimate of the normalized code sequence offset in the signal synchronization scheme is used for the various hardware block architectures.
Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J
2014-01-01
We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique, for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that restricts the maximum time step that can be taken while maintaining numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. PMID:25892840
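The trade-off the authors quantify can be sketched with a back-of-envelope cost model: the CFL limit fixes the step size, and the stage count fixes the work per step. The CFL numbers in the usage comment are hypothetical placeholders, not values measured in the paper.

```python
def rhs_evaluations(cfl_limit, dx, wave_speed, t_final, stages):
    """Right-hand-side evaluations needed to reach t_final with an explicit
    scheme run at its stability-limited time step."""
    dt = cfl_limit * dx / abs(wave_speed)
    n_steps = int(t_final / dt) + 1
    return n_steps * stages

# AB2 needs one new evaluation per step, RK4 needs four but tolerates a larger
# CFL number, so which is cheaper depends on the ratio of stable step sizes:
# rhs_evaluations(0.4, 0.01, 1.0, 10.0, stages=1)   # AB2-like (hypothetical CFL)
# rhs_evaluations(1.4, 0.01, 1.0, 10.0, stages=4)   # RK4-like (hypothetical CFL)
```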
Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M
2007-08-25
Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternative schemes, the selection of a list of relevant criteria to evaluate these alternative schemes and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method, which belongs to the well-known family of multiple-criteria outranking methods. For this purpose, level, linear and Gaussian functions are used as preference functions.
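A minimal PROMETHEE-II sketch using the linear preference function (one of the level/linear/Gaussian choices mentioned above); the weights and thresholds are assumptions of the example, and the indifference threshold is omitted for brevity.

```python
import numpy as np

def promethee_net_flows(scores, weights, p_thresholds):
    """Net outranking flows for PROMETHEE-II with linear preference functions.
    scores[a, c]: performance of alternative a on criterion c (higher = better);
    weights: criterion weights summing to one; p_thresholds: strict-preference
    thresholds per criterion."""
    n = scores.shape[0]
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            pref = np.clip(d / p_thresholds, 0.0, 1.0)  # ramps from 0 to 1
            pi_ab = float(np.dot(weights, pref))
            phi[a] += pi_ab
            phi[b] -= pi_ab
    return phi / (n - 1)    # rank ELV treatment schemes by descending phi
```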
NASA Astrophysics Data System (ADS)
Sivalingam, Kantharuban; Krupicka, Martin; Auer, Alexander A.; Neese, Frank
2016-08-01
Multireference (MR) methods occupy an important class of approaches in quantum chemistry. In many instances, for example, in studying complex magnetic properties of transition metal complexes, they are actually the only physically satisfactory choice. In traditional MR approaches, single and double excitations are performed with respect to all reference configurations (or configuration state functions, CSFs), which leads to an explosive increase of computational cost for larger reference spaces. This can be avoided by the internal contraction scheme proposed by Meyer and Siegbahn, which effectively reduces the number of wavefunction parameters to their single-reference counterpart. The "fully internally contracted" scheme (FIC) is well known from the popular CASPT2 approach. An even shorter expansion of the wavefunction is possible with the "strong contraction" (SC) scheme proposed by Angeli and Malrieu in their NEVPT2 approach. Promising multireference configuration interaction formulations (MRCI) employing internal contraction and strong contraction have been reported by several authors. In this work, we report on the implementation of the FIC-MRCI and SC-MRCI methodologies, using a computer-assisted implementation strategy. The methods are benchmarked against the traditional uncontracted MRCI approach for ground and excited states of small molecules (N2, O2, CO, CO+, OH, CH, and CN). For ground states, the comparison includes the "partially internally contracted" MRCI based on the Celani-Werner ansatz (PC-MRCI). For the three contraction schemes, the average errors range from 2% to 6% of the uncontracted MRCI correlation energies. Excitation energies are reproduced with ~0.2 eV accuracy. In most cases, the agreement is better than 0.2 eV, even in cases with very large differential correlation contributions, as exemplified for the d-d and ligand-to-metal charge transfer transitions of a Cu[NH₃]₄²⁺ model complex. The benchmark is supplemented with the investigation of typical potential energy surfaces (i.e., N2, HF, LiF, BeH2, ethane C-C bond stretching, and the ethylene double bond torsion). Our results indicate that the SC scheme, which is successful in the context of second- and third-order perturbation theory, does not offer computational advantages and at the same time leads to much larger errors than the PC and FIC schemes. We discuss the advantages and disadvantages of the PC and FIC schemes, which are of comparable accuracy and, for the systems tested, also of comparable efficiency.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Data assimilation techniques have been widely applied to improve the predictability of hydrologic modeling. Among the various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated by an implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of the DUS combined with various SMC methods, such as SIR, ASIR and RPF.
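A generic sketch of one DUS-style step, combining SIR resampling with Liu-West kernel smoothing of the parameters; it assumes a scalar observation and illustrates the idea rather than the authors' storage-function implementation.

```python
import numpy as np

def dual_update_step(x, theta, y_obs, f, h, obs_std, delta=0.98, rng=None):
    """x: (N, dx) state particles, theta: (N, dp) parameter particles,
    f(x, theta): process model, h(x): observation operator (scalar output)."""
    rng = rng or np.random.default_rng()
    n, dp = theta.shape
    # kernel-smooth parameters (Liu-West shrinkage) to avoid degeneracy
    a = (3 * delta - 1) / (2 * delta)
    cov = np.atleast_2d(np.cov(theta.T))
    theta = a * theta + (1 - a) * theta.mean(axis=0)
    theta = theta + rng.multivariate_normal(np.zeros(dp), (1 - a**2) * cov, size=n)
    # propagate states, weight by the observation likelihood, resample
    x = f(x, theta)
    w = np.exp(-0.5 * ((y_obs - h(x)) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)      # multinomial resampling
    return x[idx], theta[idx]
```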
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to achieve both simultaneously. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate to uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design which is robust to changes in the PDF parameters and to the randomness at fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and only apply the VD scheme in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
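The segmented strategy (a deterministic low-frequency core plus variable-density random sampling of the high frequencies) might look like the following 1D sketch; the PDF shape, core fraction and decay are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def segmented_vd_mask(n, reduction=4, core_fraction=0.08, decay=2.0, seed=0):
    """Phase-encode mask: fully sample |k| <= core_fraction, then draw the
    remaining lines from a polynomial variable-density PDF."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n) - n // 2) / (n // 2)   # normalized |k| in [0, 1]
    mask = k <= core_fraction                      # deterministic low-freq core
    pdf = (1.0 - k) ** decay
    pdf[mask] = 0.0                                # core is already sampled
    pdf /= pdf.sum()
    n_extra = max(n // reduction - int(mask.sum()), 0)
    extra = rng.choice(n, size=n_extra, replace=False, p=pdf)
    mask[extra] = True
    return mask
```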
A Scheme to Smooth Aggregated Traffic from Sensors with Periodic Reports
Oh, Sungmin; Jang, Ju Wook
2017-01-01
The possibility of smoothing aggregated traffic from sensors with varying reporting periods and frame sizes to be carried on an access link is investigated. A straightforward optimization would take O(pn) time, whereas our heuristic scheme takes O(np) time where n, p denote the number of sensors and size of periods, respectively. Our heuristic scheme performs local optimization sensor by sensor, starting with the smallest to largest periods. This is based on an observation that sensors with large offsets have more choices in offsets to avoid traffic peaks than the sensors with smaller periods. A MATLAB simulation shows that our scheme excels the known scheme by M. Grenier et al. in a similar situation (aggregating periodic traffic in a controller area network) for almost all possible permutations. The performance of our scheme is very close to the straightforward optimization, which compares all possible permutations. We expect that our scheme would greatly contribute in smoothing the traffic from an ever-increasing number of IoT sensors to the gateway, reducing the burden on the access link to the Internet. PMID:28273831
Influence of Choice of Null Network on Small-World Parameters of Structural Correlation Networks
Hosseini, S. M. Hadi; Kesler, Shelli R.
2013-01-01
In recent years, coordinated variations in brain morphology (e.g., volume, thickness) have been employed as a measure of structural association between brain regions to infer large-scale structural correlation networks. Recent evidence suggests that brain networks constructed in this manner are inherently more clustered than random networks of the same size and degree. Thus, null networks constructed by randomizing topology are not a good choice for benchmarking small-world parameters of these networks. In the present report, we investigated the influence of choice of null networks on small-world parameters of gray matter correlation networks in healthy individuals and survivors of acute lymphoblastic leukemia. Three types of null networks were studied: 1) networks constructed by topology randomization (TOP), 2) networks matched to the distributional properties of the observed covariance matrix (HQS), and 3) networks generated from correlation of randomized input data (COR). The results revealed that the choice of null network not only influences the estimated small-world parameters, it also influences the results of between-group differences in small-world parameters. In addition, at higher network densities, the choice of null network influences the direction of group differences in network measures. Our data suggest that the choice of null network is quite crucial for interpretation of group differences in small-world parameters of structural correlation networks. We argue that none of the available null models is perfect for estimation of small-world parameters for correlation networks and the relative strengths and weaknesses of the selected model should be carefully considered with respect to obtained network measures. PMID:23840672
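For concreteness, a sketch of the small-world index benchmarked against the TOP-style null (degree-preserving rewiring); it assumes networkx and that the graph and its rewired surrogates remain connected. The paper's point is that substituting HQS- or COR-style nulls can change both the values and the group differences.

```python
import numpy as np
import networkx as nx

def small_world_sigma(G, n_null=20, seed=0):
    """sigma = (C / C_null) / (L / L_null) with a degree-preserving rewired null."""
    rng = np.random.default_rng(seed)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)   # assumes G is connected
    C_null, L_null = [], []
    for _ in range(n_null):
        R = G.copy()
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges(),
                            seed=int(rng.integers(1 << 31)))
        C_null.append(nx.average_clustering(R))
        L_null.append(nx.average_shortest_path_length(R))
    return (C / np.mean(C_null)) / (L / np.mean(L_null))
```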
A modular design of molecular qubits to implement universal quantum gates
Ferrando-Soria, Jesús; Moreno Pineda, Eufemio; Chiesa, Alessandro; Fernandez, Antonio; Magee, Samantha A.; Carretta, Stefano; Santini, Paolo; Vitorica-Yrezabal, Iñigo J.; Tuna, Floriana; Timco, Grigore A.; McInnes, Eric J.L.; Winpenny, Richard E.P.
2016-01-01
The physical implementation of quantum information processing relies on individual modules—qubits—and operations that modify such modules either individually or in groups—quantum gates. Two examples of gates that entangle pairs of qubits are the controlled NOT-gate (CNOT), which flips the state of one qubit depending on the state of another, and the √iSWAP gate, which brings a two-qubit product state into a superposition by partially swapping the qubit states. Here we show that through supramolecular chemistry a single simple module, molecular {Cr7Ni} rings, which act as the qubits, can be assembled into structures suitable for either the CNOT or √iSWAP gate by choice of linker, and we characterize these structures by electron spin resonance spectroscopy. We introduce two schemes for implementing such gates with these supramolecular assemblies and perform detailed simulations, based on the measured parameters including decoherence, to demonstrate how the gates would operate. PMID:27109358
Creating high-purity angular-momentum-state Rydberg atoms by a pair of unipolar laser pulses
NASA Astrophysics Data System (ADS)
Xin, PeiPei; Cheng, Hong; Zhang, ShanShan; Wang, HanMu; Xu, ZiShan; Liu, HongPing
2018-04-01
We propose a method of producing high-purity angular-momentum-state Rydberg atoms with a pair of unipolar laser pulses. The first, a positive-polarity optical half-cycle pulse, is used to prepare an excited-state wave packet, while the second, which is less intense but of opposite polarity and time delayed, is employed to drag back the escaping free electron and clip the shape of the bound Rydberg wave packet, selectively increasing or decreasing a fraction of the angular-momentum components. An intelligent choice of laser parameters such as phase and amplitude makes it easier to control the orbital-angular-momentum composition of an electron wave packet; thus, a specified angular-momentum state with high purity can be achieved. This scheme for producing high-purity angular-momentum-state Rydberg atoms has significant applications in quantum-information processing.
Tuning quantum measurements to control chaos.
Eastman, Jessica K; Hope, Joseph J; Carvalho, André R R
2017-03-20
Environment-induced decoherence has long been recognised as being of crucial importance in the study of chaos in quantum systems. In particular, the exact form and strength of the system-environment interaction play a major role in the quantum-to-classical transition of chaotic systems. In this work we focus on the effect of varying monitoring strategies, i.e. for a given decoherence model and a fixed environmental coupling, there is still freedom in how to monitor a quantum system. We show here that there is a region between the deep quantum regime and the classical limit where the choice of the monitoring parameter allows one to control the complex behaviour of the system, leading to either the emergence or suppression of chaos. Our work shows that this is a result of the interplay between quantum interference effects induced by the nonlinear dynamics and the effectiveness of the decoherence for different measurement schemes.
Signal classification using global dynamical models, Part II: SONAR data analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kremliovsky, M.; Kadtke, J.
1996-06-01
In Part I of this paper, we described a numerical method for nonlinear signal detection and classification which made use of techniques borrowed from dynamical systems theory. Here in Part II of the paper, we will describe an example of data analysis using this method, for data consisting of open ocean acoustic (SONAR) recordings of marine mammal transients, supplied from NUWC sources. The purpose here is two-fold: first, to give a more operational description of the technique and provide rules-of-thumb for parameter choices; and second, to discuss some new issues raised by the analysis of non-ideal (real-world) data sets. The particular data set considered here is quite non-stationary, relatively noisy, is not clearly localized in the background, and as such provides a difficult challenge for most detection/classification schemes.
Analysis of fault-tolerant neurocontrol architectures
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1992-01-01
The fault-tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault-tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault-tolerance could only be obtained by retraining the degraded VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight or neuron failures was demonstrated through an automated retraining procedure of the neurocontroller based on a pre-programmed choice and sequence of the training parameters.
NASA Astrophysics Data System (ADS)
Tang, Guoning; Xu, Kesheng; Jiang, Luoluo
2011-10-01
Synchronization is investigated in a two-dimensional Hindmarsh-Rose neuronal network by introducing a global coupling scheme with time delay, where the length of the time delay is proportional to the spatial distance between neurons. We find that the time delay always disturbs synchronization of the neuronal network. When both the coupling strength and the length of time delay per unit distance (i.e., the enlargement factor) are large enough, the time delay induces abnormal membrane potential oscillations in the neurons. Specifically, the abnormal membrane potential oscillations of symmetrically placed neurons are in antiphase, so that a large coupling strength and enlargement factor lead to desynchronization of the neuronal network. Complete and intermittently complete synchronization of the neuronal network are observed for the right choices of parameters. The physical mechanism underlying these phenomena is analyzed.
Sliding mode controller with modified sliding function for DC-DC Buck Converter.
Naik, B B; Mehta, A J
2017-09-01
This article presents the design of a sliding mode controller with a proportional-integral-type sliding function for a DC-DC buck converter used as a controlled power supply. The converter with a conventional sliding mode controller exhibits a steady-state error in the load voltage. The proposed modified sliding function improves the steady-state and dynamic performance of the converter and facilitates better choices of controller tuning parameters. The conditions for the existence of sliding modes under the proposed control scheme are derived. The stability of the closed-loop system with the proposed sliding mode control is proved and the improvement in steady-state performance is exemplified. The idea of adaptive tuning of the proposed controller to compensate for load variations is outlined. A comparative study of the conventional and proposed control strategies is presented. The efficacy of the proposed strategy is demonstrated by simulation and experimental results.
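A toy simulation step for the PI-type sliding function, s = e + lambda * integral(e), with the switching law mapped to the buck converter's switch state; the gain is illustrative, not a tuned value from the paper.

```python
def smc_pi_step(v_out, v_ref, int_err, dt, lam=500.0):
    """One controller update: the integral term in the sliding function is
    what removes the steady-state load-voltage error left by a plain
    proportional surface."""
    e = v_ref - v_out
    int_err += e * dt                 # integral of the voltage error
    s = e + lam * int_err             # PI-type sliding function
    u = 1 if s > 0 else 0             # switch state from the sign of s
    return u, int_err
```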
A global design of high power Nd3+-Yb3+ co-doped fiber lasers
NASA Astrophysics Data System (ADS)
Fan, Zhang; Chuncan, Wang; Tigang, Ning
2008-09-01
A global optimization method, the niche hybrid genetic algorithm (NHGA) based on fitness sharing and elite replacement, is applied to optimize Nd3+-Yb3+ co-doped fiber lasers (NYDFLs) for maximum signal output power. With an objective function and different pumping powers, five critical parameters (the fiber length, L; the proportion of pump power used to pump Nd3+, eta; the Nd3+ and Yb3+ concentrations, N_Nd and N_Yb; and the output mirror reflectivity, R_out) of the given NYDFLs are optimized by solving the rate and power propagation equations. Results show that dividing the input pump power equally between 808 nm (Nd3+) and 940 nm (Yb3+) is not an optimal choice and that the pump power for the Nd3+ ions should be kept around 10-13.78% of the total pump power. Three optimal schemes are obtained by the NHGA, and the highest slope efficiency of the laser reaches 80.1%.
NASA Astrophysics Data System (ADS)
Wang, C. C.; Tan, J. Y.; Liu, L. H.
2018-05-01
The Hamiltonian adaptive resolution scheme (H-AdResS), which allows materials to be simulated by treating different domains of the system at different levels of resolution, is a recently proposed atomistic/coarse-grained multiscale model. In this work, a scheme to calculate the dielectric functions of liquids based on H-AdResS is presented. In the proposed H-AdResS dielectric-function calculation scheme (DielectFunctCalS), corrected molecular dipole moments are calculated by multiplying each molecular dipole moment by the weighting fraction at the molecular mapping point. Because the widths of the all-atom and hybrid regions influence the dielectric functions to different degrees, a prefactor is introduced to eliminate the effects of the all-atom and hybrid region widths. Since one goal of using the H-AdResS method is to reduce computational costs, the widths of the all-atom and hybrid regions can be kept small, given that coarse-grained simulation is much cheaper than atomistic simulation. Liquid water and ethanol are taken as test cases to validate the DielectFunctCalS. The H-AdResS DielectFunctCalS results are in good agreement with all-atom molecular dynamics simulations. The accuracy of the H-AdResS results, as with all-atom molecular dynamics results, depends heavily on the choice of force field and force field parameters. The H-AdResS DielectFunctCalS allows us to calculate the dielectric functions of macromolecular systems with high efficiency and makes dielectric function calculations of large biomolecular systems possible.
A new Scheme for ATLAS Trigger Simulation using Legacy Code
NASA Astrophysics Data System (ADS)
Galster, Gorm; Stelzer, Joerg; Wiedenmann, Werner
2014-06-01
Analyses at the LHC which search for rare physics processes or determine with high precision Standard Model parameters require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of simulated event data, the most recent software releases are usually used to ensure the best agreement between simulated data and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years start to represent a sizable fraction of the total. We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long-term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services. Software availability may become an issue, for example when support for the underlying operating system ends. In this paper we present the encountered problems and developed solutions, and discuss proposals for future development. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS, as they also touch more general aspects of data preservation.
Analogue and digital linear modulation techniques for mobile satellite
NASA Technical Reports Server (NTRS)
Whitmarsh, W. J.; Bateman, A.; Mcgeehan, J. P.
1990-01-01
The choice of modulation format for a mobile satellite service is complex. The subjective performance of candidate schemes and voice coder technologies is summarized. It is shown that good performance can be achieved with both analogue and digital voice systems, although the analogue system gives superior performance in fading. The results highlight the need for flexibility in the choice of signaling format. Linear transceiver technology capable of using many forms of narrowband modulation is described.
Functional relationships of landfill and landraise capacity with design and operation parameters.
Aivaliotis, Vassilis; Dokas, Ioannis; Hatzigiannakou, Maria; Panagiotakopoulos, Demetrios
2004-08-01
Solid waste management presses for effective landfill design and operation. While planning and operating a landfill (LF) or a landraise (LR), choices need to be made regarding: (1) LF-LR morphology (base shape, side slopes, final cover thickness, LR/LF height/depth); (2) cell geometry (height, length, slopes); and (3) operation parameters (waste density, working face length, cover thicknesses). These parameters affect LF/LR capacity, operation lifespan and construction/operation costs. In this paper, relationships are generated between capacity (C, space available for waste) and the above parameters. Incorporating real data into simulation runs, two types of functions are developed: first, C = k_gamma A^1.38, where A is the LF/LR base area size and k_gamma a base shape-dependent coefficient; and second, C = alpha(p,gamma,A) + delta(p,gamma,A) X_p for every parameter p, where X_p is the value of p and alpha(p,gamma,A) and delta(p,gamma,A) are parameter- and base (shape/size)-specific coefficients. Moreover, the relationship between LF depth and LR height that balances excavation volume with cover material is identified. Another result is that, for a symmetrical combination of LF/LR, with a base surface area shape between square and 1:2 orthogonal, and final density between 500 and 800 kg m^-3, the waste quantity placed ranges from 1.76 A^1.38 to 2.55 A^1.38 tons. The significance of such functions is obvious, as they allow the analyst to investigate alternative LF/LR schemes and make trade-off analyses.
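Applying the fitted power law is then immediate; a tiny sketch, where the k_gamma value must come from the paper's base-shape fits:

```python
def landraise_capacity(base_area, k_gamma):
    """Capacity from the fitted law C = k_gamma * A**1.38 (A: base area)."""
    return k_gamma * base_area ** 1.38

# Per the abstract, waste quantity placed for the symmetrical LF/LR combination
# falls between 1.76 * A**1.38 and 2.55 * A**1.38 tons.
```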
A methodology for the transfer of probabilities between accident severity categories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlow, J. D.; Neuhauser, K. S.
A methodology has been developed which allows the accident probabilities associated with one accident-severity category scheme to be transferred to another severity category scheme. The methodology requires that the schemes use a common set of parameters to define the categories. The transfer of accident probabilities is based on the relationships between probability of occurrence and each of the parameters used to define the categories. Because of the lack of historical data describing accident environments in engineering terms, these relationships may be difficult to obtain directly for some parameters. Numerical models or experienced judgement are often needed to obtain the relationships. These relationships, even if they are not exact, allow the accident probability associated with any severity category to be distributed within that category in a manner consistent with accident experience, which in turn will allow the accident probability to be appropriately transferred to a different category scheme.
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
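A toy version of the single-batch maximum-likelihood idea: tune one multiplicative forecast-error covariance parameter from a vector of innovations, assuming Gaussian statistics. The matrix names are assumptions of the sketch, not the paper's notation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_forecast_error_scale(innov, HPHt, R):
    """ML estimate of alpha in innov ~ N(0, alpha*HPHt + R), where HPHt is the
    forecast-error covariance mapped to observation space and R is the
    observation-error covariance."""
    def neg_log_likelihood(alpha):
        S = alpha * HPHt + R
        _, logdet = np.linalg.slogdet(S)
        return 0.5 * (logdet + innov @ np.linalg.solve(S, innov))
    return minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1e3),
                           method='bounded').x
```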
NASA Astrophysics Data System (ADS)
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2017-12-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be obtained. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which yield bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed and, as the acquisition range decreases, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the value of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not influence the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
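A compact Metropolis MCMC sketch of the kind of non-linear uncertainty analysis described above. The full Cole-Cole TDIP forward response is nontrivial to compute, so a stretched-exponential decay stands in for it here; the point is only to show how the sampler exposes posterior spread and the correlation between the time constant and the frequency exponent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified stand-in forward model: a stretched-exponential IP decay
# d(t) = m0 * exp(-(t/tau)^c). This is NOT the full Cole-Cole TDIP forward
# response; it only illustrates Metropolis sampling of (m0, tau, c).
def forward(t, m0, tau, c):
    return m0 * np.exp(-(t / tau) ** c)

t = np.logspace(-2, 1, 30)                      # ~3 decades of acquisition time
truth = (100.0, 0.5, 0.5)
data = forward(t, *truth) + rng.normal(0, 1.0, t.size)

def log_post(p):
    m0, tau, c = p
    if m0 <= 0 or tau <= 0 or not (0 < c <= 1):
        return -np.inf                          # flat priors with bounds
    r = data - forward(t, m0, tau, c)
    return -0.5 * np.sum(r**2)                  # unit data noise assumed

chain, p = [], np.array([80.0, 1.0, 0.7])
lp = log_post(p)
for _ in range(20000):
    q = p + rng.normal(0, [2.0, 0.05, 0.02])    # random-walk proposal
    lq = log_post(q)
    if np.log(rng.random()) < lq - lp:
        p, lp = q, lq
    chain.append(p.copy())
chain = np.array(chain[5000:])                  # discard burn-in
print("posterior mean:", chain.mean(0))
print("correlation(tau, c):", np.corrcoef(chain[:, 1], chain[:, 2])[0, 1])
```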
Large Area Crop Inventory Experiment (LACIE). An early estimate of small grains acreage
NASA Technical Reports Server (NTRS)
Lea, R. N.; Kern, D. M. (Principal Investigator)
1979-01-01
The author has identified the following significant results. A major advantage of this scheme is that it needs minimal human intervention. The entire scheme, with the exception of the choice of dates, can be computerized and the results obtained in minutes. The decision to limit the number of acquisitions processed to four was made to facilitate operation on the particular computer being used. Some earlier runs on another computer system were based on as many as seven biophase-1 acquisitions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitman, A.J.
The sensitivity of a land-surface scheme (the Biosphere Atmosphere Transfer Scheme, BATS) to its parameter values was investigated using a single column model. Identifying which parameters were important in controlling the turbulent energy fluxes, temperature, soil moisture, and runoff was dependent upon many factors. In the simulation of a nonmoisture-stressed tropical forest, results were dependent on a combination of reservoir terms (soil depth, root distribution), flux efficiency terms (roughness length, stomatal resistance), and available energy (albedo). If moisture became limited, the reservoir terms increased in importance because the total fluxes predicted depended on moisture availability and not on the rate of transfer between the surface and the atmosphere. The sensitivity shown by BATS depended on which vegetation type was being simulated, which variable was used to determine sensitivity, the magnitude and sign of the parameter change, the climate regime (precipitation amount and frequency), and soil moisture levels and proximity to wilting. The interactions between these factors made it difficult to identify the most important parameters in BATS. Therefore, this paper does not argue that a particular set of parameters is important in BATS; rather, it shows that no general ranking of parameters is possible. It is also emphasized that using 'stand-alone' forcing to examine the sensitivity of a land-surface scheme to perturbations, in either parameters or the atmosphere, is unreliable due to the lack of surface-atmosphere feedbacks.
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
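Tikhonov regularization is the standard machinery behind such parameter-choice studies; the sketch below inverts a deliberately ill-conditioned smoothing operator with a few fixed/manual choices of the regularization parameter. The operator is a generic Gaussian blur, not an actual NAH propagator, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the role of the regularization parameter in holographic
# back-propagation: an ill-conditioned linear model b = A x + noise is
# inverted with Tikhonov regularization.
n = 64
i = np.arange(n)
A = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 8.0**2))   # strong smoothing
x_true = np.zeros(n); x_true[20] = 1.0; x_true[45] = -0.7
b = A @ x_true + 1e-3 * rng.standard_normal(n)

def tikhonov(A, b, lam):
    """x = argmin ||A x - b||^2 + lam^2 ||x||^2 (normal-equations form)."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(m), A.T @ b)

for lam in (1e-6, 1e-4, 1e-2, 1.0):             # fixed/manual choices
    err = np.linalg.norm(tikhonov(A, b, lam) - x_true)
    print(f"lambda={lam:g}  reconstruction error={err:.3f}")
```

Too little regularization amplifies the measurement noise, too much oversmooths the sources; the fixed values bracket that trade-off in the same way the manual parameter choice in the study does.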
ERIC Educational Resources Information Center
Hamadneh, Iyad Mohammed
2015-01-01
This study aimed at investigating the impact of changing the position of the escape alternative in a multiple-choice test on the psychometric properties of the test and its item parameters (difficulty, discrimination & guessing), and on the estimation of examinee ability. To achieve the study objectives, a 4-alternative multiple-choice achievement test…
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real-time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and the data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
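A minimal sketch of the residual-threshold logic described above, assuming a healthy-engine model reduced to fixed nominal outputs; the sensor names, thresholds, and induced bias are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal residual-threshold detector in the spirit of the scheme: compare
# measured sensor outputs with the expected outputs of a healthy-engine model
# and flag a fault when the residual exceeds a threshold.
sensors = ["Ng", "T4.5", "Q"]                       # hypothetical sensor suite
expected = np.array([95.0, 810.0, 310.0])           # healthy-model outputs
threshold = np.array([1.5, 12.0, 8.0])              # per-sensor thresholds

def detect(measured):
    residual = measured - expected
    faulty = np.abs(residual) > threshold
    return {s: (bool(f), float(r)) for s, f, r in zip(sensors, faulty, residual)}

healthy = expected + rng.normal(0, [0.3, 3.0, 2.0])
failed = healthy.copy(); failed[1] += 40.0          # induce a T4.5 sensor bias
print(detect(healthy))
print(detect(failed))                               # isolates the biased sensor
```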
Prospect theory based estimation of drivers' risk attitudes in route choice behaviors.
Zhou, Lizhen; Zhong, Shiquan; Ma, Shoufeng; Jia, Ning
2014-12-01
This paper applied prospect theory (PT) to describe drivers' route choice behavior under a Variable Message Sign (VMS), which presents visual traffic information to assist drivers in making route choice decisions. A rich set of empirical data from questionnaires and field surveys was used to estimate the parameters of PT. To make the parameters more realistic with respect to drivers' attitudes, drivers were classified into different types by the significant factors influencing their behaviors. Based on the travel time distribution of the alternative routes and the route choice results from the questionnaire, the parameterized value function of each category was determined, representing drivers' risk attitudes and choice characteristics. The empirical verification showed that the estimates were acceptable and effective. The results showed that drivers' risk attitudes and route choice characteristics can be captured by PT under real-time information shown on VMS. For practical application, once drivers' route choice characteristics and parameters are identified, their route choice behavior under different road conditions can be predicted accurately, which is the basis for formulating and implementing targeted traffic guidance measures. Moreover, the heterogeneous risk attitudes among drivers should be considered when releasing traffic information and regulating traffic flow. Copyright © 2014 Elsevier Ltd. All rights reserved.
WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'
NASA Astrophysics Data System (ADS)
Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne
2015-10-01
Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. The study also analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the 1999 York Flood under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
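For reference, the categorical verification indices named above are simple functions of the 2x2 contingency counts (hits a, false alarms b, misses c); the counts in this sketch are made up.

```python
# Computation of the categorical verification indices from a contingency
# table of forecast/observed rain occurrence.
def categorical_indices(a, b, c):
    pod = a / (a + c)            # probability of detection
    far = b / (a + b)            # false alarm ratio
    fbi = (a + b) / (a + c)      # frequency bias index
    csi = a / (a + b + c)        # critical success index
    return pod, far, fbi, csi

pod, far, fbi, csi = categorical_indices(a=42, b=18, c=11)
print(f"POD={pod:.2f} FAR={far:.2f} FBI={fbi:.2f} CSI={csi:.2f}")
```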
Regional Climate Model sensitivity to different parameterization schemes with WRF over Spain
NASA Astrophysics Data System (ADS)
García-Valdecasas Ojeda, Matilde; Raquel Gámiz-Fortis, Sonia; Hidalgo-Muñoz, Jose Manuel; Argüeso, Daniel; Castro-Díez, Yolanda; Jesús Esteban-Parra, María
2015-04-01
The ability of the Weather Research and Forecasting (WRF) model to simulate the regional climate depends on the selection of an adequate combination of parameterization schemes. This study assesses WRF sensitivity to different parameterizations using six different runs that combined three cumulus, two microphysics and three surface/planetary boundary layer schemes in a topographically complex region such as Spain, for the period 1995-1996. Each of the simulations spanned a period of two years and was carried out at a spatial resolution of 0.088° over a domain encompassing the Iberian Peninsula, nested in the coarser EURO-CORDEX domain (0.44° resolution). The experiments were driven by Interim ECMWF Re-Analysis (ERA-Interim) data. In addition, two different spectral nudging configurations were also analysed. The simulated precipitation and maximum and minimum temperatures from WRF were compared with the Spain02 version 4 observational gridded datasets. The comparison was performed at different time scales with the purpose of evaluating the model's capability to capture mean values and high-order statistics. ERA-Interim data were also compared with observations to determine the improvement obtained using dynamical downscaling with respect to the driving data. For this purpose, several parameters were analysed by directly comparing grid points. On the other hand, the observational gridded data were grouped using a multistep regionalization to facilitate the comparison in terms of the monthly annual cycle and the percentiles of daily values. The results confirm that no single configuration performs best, but some combinations that produce better results can be chosen. Concerning temperatures, WRF provides an improvement over ERA-Interim. Overall, model outputs reduce the biases and the RMSE for monthly-mean maximum and minimum temperatures and are more highly correlated with observations than ERA-Interim. The analysis shows that the Yonsei University planetary boundary layer scheme is the most appropriate parameterization in terms of temperature because it better describes monthly minimum temperatures and seems to perform well for maximum temperatures. Regarding precipitation, ERA-Interim time series are slightly more highly correlated with observations than WRF, but the bias and the RMSE are considerably worse. These results also suggest that the CAM V.5.1 2-moment 5-class microphysics scheme should not be used due to its computational cost with no apparent gain with respect to simpler schemes such as WRF single-moment 3-class. For the convection scheme, this study suggests that the Betts-Miller-Janjic scheme is an appropriate choice due to its robustness, and the Kain-Fritsch cumulus scheme should not be used over this region. KEY WORDS: Regional climate modelling, physics schemes, parameterizations, WRF. ACKNOWLEDGEMENTS: This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).
A discrete-time adaptive control scheme for robot manipulators
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.
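A one-joint sketch in the same spirit: unknown plant gains are identified on-line with a normalized-gradient law and a certainty-equivalence controller tracks a reference model. This is a generic discrete-time MRAC illustration, not the paper's hyperstability-based multi-joint design; all gains and values are hypothetical.

```python
import numpy as np

# Plant with unknown gains: y(k+1) = a*y(k) + b*u(k).
# Reference model:          ym(k+1) = am*ym(k) + bm*r(k).
a, b = 0.9, 0.5
am, bm = 0.6, 0.4
a_hat, b_hat, gamma = 0.7, 0.4, 1.0       # initial estimates, adaptation gain

y, ym = 0.0, 0.0
for k in range(200):
    r = np.sin(0.05 * k)                              # reference trajectory
    u = (am * y + bm * r - a_hat * y) / b_hat         # certainty equivalence
    y_next = a * y + b * u                            # true plant step
    phi = np.array([y, u])
    e = y_next - (a_hat * y + b_hat * u)              # prediction error
    # normalized-gradient (projection) update of the gain estimates
    a_hat, b_hat = np.array([a_hat, b_hat]) + gamma * phi * e / (1 + phi @ phi)
    b_hat = max(b_hat, 0.05)                          # keep estimate away from 0
    ym = am * ym + bm * r
    y = y_next
print(f"a_hat={a_hat:.3f} (true {a}), b_hat={b_hat:.3f} (true {b})")
print(f"tracking error at end: {abs(y - ym):.4f}")
```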
A new scheme for stigmatic x-ray imaging with large magnification.
Bitter, M; Hill, K W; Delgado-Aparicio, L F; Pablant, N A; Scott, S; Jones, F; Beiersdorfer, P; Wang, E; del Rio, M Sanchez; Caughey, T A; Brunner, J
2012-10-01
This paper describes a new x-ray scheme for stigmatic imaging. The scheme consists of one convex spherically bent crystal and one concave spherically bent crystal. The radii of curvature and Bragg reflecting lattice planes of the two crystals are properly matched to eliminate the astigmatism, so that the conditions for stigmatic imaging are met for a particular wavelength. The magnification is adjustable and solely a function of the two Bragg angles or angles of incidence. Although the choice of Bragg angles is constrained by the availability of crystals, this is not a severe limitation for the imaging of plasmas, since a particular wavelength can be selected from the bremsstrahlung continuum. The working principle of this imaging scheme has been verified with visible light. Further tests with x rays are planned for the near future.
Computational unsteady aerodynamics for lifting surfaces
NASA Technical Reports Server (NTRS)
Edwards, John W.
1988-01-01
Two-dimensional problems are solved using numerical techniques. The Navier-Stokes equations are studied both in the vorticity-stream function formulation, which appears to be the optimal choice for two-dimensional problems using a storage approach, and in the velocity-pressure formulation, which minimizes the number of unknowns in three-dimensional problems. Analysis shows that compact centered conservative second-order schemes for the vorticity equation are the most robust for high-Reynolds-number flows. Serious difficulties remain in the choice of turbulence models to keep reasonable CPU efficiency.
The Effects of Block Size on the Performance of Coherent Caches in Shared-Memory Multiprocessors
1993-05-01
increase with the bandwidth and latency. For those applications with poor spatial locality, the best choice of cache line size is determined by the...observation was used in the design of two schemes: LimitLESS directories and Tag caches. LimitLESS directories [15] were designed for the ALEWIFE...small packets may be used to avoid network congestion. The most important factor influencing the choice of cache line size for a multiprocessor is the
Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis
NASA Astrophysics Data System (ADS)
Jiao, Yujian; Wang, Li-Lian; Huang, Can
2016-01-01
The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0 , 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0 , 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.
Smagorinsky-type diffusion in a high-resolution GCM
NASA Astrophysics Data System (ADS)
Schaefer-Rolffs, Urs; Becker, Erich
2013-04-01
The parametrization of (horizontal) momentum diffusion is a paramount component of a general circulation model (GCM). Aside from friction in the boundary layer, a relevant fraction of kinetic energy is dissipated in the free atmosphere, and it is known that a linear harmonic turbulence model is not sufficient to obtain a reasonable simulation of the kinetic energy spectrum. Therefore, empirical hyper-diffusion schemes are often employed, regardless of disadvantages like the violation of energy conservation and the second law of thermodynamics. At IAP we have developed an improved parametrization of the horizontal diffusion that is based on Smagorinsky's nonlinear, energy-conserving formulation. This approach is extended by the dynamic Smagorinsky model (DSM) of M. Germano. In this new scheme, the mixing length is no longer a prescribed parameter but is calculated dynamically from the resolved flow such as to preserve scale invariance for the horizontal energy cascade. The so-called Germano identity is solved by a tensor-norm ansatz which yields a positive definite frictional heating. We present results from an investigation using the DSM as a parametrization of horizontal diffusion in a high-resolution version of the Kühlungsborn Mechanistic general Circulation Model (KMCM) with spectral truncation at horizontal wavenumber 330. The DSM calculates the Smagorinsky parameter c_S independently of the resolution scale. We find that this method yields an energy spectrum that exhibits a pronounced transition from a synoptic -3 to a mesoscale -5/3 slope at wavenumbers around 50. At the highest-wavenumber end, a behaviour similar to that often obtained by tuning the hyper-diffusion is achieved self-consistently. This result is very sensitive to the explicit choice of the test filter in the DSM.
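For orientation, the sketch below computes the static Smagorinsky eddy viscosity nu_t = (c_S * Delta)^2 |S| on a 2D periodic grid with a fixed c_S; the DSM discussed above would instead derive c_S dynamically from a test filter via the Germano identity. The velocity field and c_S value are illustrative.

```python
import numpy as np

# Static Smagorinsky closure: nu_t = (c_S * Delta)^2 * |S|, with |S| the
# strain-rate magnitude on a 2D periodic grid.
n, L = 128, 2 * np.pi
dx = L / n
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)                   # sample resolved velocity field
v = -np.cos(X) * np.sin(Y)

def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)

S11, S22 = ddx(u), ddy(v)
S12 = 0.5 * (ddy(u) + ddx(v))
S_mag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))

c_S = 0.17                                  # fixed here; the DSM computes this
nu_t = (c_S * dx) ** 2 * S_mag
print("mean eddy viscosity:", nu_t.mean())
```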
Active stabilization to prevent surge in centrifugal compression systems
NASA Technical Reports Server (NTRS)
Epstein, Alan H.; Greitzer, Edward M.; Simon, Jon S.; Valavani, Lena
1993-01-01
This report documents an experimental and analytical study of the active stabilization of surge in a centrifugal compression system. The aims of the research were to extend the operating range of a compressor as far as possible and to establish the theoretical framework for the active stabilization of surge from both an aerodynamic stability and a control-theoretic perspective. In particular, much attention was paid to understanding the physical limitations of active stabilization and how they are influenced by control system design parameters. Previously developed linear models of actively stabilized compressors were extended to include such nonlinear phenomena as bounded actuation, bandwidth limits, and robustness criteria. This model was then used to systematically quantify the influence of sensor-actuator selection on system performance. Five different actuation schemes were considered along with four different sensors. Sensor-actuator choice was shown to have a profound effect on the performance of the stabilized compressor. The optimum choice was not unique, but rather was shown to be a strong function of some of the non-dimensional parameters which characterize the compression system dynamics. Specifically, the utility of the concepts was shown to depend on the system compliance-to-inertia ratio ('B' parameter) and the local slope of the compressor speedline. In general, the most effective arrangements are ones in which the actuator is most closely coupled to the compressor, such as a close-coupled bleed valve or inlet jet, rather than elsewhere in the flow train, such as a fuel flow modulator. The analytical model was used to explore the influence of control system bandwidth on control effectiveness. The relevant reference frequency was shown to be the compression system's Helmholtz frequency rather than the surge frequency. The analysis shows that control bandwidths of three to ten times the Helmholtz frequency are required for larger increases in the compressor flow range. This has important implications for implementation in gas turbine engines since the Helmholtz frequencies can be over 100 Hz, making actuator design extremely challenging.
Multistage Estimation Of Frequency And Phase
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1991-01-01
Conceptual two-stage software scheme serves as prototype of multistage scheme for digital estimation of phase, frequency, and rate of change of frequency ("Doppler rate") of possibly phase-modulated received sinusoidal signal in communication system in which transmitter and/or receiver travels rapidly, accelerates, and/or jerks severely. Each additional stage of multistage scheme provides increasingly refined estimate of frequency and phase of signal. Conceived for use in estimating parameters of signals from spacecraft and high-dynamic GPS signal parameters; also applicable to terrestrial stationary/mobile (e.g., cellular radio) and land-mobile/satellite communication systems.
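A two-stage estimator in this spirit can be sketched in a few lines: a coarse FFT-peak frequency estimate is refined by demodulating and fitting a line to the residual unwrapped phase, whose slope and intercept give the fine frequency and phase. All signal parameters below are arbitrary test values.

```python
import numpy as np

rng = np.random.default_rng(5)

fs, n = 1000.0, 1024
t = np.arange(n) / fs
f_true, phi_true = 123.4, 0.7
x = np.exp(1j * (2 * np.pi * f_true * t + phi_true))
x += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Stage 1: coarse frequency estimate from the FFT peak (resolution fs/n)
spec = np.fft.fft(x)
f_coarse = np.fft.fftfreq(n, 1 / fs)[np.argmax(np.abs(spec))]

# Stage 2: refine via slope/intercept of the residual unwrapped phase
resid = np.unwrap(np.angle(x * np.exp(-2j * np.pi * f_coarse * t)))
slope, intercept = np.polyfit(t, resid, 1)
f_est = f_coarse + slope / (2 * np.pi)
phi_est = np.mod(intercept + np.pi, 2 * np.pi) - np.pi
print(f"f: {f_est:.4f} Hz (true {f_true}), phase: {phi_est:.4f} rad (true {phi_true})")
```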
An empirical analysis of the Ebola outbreak in West Africa
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2017-02-01
The data for the Ebola outbreak that occurred in 2014-2016 in three countries of West Africa are analysed within a common framework. The analysis is made using the results of an agent-based Susceptible-Infected-Removed (SIR) model on a Euclidean network, where nodes at a distance l are connected with probability P(l) ∝ l^(-δ), with δ determining the range of the interaction, in addition to nearest neighbors. The cumulative (total) density of the infected population here follows a functional form whose parameters depend on δ and the infection probability q. This form is seen to fit the data well. Using the best-fitting parameters, the time at which the peak is reached is estimated and is shown to be consistent with the data. We also show that in the Euclidean model, one can choose δ and q values which reproduce the data for the three countries qualitatively. These choices are correlated with population density, control schemes and other factors. Comparing the real data and the results from the model, one can also estimate the size of the actual population susceptible to the disease. Rescaling the real data, a reasonably good quantitative agreement with the simulation results is obtained.
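A minimal agent-based version of the model described above: a 1D ring with nearest-neighbour links plus one long-range link per node drawn with P(l) ∝ l^(-δ), and a one-step infectious period. Parameter values are illustrative, not the fitted ones from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

N, delta, q, steps = 2000, 2.0, 0.4, 150

# long-range link lengths drawn with P(l) ~ l^(-delta)
l_vals = np.arange(1, N // 2)
p_l = l_vals.astype(float) ** (-delta)
p_l /= p_l.sum()

links = [[(i - 1) % N, (i + 1) % N] for i in range(N)]      # nearest neighbours
for i in range(N):
    l = rng.choice(l_vals, p=p_l)
    j = int((i + rng.choice([-l, l])) % N)                   # long-range contact
    links[i].append(j); links[j].append(i)

S, I, R = 0, 1, 2
state = np.full(N, S); state[0] = I                          # one initial case
cumulative = []
for _ in range(steps):
    infected = np.flatnonzero(state == I)
    for i in infected:
        for j in links[i]:
            if state[j] == S and rng.random() < q:           # infect a contact
                state[j] = I
    state[infected] = R                                      # remove after one step
    cumulative.append(np.mean(state != S))
print("final cumulative infected fraction:", cumulative[-1])
```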
An Overview of the MaRIE X-FEL and Electron Radiography LINAC RF Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Joseph Thomas III; Rees, Daniel Earl; Scheinker, Alexander
The purpose of the Matter-Radiation Interactions in Extremes (MaRIE) facility at Los Alamos National Laboratory is to investigate the performance limits of materials in extreme environments. The MaRIE facility will utilize a 12 GeV linac to drive an X-ray Free-Electron Laser (FEL). Most of the same linac will also be used to perform electron radiography. The main linac is driven by two shorter linacs: one short linac optimized for X-FEL pulses and one for electron radiography. The RF systems have historically been one of the largest single component costs of a linac. We will describe the details of the different types of RF systems required by each part of the linacs. Starting with the High Power RF system, we will present our methodology for the choice of RF system peak power and pulse length with respect to klystron parameters, modulator parameters, performance requirements and relative costs. We will also present an overview of the Low Level RF systems that are proposed for MaRIE and briefly describe their use with some proposed control schemes.
Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ku, R. T.
1972-01-01
The problem of optimal control of linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via Monte Carlo method show that, in terms of the performance measure, for stable systems the open loop feedback optimal control system is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
A mentorship scheme for senior house officers.
Beckett, M
2000-12-01
Many junior doctors are unsure as to how their aspirations and developing abilities will match up to the demands of specialty training. They need sensitive and realistic guidance if they are to make the right career choice in a highly competitive market.
The Partial Purification and Characterization of Lactate Dehydrogenase.
ERIC Educational Resources Information Center
Wolf, Edward C.
1988-01-01
Offers several advantages over other possibilities as the enzyme of choice for a student's first exposure to a purification scheme. Uses equipment and materials normally found in biochemistry laboratories. Incorporates several important biochemical techniques including spectrophotometry, chromatography, centrifugation, and electrophoresis. (MVL)
NASA Technical Reports Server (NTRS)
McFarquhar, Greg M.; Zhang, Henian; Dudhia, Jimy; Halverson, Jeffrey B.; Heymsfield, Gerald; Hood, Robbie; Marks, Frank, Jr.
2003-01-01
Fine-resolution simulations of Hurricane Erin 2001 are conducted using the Penn State University/National Center for Atmospheric Research mesoscale model version 3.5 to investigate the role of thermodynamic, boundary layer and microphysical processes in Erin's growth and maintenance, and their effects on the horizontal and vertical distributions of hydrometeors. Through comparison against radar, radiometer, and dropsonde data collected during the Convection and Moisture Experiment 4, it is seen that realistic simulations of Erin are obtained provided that fine-resolution simulations with detailed representations of physical processes are conducted. The principal findings of the study are as follows: 1) a new iterative condensation scheme, which limits the unphysical increase of equivalent potential temperature associated with most condensation schemes, increases the horizontal size of the hurricane, decreases its maximum rainfall rate, reduces its intensity, and makes its eye more moist; 2) in general, microphysical parameterization schemes with more categories of hydrometeors produce more intense hurricanes, larger hydrometeor mixing ratios, and more intense updrafts and downdrafts; 3) the choice of coefficients describing hydrometeor fall velocities has as large an impact on the hurricane simulations as does the choice of microphysical parameterization scheme, with no clear relationship between fall velocity and hurricane intensity; and 4) in order for a tropical cyclone to adequately intensify, an advanced boundary layer scheme (e.g., the Burk-Thompson scheme) must be used to represent boundary layer processes. The impacts of the different simulations on the horizontal and vertical distributions of the different categories of hydrometeor species, on equivalent potential temperature, and on storm updrafts and downdrafts are examined to determine how the release of latent heat feeds back upon the structure of Erin. In general, all simulations tend to overpredict precipitation rate and hydrometeor mixing ratios. The ramifications of these findings for quantitative precipitation forecasts (QPFs) of tropical cyclones are discussed.
ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design
Wei, Zhenhua; Peng, Bo; Shen, Rui
2018-01-01
Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-countermeasure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuousness of the ISRJ's time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency coded signal, whose ISRJ signal after matched filtering only forms a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and number of repeats. Ultimately, for typical jamming scenarios with different jamming signal ratios (JSRs) and duty cycles, we propose two particular ISRJ suppression approaches. Simulation results validate the effectiveness of the proposed scheme in countering ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508
A new chaotic communication scheme based on adaptive synchronization.
Xiang-Jun, Wu
2006-12-01
A new chaotic communication scheme using the adaptive synchronization technique of two unified chaotic systems is proposed. Different from existing secure communication methods, the transmitted signal is modulated into the parameter of the chaotic systems. The adaptive synchronization technique is used to synchronize two identical chaotic systems embedded in the transmitter and the receiver. It is assumed that the parameter of the receiver system is unknown. Based on Lyapunov stability theory, an adaptive control law is derived to make the states of the two identical unified chaotic systems with unknown system parameters asymptotically synchronized; thus the parameter of the receiver system is identified. The recovery of the original information signal in the receiver is then successfully achieved on the basis of the estimated parameter. It is noted that the time required for recovering the information signal and the accuracy of the recovered signal depend very sensitively on the frequency of the information signal. Numerical results have verified the effectiveness of the proposed scheme.
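The sketch below illustrates the mechanism with the classic Lorenz system instead of the unified chaotic system of the paper: a receiver subsystem driven by the transmitted state estimates the unknown parameter via a Lyapunov-based update law (rho_hat' = -g*x1*e2, which makes V = (e2^2 + e3^2 + (rho_hat - rho)^2/g)/2 non-increasing). It assumes both x1 and x2 are available at the receiver, a simplification of a real modulation scheme.

```python
import numpy as np

# Transmitter: classic Lorenz system; receiver: (y2, y3) subsystem driven by
# x1, using the parameter estimate rho_hat in place of the unknown rho.
sigma, rho, beta, g, dt = 10.0, 28.0, 8.0 / 3.0, 2.0, 1e-3

x = np.array([1.0, 1.0, 1.0])          # transmitter state
y2, y3, rho_hat = 0.0, 0.0, 10.0       # receiver state and parameter estimate

for _ in range(300000):                # 300 time units of explicit Euler
    x1, x2, x3 = x
    # transmitter step (information could be modulated onto rho)
    x = x + dt * np.array([sigma * (x2 - x1),
                           x1 * (rho - x3) - x2,
                           x1 * x2 - beta * x3])
    # receiver step, driven by x1
    e2 = y2 - x2                        # synchronization error
    y2 += dt * (x1 * rho_hat - y2 - x1 * y3)
    y3 += dt * (x1 * y2 - beta * y3)
    rho_hat += dt * (-g * x1 * e2)      # Lyapunov-based adaptive law

print(f"rho_hat = {rho_hat:.3f} (true rho = {rho})")
```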
NASA Astrophysics Data System (ADS)
Liu, W.; Wang, H.; Liu, D.; Miu, Y.
2018-05-01
Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Through these devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions. The variation of the geometric parameters can then be derived by extracting and processing the spot images. An experimental platform is set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle for real-time onboard monitoring.
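A sketch of the underlying measurement step, assuming an auto-collimator geometry in which a tilt theta displaces the return spot by 2*f*theta on the CCD; the focal length, pixel pitch, and synthetic spot are hypothetical.

```python
import numpy as np

f_mm, pixel_mm = 300.0, 0.005        # assumed focal length and pixel pitch

def centroid(img):
    """Intensity-weighted centroid of the spot image, in pixels."""
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

def tilt_from_shift(c_ref, c_now):
    """Angular change (radians) from spot displacement between two frames."""
    dy = (c_now[0] - c_ref[0]) * pixel_mm
    dx = (c_now[1] - c_ref[1]) * pixel_mm
    return dy / (2 * f_mm), dx / (2 * f_mm)   # auto-collimation: shift = 2*f*theta

# toy frames: a Gaussian spot before and after a small tilt
r, c = np.indices((64, 64))
spot = lambda y0, x0: np.exp(-((r - y0) ** 2 + (c - x0) ** 2) / 8.0)
theta_y, theta_x = tilt_from_shift(centroid(spot(32, 32)), centroid(spot(32, 35.2)))
print(f"tilt: {theta_y*1e6:.1f}, {theta_x*1e6:.1f} microrad")
```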
Entropy Splitting and Numerical Dissipation
NASA Technical Reports Server (NTRS)
Yee, H. C.; Vinokur, M.; Djomehri, M. J.
1999-01-01
A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting resulted in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "Entropy Splitting." There are several potential desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored in Gerritsen and Olsson. The paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting and its dependence on the type of physics of current interest to computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation indeed is needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared to its un-split cousin? An extensive numerical study on the vortex preservation capability of the splitting in conjunction with central schemes for long time integrations will be presented. The third is to study the effect of the non-conservative proportion of splitting in obtaining the correct shock location for high speed complex shock-turbulence interactions. The fourth is to determine if this method can be extended to other physical equations of state and other evolutionary equation sets. If numerical dissipation is needed, the Yee, Sandham, and Djomehri (1999) numerical dissipation is employed. The Yee et al. schemes fit in the Olsson and Oliger framework.
A general range-separated double-hybrid density-functional theory
NASA Astrophysics Data System (ADS)
Kalai, Cairedine; Toulouse, Julien
2018-04-01
A range-separated double-hybrid (RSDH) scheme which generalizes the usual range-separated hybrids and double hybrids is developed. This scheme consistently uses a two-parameter Coulomb-attenuating-method (CAM)-like decomposition of the electron-electron interaction for both exchange and correlation in order to combine Hartree-Fock exchange and second-order Møller-Plesset (MP2) correlation with a density functional. The RSDH scheme relies on an exact theory which is presented in some detail. Several semi-local approximations are developed for the short-range exchange-correlation density functional involved in this scheme. After finding optimal values for the two parameters of the CAM-like decomposition, the RSDH scheme is shown to have a relatively small basis dependence and to provide atomization energies, reaction barrier heights, and weak intermolecular interactions globally more accurate or comparable to range-separated MP2 or standard MP2. The RSDH scheme represents a new family of double hybrids with minimal empiricism which could be useful for general chemical applications.
LES of Temporally Evolving Mixing Layers by Three High Order Schemes
NASA Astrophysics Data System (ADS)
Yee, H.; Sjögreen, B.; Hadjadj, A.
2011-10-01
The performance of three high order shock-capturing schemes is compared for large eddy simulations (LES) of temporally evolving mixing layers for different convective Mach number (Mc) ranging from the quasi-incompressible regime to highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7), and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (Yee & Sjögreen 2009) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameter agree well with experimental results of Barone et al. (2006), and published direct numerical simulations (DNS) by Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with experimental data and DNS computations.
NASA Astrophysics Data System (ADS)
Eremenko, M.; Sgheri, L.; Ridolfi, M.; Dufour, G.; Cuesta, J.
2017-12-01
Lower tropospheric ozone (O3) retrieval from nadir sounders is challenging due to the lack of vertical sensitivity of the measurements towards the lowest layers. Although improvements have been made during the last decade, it is still important to explore possibilities to improve the retrieval algorithms themselves. O3 retrieval from nadir satellite observations is an ill-conditioned problem, which requires regularization using constraint matrices. Up to now, most retrieval algorithms have relied on a fixed constraint. The constraint is determined and fixed beforehand, on the basis of sensitivity tests. This does not allow one to take advantage of the entire capabilities of the satellite measurements, which vary with the thermal conditions of the observed scenes. To overcome this limitation, we developed a self-adapting and altitude-dependent regularization scheme. A crucial step is the choice of the strength of the constraint. This choice is made during an iterative process and depends on the measurement errors and on the sensitivity of the measurements to the target parameters at the different altitudes. The challenge is to limit the use of a priori constraints to the minimal amount needed to perform the inversion. The algorithm has been tested on synthetic observations matching the future IASI-NG satellite instrument. IASI-NG measurements are simulated on the basis of O3 concentrations taken from an atmospheric model and retrieved using two retrieval schemes (the standard and self-adapting ones). Comparison of the results shows that the sensitivity of the observations to the O3 amount in the lowest layers (given by the degrees of freedom for the solution) is increased, which allows a better description of the ozone distribution, especially in the case of large ozone plumes. Biases are reduced and the spatial correlation is improved. A tentative application to real observations from IASI, currently onboard the Metop satellite, will also be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, A.M.; Feldberg, S.W.; Greenhill, H.B.
1992-05-01
Instrumental, experimental and theoretical approaches required to quantify the thermodynamic and kinetic aspects of the square reaction scheme relating the fac^(+/0) and mer^(+/0) redox couples in the high-resistance solvent dichloromethane, at microelectrodes, under both steady-state and fast-scan-rate (transient) conditions, are presented. fac^+, mer^+, fac^0, and mer^0 represent the facial and meridional isomers of Cr(CO)3(η^3-Ph2PCH2CH2P(Ph)CH2CH2PPh2) in the oxidized 17-electron (fac^+, mer^+) and reduced 18-electron (fac^0, mer^0) configurations, respectively. A computationally efficient simulation method based on the DuFort-Frankel algorithm is readily applied to microelectrodes and enables simulations to be undertaken for both steady-state and transient voltammetry at electrodes of microdisk geometry. The minimal ohmic drop present under steady-state conditions enables a limited set of parameters to be calculated for the square scheme. However, data relevant to species generated as a product of electron transfer have to be determined from the transient voltammetry at fast scan rates. For the latter experiments, a newly designed electrochemical cell was developed along with relevant electronic circuitry to minimize the background current and uncompensated resistance. The cell contains two matched working microelectrodes (one in the test solution and one in the separated electrolyte solution) and a common quasi-reference electrode which passes through both compartments of the cell. It is concluded that a judicious choice of steady-state and transient techniques, such as those described in this work, is necessary to characterize complex reaction schemes in high-resistance solvents. 46 refs., 7 figs., 3 tabs.
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. The RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm^(-1), and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm^(-1). We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
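A simplified single-radius SHARP sketch: spherical-mean-value (SMV) filtering via FFT convolution followed by truncated deconvolution, with the truncation threshold playing the role of the regularization parameter f discussed above. A real V-SHARP implementation additionally shrinks the kernel radius toward the brain boundary; array sizes and values here are illustrative.

```python
import numpy as np

def sharp(phase, mask, radius_vox=6, reg=0.05):
    """Single-radius SHARP: (delta - SMV) filtering + truncated deconvolution."""
    shape = phase.shape
    zz, yy, xx = np.indices(shape)
    ctr = [s // 2 for s in shape]
    r = np.sqrt((zz - ctr[0])**2 + (yy - ctr[1])**2 + (xx - ctr[2])**2)
    sphere = (r <= radius_vox).astype(float)
    sphere /= sphere.sum()                          # normalized SMV kernel
    K = np.fft.fftn(np.fft.ifftshift(sphere))
    D = 1.0 - K                                     # (delta - SMV) operator
    smv = np.real(np.fft.ifftn(D * np.fft.fftn(phase * mask)))
    # truncated inverse: frequencies with |D| below reg are zeroed out
    Dsafe = np.where(D == 0, 1.0, D)
    Dinv = np.where(np.abs(D) > reg, 1.0 / Dsafe, 0.0)
    return np.real(np.fft.ifftn(Dinv * np.fft.fftn(smv * mask))) * mask

vol = np.random.default_rng(7).standard_normal((64, 64, 64))
brain = np.ones((64, 64, 64))
local_field = sharp(vol, brain)
print(local_field.shape, float(local_field.std()))
```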
Application of holographic elements in displays and planar illuminators
NASA Astrophysics Data System (ADS)
Putilin, Andrew; Gustomiasov, Igor
2007-05-01
Holographic optical elements (HOEs) on planar waveguides can be used to design planar optics for backlight units, color selectors or filters, and lenses for virtual-reality displays. Several schemes for HOE recording are proposed to obtain a planar stereo backlight unit and light sources for private-eye displays. It is shown in the paper that a specific light-transformation grating permits the construction of efficient backlight units for display holograms and LCDs. Several schemes of reflection/transmission backlight units and scattering films based on holographic optical elements are also proposed. The performance of a waveguide HOE can be optimized using the parameters of the recording scheme and the etching parameters. Schemes of HOE application are discussed and some experimental results are shown.
Scheme variations of the QCD coupling
NASA Astrophysics Data System (ADS)
Boito, Diogo; Jamin, Matthias; Miravitllas, Ramon
2017-03-01
The Quantum Chromodynamics (QCD) coupling αs is a central parameter in the Standard Model of particle physics. However, it depends on theoretical conventions related to renormalisation and hence is not an observable quantity. In order to capture this dependence in a transparent way, a novel definition of the QCD coupling, denoted by â, is introduced, whose running is explicitly renormalisation scheme invariant. The remaining renormalisation scheme dependence is related to transformations of the QCD scale Λ, and can be parametrised by a single parameter C. Hence, we call â the C-scheme coupling. The dependence on C can be exploited to study and improve perturbative predictions of physical observables. This is demonstrated for the QCD Adler function and hadronic decays of the τ lepton.
Simplification of a dust emission scheme and comparison with data
NASA Astrophysics Data System (ADS)
Shao, Yaping
2004-05-01
A simplification of a dust emission scheme is proposed which takes into account saltation bombardment and aggregate disintegration. The statement of the scheme is that dust emission is proportional to the streamwise saltation flux, but the proportionality depends on soil texture and soil plastic pressure p. For small p values (loose soils), the dust emission rate is proportional to u*^4 (u* is friction velocity), but not necessarily so in general. The dust emission predictions using the scheme are compared with several data sets published in the literature. The comparison enables the estimation of a model parameter and of the soil plastic pressure for various soils. While more data are needed for further verification, a general guideline for choosing model parameters is recommended.
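The structure of the scheme can be sketched directly: dust flux F proportional to a streamwise saltation flux Q, here of Owen type so that F scales roughly as u*^4 near threshold for loose soils. The proportionality constant bundling soil texture and plastic-pressure effects, and all numeric values, are illustrative.

```python
rho_air, grav = 1.23, 9.81          # kg m^-3, m s^-2
c_owen = 1.0                        # O(1) saltation constant (assumed)
u_star_t = 0.25                     # threshold friction velocity (m s^-1)

def saltation_flux(u_star):
    """Streamwise saltation flux Q (kg m^-1 s^-1), Owen-type expression."""
    if u_star <= u_star_t:
        return 0.0
    return c_owen * rho_air / grav * u_star**3 * (1 - u_star_t**2 / u_star**2)

def dust_emission(u_star, alpha=1e-4):
    """F = alpha * Q; alpha bundles soil texture / plastic-pressure effects."""
    return alpha * saltation_flux(u_star)

for u in (0.3, 0.4, 0.6):
    print(f"u*={u}: Q={saltation_flux(u):.4f}, F={dust_emission(u):.2e}")
```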
How the twain can meet: Prospect theory and models of heuristics in risky choice.
Pachur, Thorsten; Suter, Renata S; Hertwig, Ralph
2017-03-01
Two influential approaches to modeling choice between risky options are algebraic models (which focus on predicting the overt decisions) and models of heuristics (which are also concerned with capturing the underlying cognitive process). Because they rest on fundamentally different assumptions and algorithms, the two approaches are usually treated as antithetical, or even incommensurable. Drawing on cumulative prospect theory (CPT; Tversky & Kahneman, 1992) as the currently most influential instance of a descriptive algebraic model, we demonstrate how the two modeling traditions can be linked. CPT's algebraic functions characterize choices in terms of psychophysical (diminishing sensitivity to probabilities and outcomes) as well as psychological (risk aversion and loss aversion) constructs. Models of heuristics characterize choices as rooted in simple information-processing principles such as lexicographic and limited search. In computer simulations, we estimated CPT's parameters for choices produced by various heuristics. The resulting CPT parameter profiles portray each of the choice-generating heuristics in psychologically meaningful ways, capturing, for instance, differences in how the heuristics process probability information. Furthermore, CPT parameters can reflect a key property of many heuristics, lexicographic search, and track the environment-dependent behavior of heuristics. Finally, we show, in both an empirical and a model recovery study, how CPT parameter profiles can be used to detect the operation of heuristics. We also address the limits of CPT's ability to capture choices produced by heuristics. Our results highlight an untapped potential of CPT as a measurement tool to characterize the information processing underlying risky choice. Copyright © 2017 Elsevier Inc. All rights reserved.
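For concreteness, CPT's algebraic building blocks as in Tversky and Kahneman (1992), evaluated for a simple gamble with one non-zero outcome; the parameter values are the original 1992 estimates.

```python
# CPT components: a power value function with loss aversion and an inverse-S
# probability weighting function (Tversky & Kahneman, 1992 parameters).
alpha, beta, lam, gamma_g, gamma_l = 0.88, 0.88, 2.25, 0.61, 0.69

def value(x):
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(outcome, p):
    """CPT value of a simple gamble (outcome with probability p, else 0)."""
    g = gamma_g if outcome >= 0 else gamma_l
    return weight(p, g) * value(outcome)

# risk-attitude read-out: CPT value vs expected value
print(cpt_value(100.0, 0.5), 0.5 * 100)   # a 50% gain is underweighted
print(cpt_value(-100.0, 0.5))             # losses loom larger (loss aversion)
```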
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of Exp-function method with nature inspired algorithm. The given nonlinear partial differential equation (NPDE) through substitution is converted into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem, and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), show that the suggested scheme is fairly accurate and viable for solving such problems. PMID:25811858
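A sketch of the second half of the scheme, with two stated substitutions: the travelling-wave ansatz is pinned to front boundary conditions rather than a general Exp-function ratio, and scipy's differential evolution (a related population-based heuristic) stands in for the genetic algorithm. For alpha = beta = delta = 1 the exact front has k = alpha/2 and c = alpha/2 + 2*beta/alpha, so the fit can be checked.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Generalized Burgers'-Fisher: u_t + alpha*u^delta*u_x = u_xx + beta*u*(1-u^delta).
# Travelling wave u(x,t) = U(xi), xi = x - c*t, reduces it to an ODE whose
# squared residual is the fitness function to be minimized globally.
alpha, beta, delta = 1.0, 1.0, 1.0
xi = np.linspace(-20, 20, 401)
h = xi[1] - xi[0]

def ansatz(p, x):
    b1, k, c = p
    e = np.exp(np.clip(k * x, -50, 50))
    return 1.0 / (1.0 + b1 * e), c    # front with U(-inf)=1, U(+inf)=0

def fitness(p):
    U, c = ansatz(p, xi)
    Up = np.gradient(U, h)             # finite-difference derivatives
    Upp = np.gradient(Up, h)
    resid = -c * Up + alpha * U**delta * Up - Upp - beta * U * (1 - U**delta)
    return np.mean(resid**2)

bounds = [(0.1, 10), (0.05, 2), (0.1, 5)]   # b1, k, c search ranges
res = differential_evolution(fitness, bounds, seed=0, tol=1e-12, maxiter=1000)
print("b1, k, c =", np.round(res.x, 4), " residual =", fitness(res.x))
# expected for this case: k ~ 0.5 and c ~ 2.5 (b1 only shifts the front)
```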
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics, which must be computed numerically. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error, whose expression is provided by a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
MRI-based treatment planning with pseudo CT generated through atlas registration.
Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho
2014-05-01
To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
A national UK survey of radiology trainees' special interest choices: what and why?
Parvizi, Nassim; Bhuva, Shaheel
2017-11-01
A national survey was designed to better understand the factors influencing special interest choices, the future aspirations of UK radiology trainees, and perceptions of breast radiology. A SurveyMonkey questionnaire was developed and distributed to all radiology trainees in the UK through the British Institute of Radiology, the RCR Junior Radiologists Forum, by directly contacting UK training schemes, and by social media between December 2015 and January 2016. From 21 training schemes across the UK, 232 responses were received. Over half entered radiology after foundation training and 62% were ST1-3; one-fifth of trainees intended to leave the NHS. The most popular special interests were musculoskeletal (18%), abdominal imaging (16%) and neuroradiology (13%). Gynaecological and oncological imaging proved to be the least popular. Strong personal interest, a successful rotation during training, a mix of imaging modalities, direct impact on patient care and job prospects were the most popular factors influencing career choice. Research and potential for private income were the least influential factors. Respondents detailed their perceptions of breast radiology, selecting an awareness of career prospects (41%) and a better trainee experience (36%) as factors that would increase their interest in pursuing it as a career. Understanding the factors that influence special interest choice is essential to addressing the alarming staffing shortfalls facing certain radiology special interests. Addressing trainees' preconceptions and improving the trainee experience are key to attracting trainees to breast radiology. Advances in knowledge: This is the first survey of its kind in the UK literature designed to evaluate special interest career choices and the factors that influence them among radiology trainees.
Coherence rephasing combined with spin-wave storage using chirped control pulses
NASA Astrophysics Data System (ADS)
Demeter, Gabor
2014-06-01
Photon-echo based optical quantum memory schemes often employ intermediate steps to transform optical coherences into spin coherences for longer storage times. We analyze a scheme that uses three identical chirped control pulses for coherence rephasing in an inhomogeneously broadened ensemble of three-level Λ systems. The pulses induce a cyclic permutation of the atomic populations in the adiabatic regime. Optical coherences created by a signal pulse are stored as spin coherences during an intermediate time interval and are rephased for echo emission when the ensemble is returned to the initial state. Echo emission during a possible partial rephasing while the medium is inverted can be suppressed with an appropriate choice of control pulse wave vectors. We demonstrate that the scheme works in an optically dense ensemble, despite control pulse distortions during propagation, and that it conveniently integrates the spin-wave storage step into memory schemes based on a second rephasing of the atomic coherences.
Pricing and reimbursement frameworks in Central Eastern Europe: a decision tool to support choices.
Kolasa, Katarzyna; Kalo, Zoltan; Hornby, Edward
2015-02-01
Given limited financial resources in the Central Eastern European (CEE) region, challenges in obtaining access to innovative medical technologies are formidable. The objective of this research was to develop a decision tree that supports decision makers and drug manufacturers from the CEE region in their search for optimal innovative pricing and reimbursement schemes (IPRSs). A systematic literature review was performed to search for published IPRSs, and ten experts from the CEE region were then interviewed to ascertain their opinions on these schemes. In total, 33 articles representing 46 unique IPRSs were analyzed. Based on the literature review and subsequent expert input, the key decision nodes and branches of the decision tree were developed. The results indicate that outcome-based schemes are better suited to dealing with uncertainties surrounding cost effectiveness, while non-outcome-based schemes are more appropriate for pricing and budget impact challenges.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise to signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
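A minimal sketch of Tikhonov inversion with an L-curve scan in Python, on a synthetic smoothing operator standing in for the line-integration geometry (the study uses the generalized singular value decomposition; this sketch solves the regularized normal equations directly):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ill-posed problem: a smoothing, line-integration-like operator.
n = 60
x_grid = np.linspace(0.0, 1.0, n)
A = np.exp(-80.0 * (x_grid[:, None] - x_grid[None, :]) ** 2)
x_true = np.exp(-50.0 * (x_grid - 0.4) ** 2)
b = A @ x_true + 0.01 * rng.standard_normal(n)  # noisy "image" data

# Tikhonov solutions over a range of regularization parameters.
lambdas = np.logspace(-6, 1, 50)
res_norm, sol_norm = [], []
for lam in lambdas:
    x_lam = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
    res_norm.append(np.linalg.norm(A @ x_lam - b))
    sol_norm.append(np.linalg.norm(x_lam))

# L-curve corner: point of maximum curvature in log-log coordinates.
u, v = np.log(res_norm), np.log(sol_norm)
du, dv = np.gradient(u), np.gradient(v)
d2u, d2v = np.gradient(du), np.gradient(dv)
curvature = (du * d2v - dv * d2u) / (du**2 + dv**2) ** 1.5
print("optimum regularization parameter:", lambdas[np.argmax(curvature)])
```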
Gyroaveraging operations using adaptive matrix operators
NASA Astrophysics Data System (ADS)
Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock
2018-05-01
A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line following manner while preserving the adiabatic magnetic moment μ. These choices make it possible to improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted with various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining the boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is capable of simulating the reflectivity through a reasonable distribution of the different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid moment microphysics scheme that considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes captures the storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters
Zubair, Suleiman; Fisal, Norsheila
2014-01-01
The need for reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, avoidance of routing hot spots, and stability. The validation result, which closely follows the simulation result, shows that the developed scheme makes greater advancement toward the sink than the usual route-selection decisions of relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results show the enhanced reliability, lower latency and energy efficiency of the presented scheme. PMID:24854362
Ensemble Kalman Filter Data Assimilation in a Solar Dynamo Model
NASA Astrophysics Data System (ADS)
Dikpati, M.
2017-12-01
Despite great advancement in solar dynamo models since the first model by Parker in 1955, there remain many challenges in the quest to build a dynamo-based prediction scheme that can accurately predict solar cycle features. One of these challenges is to implement modern data assimilation techniques, which have been used in oceanic and atmospheric prediction models. Development of data assimilation in solar models is in its early stages. Recently, observing system simulation experiments (OSSEs) have been performed using Ensemble Kalman Filter data assimilation, in the framework of the Data Assimilation Research Testbed of NCAR (NCAR-DART), for estimating parameters in a solar dynamo model. I will demonstrate how the selection of ensemble size, the number of observations, the amount of error in observations and the choice of assimilation interval play an important role in parameter estimation. I will also show how the results of parameter reconstruction improve when accuracy in low-latitude observations is increased, despite large error in polar region data. I will then describe how implementation of data assimilation in a solar dynamo model can bring more accuracy to the prediction of polar fields in the North and South hemispheres during the declining phase of cycle 24. Recent evidence indicates that the strength of the Sun's polar field during the cycle minima might be a reliable predictor of the next sunspot cycle's amplitude; therefore it is crucial to accurately predict the polar field strength and pattern.
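A minimal sketch of the ensemble Kalman filter analysis step for parameter estimation, in Python (the one-parameter linear toy model is illustrative, standing in for the dynamo model's observation operator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: the observable depends on an unknown parameter theta.
def forward(theta):
    return 2.0 * theta  # stand-in for the dynamo model's observable

theta_true = 1.5
obs_err = 0.1
y_obs = forward(theta_true) + rng.normal(0.0, obs_err)

# Ensemble of parameter guesses (prior).
n_ens = 50
theta_ens = rng.normal(1.0, 0.5, size=n_ens)
y_ens = forward(theta_ens)

# EnKF analysis step: Kalman gain from the ensemble covariances.
cov_ty = np.cov(theta_ens, y_ens)[0, 1]
var_y = np.var(y_ens, ddof=1)
gain = cov_ty / (var_y + obs_err**2)

# Update each member toward its own perturbed observation.
y_pert = y_obs + rng.normal(0.0, obs_err, size=n_ens)
theta_ens += gain * (y_pert - y_ens)
print("posterior mean parameter:", theta_ens.mean())  # close to 1.5
```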
Conceptual design study of the moderate size superconducting spherical tokamak power plant
NASA Astrophysics Data System (ADS)
Gi, Keii; Ono, Yasushi; Nakamura, Makoto; Someya, Youji; Utoh, Hiroyasu; Tobita, Kenji; Ono, Masayuki
2015-06-01
A new conceptual design of a superconducting spherical tokamak (ST) power plant was proposed as an attractive choice for tokamak fusion reactors. We reassessed the possibility of the ST as a power plant using the conservative reactor engineering constraints often used for conventional tokamak reactor design. An extensive parameter scan covering the full range of feasible superconducting ST reactors was completed, and five constraints, which include plasma magnetohydrodynamic (MHD) and confinement parameters already achieved in ST experiments, were established for the purpose of choosing the optimum operation point. Based on comparison with the estimated future energy costs of electricity (COEs) in Japan, cost-effective ST reactors can be designed if their COEs are smaller than 120 mills kW⁻¹ h⁻¹ (2013). We selected the optimized design point, A = 2.0 and Rp = 5.4 m, after considering the maintenance scheme and TF ripple. A self-consistent free-boundary MHD equilibrium and poloidal field coil configuration of the ST reactor were designed by modifying the neutral beam injection system and plasma profiles. The MHD stability of the equilibrium was analysed and a ramp-up scenario was considered to validate the new ST design. The optimized moderate-size ST power plant conceptual design achieves realistic plasma and fusion engineering parameters while keeping its economic competitiveness against existing energy sources in Japan.
Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.
Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing
2009-06-01
Methods of image encryption based on the fractional Fourier transform have an inherent security flaw. We show that such schemes have the deficiency that one group of encryption keys admits many groups of decryption keys that correctly decrypt the encrypted image, for several reasons. In some schemes, many factors produce these deficiencies, as in the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)]. A modified method is proposed to avoid all the deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.
Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Simons, F. J.; Bozdag, E.
2014-12-01
We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor in mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map heterogeneities of great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. An elastic shallow model 100 m in depth is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
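A minimal sketch of the multiscale measurement in Python, assuming the PyWavelets package: the waveform difference is evaluated scale by scale, coarse approximation first, so early iterations see a smooth misfit that is less prone to cycle skipping (signals and wavelet choice are illustrative):

```python
import numpy as np
import pywt

def multiscale_misfits(observed, predicted, wavelet="db4", level=4):
    """Waveform-difference misfit per wavelet scale, coarse to fine."""
    c_obs = pywt.wavedec(observed, wavelet, level=level)
    c_pre = pywt.wavedec(predicted, wavelet, level=level)
    # c[0] holds the coarsest approximation; c[1:] hold detail levels.
    return [float(np.sum((o - p) ** 2)) for o, p in zip(c_obs, c_pre)]

t = np.linspace(0.0, 1.0, 1024)
observed = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
predicted = np.sin(2 * np.pi * 5 * (t - 0.01)) * np.exp(-3 * t)  # small shift

# Fit the coarse scales first (robust to cycle skipping), then finer ones.
for k, m in enumerate(multiscale_misfits(observed, predicted)):
    print(f"scale {k} (0 = coarsest): misfit = {m:.4g}")
```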
This study considers the performance of 7 of the Weather Research and Forecasting model boundary-layer (BL) parameterization schemes in a complex ... schemes performed best. The surface parameters, planetary BL structure, and vertical profiles are important.
US Army Research Laboratory
NASA Astrophysics Data System (ADS)
Armand J, K. M.
2017-12-01
In this study, version 4 of the Regional Climate Model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC) and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind and aerosol optical depth. Emphasis in the model results was placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis, namely zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that regardless of period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of the regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that the use of BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Liwei; Qian, Yun; Zhou, Tianjun
2014-10-01
In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies, which customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Since the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) distribution of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter "relative humidity criterion" (RH), which had not been considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, contributed by the positive responses of explicit rainfall. Following an increase of RH, increases in low-level convergence and the associated increases in cloud water favor an increase in explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model's new version with a mixed convection scheme.
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of a dynamical system to estimate the parameters of a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme greatly improves the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
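A minimal sketch of the stage-by-stage idea in Python with SciPy: the parameter of a single component (here a in dy/dt = x + a*y of the Rössler system) is estimated from the known time series of all components; for brevity this sketch closes the stage with a least-squares fit where the paper uses an evolutionary algorithm:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rössler system with true parameters (a, b, c).
a_true, b_true, c_true = 0.2, 0.2, 5.7
def rossler(t, s):
    x, y, z = s
    return [-y - z, x + a_true * y, b_true + z * (x - c_true)]

sol = solve_ivp(rossler, (0.0, 50.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 50.0, 5001), rtol=1e-8, atol=1e-8)
t, (x, y, z) = sol.t, sol.y
dt = t[1] - t[0]

# Stage-by-stage estimation: use the known series of ALL components to
# estimate the parameter of ONE component. Since dy/dt = x + a*y, the
# value of a follows from fitting (dy/dt - x) against y.
dydt = np.gradient(y, dt)
a_est = np.sum((dydt - x) * y) / np.sum(y * y)
print("estimated a:", a_est)  # close to 0.2
```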
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here the framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the IC/BC option. Simulation generally benefits from finer resolutions, up to 5 km. At the 15 km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5 km level. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands for extreme storm event forecasting and analyses for the design, operation and risk assessment of large water infrastructure.
Fan, X; He, L; Lu, H W; Li, J
2014-09-01
This study proposes an environmental- and health-risk-induced remediation design approach for benzene-contaminated groundwater. It treats exposure frequency and intake rate, factors that are important but difficult to quantify exactly, as breakthrough points. Flexible health-risk control is considered in the simulation and optimization work. The proposed approach is then applied to a petroleum-contaminated site in western Canada. Different situations regarding remediation durations, public concerns, and satisfactory degrees are addressed by the approach. The relationship between environmental standards and health-risk limits is analyzed, in association with their effect on remediation costs. Insights into three uncertain factors (i.e. exposure frequency, intake rate and health-risk threshold) for the remediation system are also explored, on the basis of understanding their impacts on health risk as well as their order of importance. The case study results show that (1) natural attenuation plays a more important role in the long-term remediation scheme than the pump-and-treat system; (2) carcinogenic risks have a greater impact on total pumping rates than environmental standards for long-term remediation; (3) intake rate is the second most important factor affecting the remediation system's performance, followed by exposure frequency; (4) the 10-year remediation scheme is the most robust choice when environmental and health-risk concerns are not well quantified. Copyright © 2014 Elsevier Ltd. All rights reserved.
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
NASA Astrophysics Data System (ADS)
Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry
2017-04-01
Root traits are increasingly important in the breeding of new crop varieties. For example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from field-based classical root sampling schemes, based on sensitivity analysis and inverse parameter estimation. The methodology was developed in a virtual experiment where a root architectural model was used to simulate root system development in a field, parameterized for winter wheat. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interactions with the other parameters have a highly nonlinear effect on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes to identify specific traits or parameters of the root growth model.
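A minimal sketch of a Morris OAT screening in Python, assuming the SALib package is available (the three-parameter toy model, names, and bounds are illustrative, not the root-architecture model's 37 parameters):

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

# Illustrative problem definition with root-architecture-like parameters.
problem = {
    "num_vars": 3,
    "names": ["branch_number", "insertion_angle", "elongation_rate"],
    "bounds": [[1.0, 10.0], [10.0, 80.0], [0.5, 3.0]],
}

def toy_model(row):
    # Stand-in for an aggregated output such as root length density.
    b, ang, e = row
    return b * e + 0.01 * ang * e**2

# One-at-a-time trajectories, model runs, and elementary-effect statistics.
X = morris_sample(problem, N=100, num_levels=4)
Y = np.array([toy_model(row) for row in X])
Si = morris_analyze(problem, X, Y, num_levels=4)

for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name}: mu* = {mu_star:.3f}, sigma = {sigma:.3f}")
```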
Determining Consumer Preference for Furniture Product Characteristics
ERIC Educational Resources Information Center
Turner, Carolyn S.; Edwards, Kay P.
1974-01-01
The paper describes instruments for determining preferences of consumers for selected product characteristics associated with furniture choices--specifically style, color, color scheme, texture, and materials--and the procedures for administration of those instruments. Results are based on a random sampling of public housing residents. (Author/MW)
Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats
NASA Astrophysics Data System (ADS)
Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.
2018-03-01
Heterogeneous multiscale methods (HMM) combine the molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a simple isokinetic thermostat is typically sufficient, while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and the finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered, as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce the computational effort of microscopic simulations.
Coronini-Cronberg, Sophie; Laohasiriwong, Wongsa; Gericke, Christian A
2007-01-01
Background In 2001, the Government of Thailand introduced a universal coverage scheme with the aim of ensuring equitable health care access for even the poorest citizens. For a flat user fee of 30 Baht per consultation, or for free for those falling into exemption categories, every scheme participant may access registered health services. The exemption categories include children under 12 years of age, senior citizens aged 60 years and over, the very poor, and volunteer health workers. The functioning of these exemption mechanisms and the effect of the scheme on health service utilisation among the poor is controversial. Methods This cross-sectional study investigated the prevalence of 30-Baht Scheme registration and subsequent self-reported health service utilisation among an urban poor population in the Teparuk community within the Mitrapap slum in Khon Kaen city, northeastern Thailand. Furthermore, the effectiveness of the exemption mechanisms in reaching the very poor and the elderly was examined. Factors for users' choice of health facilities were identified. Results Overall, the proportion of the Teparuk community enrolled with the 30-Baht Scheme was high at 86%, with over one quarter of these exempted from paying the consultation fee. User fee exemption was significantly more frequent among households with an above-poverty-line income (64.7%) compared to those below the poverty line (35.3%), χ2 (df) = 5.251 (1); p-value = 0.018. In addition, one third of respondents over 60 years of age were found to be still paying user fees. Self-reported use of registered medical facilities in case of illness was stated to be predominantly due to the service being available through the scheme, with service quality not a chief consideration. Overall consumer satisfaction was high, especially among those not required to pay the 30 Baht user fee. Conclusion Whilst the 30-Baht Scheme seems to cover most of the poor population of Mitrapap slum in Khon Kaen, the user fee exemption mechanism only works partially with regard to reaching the poorest and exempting senior citizens. Service utilisation and satisfaction are highest amongst those who are fee-exempt. Service quality was not an important factor influencing choice of health facility. Ways should be sought to improve the effectiveness of the current exemption mechanisms. PMID:17883874
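A minimal sketch of the reported independence test in Python with SciPy (the 2x2 counts below are invented solely to show the mechanics, not the study's data):

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table: fee-exemption status vs. household income
# relative to the poverty line (counts are made up for this example).
table = [
    [44, 24],  # exempted: above / below poverty line
    [95, 69],  # paying:   above / below poverty line
]

chi2, p, df, expected = chi2_contingency(table, correction=False)
print(f"chi2({df}) = {chi2:.3f}, p = {p:.3f}")
```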
An adaptive coupling strategy for joint inversions that use petrophysical information as constraints
NASA Astrophysics Data System (ADS)
Heincke, Björn; Jegen, Marion; Moorkamp, Max; Hobbs, Richard W.; Chen, Jin
2017-01-01
Joint inversion strategies for geophysical data have become increasingly popular as they allow for the efficient combination of complementary information from different data sets. The algorithm used for the joint inversion needs to be flexible in its description of the subsurface so as to be able to handle the diverse nature of the data. Hence, joint inversion schemes are needed that 1) adequately balance data from the different methods, 2) have stable convergence behavior, 3) consider the different resolution power of the methods used and 4) link the parameter models in a way that suits a wide range of applications. Here, we combine active source seismic P-wave tomography, gravity and magnetotelluric (MT) data in a petrophysical joint inversion that accounts for these issues. Data from the different methods are inverted separately but are linked through constraints accounting for parameter relationships. An advantage of performing the inversions separately is that no relative weighting between the data sets is required. To avoid perturbing the convergence behavior of the inversions by the coupling, the strengths of the constraints are readjusted at each iteration. The criterion we use to control the adaptation of the coupling strengths is based on variations in the objective functions of the individual inversions from one iteration to the next. Adaptation of the coupling strengths also makes the joint inversion scheme applicable to subsurface conditions where the assumed relationships are not valid everywhere, because the individual inversions decouple if it is not possible to reach adequately low data misfits under those assumptions. In addition, the coupling constraints depend on the relative resolutions of the methods, which leads to improved convergence behavior of the joint inversion. Another benefit of the proposed scheme is that structural information can easily be incorporated in the petrophysical joint inversion (no additional terms are added to the objective functions) by using mutually controlled structural weights for the smoothing constraints. We test our scheme using data generated from a synthetic 2-D sub-basalt model. We observe that the adaptation of the coupling strengths makes the convergence of the inversions very robust (data misfits of all methods are close to the target misfits) and that the final results are always close to the true models, independent of the parameter choices. Finally, the scheme is applied to real data sets from the Faroe-Shetland Basin to image a basaltic sequence and underlying structures. The presence of a borehole and a 3-D reflection seismic survey in this region allows direct comparison and, hence, evaluation of the quality of the joint inversion results. The results from the joint inversion are more consistent with results from other studies than those from the corresponding individual inversions, and the shape of the basaltic sequence is better resolved. However, due to the limited resolution of the individual methods used, it was not possible to resolve structures beneath the basalt in detail, indicating that additional geophysical information (e.g. CSEM, reflection onsets) needs to be included.
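A minimal sketch of one plausible reading of the adaptive-coupling idea in Python (the update rule and constants are hypothetical illustrations of "readjust the constraint strength from the change in each inversion's objective function", not the authors' actual criterion):

```python
def update_coupling(weight, obj_prev, obj_curr, tol=0.01, grow=1.5, shrink=0.5):
    """Hypothetical per-iteration coupling-strength update.

    If an individual inversion's objective function still decreases under
    the current coupling, the constraint can be tightened; if it stagnates
    or worsens, the constraint is relaxed so that the inversion can
    decouple where the assumed parameter relationship fails.
    """
    rel_change = (obj_prev - obj_curr) / max(abs(obj_prev), 1e-12)
    return weight * (grow if rel_change > tol else shrink)

# Example: the seismic misfit keeps improving, so its coupling tightens,
# then stalls, so the coupling is relaxed again.
w = 1.0
for obj_prev, obj_curr in [(100.0, 90.0), (90.0, 89.95), (89.95, 85.0)]:
    w = update_coupling(w, obj_prev, obj_curr)
    print(f"coupling weight -> {w:.3f}")
```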
Getting a healthy start: The effectiveness of targeted benefits for improving dietary choices.
Griffith, Rachel; von Hinke, Stephanie; Smith, Sarah
2018-03-01
There is growing policy interest in encouraging better dietary choices. We study a nationally-implemented policy - the UK Healthy Start scheme - that introduced vouchers for fruit, vegetables and milk. We show that the policy has increased spending on fruit and vegetables and has been more effective than an equivalent-value cash benefit. We also show that the policy improved the nutrient composition of households' shopping baskets, with no offsetting changes in spending on other foodstuffs. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
The selection criteria elements of X-ray optics system
NASA Astrophysics Data System (ADS)
Plotnikova, I. V.; Chicherina, N. V.; Bays, S. S.; Bildanov, R. G.; Stary, O.
2018-01-01
When designing new modifications of X-ray tomographs, difficulties arise in the correct choice of the elements of the X-ray optical system. At present this problem is solved by practical trial, selecting values of the corresponding parameters, such as the X-ray tube voltage, taking into account the thickness and type of the studied material. To reduce the time and labor of design, it is necessary to establish selection criteria and to determine the key parameters and characteristics of the elements. In this article two main elements of the X-ray optical system, the X-ray tube and the X-ray detector, are considered. Selection criteria for the elements, their key characteristics, the main parameter dependences, quality indicators, and recommendations for choosing the elements of X-ray systems are presented.
NASA Astrophysics Data System (ADS)
Zwanenburg, Philip; Nadarajah, Siva
2016-02-01
The aim of this paper is to demonstrate the equivalence between filtered Discontinuous Galerkin (DG) schemes and the Energy Stable Flux Reconstruction (ESFR) schemes, expanding on previous demonstrations in 1D [1] and for straight-sided elements in 3D [2]. We first derive the DG and ESFR schemes in strong form and compare the respective flux penalization terms while highlighting the implications of the fundamental assumptions for stability in the ESFR formulations, notably that all ESFR scheme correction fields can be interpreted as modally filtered DG correction fields. We present the result in the general context of all higher dimensional curvilinear element formulations. Through a demonstration that there exists a weak form of the ESFR schemes which is both discretely and analytically equivalent to the strong form, we then extend the results obtained for the strong formulations to demonstrate that ESFR schemes can be interpreted as a DG scheme in weak form where discontinuous edge flux is substituted for numerical edge flux correction. Theoretical derivations are then verified with numerical results obtained from a 2D Euler test case with curved boundaries. Given the current choice of high-order DG-type schemes and the question as to which might be best to use for a specific application, the main significance of this work is the bridge that it provides between them. Clearly outlining the similarities between the schemes results in the important conclusion that it is always less efficient to use ESFR schemes, as opposed to the weak DG scheme, when solving problems implicitly.
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) implementations are based on weak coherent sources (WCS) and are limited in transmission distance under realistic experimental conditions, e.g., considering finite-size-key effects. Hence, in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources (HSPS) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes using WCS or HSPS, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
Two loop QCD vertices at the symmetric point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gracey, J. A.
2011-10-15
We compute the triple gluon, quark-gluon and ghost-gluon vertices of QCD at the symmetric subtraction point at two loops in the $\overline{\mathrm{MS}}$ scheme. In addition we renormalize each of the three vertices in their respective momentum subtraction schemes, MOMggg, MOMq and MOMh. The conversion functions of all the wave functions, coupling constant and gauge parameter renormalization constants of each of the schemes relative to $\overline{\mathrm{MS}}$ are determined analytically. These are then used to derive the three loop anomalous dimensions of the gluon, quark, Faddeev-Popov ghost and gauge parameter as well as the β function in an arbitrary linear covariant gauge for each MOM scheme. There is good agreement of the latter with earlier Landau gauge numerical estimates of Chetyrkin and Seidensticker.
An adaptive Cartesian control scheme for manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.
Differential neurobiological effects of expert advice on risky choice in adolescents and adults.
Engelmann, Jan B; Moore, Sara; Monica Capra, C; Berns, Gregory S
2012-06-01
We investigated behavioral and neurobiological mechanisms by which risk-averse advice, provided by an expert, affected risky decisions across three developmental groups [early adolescents (12-14 years), late adolescents (15-17 years), adults (18+ years)]. Using cumulative prospect theory, we modeled choice behavior during a risky-choice task. Results indicate that advice had a significantly greater impact on risky choice in both adolescent groups than in adults. Using functional magnetic resonance imaging, we investigated the neural correlates of this behavioral effect. Developmental effects on correlations between brain activity and valuation parameters were obtained in regions that can be classified into (i) cognitive control regions, such as dorsolateral prefrontal cortex (DLPFC) and ventrolateral PFC; (ii) social cognition regions, such as posterior temporoparietal junction; and (iii) reward-related regions, such as ventromedial PFC (vmPFC) and ventral striatum. Within these regions, differential effects of advice on neural correlates of valuation were observed across development. Specifically, advice increased the correlation strength between brain activity and parameters reflective of safe choice options in adolescent DLPFC and decreased correlation strength between activity and parameters reflective of risky choice options in adult vmPFC. Taken together, results indicate that, across development, distinct brain systems involved in cognitive control and valuation mediate the risk-reducing effect of advice during decision making under risk via specific enhancements and reductions of the correlation strength between brain activity and valuation parameters.
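A minimal sketch of the cumulative-prospect-theory building blocks used in such choice modeling, in Python (the Tversky-Kahneman functional forms with textbook parameter values, not the study's fitted estimates):

```python
import numpy as np

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, steeper for losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# Subjective values of a simple advice-sensitive choice pair:
# a risky prospect (win 40 with p = 0.5, else 0) vs. a safe 20.
risky = weight(0.5) * value(40.0)
safe = value(20.0)
print(f"risky: {risky:.2f}, safe: {safe:.2f}")  # safe > risky: risk aversion
```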
A tuned mesh-generation strategy for image representation based on data-dependent triangulation.
Li, Ping; Adams, Michael D
2013-05-01
A mesh-generation framework for image representation based on data-dependent triangulation is proposed. The proposed framework is a modified version of the frameworks of Rippa and of Garland and Heckbert that facilitates the development of more effective mesh-generation methods. As the proposed framework has several free parameters, the effects of different choices of these parameters on mesh quality are studied, leading to the recommendation of a particular set of choices for these parameters. A mesh-generation method is then introduced that employs the proposed framework with these best parameter choices. This method is demonstrated to produce meshes of higher quality (both in terms of squared error and subjectively) than those generated by several competing approaches, at a relatively modest computational and memory cost.
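A minimal sketch of one error-driven ingredient of such frameworks in Python with SciPy: greedy insertion of the worst-approximated pixel into a triangulation-based reconstruction (the published method additionally optimizes data-dependent connectivity; this sketch shows only the point-selection loop):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Toy "image": a smooth function sampled on a grid.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = np.sin(xx / 8.0) * np.cos(yy / 11.0)

# Start from the four corners, then greedily insert the pixel with the
# largest squared reconstruction error under linear interpolation.
pts = np.array([[0, 0], [0, n - 1], [n - 1, 0], [n - 1, n - 1]], float)
all_xy = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
for _ in range(60):
    vals = img[pts[:, 1].astype(int), pts[:, 0].astype(int)]
    interp = LinearNDInterpolator(pts, vals)  # Delaunay-based reconstruction
    err = (img.ravel() - interp(all_xy)) ** 2
    pts = np.vstack([pts, all_xy[np.nanargmax(err)]])

print("mesh size:", len(pts), "max squared error:", np.nanmax(err))
```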
Shan, Yan; Zeng, Meng-su; Liu, Kai; Miao, Xi-Yin; Lin, Jiang; Fu, Caixia; Xu, Peng-ju
2015-01-01
To evaluate the effect on image quality and intravoxel incoherent motion (IVIM) parameters of small hepatocellular carcinoma (HCC) of the choice between free-breathing (FB) and navigator-triggered (NT) diffusion-weighted (DW) imaging. Thirty patients with 37 small HCCs underwent IVIM DW imaging using 12 b values (0-800 s/mm²) with 2 sequences: NT and FB. A biexponential analysis with the Bayesian method yielded the true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) in small HCCs and liver parenchyma. The apparent diffusion coefficient (ADC) was also calculated. The acquisition time and image quality scores were assessed for the 2 sequences. The independent-sample t test was used to compare image quality, signal intensity ratio, IVIM parameters, and ADC values between the 2 sequences; the reproducibility of IVIM parameters and ADC values between the 2 sequences was assessed with the Bland-Altman method (BA-LA). Image quality with the NT sequence was superior to that with the FB acquisition (P = 0.02). The mean acquisition time for the FB scheme was shorter than that of the NT sequence (6 minutes 14 seconds vs 10 minutes 21 seconds ± 10 seconds; P < 0.01). The signal intensity ratio of small HCCs did not vary significantly between the 2 sequences. The ADC and IVIM parameters from the 2 sequences showed no significant difference. Reproducibility of the D* and f parameters in small HCC was poor (BA-LA: 95% confidence interval, -180.8% to 189.2% for D* and -133.8% to 174.9% for f). Moderate reproducibility of the D and ADC parameters was observed (BA-LA: 95% confidence interval, -83.5% to 76.8% for D and -74.4% to 88.2% for ADC) between the 2 sequences. The NT DW imaging technique offers no advantage in IVIM parameter measurements of small HCC except better image quality, whereas the FB technique offers greater confidence in fitted diffusion parameters for matched acquisition periods.
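A minimal sketch of the biexponential IVIM fit in Python with SciPy (ordinary least squares rather than the Bayesian fitting used in the study; b-values and parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Biexponential IVIM signal model, S(b)/S0."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

# Illustrative b-values (s/mm^2) and a noisy synthetic signal.
b = np.array([0, 10, 20, 40, 60, 100, 150, 200, 400, 600, 700, 800], float)
rng = np.random.default_rng(4)
signal = ivim(b, f=0.25, d_star=0.05, d=0.0012) + rng.normal(0, 0.005, b.size)

# Fit with physically motivated bounds (f in [0, 1], D* well above D).
popt, _ = curve_fit(ivim, b, signal, p0=[0.2, 0.02, 0.001],
                    bounds=([0.0, 0.003, 0.0001], [1.0, 0.5, 0.003]))
f, d_star, d = popt
print(f"f = {f:.3f}, D* = {d_star:.4f}, D = {d:.5f} (mm^2/s)")
```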
NASA Astrophysics Data System (ADS)
Choi, Jin-Ho; Seo, Kyong-Hwan
2017-06-01
This work seeks to find the most effective parameters in a deep convection scheme (the relaxed Arakawa-Schubert scheme) of the National Centers for Environmental Prediction Climate Forecast System model for improved simulation of the Madden-Julian Oscillation (MJO). A suite of sensitivity experiments is performed by changing physical components such as the relaxation parameter of mass flux for adjustment of the environment, the evaporation rate from large-scale precipitation, the moisture trigger threshold using the relative humidity of the boundary layer, and the fraction of re-evaporation of convective (subgrid-scale) rainfall. Among them, the last two parameters are found to produce a significant improvement. Increasing the strength of these two parameters reduces light rainfall that inhibits complete formation of tropical convective systems, or supplies more moisture that helps increase the potential energy of the large-scale environment in the lower troposphere (especially at 700 hPa), leading to moisture preconditioning favorable for further development and eastward propagation of the MJO. In a more humid environment, a more organized MJO structure (i.e., space-time spectral signal, eastward propagation, and tilted vertical structure) is produced.
NASA Astrophysics Data System (ADS)
Palkin, V. A.; Igoshin, I. S.
2017-01-01
The separation potentials suggested by various researchers for separating multicomponent isotopic mixtures are considered. Their applicability to determining the enrichment-efficiency parameters of a ternary mixture is assessed for a cascade with an optimal scheme for connecting stages made up of elements with three takeoffs. The separation potential that most precisely characterizes the separative power and other efficiency parameters of the stages and cascade schemes was selected based on the results of this assessment.
Dealing with the time-varying parameter problem of robot manipulators performing path tracking tasks
NASA Technical Reports Server (NTRS)
Song, Y. D.; Middleton, R. H.
1992-01-01
Many robotic applications involve time-varying payloads during the operation of the robot. It is therefore of interest to consider control schemes that deal with time-varying parameters. Using the properties of the element-by-element (or Hadamard) product of matrices, we obtain the robot dynamics in parameter-isolated form, from which a new control scheme is developed. The proposed controller yields zero asymptotic tracking errors when applied to robotic systems with time-varying parameters by using a switching-type control law. The results obtained are global in the initial state of the robot, and can be applied to rapidly varying systems.
Comparative Study of Three High Order Schemes for LES of Temporally Evolving Mixing Layers
NASA Technical Reports Server (NTRS)
Yee, Helen M. C.; Sjogreen, Biorn Axel; Hadjadj, C.
2012-01-01
Three high order shock-capturing schemes are compared for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) ranging from the quasi-incompressible regime to highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7) and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameter agree well with experimental results of Barone et al. (2006), and published direct numerical simulations (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with experimental data and DNS computations.
Determination of the QCD Λ Parameter and the Accuracy of Perturbation Theory at High Energies.
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2016-10-28
We discuss the determination of the strong coupling α_{\overline{MS}}(m_Z) or, equivalently, the QCD Λ parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme s and at some energy scale μ. The higher the scale μ, the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ parameter in three-flavor QCD, we perform lattice computations in a scheme that allows us to nonperturbatively reach very high energies, corresponding to α_s = 0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a 3% error in the Λ parameter, while data around α_s ≈ 0.2 are clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
van Herpen, Erica; Trijp, Hans C M van
2011-08-01
Although front-of-pack nutrition labeling can help consumers make healthier food choices, lack of attention to these labels limits their effectiveness. This study examines consumer attention to and use of three different nutrition labeling schemes (logo, multiple traffic-light label, and nutrition table) when they face different goals and resource constraints. To understand attention and processing of labels, various measures are used including self-reported use, recognition, and eye-tracking measures. Results of two experiments in different countries show that although consumers evaluate the nutrition table most positively, it receives little attention and does not stimulate healthy choices. Traffic-light labels and especially logos enhance healthy product choice, even when consumers are put under time pressure. Additionally, health goals of consumers increase attention to and use of nutrition labels, especially when these health goals concern specific nutrients. Copyright © 2011 Elsevier Ltd. All rights reserved.
Wu, Jin-Lei; Ji, Xin; Zhang, Shou
2017-01-01
We propose a dressed-state scheme to achieve shortcuts to adiabaticity in atom-cavity quantum electrodynamics for speeding up adiabatic two-atom quantum state transfer and maximum entanglement generation. Compared with stimulated Raman adiabatic passage, the dressed-state scheme greatly shortens the operation time in a non-adiabatic way. By means of some numerical simulations, we determine the parameters which can guarantee the feasibility and efficiency both in theory and experiment. Besides, numerical simulations also show the scheme is robust against the variations in the parameters, atomic spontaneous emissions and the photon leakages from the cavity. PMID:28397793
A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Qiu, Jianxian
2017-11-01
In this paper a third order finite volume weighted essentially non-oscillatory (WENO) scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of the new scheme are its simplicity and compactness, using only six unequal-size spatial stencils for reconstructing polynomials of unequal degree in the WENO-type spatial procedure, and an easy choice of positive linear weights that does not depend on the topology of the meshes. The key innovation of the scheme is to use a quadratic polynomial defined on a big central spatial stencil to obtain third order numerical approximations at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils to sustain sharp shock transitions while keeping the essentially non-oscillatory property. By performing these new spatial reconstruction procedures and adopting a third order TVD Runge-Kutta time discretization method for solving the ordinary differential equation (ODE), the new scheme's memory occupancy is decreased and its computing efficiency is increased, making it suitable for large scale engineering computations on tetrahedral meshes. Numerical results are provided to illustrate the good performance of the scheme.
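For background, a minimal sketch of the one-dimensional WENO weighting idea underlying such schemes (classic Jiang-Shu style, not the paper's 3D tetrahedral reconstruction): candidate stencils are blended by nonlinear weights built from smoothness indicators, so the scheme reverts to the optimal linear combination in smooth regions and switches off stencils that cross a discontinuity:

```python
import numpy as np

def weno_weights(betas, gammas, eps=1e-6, p=2):
    """Classic WENO nonlinear weights: w_k ∝ gamma_k / (eps + beta_k)^p,
    normalized to sum to one. A stencil with a large smoothness indicator
    beta_k (i.e., one crossing a discontinuity) gets weight near zero."""
    w = np.asarray(gammas) / (eps + np.asarray(betas)) ** p
    return w / w.sum()

# Smooth data: weights stay close to the linear (optimal) weights ...
print(weno_weights(betas=[1e-8, 1.2e-8, 0.9e-8], gammas=[0.1, 0.6, 0.3]))
# ... while a stencil crossing a shock (huge beta) is effectively switched off.
print(weno_weights(betas=[1e-8, 1.2e-8, 4.0], gammas=[0.1, 0.6, 0.3]))
```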
Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.
Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk
2015-08-01
The purpose of this study was to evaluate grouping schemes for exposure to total dust in cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken from nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and highest elasticity among various grouping schemes. Our findings suggest that a grouping method based on ranking job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
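A minimal sketch of the kind of model described, using statsmodels with hypothetical column names (ln_dust, year, group) and one common definition of elasticity as the between-group share of total variance; the data file and spline degrees of freedom are assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: 'ln_dust' (log-transformed total dust), 'year', and
# 'group' (the grouping scheme under evaluation, e.g. a job-factory rank).
df = pd.read_csv("cement_dust.csv")  # assumed measurement file

# Mixed-effects model: B-spline time trend as fixed effect,
# grouping variable as a random intercept.
model = smf.mixedlm("ln_dust ~ bs(year, df=4)", data=df, groups=df["group"])
fit = model.fit()

# Variance components: between-group and within-group (residual) variance.
var_between = float(fit.cov_re.iloc[0, 0])
var_within = fit.scale
# Elasticity taken here as the between-group share of total variance,
# one common contrast measure for exposure grouping schemes.
elasticity = var_between / (var_between + var_within)
print(var_between, var_within, elasticity)
```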
Sensitivity of boundary layer variables to PBL schemes over the central Tibetan Plateau
NASA Astrophysics Data System (ADS)
Xu, L.; Liu, H.; Wang, L.; Du, Q.; Liu, Y.
2017-12-01
Planetary Boundary Layer (PBL) parameterization schemes play a critical role in numerical weather prediction and research. They describe the physical processes associated with the exchange of momentum, heat, and humidity between the land surface and the atmosphere. In this study, two non-local (YSU and ACM2) and two local (MYJ and BouLac) planetary boundary layer parameterization schemes in the Weather Research and Forecasting (WRF) model have been tested over the central Tibetan Plateau with regard to their capability to model boundary layer parameters relevant to surface energy exchange. The model performance has been evaluated against measurements from the Third Tibetan Plateau atmospheric scientific experiment (TIPEX-III). Simulated meteorological parameters and turbulence fluxes have been compared with observations through standard statistical measures. Model results show acceptable behavior, but no particular scheme produces the best performance for all locations and parameters. All PBL schemes underestimate near-surface air temperatures over the Tibetan Plateau. Investigation of the surface energy budget components suggests that downward longwave radiation and sensible heat flux are the main factors causing the lower near-surface temperature. Because the downward longwave radiation and sensible heat flux are affected by atmospheric moisture and land-atmosphere coupling, respectively, improvements in the water vapor distribution and land-atmosphere energy exchange are important for a better representation of PBL physical processes over the central Tibetan Plateau.
Time‐efficient and flexible design of optimized multishell HARDI diffusion
Tournier, J. Donald; Price, Anthony N.; Cordero‐Grande, Lucilio; Hughes, Emer J.; Malik, Shaihan; Steinweg, Johannes; Bastiani, Matteo; Sotiropoulos, Stamatios N.; Jbabdi, Saad; Andersson, Jesper; Edwards, A. David; Hajnal, Joseph V.
2017-01-01
Purpose Advanced diffusion magnetic resonance imaging benefits from collecting as much data as is feasible but is highly sensitive to subject motion and the risk of data loss increases with longer acquisition times. Our purpose was to create a maximally time‐efficient and flexible diffusion acquisition capability with built‐in robustness to partially acquired or interrupted scans. Our framework has been developed for the developing Human Connectome Project, but different application domains are equally possible. Methods Complete flexibility in the sampling of diffusion space combined with free choice of phase‐encode‐direction and the temporal ordering of the sampling scheme was developed taking into account motion robustness, internal consistency, and hardware limits. A split‐diffusion‐gradient preparation, multiband acceleration, and a restart capacity were added. Results The framework was used to explore different parameter choices for the desired high angular resolution diffusion imaging diffusion sampling. For the developing Human Connectome Project, a high‐angular resolution, maximally time‐efficient (20 min) multishell protocol with 300 diffusion‐weighted volumes was acquired in >400 neonates. An optimal design of a high‐resolution (1.2 × 1.2 mm²) two‐shell acquisition with 54 diffusion‐weighted volumes was obtained using a split‐gradient design. Conclusion The presented framework provides flexibility to generate time‐efficient and motion‐robust diffusion magnetic resonance imaging acquisitions taking into account hardware constraints that might otherwise result in sub‐optimal choices. Magn Reson Med 79:1276–1292, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28557055
Gyroaveraging operations using adaptive matrix operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock
2018-05-17
A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the gyroaveraging operations performed with matrices, even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium, has been studied. As a result, a successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
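A toy illustration of the underlying operation (not XGC's field-line-following matrix implementation): gyroaveraging a 2D field by averaging over sample points on a ring of the local Larmor radius, with the number of gyro-angle samples adapted to the ring size:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def gyroaverage(field, ix, iy, rho, dx, n_min=4):
    """Toy gyroaverage of a 2D field at grid point (ix, iy): average over
    sample points on a circle of radius rho, with the number of gyro-angle
    samples adapted to the local Larmor radius (bigger ring, more points)."""
    n = max(n_min, int(np.ceil(2 * np.pi * rho / dx)))  # adaptive resolution
    theta = 2 * np.pi * np.arange(n) / n
    xs = ix + (rho / dx) * np.cos(theta)   # sample coordinates in grid units
    ys = iy + (rho / dx) * np.sin(theta)
    vals = map_coordinates(field, [xs, ys], order=1, mode="nearest")
    return vals.mean()

# Example: gyroaveraging smooths a short-wavelength perturbation.
x = np.linspace(0, 1, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
field = np.sin(40 * X)
print(field[50, 50], gyroaverage(field, 50, 50, rho=0.05, dx=0.01))
```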
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e., an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked are recursively solved by a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, numerical implementation, and parameter investigation. Comparisons of the adaptive VKF_OT scheme with two existing schemes are performed by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of process noise, and data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are discussed, although the computations were performed off-line. The proposed scheme can simultaneously extract multiple spectral components, and effectively decouples close and crossing orders associated with multi-axial reference rotating speeds.
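A simplified single-order tracker in this spirit (a sketch, not the authors' exact recursion; all constants are illustrative): a scalar complex Kalman filter whose state is the slowly varying envelope of one order, with the one-step state prediction made explicit:

```python
import numpy as np

def track_order(y, theta, q=1e-4, r=1e-1):
    """Scalar complex Kalman filter: the state a_k is the slowly varying
    complex envelope of one order, random-walk model a_k = a_{k-1} + w;
    measurement y_k = a_k * exp(1j*theta_k) + v, with theta_k the
    integrated order phase from the reference rotating speed."""
    a, P = 0.0 + 0.0j, 1.0
    est = np.empty(len(y), dtype=complex)
    for k in range(len(y)):
        P = P + q                      # one-step state prediction
        h = np.exp(1j * theta[k])      # measurement 'matrix' (|h| = 1)
        K = P * np.conj(h) / (P + r)   # Kalman gain
        a = a + K * (y[k] - h * a)     # update with the innovation
        P = (1.0 - K * h) * P
        est[k] = a
    return est

fs = 1000
t = np.arange(0, 1, 1 / fs)
theta = 2 * np.pi * (10 * t + 10 * t**2)       # order phase during a run-up
amp = 1.0 + 0.5 * t                            # slowly varying amplitude
y = amp * np.exp(1j * theta) + 0.1 * np.random.randn(len(t))
print(abs(track_order(y, theta)[-10:]))        # should approach ~1.5
```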
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
A special protection scheme utilizing trajectory sensitivity analysis in power transmission
NASA Astrophysics Data System (ADS)
Suriyamongkol, Dan
In recent years, new measurement techniques have provided opportunities to improve the observability, control, and protection of the North American power system. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques, with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and its mathematical formulation lends itself well to time-domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach uses TSA in a novel way: the sensitivities of system state variables with respect to parameter variations determine the parameter controls required to achieve the desired state variable movements. The initial application assumes that the modeled power system has full observability; practical considerations are also discussed.
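A minimal sketch of the idea behind trajectory sensitivities, using finite differences of two simulations of a toy one-machine swing equation (production TSA typically integrates the variational equations alongside the system; all constants here are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def swing(t, x, pm):
    """Toy one-machine swing equation: x = [delta, omega]."""
    delta, omega = x
    pe = 1.2 * np.sin(delta)                     # electrical power (toy constants)
    return [omega, (pm - pe - 0.1 * omega) / 0.2]

def trajectory_sensitivity(pm, dp=1e-4, x0=(0.5, 0.0), t_end=5.0):
    """Approximate d(trajectory)/d(pm) by finite differences of two runs."""
    t_eval = np.linspace(0.0, t_end, 200)
    nom = solve_ivp(swing, (0, t_end), x0, args=(pm,), t_eval=t_eval)
    pert = solve_ivp(swing, (0, t_end), x0, args=(pm + dp,), t_eval=t_eval)
    return t_eval, (pert.y - nom.y) / dp         # sensitivity of delta and omega

t, S = trajectory_sensitivity(pm=0.8)
print(S[:, -1])   # sensitivity of [delta, omega] to mechanical power at t_end
```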
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1982-01-01
Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.
Zanchetti, A
1985-01-01
Diuretics have so far enjoyed a prominent position in all stepped-care programs, as the preferred first-choice drug in most American schemes or as an alternative first-choice drug with respect to beta-blockers in the WHO scheme. Among the reasons for this prominence is that, until recently, all available antihypertensive drugs gradually led to sodium and water retention and therefore had to be combined with a diuretic. This is no longer true: several antihypertensive agents are now available that do not require combination with diuretics, including not only beta-blockers but also angiotensin-converting enzyme (ACE) inhibitors and calcium entry blockers. Furthermore, some concern about the metabolic effects of diuretics has recently been raised, especially because of the failure of current diuretic-based antihypertensive regimens to prevent coronary heart disease. Without denying the importance that diuretics have had in the past in making antihypertensive therapy successful, or their continuing essential role in treating severe hypertension, it is likely, in my opinion, that in future years diuretics will more often be used as agents of second choice, mostly in combination with beta-blockers, ACE inhibitors, and perhaps some of the calcium blockers. In conclusion, although the opinions of various experts about the sequence of choices among antihypertensive drugs may differ, there is no doubt that the addition of new classes of effective agents, such as the ACE inhibitors and the calcium entry blockers, is making antihypertensive therapy more flexible and more easily suited to the needs of individual patients.
de Andrade, Juliana Cunha; Nalério, Elen Silveira; Giongo, Citieli; de Barcellos, Marcia Dutra; Ares, Gastón; Deliza, Rosires
2017-08-01
The development of high-quality air-dried cured sheep meat products adapted to meet consumer demands represents an interesting option to add value to the meat of adult animals. The present study aimed to evaluate the influence of process parameters on consumer choice of two products from sheep meat under different evoked contexts, considering product concepts. A total of 375 Brazilian participants completed a choice-based conjoint task with three 2-level variables for each product: maturation time, smoking, and sodium reduction for dry-cured sheep ham, and natural antioxidant, smoking, and sodium reduction for sheep meat coppa. A between-subjects experimental design was used to evaluate the influence of consumption context on consumer choices. All the process parameters significantly influenced consumer choice. However, their relative importance was affected by the evoked context. Copyright © 2017. Published by Elsevier Ltd.
The QKD network: model and routing scheme
NASA Astrophysics Data System (ADS)
Yang, Chao; Zhang, Hongqi; Su, Jinhai
2017-11-01
Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trust-relaying QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trust-relaying QKD network in detail. This paper focuses on the routing issues, builds a model of the trust-relaying QKD network for easier analysis and understanding of this network, and proposes a dynamical routing scheme for it. Following the design of dynamical routing schemes in classical networks, the proposed scheme consists of three components: a Hello protocol that helps share network topology information, a routing algorithm that selects a set of suitable paths and establishes the routing table, and a link-state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
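A toy sketch of the routing ingredient (hypothetical key-pool numbers; the Hello protocol and link-state updates are only crudely mimicked): shortest-path selection over a trust-relay topology where link cost grows as the stored secret key is depleted:

```python
import networkx as nx

# Toy trust-relay QKD network: each edge holds a pool of secret key bits
# replenished by its QKD link; routing prefers links with plentiful key.
G = nx.Graph()
G.add_edge("A", "B", key_bits=5000)
G.add_edge("B", "C", key_bits=200)
G.add_edge("A", "D", key_bits=3000)
G.add_edge("D", "C", key_bits=4000)

# Link cost: inverse of available key, so key-starved links are avoided.
for u, v, d in G.edges(data=True):
    d["cost"] = 1.0 / d["key_bits"]

path = nx.shortest_path(G, "A", "C", weight="cost")
print(path)   # ['A', 'D', 'C'] - avoids the key-starved B-C link

# After relaying a 1000-bit session key, consume key along the path
# (a crude stand-in for the paper's link-state update mechanism).
for u, v in zip(path, path[1:]):
    G[u][v]["key_bits"] -= 1000
    G[u][v]["cost"] = 1.0 / G[u][v]["key_bits"]
```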
Butterfly Encryption Scheme for Resource-Constrained Wireless Networks †
Sampangi, Raghav V.; Sampalli, Srinivas
2015-01-01
Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with the growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, requiring fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis. PMID:26389899
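A generic hash-ratchet sketch of a seed-update mechanism in this spirit (an illustrative stand-in, not the Butterfly scheme's actual construction): each message derives fresh key and authentication material, then advances the seed one-way so past keys cannot be recovered from a compromised seed:

```python
import hashlib

def derive(seed: bytes, label: bytes) -> bytes:
    """Derive per-message material from the current seed."""
    return hashlib.sha256(label + seed).digest()

def step(seed: bytes) -> bytes:
    """One-way seed update (ratchet): previous seeds are unrecoverable."""
    return hashlib.sha256(b"update" + seed).digest()

seed = hashlib.sha256(b"shared-secret").digest()  # assumed pre-shared secret
for msg_no in range(3):
    key = derive(seed, b"enc")     # encryption key for this message
    auth = derive(seed, b"auth")   # authentication parameter
    print(msg_no, key.hex()[:16], auth.hex()[:16])
    seed = step(seed)              # continuously changing parameters
```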
Spectroscopy of the odd-odd fp-shell nucleus ⁵²Sc from secondary fragmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gade, A.; Bazin, D.; Mueller, W.F.
2006-03-15
The odd-odd fp-shell nucleus ⁵²Sc was investigated using in-beam γ-ray spectroscopy following secondary fragmentation of a ⁵⁵V and ⁵⁷Cr cocktail beam. Aside from the known γ-ray transition at 674(5) keV, a new decay at E_γ = 212(3) keV was observed. It is attributed to the depopulation of a low-lying excited level. This new state is discussed in the framework of shell-model calculations with the GXPF1, GXPF1A, and KB3G effective interactions. These calculations are found to be fairly robust for the low-lying level scheme of ⁵²Sc irrespective of the choice of the effective interaction. In addition, the frequency of spin values predicted by the shell model is successfully modeled by a spin distribution formulated in a statistical approach with an empirical, energy-independent spin-cutoff parameter.
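The statistical spin distribution referred to is typically of the Bethe form, P(J) ∝ (2J + 1) exp(-(J + 1/2)²/(2σ²)), with σ the spin-cutoff parameter; a minimal sketch assuming this form:

```python
import numpy as np

def spin_distribution(J, sigma):
    """Statistical-model spin distribution with spin-cutoff parameter sigma:
    P(J) ∝ (2J + 1) exp(-(J + 1/2)^2 / (2 sigma^2)), normalized over J."""
    w = (2 * J + 1) * np.exp(-((J + 0.5) ** 2) / (2 * sigma**2))
    return w / w.sum()

J = np.arange(0, 10)   # integer spins, as for an odd-odd (even-A) nucleus
print(spin_distribution(J, sigma=2.5).round(3))
```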
Large memory capacity in chaotic artificial neural networks: a view of the anti-integrable limit.
Lin, Wei; Chen, Guanrong
2009-08-01
In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models.
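A toy numerical probe of this claim (not the anti-integrable-limit analysis itself): estimate the largest Lyapunov exponent of a small recurrent network from the divergence of nearby trajectories, and compare a monotonic activation with a periodic one (the weight scale and seed below are arbitrary):

```python
import numpy as np

def lyapunov(phi, W, x0, n=2000, d0=1e-8):
    """Crude largest-Lyapunov-exponent estimate for x_{t+1} = phi(W @ x_t)
    from the average log-divergence of two nearby trajectories."""
    x, y, s = x0.copy(), x0 + d0, 0.0
    for _ in range(n):
        x, y = phi(W @ x), phi(W @ y)
        d = np.linalg.norm(y - x) + 1e-300
        s += np.log(d / d0)
        y = x + (y - x) * d0 / d       # renormalize the separation
    return s / n

rng = np.random.default_rng(1)
W = 3.0 * rng.standard_normal((8, 8)) / np.sqrt(8)
x0 = rng.standard_normal(8)
print("monotonic (tanh):", lyapunov(np.tanh, W, x0))
print("periodic (sin)  :", lyapunov(np.sin, W, x0))
```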
Pendzialek, Jonas B; Danner, Marion; Simic, Dusan; Stock, Stephanie
2015-05-01
This paper investigates the change in price elasticity of health insurance choice in Germany after a reform of health insurance contributions. Using a comprehensive data set of all sickness funds between 2004 and 2013, price elasticities are calculated both before and after the reform for the entire market. The general price elasticity is found to be increased more than 4-fold from -0.81 prior to the reform to -3.53 after the reform. By introducing a new kind of health insurance contribution the reform seemingly increased the price elasticity of insured individuals to a more appropriate level under the given market parameters. However, further unintended consequences of the new contribution scheme were massive losses of market share for the more expensive sickness funds and therefore an undivided focus on pricing as the primary competitive element to the detriment of quality. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models are examined that describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws (particularly stabilizing feedbacks) are examined, with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty in the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems, with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
Analysis, synchronisation and circuit design of a new highly nonlinear chaotic system
NASA Astrophysics Data System (ADS)
Mobayen, Saleh; Kingni, Sifeu Takougang; Pham, Viet-Thanh; Nazarimehr, Fahimeh; Jafari, Sajad
2018-02-01
This paper investigates a three-dimensional autonomous chaotic flow without linear terms. The dynamical behaviour of the proposed system is investigated through eigenvalue structures, phase portraits, bifurcation diagrams, Lyapunov exponents and basins of attraction. For a suitable choice of the parameters, the proposed system can exhibit anti-monotonicity, periodic oscillations and a double-scroll chaotic attractor. The basin of attraction of the proposed system shows that the chaotic attractor is self-excited. Furthermore, the feasibility of the double-scroll chaotic attractor in the real world is investigated via an electronic implementation of the proposed system in the OrCAD-PSpice software. Good qualitative agreement is illustrated between the numerical simulations and the OrCAD-PSpice results. Finally, a finite-time control method based on a dynamic sliding surface is applied to the synchronisation of master and slave chaotic systems in the presence of external disturbances. Using the suggested control technique, superior master-slave synchronisation is attained. Illustrative simulation results on the studied chaotic system are presented to indicate the effectiveness of the suggested scheme.
Jamil, Majid; Sharma, Sanjeev Kumar; Singh, Rajveer
2015-01-01
This paper focuses on the detection and classification of faults on electrical power transmission lines using artificial neural networks. The three-phase currents and voltages of one end are taken as inputs in the proposed scheme. A feed-forward neural network with the back-propagation algorithm has been employed for detection and classification of faults, analysing each of the three phases involved in the process. A detailed analysis with a varying number of hidden layers has been performed to validate the choice of the neural network. The simulation results show that the present method based on the neural network is efficient in detecting and classifying faults on transmission lines with satisfactory performance. Different faults are simulated with different parameters to check the versatility of the method. The proposed method can be extended to the distribution network of the power system. The various simulations and signal analyses are done in the MATLAB® environment.
Teleportation of entangled states without Bell-state measurement via a two-photon process
NASA Astrophysics Data System (ADS)
dSouza, A. D.; Cardoso, W. B.; Avelar, A. T.; Baseia, B.
2011-02-01
In this letter we propose a scheme using a two-photon process to teleport an entangled field state of a bimodal cavity to another bimodal cavity without Bell-state measurement. The quantum information is stored in a zero- and two-photon entangled state. The scheme requires two three-level atoms in a ladder configuration, two bimodal cavities, and selective atomic detectors. The fidelity and success probability do not depend on the coefficients of the state to be teleported. For convenient choices of interaction times, the teleportation occurs with fidelity close to unity.
Evaluating Payments for Environmental Services: Methodological Challenges
2016-01-01
Over the last fifteen years, Payments for Environmental Services (PES) schemes have become very popular environmental policy instruments, but the academic literature has begun to question their additionality. The literature attempts to estimate the causal effect of these programs by applying impact evaluation (IE) techniques. However, PES programs are complex instruments and IE methods cannot be directly applied without adjustments. Based on a systematic review of the literature, this article proposes a framework for the methodological process of designing an IE for PES schemes. It revises and discusses the methodological choices at each step of the process and proposes guidelines for practitioners. PMID:26910850
The drift diffusion model as the choice rule in reinforcement learning.
Pedersen, Mads Lund; Frank, Michael J; Biele, Guido
2017-08-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
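A minimal simulation sketch of the combined model (illustrative parameters; the paper's hierarchical Bayesian fitting is not shown): Q-learning in a two-armed bandit where each trial's drift rate is proportional to the Q-value difference, and choice plus response time come from Euler-stepping the diffusion to a boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(v, a=1.0, dt=1e-3, noise=1.0):
    """Simulate one drift diffusion trial: drift v, boundaries at 0 and a,
    unbiased start at a/2. Returns (choice, reaction time in seconds)."""
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t

# Q-learning with the DDM as choice rule: drift scales with the value difference.
alpha, scale, p_reward = 0.1, 2.0, (0.8, 0.2)
Q = np.zeros(2)
for trial in range(500):
    choice, rt = ddm_trial(v=scale * (Q[1] - Q[0]))  # upper boundary = option 1
    reward = float(rng.random() < p_reward[choice])
    Q[choice] += alpha * (reward - Q[choice])        # prediction-error update
print(Q)   # Q[0] should head toward ~0.8, Q[1] toward ~0.2
```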
Modeling the dynamics of choice.
Baum, William M; Davison, Michael
2009-06-01
A simple linear-operator model both describes and predicts the dynamics of choice that may underlie the matching relation. We measured inter-food choice within components of a schedule that presented seven different pairs of concurrent variable-interval schedules for 12 food deliveries each with no signals indicating which pair was in force. This measure of local choice was accurately described and predicted as obtained reinforcer sequences shifted it to favor one alternative or the other. The effect of a changeover delay was reflected in one parameter, the asymptote, whereas the effect of a difference in overall rate of food delivery was reflected in the other parameter, rate of approach to the asymptote. The model takes choice as a primary dependent variable, not derived by comparison between alternatives-an approach that agrees with the molar view of behaviour.
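A minimal sketch of a linear-operator updating rule of this kind (parameter values illustrative): after each food delivery, the log choice ratio moves a fixed fraction of the way toward an asymptote set by which alternative delivered food, so the asymptote and the approach rate are the model's two parameters:

```python
import numpy as np

def local_choice(deliveries, rate=0.35, asymptote=2.0, b0=0.0):
    """Linear-operator model: after each food delivery the log choice ratio b
    moves a fraction 'rate' toward +asymptote (left delivery) or -asymptote
    (right delivery). Returns the trajectory of b across deliveries."""
    b, path = b0, []
    for side in deliveries:              # +1 = left alternative, -1 = right
        target = asymptote * side
        b += rate * (target - b)         # one-step linear operator
        path.append(b)
    return np.array(path)

# 12 deliveries within a component, mostly from the left alternative:
print(local_choice([+1, +1, -1, +1, +1, +1, -1, +1, +1, +1, +1, -1]).round(2))
```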
Challenging Social Hierarchy: Playing with Oppositional Identities in Family Talk
ERIC Educational Resources Information Center
Bani-Shoraka, Helena
2008-01-01
This study examines how bilingual family members use language choice and language alternation as a local scheme of interpretation to distinguish different and often contesting social identities in interaction. It is argued that the playful creation of oppositional identities in interaction relieves the speakers from responsibility and creates a…
Book Selection, Collection Development, and Bounded Rationality.
ERIC Educational Resources Information Center
Schwartz, Charles A.
1989-01-01
Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The role of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…
A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)
Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criteria. To overcome this limitation, in this work we introduce the application of a multiple grid al...
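A minimal sketch of the CFL constraint referred to: for explicit open-channel schemes the stable time step is bounded by the cell size over the fastest shallow-water wave speed, |u| + sqrt(g h):

```python
import numpy as np

def cfl_time_step(u, h, dx, cfl=0.9, g=9.81):
    """Largest stable explicit time step under the CFL criterion for
    open-channel flow: dt <= cfl * dx / max(|u| + sqrt(g h))."""
    wave_speed = np.abs(u) + np.sqrt(g * h)
    return cfl * dx / wave_speed.max()

u = np.array([0.5, 1.0, 2.5])   # cell velocities (m/s)
h = np.array([1.0, 0.8, 0.3])   # flow depths (m)
print(cfl_time_step(u, h, dx=10.0))
```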
Decision making and preferences for acoustic signals in choice situations by female crickets.
Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias
2015-08-01
Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.
Influence of Context on Item Parameters in Forced-Choice Personality Assessments
ERIC Educational Resources Information Center
Lin, Yin; Brown, Anna
2017-01-01
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
The anatomy of choice: active inference and agency.
Friston, Karl; Schwartenbeck, Philipp; Fitzgerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J
2013-01-01
This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback-Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action, constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.
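A minimal sketch of the softmax choice rule with precision (inverse temperature) discussed here: as the precision γ rises, choice concentrates on the action with the lowest expected free energy (the G values below are illustrative):

```python
import numpy as np

def softmax_policy(G, gamma):
    """Softmax over negative expected free energy G with precision gamma:
    higher gamma concentrates choice on the best (lowest-G) action."""
    p = np.exp(-gamma * (G - G.min()))   # subtract min for numerical stability
    return p / p.sum()

G = np.array([1.0, 1.5, 3.0])            # expected free energy per action
for gamma in (0.5, 2.0, 8.0):
    print(gamma, softmax_policy(G, gamma).round(3))
```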
NASA Astrophysics Data System (ADS)
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2018-04-01
Appropriate choice of physics options among the many available parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary with geographical location, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble comprising 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we evaluate not only temperature but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No single ensemble member showed the best performance across all events, variables, and evaluation metrics. This study also found that the choice of planetary boundary layer (PBL) scheme had the largest influence, the radiation scheme a moderate influence, and the microphysics scheme the least influence on temperature simulations. The PBL and microphysics schemes were found to be more sensitive than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of the Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play a significant role, the Noah LSM showed better performance than the CLM4 and Noah-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although invariably region-specific, our results will be useful to WRF users investigating heatwave dynamics elsewhere.
NASA Astrophysics Data System (ADS)
Pandey, Gavendra; Sharan, Maithili
2018-01-01
Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters (σ's) are described for both stable and unstable conditions. These schemes differ in their use of (i) on-site measurements, to the extent available, (ii) formulations developed for other sites, and (iii) empirical relations. The performance of these schemes is evaluated in a previously developed IIT (Indian Institute of Technology) dispersion model against data from single and multiple releases conducted in the Fusion Field Trials at Dugway Proving Ground, Utah, in 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out for both stable and unstable conditions in the light of (i) peak/maximum concentrations and (ii) the overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB and NMSE. The analyses, based on selected statistical measures, indicate consistency between the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits a relatively better performance in predicting the peak as well as the lateral spread.
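For context, the σ's parameterize plume spread in the Gaussian dispersion formula; a minimal sketch with illustrative power-law coefficients (real schemes tie σ_y and σ_z to stability, wind-velocity fluctuation statistics and Lagrangian time scales, as the abstract notes):

```python
import numpy as np

def gaussian_plume(Q, U, x, y, z, H, ay=0.22, by=0.9, az=0.2, bz=0.85):
    """Ground-reflected Gaussian plume concentration. sigma_y and sigma_z
    follow illustrative power laws sigma = a * x**b; the coefficients here
    are placeholders, not a validated scheme."""
    sy, sz = ay * x**by, az * x**bz
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = np.exp(-(z - H)**2 / (2 * sz**2)) + np.exp(-(z + H)**2 / (2 * sz**2))
    return Q / (2 * np.pi * U * sy * sz) * lateral * vertical

# Centreline ground-level concentration 500 m downwind of a 10 m release:
print(gaussian_plume(Q=1.0, U=3.0, x=500.0, y=0.0, z=0.0, H=10.0))
```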
Zhang, BiTao; Pi, YouGuo; Luo, Ying
2012-09-01
A fractional order sliding mode control (FROSMC) scheme based on parameter auto-tuning for the velocity control of a permanent magnet synchronous motor (PMSM) is proposed in this paper. The control law of the proposed FROSMC scheme is designed according to the Lyapunov stability theorem. Based on the property of transferring energy in an adjustable manner in FROSMC, this paper shows that the chattering phenomenon of classic sliding mode control (SMC) is attenuated in the FROSMC system. A fuzzy logic inference scheme (FLIS) is utilized to obtain the gain of the switching control. Simulations and experiments demonstrate that the proposed FROSMC not only achieves better control performance with less chattering than integer order sliding mode control, but is also robust to external load disturbances and parameter variations. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Renormalization of QCD in the interpolating momentum subtraction scheme at three loops
NASA Astrophysics Data System (ADS)
Gracey, J. A.; Simms, R. M.
2018-04-01
We introduce a more general set of kinematic renormalization schemes than the original momentum subtraction schemes of Celmaster and Gonsalves. These new schemes will depend on a parameter ω , which tags the external momentum of one of the legs of the three-point vertex functions in QCD. In each of the three new schemes, we renormalize QCD in the Landau and maximal Abelian gauges and establish the three-loop renormalization group functions in each gauge. For an application, we evaluate two critical exponents at the Banks-Zaks fixed point and demonstrate that their values appear to be numerically scheme independent in a subrange of the conformal window.
Yang, Chui-Ping; Chu, Shih-I; Han, Siyuan
2004-03-19
We investigate the experimental feasibility of realizing quantum information transfer (QIT) and entanglement with SQUID qubits in a microwave cavity via dark states. Realistic system parameters are presented. Our results show that QIT and entanglement with two-SQUID qubits can be achieved with a high fidelity. The present scheme is tolerant to device parameter nonuniformity. We also show that the strong coupling limit can be achieved with SQUID qubits in a microwave cavity. Thus, cavity-SQUID systems provide a new way for production of nonclassical microwave source and quantum communication.
Integrability and Linear Stability of Nonlinear Waves
NASA Astrophysics Data System (ADS)
Degasperis, Antonio; Lombardo, Sara; Sommacal, Matteo
2018-03-01
It is well known that the linear stability of solutions of 1+1 partial differential equations which are integrable can be very efficiently investigated by means of spectral methods. We present here a direct construction of the eigenmodes of the linearized equation which makes use only of the associated Lax pair with no reference to spectral data and boundary conditions. This local construction is given in the general N × N matrix scheme so as to be applicable to a large class of integrable equations, including the multicomponent nonlinear Schrödinger system and the multiwave resonant interaction system. The analytical and numerical computations involved in this general approach are detailed as an example for N = 3 for the particular system of two coupled nonlinear Schrödinger equations in the defocusing, focusing and mixed regimes. The instabilities of the continuous wave solutions are fully discussed in the entire parameter space of their amplitudes and wave numbers. By defining and computing the spectrum in the complex plane of the spectral variable, the eigenfrequencies are explicitly expressed. According to their topological properties, the complete classification of these spectra in the parameter space is presented and graphically displayed. The continuous wave solutions are linearly unstable for a generic choice of the coupling constants.
Scheduling Future Water Supply Investments Under Uncertainty
NASA Astrophysics Data System (ADS)
Huskova, I.; Matrosov, E. S.; Harou, J. J.; Kasprzyk, J. R.; Reed, P. M.
2014-12-01
Uncertain hydrological impacts of climate change, population growth and institutional changes pose a major challenge to the planning of water supply systems. Planners seek optimal portfolios of supply and demand management schemes, as well as when to activate assets, whilst considering many system goals and plausible futures. Incorporating scheduling into the planning-under-uncertainty problem strongly increases its complexity. We investigate approaches to scheduling with many-objective heuristic search. We apply a multi-scenario many-objective scheduling approach to the Thames River basin water supply system planning problem in the UK. Decisions include which new supply and demand schemes to implement, at what capacity, and when. The impact of different system uncertainties on scheme implementation schedules is explored, i.e., how the choice of future scenarios affects the search process and its outcomes. The activation of schemes is influenced by the occurrence of extreme hydrological events in the ensemble of plausible scenarios, among other factors. The approach and results are compared with a previous study in which only the portfolio problem was addressed (without scheduling).
Equivalent ZF precoding scheme for downlink indoor MU-MIMO VLC systems
NASA Astrophysics Data System (ADS)
Fan, YangYu; Zhao, Qiong; Kang, BoChao; Deng, LiJun
2018-01-01
In indoor visible light communication (VLC) systems, the channels of the photo detectors (PDs) at one user are highly correlated, which motivates the choice of a spatial diversity model for individual users. In a spatial diversity model, the signals received by the PDs belonging to one user carry the same information and can be combined directly. Based on the above, we propose an equivalent zero-forcing (ZF) precoding scheme for multiple-user multiple-input multiple-output (MU-MIMO) VLC systems by transforming an indoor MU-MIMO VLC system into an indoor multiple-user multiple-input single-output (MU-MISO) VLC system through simple processing. The power constraints of the light emitting diodes (LEDs) are also taken into account. Comprehensive computer simulations in three scenarios indicate that our scheme not only reduces the computational complexity, but also guarantees the system performance. Furthermore, the proposed scheme does not require noise information when calculating the precoding weights, and places no restrictions on the numbers of APs and PDs.
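A minimal numpy sketch of the core step (toy channel values; the paper's handling of the LED power constraints is only crudely mimicked by a scaling): after each user's correlated PDs are combined into one equivalent output, zero-forcing precoding decouples the users:

```python
import numpy as np

rng = np.random.default_rng(0)

# Equivalent MU-MISO channel: each user's (highly correlated) PD channels are
# combined beforehand, leaving one row per user. Here, 4 LEDs serve 3 users.
H = rng.random((3, 4))                 # VLC gains are real and nonnegative

# Zero-forcing precoder: W = H^T (H H^T)^{-1} removes inter-user interference.
W = H.T @ np.linalg.inv(H @ H.T)

# Crude stand-in for LED power constraints: scale so drive signals stay bounded.
W /= np.abs(W).sum(axis=1).max()

print((H @ W).round(6))               # ~ scaled identity: users are decoupled
```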
Children's schemes for anticipating the validity of nets for solids
NASA Astrophysics Data System (ADS)
Wright, Vince; Smith, Ken
2017-09-01
There is growing acknowledgement of the importance of spatial abilities to student achievement across a broad range of domains and disciplines. Nets are one way to connect three-dimensional shapes and their two-dimensional representations and are a common focus of geometry curricula. Thirty-four students at year 6 (upper primary school) were interviewed on two occasions about their anticipation of whether or not given nets for the cube and the square-based pyramid would fold to form the target solid. Vergnaud's (Journal of Mathematical Behavior, 17(2), 167-181, 1998; Human Development, 52, 83-94, 2009) four characteristics of schemes were used as a theoretical lens to analyse the data. Successful schemes depended on the interaction of operational invariants, such as strategic choice of the base, rules for action, particularly rotation of shapes, and anticipations of composites of polygons in the net forming arrangements of faces in the solid. Inferences were rare. These data suggest that students need teacher support to make inferences, in order to create transferable schemes.
Playing quantum games by a scheme with pre- and post-selection
NASA Astrophysics Data System (ADS)
Weng, Guo-Fu; Yu, Yang
2016-01-01
We propose a scheme for playing quantum games in which the two players interact with each other. Through pre-selection, the two players can choose their initial states, and some dilemmas of the classical game may be removed by post-selection, which is particularly useful for cooperative games. We apply the proposal to both the BoS and Prisoners' dilemma games in cooperative situations. The examples show that the proposal guarantees a remarkably binding agreement between the two parties: any deviation during the game will be detected, and the game may be abnegated. The examples also show that the initial state in the cooperative game does not obstruct obtaining preferable payoffs via pre- and post-selection, which is not true of other schemes for implementing quantum games. We point out that one player can use the scheme to detect his opponent's choices if he holds an advantage in information theory and technology.
Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai
2016-01-01
This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and the dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared errors (RMSEs) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter θ is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID:26761209
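A minimal sketch of the SUE ingredient in which the dispersion parameter appears (route costs below are illustrative; the full two-stage GLS/SUE iteration is not reproduced): logit route-choice proportions p_r ∝ exp(-θ c_r), where larger θ concentrates flow on cheaper routes:

```python
import numpy as np

def logit_route_choice(costs, theta):
    """Logit route-choice proportions with dispersion parameter theta:
    p_r ∝ exp(-theta * c_r). Small theta (high perceived-cost dispersion)
    spreads flow evenly; large theta concentrates it on the cheapest route."""
    u = np.exp(-theta * (costs - costs.min()))   # shifted for stability
    return u / u.sum()

costs = np.array([10.0, 11.0, 14.0])   # travel costs on three routes of an OD pair
for theta in (0.1, 1.0, 5.0):
    print(theta, logit_route_choice(costs, theta).round(3))
```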
Sensitivity of the s-process nucleosynthesis in AGB stars to the overshoot model
NASA Astrophysics Data System (ADS)
Goriely, S.; Siess, L.
2018-01-01
Context. S-process elements are observed at the surface of low- and intermediate-mass stars. These observations can be explained empirically by the so-called partial mixing of protons scenario, leading to the incomplete operation of the CN cycle and a significant primary production of the neutron source. This scenario has been successful in qualitatively explaining the s-process enrichment in AGB stars. Even so, it remains difficult to describe, both physically and numerically, the mixing mechanisms taking place at the time of the third dredge-up between the convective envelope and the underlying C-rich radiative layer. Aims: We present new calculations of the s-process nucleosynthesis in AGB stars testing two different numerical implementations of chemical transport. These are based on a diffusion equation, which depends on the second derivative of the composition, and on a numerical algorithm where the transport of species depends linearly on the chemical gradient. Methods: The s-process nucleosynthesis resulting from these different mixing schemes is calculated with our stellar evolution code STAREVOL, which has been upgraded to include an extended s-process network of 411 nuclei. Our investigation focuses on a fiducial 2 M⊙, [Fe/H] = -0.5 model star, but also includes four additional stars of different masses and metallicities. Results: We show that for the same set of parameters, the linear mixing approach produces a much larger 13C pocket and consequently a substantially higher surface s-process enrichment than the diffusive prescription. Within the diffusive model, a quite extreme choice of parameters is required to account for surface s-process enrichments of 1-2 dex. Such extreme conditions cannot, however, be excluded at this stage. Conclusions: Both the diffusive and linear prescriptions of overshoot mixing are suited to describing the s-process nucleosynthesis in AGB stars, provided the profile of the diffusion coefficient below the convective envelope is carefully chosen. Both schemes give rise to relatively similar distributions of s-process elements, but depending on the parameters adopted, some differences may be obtained, in the element distribution and most of all in the level of surface enrichment.
Combining states without scale hierarchies with ordered parton showers
Fischer, Nadine; Prestel, Stefan
2017-09-12
Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This requires the development of a matrix-element correction scheme for complex phase spaces including ordering conditions, as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.
Tsai, Yi-Wen; Hu, Teh-Wei
2002-09-01
Taiwan's National Health Insurance Program (NHI) was implemented on March 1, 1995. This study analyzed the influence of the Case Payment method of reimbursement for inpatient care and of physician financial incentives on a woman's choice of primary cesarean delivery. Logistic regressions were used to analyze 11 788 first-time deliveries in a nonprofit hospital system between March 1, 1994, and February 29, 1996. After implementation of the NHI's Case Payment scheme, the likelihood that a woman would choose primary cesarean delivery increased four- to five-fold compared with the choice behavior of uninsured individuals prior to NHI (P < .0001). Out-of-pocket payment discourages the selection of primary cesarean delivery. No robust statistical relationship was found between physician financial incentives and delivery choice.
Using partial site aggregation to reduce bias in random utility travel cost models
NASA Astrophysics Data System (ADS)
Lupi, Frank; Feather, Peter M.
1998-12-01
We propose a "partial aggregation" strategy for defining the recreation sites that enter choice sets in random utility models. Under the proposal, the most popular sites and sites that will be the subject of policy analysis enter choice sets as individual sites, while remaining sites are aggregated into groups of similar sites. The scheme balances the desire to include all potential substitute sites in the choice sets against practical data and modeling constraints. Our analysis and empirical applications suggest that, unlike fully aggregate models, the partial aggregation approach reasonably approximates the results of a disaggregate model. The partial aggregation approach offers all of the data and computational advantages of models with aggregate sites but does not suffer from the same degree of bias as fully aggregate models.
Considerations of persistence and security in CHOICES, an object-oriented operating system
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Madany, Peter W.
1990-01-01
The current design of the CHOICES persistent object implementation is summarized, and research in progress is outlined. CHOICES is implemented as an object-oriented system, and persistent objects appear to simplify and unify many functions of the system. It is demonstrated that persistent data can be accessed through an object-oriented file system model as efficiently as by an existing optimized commercial file system. The object-oriented file system can be specialized to provide an object store for persistent objects. The problems that arise in building an efficient persistent object scheme in a 32-bit virtual address space that only uses paging are described. Despite its limitations, the solution presented allows quite large numbers of objects to be active simultaneously, and permits sharing and efficient method calls.
Nestly--a framework for running software with nested parameter choices and aggregating results.
McCoy, Connor O; Gallagher, Aaron; Hoffman, Noah G; Matsen, Frederick A
2013-02-01
The execution of a software application or pipeline using various combinations of parameters and inputs is a common task in bioinformatics. In the absence of a specialized tool to organize, streamline and formalize this process, scientists must frequently write complex scripts to perform these tasks. We present nestly, a Python package to facilitate running tools with nested combinations of parameters and inputs. nestly provides three components. First, a module to build nested directory structures corresponding to choices of parameters. Second, the nestrun script to run a given command using each set of parameter choices. Third, the nestagg script to aggregate results of the individual runs into a CSV file, as well as support for more complex aggregation. We also include a module for easily specifying nested dependencies for the SCons build tool, enabling incremental builds. Source, documentation and tutorial examples are available at http://github.com/fhcrc/nestly. nestly can be installed from the Python Package Index via pip; it is open source (MIT license).
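For readers unfamiliar with the pattern, the following sketch shows the core idea in plain Python: build a directory tree from the Cartesian product of parameter choices, writing each leaf's parameter set for a later run/aggregation step. It is illustrative only and deliberately does not use nestly's actual API; the parameter names and file name are invented.

```python
import itertools
import json
import os

params = {"alpha": [0.1, 0.5], "seed": [1, 2, 3]}   # hypothetical choices

keys = list(params)
for combo in itertools.product(*params.values()):
    # one nested directory per parameter combination, e.g. alpha=0.1/seed=2
    leaf = os.path.join(*[f"{k}={v}" for k, v in zip(keys, combo)])
    os.makedirs(leaf, exist_ok=True)
    with open(os.path.join(leaf, "control.json"), "w") as fh:
        json.dump(dict(zip(keys, combo)), fh)        # consumed by the run step
```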
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and a Gaussian mixture model (GMM) clustering technique for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
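For orientation, a minimal stochastic EnKF analysis step is sketched below. It assumes a linear observation operator H and perturbed observations, and deliberately omits the paper's grid-based localization and GMM clustering extensions; all dimensions and values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, Nx, Ny = 50, 100, 10                  # ensemble size, state dim, obs dim
X = rng.normal(size=(Nx, Ne))             # prior parameter ensemble
H = np.zeros((Ny, Nx))
H[np.arange(Ny), np.arange(Ny)] = 1.0     # observe the first Ny components
R = 0.1 * np.eye(Ny)                      # observation error covariance
y = rng.normal(size=Ny)                   # observations

A = X - X.mean(axis=1, keepdims=True)     # ensemble anomalies
S = H @ A
# Kalman gain from the sample covariance: K = P H^T (H P H^T + R)^-1
K = (A @ S.T) @ np.linalg.inv(S @ S.T + (Ne - 1) * R)
Y = y[:, None] + rng.multivariate_normal(np.zeros(Ny), R, Ne).T  # perturbed obs
Xa = X + K @ (Y - H @ X)                  # analysis (posterior) ensemble
```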
On the dynamics of some grid adaption schemes
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, Helen C.
1994-01-01
The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.
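As a concrete instance of the kind of scheme studied, here is a de Boor-style equidistribution step that redistributes nodes so a monitor function has equal integral between neighbouring nodes. The arc-length monitor and the 0.5 smoothing weight below are assumptions standing in for the family's free parameter.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(20 * (x - 0.5))                 # model solution with a sharp front
w = np.sqrt(1 + np.gradient(u, x) ** 2)     # arc-length monitor function
w = 0.5 * w + 0.5 * w.mean()                # smoothing; 0.5 is the free parameter

# Equidistribute: choose new nodes so the integral of w between
# neighbouring nodes is constant (invert the cumulative integral).
cdf = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
x_new = np.interp(np.linspace(0.0, cdf[-1], x.size), cdf, x)
```

Iterating such a map together with a PDE discretisation is what produces the coupled dynamics, including the spurious behaviour, that the paper analyses.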
NASA Astrophysics Data System (ADS)
Zhao, F.; Veldkamp, T.; Frieler, K.; Schewe, J.; Ostberg, S.; Willner, S. N.; Schauberger, B.; Gosling, S.; Mueller Schmied, H.; Portmann, F. T.; Leng, G.; Huang, M.; Liu, X.; Tang, Q.; Hanasaki, N.; Biemans, H.; Gerten, D.; Satoh, Y.; Pokhrel, Y. N.; Stacke, T.; Ciais, P.; Chang, J.; Ducharne, A.; Guimberteau, M.; Wada, Y.; Kim, H.; Yamazaki, D.
2017-12-01
Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
The Role of Intelligence and Feedback in Children's Strategy Competence
ERIC Educational Resources Information Center
Luwel, Koen; Foustana, Ageliki; Papadatos, Yiannis; Verschaffel, Lieven
2011-01-01
A test-intervention-test study was conducted investigating the role of intelligence on four parameters of strategy competence in the context of a numerosity judgment task. Moreover, the effectiveness of two feedback types on these four parameters was tested. In the two test sessions, the choice/no-choice method was used to assess the strategy…
Yoshida, Hiroaki; Kobayashi, Takayuki; Hayashi, Hidemitsu; Kinjo, Tomoyuki; Washizu, Hitoshi; Fukuzawa, Kenji
2014-07-01
A boundary scheme in the lattice Boltzmann method (LBM) for the convection-diffusion equation, which correctly realizes the internal boundary condition at the interface between two phases with different transport properties, is presented. The difficulty in satisfying the continuity of flux at the interface in a transient analysis, which is inherent in the conventional LBM, is overcome by modifying the collision operator and the streaming process of the LBM. An asymptotic analysis of the scheme is carried out in order to clarify the role played by the adjustable parameters involved in the scheme. As a result, the internal boundary condition is shown to be satisfied with second-order accuracy with respect to the lattice interval, if we assign appropriate values to the adjustable parameters. In addition, two specific problems are numerically analyzed, and comparison with the analytical solutions of the problems numerically validates the proposed scheme.
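For context, the baseline that such interface treatments modify is the standard BGK collision-and-streaming update. The sketch below implements that baseline for pure diffusion on a D1Q3 lattice in lattice units; it does not reproduce the paper's modified collision operator or the two-phase interface condition.

```python
import numpy as np

nx, tau, steps = 200, 0.8, 2000
w = np.array([4/6, 1/6, 1/6])           # D1Q3 weights (rest, +x, -x)
C = np.zeros(nx)
C[nx // 2] = 1.0                        # initial concentration spike
f = w[:, None] * C[None, :]             # start from equilibrium

for _ in range(steps):
    C = f.sum(axis=0)                   # macroscopic concentration
    feq = w[:, None] * C[None, :]       # equilibrium (zero advection)
    f += (feq - f) / tau                # BGK collision
    f[1] = np.roll(f[1], 1)             # streaming, periodic boundaries
    f[2] = np.roll(f[2], -1)

D = (tau - 0.5) / 3.0                   # diffusivity implied by tau (cs^2 = 1/3)
print(C.sum(), D)                       # total mass is conserved
```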
Direct adaptive control of manipulators in Cartesian space
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
A new adaptive-control scheme for direct control of the manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of an auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.
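To make "adaptation without parameter estimation" concrete, here is the classical scalar Lyapunov-rule model-reference adaptive controller. This is a textbook sketch for comparison only, not the article's multivariable Cartesian-space scheme; the plant and reference-model constants are arbitrary.

```python
import numpy as np

dt, T, gamma = 1e-3, 20.0, 2.0
a, b = -1.0, 0.5            # "unknown" plant: x' = a*x + b*u (sign of b known)
am, bm = -4.0, 4.0          # stable reference model: xm' = am*xm + bm*r
x = xm = 0.0
th1 = th2 = 0.0             # adaptive gains in the control law u = th1*r + th2*x

for k in range(int(T / dt)):
    r = np.sign(np.sin(0.5 * k * dt)) or 1.0   # square-wave reference
    e = x - xm                                  # tracking error
    u = th1 * r + th2 * x
    th1 -= gamma * e * r * dt                   # Lyapunov-rule updates
    th2 -= gamma * e * x * dt                   # (valid for b > 0)
    x += (a * x + b * u) * dt
    xm += (am * xm + bm * r) * dt

print(th1, th2)   # ideal matching gains here: bm/b = 8 and (am - a)/b = -6
```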
NASA Astrophysics Data System (ADS)
Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.
2017-08-01
Nowadays optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for the optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction in computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to a factor of 4. It is also established that some parameters have a major impact on the results of the modelling.
Yang, Ben; Qian, Yun; Berg, Larry K.; ...
2016-07-21
We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multidimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. Lastly, the relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.
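The sampling-plus-generalized-linear-model workflow can be illustrated with a stand-in model: sample the scaled parameter space, fit a linear response, and rank parameters by explained variance. The parameter names echo the abstract, but the response function and all coefficients below are invented; the real study runs WRF for each sample.

```python
import numpy as np

rng = np.random.default_rng(1)
names = ["tke_dissipation", "prandtl", "length_scale", "roughness", "von_karman"]
n, p = 256, len(names)
X = rng.uniform(-1.0, 1.0, size=(n, p))     # scaled parameter samples
# Stand-in "model output" (think turbine-height wind speed).
y = 2.0 * X[:, 0] - 1.2 * X[:, 1] + 0.3 * X[:, 2] + 0.1 * rng.normal(size=n)

A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)    # linear-model fit
contrib = beta[1:] ** 2 * X.var(axis=0)         # first-order variance shares
for name, v in sorted(zip(names, contrib), key=lambda t: -t[1]):
    print(f"{name:16s} {v:.3f}")
```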
Development of Implicit Methods in CFD NASA Ames Research Center 1970's - 1980's
NASA Technical Reports Server (NTRS)
Pulliam, Thomas H.
2010-01-01
The focus here is on the early development (mid 1970's-1980's) at NASA Ames Research Center of implicit methods in Computational Fluid Dynamics (CFD). A class of implicit finite difference schemes of the Beam and Warming approximate factorization type will be addressed. The emphasis will be on the Euler equations. A review of material pertinent to the solution of the Euler equations within the framework of implicit methods will be presented. The eigensystem of the equations will be used extensively in developing a framework for various methods applied to the Euler equations. The development and analysis of various aspects of this class of schemes will be given along with the motivations behind many of the choices. Various acceleration and efficiency modifications such as matrix reduction, diagonalization and flux split schemes will be presented.
Spin-orbit torque induced magnetic vortex polarity reversal utilizing spin-Hall effect
NASA Astrophysics Data System (ADS)
Li, Cheng; Cai, Li; Liu, Baojun; Yang, Xiaokuo; Cui, Huanqing; Wang, Sen; Wei, Bo
2018-05-01
We propose an effective magnetic vortex polarity reversal scheme that makes use of the spin-orbit torque introduced by the spin-Hall effect in heavy-metal/ferromagnet multilayer structures, which can produce subnanosecond polarity reversal without endangering the structural stability. Micromagnetic simulations are performed to investigate the spin-Hall-effect-driven dynamical evolution of the magnetic vortex. The mechanism of magnetic vortex polarity reversal is uncovered by a quantitative analysis of the exchange energy density, the magnetostatic energy density, and their total. The simulation results indicate that the magnetic vortex polarity is reversed through the nucleation-annihilation process of a topological vortex-antivortex pair. This scheme is an attractive option for ultra-fast magnetic vortex polarity reversal and can serve as a guideline for the choice of polarity reversal scheme in vortex-based random access memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, Larry K.; Gustafson, William I.; Kassianov, Evgueni I.
A new treatment for shallow clouds has been introduced into the Weather Research and Forecasting (WRF) model. The new scheme, called the cumulus potential (CuP) scheme, replaces the ad hoc trigger function used in the Kain-Fritsch cumulus parameterization with a trigger function related to the distribution of temperature and humidity in the convective boundary layer via probability density functions (PDFs). An additional modification to the default version of WRF is the computation of a cumulus cloud fraction based on the time scales relevant for shallow cumuli. Results from three case studies over the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) site in north central Oklahoma are presented. These days were selected because of the presence of shallow cumuli over the ARM site. The modified version of WRF does a much better job predicting the cloud fraction and the downwelling shortwave irradiance than control simulations utilizing the default Kain-Fritsch scheme. The modified scheme includes a number of additional free parameters, including the number and size of bins used to define the PDF, the minimum frequency of a bin within the PDF before that bin is considered for shallow clouds to form, and the critical cumulative frequency of bins required to trigger deep convection. A series of tests were undertaken to evaluate the sensitivity of the simulations to these parameters. Overall, the scheme was found to be relatively insensitive to each of the parameters.
Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.
2002-01-01
The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid, low-frequency, high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. Low frequencies (1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω-2 displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles together with other soil parameters are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the over consolidation ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohmi, K.
In recent high luminosity colliders, the finite crossing angle scheme has become popular as a way to multiply luminosity in multi-bunch or long-bunch operation. The success of KEKB showed that the finite crossing angle scheme posed no obstacle to achieving a beam-beam parameter of up to 0.05. The authors have studied the beam-beam interactions with and without crossing angle toward higher luminosity. They discuss how the crossing angle affects the beam-beam parameter and luminosity in the present KEK B factory (KEKB) using computer simulations.
NASA Astrophysics Data System (ADS)
Bower, Keith; Choularton, Tom; Latham, John; Sahraei, Jalil; Salter, Stephen
2006-11-01
A simplified version of the model of marine stratocumulus clouds developed by Bower, Jones and Choularton [Bower, K.N., Jones, A., and Choularton, T.W., 1999. A modeling study of aerosol processing by stratocumulus clouds and its impact on GCM parameterisations of cloud and aerosol. Atmospheric Research, Vol. 50, Nos. 3-4, The Great Dun Fell Experiment, 1995-special issue, 317-344.] was used to examine the sensitivity of the albedo-enhancement global warming mitigation scheme proposed by Latham [Latham, J., 1990. Control of global warming? Nature 347, 339-340; Latham, J., 2002. Amelioration of global warming by controlled enhancement of the albedo and longevity of low-level maritime clouds. Atmos. Sci. Letters (doi:10.1006/Asle.2002.0048).] to the cloud and environmental aerosol characteristics, as well as those of the seawater aerosol of salt mass ms and number concentration ΔN which, under the scheme, are advertently introduced into the clouds. Values of albedo change ΔA and droplet number concentration Nd were calculated for a wide range of values of ms, ΔN, updraught speed W, cloud thickness ΔZ and cloud-base temperature TB, for three measured aerosol spectra corresponding to ambient air of negligible, moderate and high levels of pollution. Our choices of parameter value ranges were determined by the extent of their applicability to the mitigation scheme, whose current formulation is still somewhat preliminary, thus rendering unwarranted in this study the utilisation of refinements incorporated into other stratocumulus models. In agreement with earlier studies: (1) ΔA was found to be very sensitive to ΔN and (within certain constraints) insensitive to changes in ms, W, ΔZ and TB; (2) ΔA was greatest for clouds formed in pure air and least for highly polluted air. In many situations considered to be within the ambit of the mitigation scheme, the calculated ΔA values exceeded those estimated by earlier workers as being necessary to produce a cooling sufficient to compensate, globally, for the warming resulting from a doubling of the atmospheric carbon dioxide concentration. Our calculations provide quantitative support for the physical viability of the mitigation scheme and offer new insights into its technological requirements.
NASA Astrophysics Data System (ADS)
Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.
2018-06-01
The ability to predict the evolution of crystallographic texture during hot work of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches to the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.
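The first (flow-curve) error metric amounts to least-squares fitting of constitutive parameters. The sketch below fits a toy Voce hardening law in place of a VPSC run; the parameter names, values and synthetic data are assumptions purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

eps = np.linspace(0.0, 0.5, 50)

def voce(p, e):
    tau0, tau1, theta1 = p      # hypothetical Voce hardening parameters
    return tau0 + tau1 * (1.0 - np.exp(-theta1 * e / tau1))

truth = np.array([80.0, 40.0, 600.0])
data = voce(truth, eps) + np.random.default_rng(2).normal(0.0, 1.0, eps.size)

# Minimize the flow-curve misfit, mirroring the first optimization scheme.
fit = least_squares(lambda p: voce(p, eps) - data, x0=[50.0, 20.0, 300.0])
print(fit.x)                    # recovered hardening parameters
```

The texture-based metric works the same way, with the residual replaced by a distance between simulated and measured orientation distribution functions.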
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Sheng-Quan; Wu, Xing-Gang; Brodsky, Stanley J.
We present improved perturbative QCD (pQCD) predictions for Higgs boson hadroproduction at the LHC by applying the principle of maximum conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale μ_r of the QCD coupling α_s(μ_r) is the Higgs mass and then to vary this choice over the range m_H/2 < μ_r < 2m_H in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the nonconformal β terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where a pQCD series has large higher-order contributions, as is the case for Higgs boson hadroproduction. Furthermore, this ad hoc choice of scale and range gives pQCD predictions which depend on the renormalization scheme being used, in contradiction to basic RG principles. In contrast, after applying the PMC, we obtain next-to-next-to-leading-order RG-resummed pQCD predictions for Higgs boson hadroproduction which are renormalization-scheme independent and have minimal sensitivity to the choice of the initial renormalization scale. Taking m_H = 125 GeV, the PMC predictions for the pp → HX Higgs inclusive hadroproduction cross sections for various LHC center-of-mass energies are σ_Incl|_7 TeV = 21.21 +1.36/-1.32 pb, σ_Incl|_8 TeV = 27.37 +1.65/-1.59 pb, and σ_Incl|_13 TeV = 65.72 +3.46/-3.0 pb. We also predict the fiducial cross section σ_fid(pp → H → γγ): σ_fid|_7 TeV = 30.1 +2.3/-2.2 fb, σ_fid|_8 TeV = 38.3 +2.9/-2.8 fb, and σ_fid|_13 TeV = 85.8 +5.7/-5.3 fb. The error limits in these predictions include the small residual high-order renormalization-scale dependence plus the uncertainty from the factorization scale. The PMC predictions show better agreement with the ATLAS measurements than the LHC Higgs Cross Section Working Group predictions, which are based on conventional renormalization-scale setting.
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
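The flavor of the problem can be reproduced in a few lines: a fixed-step explicit Euler integration of a toy one-bucket model distorts, and for some parameter values destabilizes, the response that an adaptive, error-controlled reference solver computes smoothly. This is exactly the kind of artifact that contaminates calibration sweeps; the bucket model and step sizes are assumptions, not the paper's test cases.

```python
import numpy as np
from scipy.integrate import solve_ivp

P, S0, T, dt = 2.0, 0.0, 10.0, 1.0      # inflow, initial storage, horizon, step

def explicit_euler(K):                  # fixed-step explicit Euler
    S = S0
    for _ in range(int(T / dt)):
        S += dt * (P - K * S * S)       # dS/dt = P - K*S^2
    return S

def reference(K):                       # adaptive, error-controlled solver
    sol = solve_ivp(lambda t, S: P - K * S**2, (0.0, T), [S0], rtol=1e-8)
    return sol.y[0, -1]

for K in [0.25, 0.5, 1.0, 2.0]:         # sweep as in a calibration exercise
    print(K, explicit_euler(K), reference(K))
```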
Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction
NASA Astrophysics Data System (ADS)
Fukushima, H.; Toratani, M.
1997-07-01
The paper first exhibits the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance, especially in the shorter-wavelength region. This suggests the presence of spectrally dependent absorption, which was disregarded in past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol is developed that relates the aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm. Then, as a modification to a standard CZCS atmospheric correction algorithm (the NASA standard algorithm), a scheme is proposed which estimates pixel-wise aerosol optical thickness and, in turn, ωA. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of the aerosol scattering phase function. The scheme is combined with the standard algorithm, performing atmospheric correction exactly as the standard version with a fixed Angstrom coefficient except where the presence of Asian dust aerosol is detected by a lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene with parameter values for the spectral dependence of ωA first determined statistically and then optimized for selected pixels. Analysis suggests that the parameter values depend on the Angstrom coefficient assumed for the standard algorithm, which at the same time defines the spatial extent of the area to which the Asian dust scheme is applied. The algorithm was also tested on a Saharan dust scene, showing the relevance of the scheme but with different parameter settings. Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates most compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.
Active Inference, Epistemic Value, and Vicarious Trial and Error
ERIC Educational Resources Information Center
Pezzulo, Giovanni; Cartoni, Emilio; Rigoli, Francesco; Pio-Lopez, Léo; Friston, Karl
2016-01-01
Balancing habitual and deliberate forms of choice entails a comparison of their respective merits--the former being faster but inflexible, and the latter slower but more versatile. Here, we show that arbitration between these two forms of control can be derived from first principles within an Active Inference scheme. We illustrate our arguments…
Identity Bargaining: A Policy Systems Research Model of Career Development.
ERIC Educational Resources Information Center
Slawski, Carl
A detailed, general and comprehensive accounting scheme is presented, consisting of nine stages of career development, three major sets of elements contributing to career choice (in terms of personal, cultural and situational roles), and 20 hypotheses relating the separate elements. Implicit in the model is a novel procedure and method for…
Women's Declining Employment with Access to Higher Education: Issues and Challenges
ERIC Educational Resources Information Center
Sangar, Sunita
2014-01-01
Access to higher education opened up avenues for more women to enter decent employment, contributing to the national economy. Government policies/schemes played a significant role in improving this key indicator of women's empowerment. This access also had an impact on their enrolment and choice of subjects but was accompanied by several…
Power Peaking Effect of OTTO Fuel Scheme Pebble Bed Reactor
NASA Astrophysics Data System (ADS)
Setiadipura, T.; Suwoto; Zuhair; Bakhri, S.; Sunaryo, G. R.
2018-02-01
The Pebble Bed Reactor (PBR) type of High Temperature Gas-cooled Reactor (HTGR) is a very interesting nuclear reactor design for meeting growing electricity and heat demand with superior passive safety features. Efforts to bring the PBR design to market can be strengthened by simplifying its system with the once-through-then-out (OTTO) cycle PBR, in which each pebble fuel element passes through the core only once. An important challenge in the OTTO fuel scheme is the power peaking effect, which limits the maximum nominal power or burnup of the design. A parametric survey is performed in this study to investigate the contribution of different design parameters to the power peaking effect of an OTTO-cycle PBR. The PEBBED code is utilized to perform equilibrium PBR core analyses for the different design parameters and fuel schemes. The parameters include the core diameter, height-per-diameter ratio (H/D), power density, and core nominal power. Results of this study show that the effects of core diameter and H/D are stronger than those of power density and nominal core power. These results may provide important guidance for the design optimization of OTTO fuel scheme PBRs.
NASA Astrophysics Data System (ADS)
Kamata, S.
2017-12-01
Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.
Energy levels scheme simulation of divalent cobalt doped bismuth germanate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreici, Emiliana-Laura; Petkova, Petya; Avram, Nicolae M.
The aim of this paper is to simulate the energy levels scheme for bismuth germanate (BGO) doped with divalent cobalt, in order to give a reliable explanation of the spectral experimental data. Within semiempirical crystal field theory, we first modeled the Crystal Field Parameters (CFPs) of the BGO:Co2+ system in the frame of the Exchange Charge Model (ECM), with the actual site symmetry of the impurity ions after doping. The values of the CFPs depend on the geometry of the doped host matrix and on the parameter G of the ECM. We first optimized the geometry of the undoped BGO host matrix and afterwards that of BGO doped with divalent cobalt. The charge effects of the ligands and the covalent bonding between cobalt cations and oxygen anions were also taken into account in the cluster approach. With the obtained values of the CFPs, we simulated the energy levels scheme of the cobalt ions by diagonalizing the Hamiltonian matrix of the doped crystal. The calculated energy levels and the estimated Racah parameters B and C were compared with the experimental spectroscopic data and discussed. The comparison shows quite satisfactory agreement, which justifies the model and simulation schemes used for the title system.
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1995-01-01
The basic design considerations for perspective Air Traffic Control displays are described. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. The MVPS system is based on indirect manipulation of the viewing parameters. Requests for changes in the viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the new and the old viewing parameter setting, a feature which contributes to preventing spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression of dealing with an actually existing physical system rather than an abstract computer-generated scene. Current, ongoing efforts deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the Air Traffic Control scene can be viewed for a given traffic situation.
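One possible realization of such a "slewing" transition is sketched below: the viewing parameter follows the integral of an exponentially damped, sinusoidal velocity profile, normalized so the motion lands exactly on the target value. The damping and frequency constants are assumptions, not the report's actual settings.

```python
import numpy as np

def slew(v0, v1, duration, tau=0.4, omega=8.0, n=2001):
    """Move a viewing parameter from v0 to v1 along an exponentially
    damped, sinusoidal velocity profile (tau and omega are assumed)."""
    ts = np.linspace(0.0, duration, n)
    vel = np.exp(-ts / tau) * np.sin(omega * ts)   # damped sinusoidal velocity
    pos = np.cumsum(vel) * (ts[1] - ts[0])         # integrate the velocity
    pos /= pos[-1]                                 # normalize: end exactly at v1
    return ts, v0 + (v1 - v0) * pos

t, zoom = slew(100.0, 250.0, 2.0)   # e.g. a camera-range transition over 2 s
```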
An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Evans, Katherine
2018-03-01
Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC that requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of the MLMC in the assessment can be outstanding. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.
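The core MLMC trick, writing E[f_fine] = E[f_coarse] + E[f_fine - f_coarse] and spending most samples on the cheap level, fits in a few lines. The toy function below stands in for the subsurface simulator, with a step parameter h mimicking model fidelity; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x, h):
    # Toy "simulator": the h*cos term plays the role of an O(h)
    # discretization bias that shrinks as the grid is refined.
    return np.sin(x) + h * np.cos(3 * x)

x_coarse = rng.normal(size=100_000)     # many cheap coarse-level samples
x_fine = rng.normal(size=1_000)         # few expensive fine-level samples

coarse_mean = model(x_coarse, h=0.5).mean()
# The level correction uses the SAME inputs at both fidelities (coupling),
# so its variance -- and the number of fine runs needed -- stays small.
correction = (model(x_fine, h=0.05) - model(x_fine, h=0.5)).mean()
print(coarse_mean + correction)         # two-level MLMC estimate of E[f]
```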
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khare, V.; Fitz, D.E.; Kouri, D.J.
1980-09-15
The effect of phase choice and partial wave parameter choice on CS and IOS inelastic degeneracy averaged differential cross sections is studied. An approximate simplified CS scattering amplitude for l-bar = (l' + l)/2 is derived and is shown to have a form which closely resembles the McGuire-Kouri scattering amplitude for odd Δj transitions and reduces to it for even Δj transitions. The choice of phase in the CS wave function is shown to result in different approximations which yield significantly different shapes for the degeneracy averaged differential cross section. Time reversal symmetry arguments are employed to select the proper phase choice. IOS calculations of the degeneracy averaged differential cross sections of He-CO, He-Cl and Ne-HD using l-bar = (l + l')/2 and the phase choice which ensures proper time reversal symmetry are found to correct the phase disagreement which was previously noted for odd Δj transitions using l-bar = l or l' and either the time reversal phase or other phase choices.
Exclusive queueing model including the choice of service windows
NASA Astrophysics Data System (ADS)
Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro
2018-01-01
In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We devised a logit-based choice algorithm for agents considering the numbers of agents at, and the distances to, all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems including the choice of service windows and can be employed to optimize facility design and floor management.
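The logit choice rule described above can be sketched directly: each window is scored by a weighted sum of queue length and distance, and the agent samples from softmax probabilities. The weight names beta_n and beta_d and all numbers below are assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)
queue_len = np.array([3, 1, 4])         # agents already queued at each window
distance = np.array([5.0, 9.0, 7.0])    # walking distance (cells) to each window
beta_n, beta_d = 0.8, 0.3               # preference weights (assumed names)

utility = -(beta_n * queue_len + beta_d * distance)
prob = np.exp(utility - utility.max())
prob /= prob.sum()                      # logit (softmax) choice probabilities
window = rng.choice(len(prob), p=prob)  # sampled window choice for one agent
```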
A PREFERENCE-OPPORTUNITY-CHOICE FRAMEWORK WITH APPLICATIONS TO INTERGROUP FRIENDSHIP
Zeng, Zhen; Xie, Yu
2009-01-01
A longstanding objective of friendship research is to identify the effects of personal preference and structural opportunity on intergroup friendship choice. Although past studies have used various methods to separate preference from opportunity, researchers have not yet systematically compared the properties and implications of these methods. We put forward a general framework for discrete choice, where choice probability is specified as proportional to the product of preference and opportunity. To implement this framework, we propose a modification to the conditional logit model for estimating preference parameters free from the influence of opportunity structure. We then compare our approach to several alternative methods for separating preference and opportunity used in the friendship choice literature. As an empirical example, we test hypotheses of homophily and status asymmetry in friendship choice using data from the National Longitudinal Study of Adolescent Health. The example also demonstrates the approach of conducting a sensitivity analysis to examine how parameter estimates vary by specification of the opportunity structure. PMID:19569394
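A hedged formalization of the framework's core specification, in generic notation (ours, not necessarily the authors'): with opportunity weights entering the conditional logit multiplicatively, preference parameters β are estimated net of the opportunity structure.

```latex
% Choice probability proportional to preference times opportunity:
% O_{ij} is the opportunity for individual i to meet alternative j,
% C_i is i's choice set, and ln O_{ij} acts as a fixed offset term.
P(Y_i = j) \;=\;
  \frac{O_{ij}\,\exp\!\big(x_{ij}^{\top}\beta\big)}
       {\sum_{k \in C_i} O_{ik}\,\exp\!\big(x_{ik}^{\top}\beta\big)}
```

Varying the specification of O_{ij} and re-estimating β is then the sensitivity analysis the abstract describes.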
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief (TB/FB) tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs. Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions differ from those for choice predictions yet still reflect second-order perspective taking.
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both the discrete modes and the continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM-based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM-based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on a two-tank hybrid system and a switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM-based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In both the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal feedback scheme and universal time scaling for Hamiltonian parameter estimation.
Yuan, Haidong; Fung, Chi-Hang Fred
2015-09-11
Time is a valuable resource and it is expected that a longer time period should lead to better precision in Hamiltonian parameter estimation. However, recent studies in quantum metrology have shown that in certain cases more time may even lead to worse estimations, which puts this intuition into question. In this Letter we show that by including feedback controls this intuition can be restored. By deriving asymptotically optimal feedback controls we quantify the maximal improvement feedback controls can provide in Hamiltonian parameter estimation and show a universal time scaling for the precision limit under the optimal feedback scheme. Our study reveals an intriguing connection between noncommutativity in the dynamics and the gain of feedback controls in Hamiltonian parameter estimation.
Scheme for quantum state manipulation in coupled cavities
NASA Astrophysics Data System (ADS)
Lin, Jin-Zhong
By controlling the parameters of the system, an effective interaction between atoms in different cavities is achieved. Based on this interaction, a scheme to generate three-atom Greenberger-Horne-Zeilinger (GHZ) states in coupled cavities is proposed. Spontaneous emission of the excited states and decay of the cavity modes can be suppressed efficiently. In addition, the scheme is robust against variation of the hopping rate between cavities.
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to studying effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and to infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, estimation of the model relies on integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves them with a fixed step size. The second scheme uses a dedicated DDE solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme on parameter estimation and Bayesian model selection, we performed simulations of local field potentials using, first, a simple model comprising two regions and, second, a more complex model comprising six regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. The performances of the two integration schemes were then compared directly by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. Fitting to empirical data showed that the models systematically achieved higher accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
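The difference between the two approaches is easiest to see on a scalar delay equation: a fixed-step Euler that keeps an explicit history buffer versus the same step with the delay ignored. Neither is DCM's actual integrator; the toy below only isolates the effect of handling conduction delays explicitly, with all constants assumed.

```python
import numpy as np

dt, T, delay = 0.01, 10.0, 1.0
n, lag = int(T / dt), int(delay / dt)
x_dde = np.ones(n)                       # Euler with an explicit history buffer
x_ode = np.ones(n)                       # same step size, delay ignored

for k in range(1, n):
    past = x_dde[k - 1 - lag] if k - 1 >= lag else 1.0  # constant pre-history
    x_dde[k] = x_dde[k - 1] - dt * past                 # x'(t) = -x(t - delay)
    x_ode[k] = x_ode[k - 1] - dt * x_ode[k - 1]         # x'(t) = -x(t)

print(x_dde[-1], x_ode[-1])              # oscillatory decay vs plain decay
```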
Grid-free density functional calculations on periodic systems.
Varga, Stefan
2007-09-21
A density fitting scheme is applied to the exchange part of the Kohn-Sham potential matrix in a grid-free local density approximation for infinite systems with translational periodicity. It is shown that within this approach the computational demands for the exchange part scale in the same way as for the Coulomb part. The efficiency of the scheme is demonstrated on a model infinite polymer chain. For simplicity, only the implementation with the Dirac-Slater Xα exchange functional is presented. Several choices of auxiliary basis set expansion coefficients were tested with both the Coulomb and the overlap metric. Their effectiveness is discussed also in terms of robustness and norm preservation.
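In generic notation (assumed here, not necessarily the paper's), density fitting expands a density in an auxiliary basis and fixes the coefficients by minimizing the fitting residual in a chosen metric:

```latex
% Auxiliary-basis expansion and metric-dependent coefficients:
\tilde\rho(\mathbf{r}) = \sum_{\mu} c_{\mu}\,\chi_{\mu}(\mathbf{r}), \qquad
\mathbf{c} = \mathbf{M}^{-1}\mathbf{v}, \quad
M_{\mu\nu} = \langle \chi_{\mu} \,|\, \hat{m} \,|\, \chi_{\nu} \rangle, \quad
v_{\mu} = \langle \chi_{\mu} \,|\, \hat{m} \,|\, \rho \rangle
```

with the metric operator m̂ = 1 giving the overlap metric and m̂ = r12^-1 the Coulomb metric mentioned in the abstract.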
Effects of nutrition label format and product assortment on the healthfulness of food choice.
Aschemann-Witzel, Jessica; Grunert, Klaus G; van Trijp, Hans C M; Bialkova, Svetlana; Raats, Monique M; Hodgkins, Charo; Wasowicz-Kirylo, Grazyna; Koenigstorfer, Joerg
2013-12-01
This study aims to find out whether front-of-pack nutrition label formats influence the healthfulness of consumers' food choices and important predictors of healthful choices, depending on the size of the choice set that is made available to consumers. The predictors explored were health motivation and perceived capability of making healthful choices. One thousand German and Polish consumers participated in the study that manipulated the format of nutrition labels. All labels referred to the content of calories and four negative nutrients and were presented on savoury and sweet snacks. The different formats included the percentage of guideline daily amount, colour coding schemes, and text describing low, medium and high content of each nutrient. Participants first chose from a set of 10 products and then from a set of 20 products, which was, on average, more healthful than the first choice set. The results showed that food choices were more healthful in the extended 20-product (vs. 10-product) choice set and that this effect is stronger than a random choice would produce. The formats colour coding and texts, particularly colour coding in Germany, increased the healthfulness of product choices when consumers were asked to choose a healthful product, but not when they were asked to choose according to their preferences. The formats did not influence consumers' motivation to choose healthful foods. Colour coding, however, increased consumers' perceived capability of making healthful choices. While the results revealed no consistent differences in the effects between the formats, they indicate that manipulating choice sets by including healthier options is an effective strategy to increase the healthfulness of food choices. Copyright © 2013 Elsevier Ltd. All rights reserved.
Improved Scheme of Modified Gaussian Deconvolution for Reflectance Spectra of Lunar Soils
NASA Technical Reports Server (NTRS)
Hiroi, T.; Pieters, C. M.; Noble, S. K.
2000-01-01
In our continuing effort for deconvolving reflectance spectra of lunar soils using the modified Gaussian model, a new scheme has been developed, including a new form of continuum. All the parameters are optimized with certain constraints.
NMRPipe: a multidimensional spectral processing system based on UNIX pipes.
Delaglio, F; Grzesiek, S; Vuister, G W; Zhu, G; Pfeifer, J; Bax, A
1995-11-01
The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
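To make the pipeline idea concrete, here is a minimal Python sketch (not the NMRPipe API; all stage names are hypothetical) in which each processing stage consumes a stream of data blocks and yields transformed blocks, so stages compose exactly like programs chained with '|' in a shell script.

```python
# Minimal sketch of a stream-of-blocks pipeline in the spirit described above.
# Each stage is a generator: it pulls slices from the upstream stage and
# yields processed slices downstream. Stage names are illustrative only.
import numpy as np

def read_planes(n_planes, n_points):
    """Source stage: emit synthetic 1D slices of a data set."""
    for _ in range(n_planes):
        yield np.random.randn(n_points)

def apodize(stream, lb=0.5):
    """Apply a simple exponential window to each slice."""
    for fid in stream:
        t = np.arange(len(fid))
        yield fid * np.exp(-lb * t / len(fid))

def fourier_transform(stream):
    """Fourier-transform each slice."""
    for fid in stream:
        yield np.fft.fft(fid)

# Compose the stages as one would chain programs with UNIX pipes.
pipeline = fourier_transform(apodize(read_planes(4, 256)))
for spectrum in pipeline:
    print(spectrum[:3])
```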
Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei
2016-01-29
In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal route choice for the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) in dynamic clustering for data exchange is proposed, and a selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering the dynamic clustering and mobility of MSs, which can effectively balance the total energy consumption during network activities. Considering the different resources available to the member nodes and the sink node, the session key between a cluster head and the MS is established by a modified ECC encryption with Diffie-Hellman key exchange (ECDH) algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. Analysis of data storage security, data transfer security, and the dynamic key management mechanism shows that the proposed scheme improves the resilience of the network's key management system while satisfying high connectivity and storage efficiency.
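A hedged sketch of the binary (bivariate) symmetric polynomial idea mentioned above: with f(x, y) = f(y, x) over a prime field, a node u is preloaded with the share f(u, ·), and any two nodes u and v can independently derive the same pairwise key f(u, v) = f(v, u). The modulus and coefficients below are illustrative, not the paper's parameters.

```python
# Pairwise key derivation from a symmetric bivariate polynomial (illustrative).
P = 2**31 - 1  # prime modulus (placeholder)

# Symmetric coefficient matrix: a[i][j] == a[j][i] guarantees f(x,y) == f(y,x).
a = [[5, 7, 11],
     [7, 3, 2],
     [11, 2, 9]]

def f(x, y):
    """Evaluate the symmetric bivariate polynomial mod P."""
    return sum(a[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(3) for j in range(3)) % P

u, v = 1001, 2002          # node identifiers
assert f(u, v) == f(v, u)  # both ends derive the same pairwise key
print("shared key:", f(u, v))
```

The symmetry of the coefficient matrix is what lets both ends compute the key without any message exchange, which suits resource-limited member nodes.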
A classification scheme for edge-localized modes based on their probability distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Max Planck Institute for Plasma Physics, D-85748 Garching; Hornung, G.
We present here an automated classification scheme which is particularly well suited to scenarios where the parameters have significant uncertainties or are stochastic quantities. To this end, the parameters are modeled with probability distributions in a metric space and classification is conducted using the notion of nearest neighbors. The presented framework is then applied to the classification of type I and type III edge-localized modes (ELMs) from a set of carbon-wall plasmas at JET. This provides a fast, standardized classification of ELM types which is expected to significantly reduce the effort of ELM experts in identifying ELM types. Further, the classification scheme is general and can be applied to various other plasma phenomena as well.
Patient choice in opt-in, active choice, and opt-out HIV screening: randomized clinical trial.
Montoy, Juan Carlos C; Dow, William H; Kaplan, Beth C
2016-01-19
What is the effect of default test offers--opt-in, opt-out, and active choice--on the likelihood of acceptance of an HIV test among patients receiving care in an emergency department? This was a randomized clinical trial conducted in the emergency department of an urban teaching hospital and regional trauma center. Patients aged 13-64 years were randomized to opt-in, opt-out, and active choice HIV test offers. The primary outcome was HIV test acceptance percentage. The Denver Risk Score was used to categorize patients as being at low, intermediate, or high risk of HIV infection. 38.0% (611/1607) of patients in the opt-in testing group accepted an HIV test, compared with 51.3% (815/1628) in the active choice arm (difference 13.3%, 95% confidence interval 9.8% to 16.7%) and 65.9% (1031/1565) in the opt-out arm (difference 27.9%, 24.4% to 31.3%). Compared with active choice testing, opt-out testing led to a 14.6 (11.1 to 18.1) percentage point increase in test acceptance. Patients identified as being at intermediate and high risk were more likely to accept testing than were those at low risk in all arms (difference 6.4% (3.4% to 9.3%) for intermediate and 8.3% (3.3% to 13.4%) for high risk). The opt-out effect was significantly smaller among those reporting high risk behaviors, but the active choice effect did not significantly vary by level of reported risk behavior. Patients consented to inclusion in the study after being offered an HIV test, and inclusion varied slightly by treatment assignment. The study took place at a single county hospital in a city that is somewhat unique with respect to HIV testing; although the test acceptance percentages themselves might vary, a different pattern for opt-in versus active choice versus opt-out test schemes would not be expected. Active choice is a distinct test regimen, with test acceptance patterns that may best approximate patients' true preferences. Opt-out regimens can substantially increase HIV testing, and opt-in schemes may reduce testing, compared with active choice testing. This study was supported by grant NIA 1RC4AG039078 from the National Institute on Aging. The full dataset is available from the corresponding author. Consent for data sharing was not obtained, but the data are anonymized and risk of identification is low. Trial registration: Clinical trials NCT01377857.
Three-moment representation of rain in a cloud microphysics model
NASA Astrophysics Data System (ADS)
Paukert, M.; Fan, J.; Rasch, P. J.; Morrison, H.; Milbrandt, J.; Khain, A.; Shpund, J.
2017-12-01
Two-moment microphysics schemes have been commonly used for cloud simulation in models across different scales - from large-eddy simulations to global climate models. These schemes have yielded valuable insights into cloud and precipitation processes; however, the size distributions are limited to two degrees of freedom, and thus the shape parameter is typically fixed or diagnosed. We have developed a three-moment approach for the rain category in order to provide an additional degree of freedom to the size distribution and thereby improve the cloud microphysics representations for more accurate weather and climate simulations. The approach is applied to the Predicted Particle Properties (P3) scheme. In addition to the rain number and mass mixing ratios predicted in the two-moment P3, we now include prognostic equations for the sixth moment of the size distribution (radar reflectivity), thus allowing the shape parameter to evolve freely. We employ the spectral bin microphysics (SBM) model to formulate the three-moment process rates in P3 for drop collisions and breakup. We first test the three-moment scheme with a maritime stratocumulus case from the VOCALS field campaign, and compare the model results with respect to cloud and precipitation properties from the new P3 scheme, original two-moment P3 scheme, SBM, and in-situ aircraft measurements. The improved simulation results by the new P3 scheme will be discussed and physically explained.
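To see why a third prognostic moment frees the shape parameter, consider a gamma size distribution n(D) = N0 D^μ exp(-λD), whose k-th moment is M_k = N0 Γ(μ+k+1)/λ^(μ+k+1). The dimensionless ratio G = M3²/(M0·M6) then depends on μ alone, so predicting number (M0), mass (∝M3), and reflectivity (M6) determines μ. The sketch below is a generic illustration of this diagnostic, not the P3 code.

```python
# Diagnose the gamma shape parameter mu from three predicted moments
# (illustrative stand-alone example, assuming a gamma size distribution).
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def moment_ratio(mu):
    """G(mu) = M3^2 / (M0*M6) for a gamma distribution; monotone in mu."""
    return np.exp(2 * gammaln(mu + 4) - gammaln(mu + 1) - gammaln(mu + 7))

def diagnose_shape(M0, M3, M6):
    """Invert G(mu) numerically for the shape parameter mu."""
    G = M3**2 / (M0 * M6)
    return brentq(lambda mu: moment_ratio(mu) - G, -0.99, 50.0)

# Round-trip check with a known shape parameter (placeholder DSD values).
mu_true, lam, N0 = 2.0, 2.0e3, 1.0e7
Mk = lambda k: N0 * np.exp(gammaln(mu_true + k + 1)) / lam**(mu_true + k + 1)
print(diagnose_shape(Mk(0), Mk(3), Mk(6)))  # recovers ~2.0
```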
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
Kreula, J. M.; Clark, S. R.; Jaksch, D.
2016-01-01
We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673
Performance characteristics of an adaptive controller based on least-mean-square filters
NASA Technical Reports Server (NTRS)
Mehta, Rajiv S.; Merhav, Shmuel J.
1986-01-01
A closed loop, adaptive control scheme that uses a least mean square filter as the controller model is presented, along with simulation results that demonstrate the excellent robustness of this scheme. It is shown that the scheme adapts very well to unknown plants, even those that are marginally stable, responds appropriately to changes in plant parameters, and is not unduly affected by additive noise. A heuristic argument for the conditions necessary for convergence is presented. Potential applications and extensions of the scheme are also discussed.
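The core of such a scheme is the least-mean-square weight update. The following sketch identifies a hypothetical FIR plant with the classic LMS rule w ← w + μ·e·x; it illustrates the adaptation mechanism only, not the paper's controller or simulation.

```python
# LMS system identification sketch (illustrative plant, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.5, -0.3, 0.2])   # unknown plant coefficients (placeholder)
w = np.zeros(3)                       # adaptive filter weights
mu = 0.05                             # adaptation gain

x_hist = np.zeros(3)                  # most recent inputs
for n in range(2000):
    x = rng.standard_normal()
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = x
    d = true_w @ x_hist + 0.01 * rng.standard_normal()  # plant output + noise
    e = d - w @ x_hist                # prediction error
    w += mu * e * x_hist              # LMS weight update

print("identified weights:", w)       # should approach true_w
```

The robustness noted in the abstract stems from this update's simplicity: it needs only the instantaneous error and input history, with μ trading convergence speed against noise sensitivity.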
The anatomy of choice: active inference and agency
Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J.
2013-01-01
This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback–Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action—constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution—that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control. PMID:24093015
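A small sketch of the softmax (quantal response) choice rule referenced above, with the inverse temperature γ playing the role of precision: higher γ concentrates choice probability on the highest-value option. The values are placeholders.

```python
# Softmax choice rule with precision (inverse temperature) gamma.
import numpy as np

def softmax_choice(values, gamma):
    """P(action) proportional to exp(gamma * value)."""
    z = np.exp(gamma * (values - np.max(values)))  # subtract max for stability
    return z / z.sum()

values = np.array([1.0, 0.8, 0.2])
for gamma in (0.5, 2.0, 8.0):
    print(gamma, softmax_choice(values, gamma))
```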
An Architecture for Enabling Migration of Tactical Networks to Future Flexible Ad Hoc WBWF
2010-09-01
Requirements (recovered from slide fragments): several multiple access schemes (TDMA, OFDMA, SC-OFDMA, FH-CDMA, DS-CDMA, hybrid access schemes, and dynamic transitions between them); algorithm parameters depend on the multiple access scheme (for DS-CDMA, handling of macro-diversity, linked to cooperative routing; for TDMA and/or OFDMA, transport format); ciphering at the MAC/RLC level (SCM); physical layer (PHY) signal processing (modulation, FEC, etc.).
A preference-ordered discrete-gaming approach to air-combat analysis
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1978-01-01
An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.
A model of the impact of reimbursement schemes on health plan choice.
Keeler, E B; Carter, G; Newhouse, J P
1998-06-01
Flat capitation (uniform prospective payments) makes enrolling healthy enrollees profitable to health plans. Plans with relatively generous benefits may attract the sick and fail through a premium spiral. We simulate a model of idealized managed competition to explore the effect on market performance of alternatives to flat capitation such as severity-adjusted capitation and reduced supply-side cost-sharing. In our model flat capitation causes severe market problems. Severity adjustment and to a lesser extent reduced supply-side cost-sharing improve market performance, but outcomes are efficient only in cases in which people bear the marginal costs of their choices.
Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment
NASA Astrophysics Data System (ADS)
Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin
2017-10-01
Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware and are thus ill-suited to mobile terminals with limited computing resources. In addition, these public-key encryption algorithms are not resistant to quantum computing. This paper studies the quantum-resistant public-key encryption algorithm NTRU by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increase the value of the parameter q; second, during the signature phase, add an authentication condition verifying that the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of private-key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids the propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.
NASA Astrophysics Data System (ADS)
Qu, Feng; Sun, Di; Zuo, Guang
2018-06-01
With the rapid development of Computational Fluid Dynamics (CFD), accurate computation of hypersonic heating is in high demand for the design of the new generation of reusable space vehicles for deep space exploration. In past years, most researchers have tried to solve this problem by concentrating on the choice of upwind scheme or the definition of the cell Reynolds number. However, the cell Reynolds number dependencies and limiter dependencies of upwind schemes, which are of great importance to their performance in hypersonic heating computations, have received little attention. In this paper, we conduct a systematic study of these properties. Results from our test cases show that SLAU (Simple Low-dissipation AUSM-family) attains a much higher level of accuracy and robustness in hypersonic heating predictions. It also performs much better in terms of limiter dependency and cell Reynolds number dependency.
NASA Astrophysics Data System (ADS)
Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe
2018-02-01
A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
A Bookmarking Service for Organizing and Sharing URLs
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.
1997-01-01
Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.
NEAMS-IPL MOOSE Framework Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaughter, Andrew Edward; Permann, Cody James; Kong, Fande
The Multiapp Picard iteration milestone's purpose was to support a framework-level "tight-coupling" method within the hierarchical Multiapp execution scheme. This new solution scheme gives developers new choices for running multiphysics applications, particularly those with very strong nonlinear effects or those requiring coupling across disparate time or spatial scales. Figure 1 shows a typical Multiapp setup in MOOSE. Each node represents a separate simulation containing a separate equation system. MOOSE solves the equation system on each node in turn, in a user-controlled manner. Information can be aggregated or split and transferred from parent to child or child to parent as needed between solves. Performing a tightly coupled execution scheme using this method wasn't possible in the original implementation. This was due to the inability to back up to a previous state once a converged solution was accepted at a particular Multiapp level.
Automatic classification of protein structures using physicochemical parameters.
Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam
2014-09-01
Protein classification is the first step to functional annotation; SCOP and Pfam databases are currently the most relevant protein classification schemes. However, the disproportion in the number of three dimensional (3D) protein structures generated versus their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family, using sequence derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure was used as a benchmark to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for both physicochemical parameters and spectrophores based machine learning algorithms. Combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracy ranging from 90-96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
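The setup described above reduces to standard supervised learning: feature vectors of sequence-derived physicochemical descriptors classified into families. The sketch below uses scikit-learn with synthetic placeholder features and labels, not the study's dataset or feature set.

```python
# Random-forest classification of synthetic "physicochemical" feature vectors
# (illustrative stand-in for the SCOP/Pfam classification task above).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))              # placeholder descriptors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # two synthetic "families"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In the study itself, feature selection by information gain per superfamily/family precedes training; the same pattern applies, with the feature matrix filtered before `fit`.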
ERIC Educational Resources Information Center
Clarkson, W. W.; And Others
Land application systems are discussed with reference to the options available for applying wastewater and sludge to the site. Spray systems, surface flow methods, and sludge application schemes are all included with discussions of the advantages and disadvantages of each option within these categories. A distinction is made between the choice of…
University Choice: What Influences the Decisions of Academically Successful Post-16 Students?
ERIC Educational Resources Information Center
Whitehead, Joan M.; Raffan, John; Deaney, Rosemary
2006-01-01
The questionnaire survey reported in this paper is part of an ongoing evaluation of the effect of a bursary scheme on recruitment to Cambridge University. It sought to identify factors that encouraged or discouraged highly successful A Level students from applying to Cambridge. Findings reveal three main dimensions associated with the decision to…
The Influence of the Purpose of a Business Document on Its Syntax and Rhetorical Schemes.
ERIC Educational Resources Information Center
Myers, Marshall
1999-01-01
Investigates how the purpose of three types of business and technical documents (instructions, annual reports, and sales promotional letters) affects the syntactical and rhetorical choices authors make in writing these documents. Outlines partial syntactical and rhetorical "fingerprints" of these documents to offer students norms they can go by in…
ERIC Educational Resources Information Center
Boyd, William L.
2007-01-01
The long-sustained effort of conservatives and their think tanks and media outlets to win support for school choice, market forces, and privatization schemes in education is paying off. But it is encountering steady resistance from the public education establishment and its supporting teachers' unions. Actions, reactions, strategies, and the…
NASA Astrophysics Data System (ADS)
Zhang, Xu; Chen, Ye-Hong; Wu, Qi-Cheng; Shi, Zhi-Cheng; Song, Jie; Xia, Yan
2017-01-01
We present an efficient scheme to quickly generate three-qubit Greenberger-Horne-Zeilinger (GHZ) states using three superconducting qubits (SQs) capacitively coupled via two coplanar waveguide resonators (CPWRs). The scheme is based on quantum Zeno dynamics and the transitionless-quantum-driving approach to constructing shortcuts to adiabatic passage. To highlight its advantages, we compare the present scheme with the traditional adiabatic-passage scheme. The comparison shows that the shortcut scheme is closely related to the adiabatic scheme but outperforms it. Moreover, we discuss the influence of various decoherence channels with numerical simulation. The results prove that the present scheme is as insensitive as adiabatic passage to energy relaxation, the decay of the CPWRs, and deviations of the experimental parameters. However, unlike the adiabatic scheme, the shortcut scheme remains effective and robust against the dephasing of the SQs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Ben; Qian, Yun; Berg, Larry K.
We evaluate the sensitivity of simulated turbine-height winds to 26 parameters applied in a planetary boundary layer (PBL) scheme and a surface layer scheme of the Weather Research and Forecasting (WRF) model over an area of complex terrain during the Columbia Basin Wind Energy Study. An efficient sampling algorithm and a generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of modeled turbine-height winds. The results indicate that most of the variability in the ensemble simulations is contributed by parameters related to the dissipation of the turbulence kinetic energy (TKE), Prandtl number, turbulence length scales, surface roughness, and the von Kármán constant. The relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability. The parameter associated with the TKE dissipation rate is found to be the most important one, and a larger dissipation rate can produce larger hub-height winds. A larger Prandtl number results in weaker nighttime winds. Increasing surface roughness reduces the frequencies of both extremely weak and strong winds, implying a reduction in the variability of the wind speed. All of the above parameters can significantly affect the vertical profiles of wind speed, the altitude of the low-level jet and the magnitude of the wind shear strength. The wind direction is found to be modulated by the same subset of influential parameters.
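A hedged sketch of this style of sensitivity analysis: sample the parameter space, fit a linear model of the output on the normalized parameters, and attribute output variance to each parameter. The "model" below is a synthetic stand-in, not WRF, and the dominance of the first parameter is built in for illustration.

```python
# Linear-model-based parametric sensitivity sketch (synthetic response).
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_params = 256, 5
theta = rng.uniform(0, 1, size=(n_samples, n_params))   # normalized parameters
# Synthetic response in which parameter 0 (think: TKE dissipation) dominates.
y = 3.0 * theta[:, 0] - 1.0 * theta[:, 1] + 0.1 * rng.standard_normal(n_samples)

X = np.column_stack([np.ones(n_samples), theta])        # intercept + parameters
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrib = beta[1:] ** 2 * theta.var(axis=0)             # variance contributions
print("relative sensitivity:", contrib / contrib.sum())
```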
NASA Astrophysics Data System (ADS)
Johnson, M. T.
2010-10-01
The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers either side of the interface with respect to the gas of interest). Traditionally the transfer velocity has been estimated from empirical relationships with wind speed, and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which will allow the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity-dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment, which is available in the supplementary online material accompanying this paper, along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
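The traditional approach mentioned above can be illustrated with a short sketch (not the paper's R scheme): a quadratic wind-speed relationship for the waterside transfer velocity, scaled by the gas's Schmidt number. The quadratic coefficient follows the widely used Wanninkhof (1992) formulation; the Schmidt number for the gas of interest must be supplied.

```python
# Wind-speed-based transfer velocity with Schmidt-number scaling
# (illustrative; coefficient from Wanninkhof 1992, not the paper's scheme).
def transfer_velocity_cm_per_hr(u10, schmidt):
    """k_w = 0.31 * u10^2 * (Sc/660)^(-1/2), with u10 in m/s."""
    return 0.31 * u10**2 * (schmidt / 660.0) ** -0.5

# Example: a gas with Sc = 1000 at a 7 m/s wind speed.
print(transfer_velocity_cm_per_hr(7.0, 1000.0))  # ~12 cm/hr
```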
NASA Astrophysics Data System (ADS)
Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang
2016-03-01
At the current stage of development of nuclear power engineering, high demands are placed on nuclear power plants (NPPs), including on their economic performance. Under these conditions, improving NPP quality requires, in particular, well-founded choices of the values of the numerous controlled parameters of the technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval beyond the moment when the parameters are chosen. The article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its particularity is that the results are obtained as functions of a complex parameter combining the external economic and operating parameters, which remains relatively stable under a changing economic environment. The article presents the results of optimizing, according to this technique, the minimum temperature driving forces in the surface heaters of the heat regeneration system of a K-1200-6.8/50 steam turbine plant. For the optimization, the collector-screen high- and low-pressure heaters developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which, in the authors' opinion, have certain advantages over other types of heaters, were chosen. The optimality criterion was the change in annual reduced costs for the NPP compared with the version accepted as the baseline. The influence on the solution of independent variables not included in the complex parameter was analyzed. The optimization task was solved using the alternating-variable descent method. The obtained values of the minimum temperature driving forces can guide the design of new nuclear plants with heat circuits similar to that considered here.
Laser control of reactions of photoswitching functional molecules.
Tamura, Hiroyuki; Nanbu, Shinkoh; Ishida, Toshimasa; Nakamura, Hiroki
2006-07-21
Laser control schemes for reactions of photoswitching functional molecules are proposed based on quantum mechanical wave-packet dynamics and the design of laser parameters. Appropriately designed quadratically chirped laser pulses can achieve nearly complete transitions of the wave packet among electronic states. The laser parameters can be optimized by using the Zhu-Nakamura theory of nonadiabatic transition. This method is effective not only for the initial photoexcitation process but also for the pump-and-dump scheme in the middle of the overall photoswitching process. The effects of the momentum of a wave packet crossing a conical intersection on the branching ratio of products have also been clarified. The control schemes mentioned above are successfully applied to the cyclohexadiene/hexatriene photoisomerization (ring-opening) process, which is the reaction center of practical photoswitching molecules such as diarylethenes. The overall efficiency of the ring opening can be appreciably increased by using appropriately designed laser pulses, compared to that of natural photoisomerization without any control scheme.
Wang, Baosheng; Tao, Jing
2018-01-01
Revocation functionality and hierarchical key delegation are two necessary and crucial requirements for identity-based cryptosystems. Revocable hierarchical identity-based encryption (RHIBE) has attracted a lot of attention in recent years; many RHIBE schemes have been proposed but have been shown to be either insecure or bounded, in that the maximum hierarchical depth must be fixed at setup. In this paper, we propose a new unbounded RHIBE scheme with decryption key exposure resilience and short public system parameters, and prove our RHIBE scheme to be adaptively secure. Our system model is inherently scalable, accommodating more levels of users adaptively with no added workload and without restarting the system. By carefully designing the hybrid games, we overcome the subtle obstacle in applying the dual system encryption methodology to unbounded and revocable HIBE. To the best of our knowledge, this is the first construction of an adaptively secure unbounded RHIBE scheme. PMID:29649326
Maximum likelihood: Extracting unbiased information from complex networks
NASA Astrophysics Data System (ADS)
Garlaschelli, Diego; Loffredo, Maria I.
2008-07-01
The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility of extracting, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
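A toy illustration of the ML principle discussed above: for the simplest random-graph model, in which every pair of N nodes is linked independently with probability p, maximizing the likelihood of an observed graph with L links yields p* = 2L/(N(N-1)) — a parameter choice tied to a well-defined topological feature, the link density.

```python
# Maximum-likelihood link probability for an Erdos-Renyi-style model.
def ml_link_probability(N, L):
    """p* maximizing L(p) = p^L (1-p)^(N(N-1)/2 - L)."""
    return 2.0 * L / (N * (N - 1))

print(ml_link_probability(100, 495))  # 0.1
```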
NASA Astrophysics Data System (ADS)
Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.
2014-12-01
In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of BEPS with the default parameter values. These results demonstrate the potential of using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
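For context on the role of Q10: it scales ecosystem respiration with temperature via R(T) = R_ref · Q10^((T - T_ref)/10), so adjusting Q10 (together with Vcmax for photosynthesis) reshapes the seasonal cycle of simulated NEP. The reference values in the sketch below are illustrative, not BEPS settings.

```python
# Q10 temperature scaling of ecosystem respiration (illustrative parameters).
def respiration(T, R_ref=2.0, Q10=2.0, T_ref=15.0):
    """Respiration (e.g., umol m-2 s-1) at air/soil temperature T (deg C)."""
    return R_ref * Q10 ** ((T - T_ref) / 10.0)

for T in (5.0, 15.0, 25.0):
    print(T, respiration(T))   # doubles per 10 deg C when Q10 = 2
```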
A fast iterative scheme for the linearized Boltzmann equation
NASA Astrophysics Data System (ADS)
Wu, Lei; Zhang, Jun; Liu, Haihu; Zhang, Yonghao; Reese, Jason M.
2017-06-01
Iterative schemes to find steady-state solutions to the Boltzmann equation are efficient for highly rarefied gas flows, but can be very slow to converge in the near-continuum flow regime. In this paper, a synthetic iterative scheme is developed to speed up the solution of the linearized Boltzmann equation by penalizing the collision operator L into the form L = (L + Nδh) - Nδh, where δ is the gas rarefaction parameter, h is the velocity distribution function, and N is a tuning parameter controlling the convergence rate. The velocity distribution function is first solved by the conventional iterative scheme, then it is corrected such that the macroscopic flow velocity is governed by a diffusion-type equation that is asymptotic-preserving into the Navier-Stokes limit. The efficiency of this new scheme is assessed by calculating the eigenvalue of the iteration, as well as solving for Poiseuille and thermal transpiration flows. We find that the fastest convergence of our synthetic scheme for the linearized Boltzmann equation is achieved when Nδ is close to the average collision frequency. The synthetic iterative scheme is significantly faster than the conventional iterative scheme in both the transition and the near-continuum gas flow regimes. Moreover, due to its asymptotic-preserving properties, the synthetic iterative scheme does not need high spatial resolution in the near-continuum flow regime, which makes it even faster than the conventional iterative scheme. Using this synthetic scheme, with the fast spectral approximation of the linearized Boltzmann collision operator, Poiseuille and thermal transpiration flows between two parallel plates, through channels of circular/rectangular cross sections and various porous media are calculated over the whole range of gas rarefaction. Finally, the flow of a Ne-Ar gas mixture is solved based on the linearized Boltzmann equation with the Lennard-Jones intermolecular potential for the first time, and the difference between these results and those using the hard-sphere potential is discussed.
Wave impedance selection for passivity-based bilateral teleoperation
NASA Astrophysics Data System (ADS)
D'Amore, Nicholas John
When a task must be executed in a remote or dangerous environment, teleoperation systems may be employed to extend the influence of the human operator. In the case of manipulation tasks, haptic feedback of the forces experienced by the remote (slave) system is often highly useful in improving an operator's ability to perform effectively. In many of these cases (especially teleoperation over the internet and ground-to-space teleoperation), substantial communication latency exists in the control loop and has the strong tendency to cause instability of the system. The first viable solution to this problem in the literature was based on a scattering/wave transformation from transmission line theory. This wave transformation requires the designer to select a wave impedance parameter appropriate to the teleoperation system. It is widely recognized that a small value of wave impedance is well suited to free motion and a large value is preferable for contact tasks. Beyond this basic observation, however, very little guidance exists in the literature regarding the selection of an appropriate value. Moreover, prior research on impedance selection generally fails to account for the fact that in any realistic contact task there will simultaneously exist contact considerations (perpendicular to the surface of contact) and quasi-free-motion considerations (parallel to the surface of contact). The primary contribution of the present work is to introduce an approximate linearized optimum for the choice of wave impedance and to apply this quasi-optimal choice to the Cartesian reality of such a contact task, in which it cannot be expected that a given joint will be either perfectly normal to or perfectly parallel to the motion constraint. The proposed scheme selects a wave impedance matrix that is appropriate to the conditions encountered by the manipulator. This choice may be implemented as a static wave impedance value or as a time-varying choice updated according to the instantaneous conditions encountered. A Lyapunov-like analysis is presented demonstrating that time variation in wave impedance will not violate the passivity of the system. Experimental trials, both in simulation and on a haptic feedback device, are presented validating the technique. Consideration is also given to the case of an uncertain environment, in which an a priori impedance choice may not be possible.
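For reference, the scattering/wave transformation referred to above is commonly written in the Niemeyer–Slotine form below, where b is the wave impedance whose selection this work addresses, ẋ is the velocity signal, and F is the force signal at either end of the delayed channel; this is the standard textbook form, not a result specific to this thesis.

```latex
u = \frac{b\,\dot{x} + F}{\sqrt{2b}}, \qquad
v = \frac{b\,\dot{x} - F}{\sqrt{2b}}
```

Transmitting the wave variables u and v instead of ẋ and F directly is what renders the delayed channel passive for any constant delay, which is why the remaining design freedom reduces to the choice of b.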
Implicit Total Variation Diminishing (TVD) schemes for steady-state calculations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Warming, R. F.; Harten, A.
1983-01-01
The application of a new implicit, unconditionally stable, high-resolution total variation diminishing (TVD) scheme to steady-state calculations is presented. It is a member of a one-parameter family of explicit and implicit second-order accurate schemes developed by Harten for the computation of weak solutions of hyperbolic conservation laws. This scheme is guaranteed not to generate spurious oscillations for a nonlinear scalar equation and a constant-coefficient system. Numerical experiments show that this scheme not only has a rapid convergence rate, but also generates a highly resolved approximation to the steady-state solution. A detailed implementation of the implicit scheme for the one- and two-dimensional compressible inviscid equations of gas dynamics is presented. Some numerical computations of one- and two-dimensional fluid flows containing shocks demonstrate the efficiency and accuracy of this new scheme.
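A hedged sketch of a standard ingredient of TVD schemes of this family: the minmod limiter, which suppresses the spurious oscillations the abstract refers to by selecting the smaller-magnitude one-sided slope, or zero at an extremum. This is a generic illustration, not the paper's scheme.

```python
# Minmod slope limiting (generic TVD ingredient, illustrative only).
def minmod(a, b):
    """Return the smaller-magnitude argument if signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slope(u, i):
    """Limited slope in cell i from one-sided differences of cell averages."""
    return minmod(u[i] - u[i - 1], u[i + 1] - u[i])

u = [0.0, 0.0, 1.0, 1.0]    # a discrete step
print(limited_slope(u, 2))  # 0: the slope is limited at the discontinuity
```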
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
Campus, Marco; Bonaglini, Elia; Cappuccinelli, Roberto; Porcu, Maria Cristina; Tonelli, Roberto; Roggio, Tonina
2011-04-01
A Quality Index Method (QIM) scheme was developed for modified atmosphere packaging (MAP) packed gilthead seabream, and the effect of MAP gas mixtures (60% CO2 and 40% N2; 60% CO2, 30% O2, and 10% N2), temperature (2, 4, and 8 °C), and time of storage on QI scores was assessed. QI scores were crossed with sensory evaluation of cooked fish according to a modified Torry scheme to establish the rejection point. In order to reduce redundant parameters, a principal component analysis was applied to the preliminary QIM parameter scores from the best-performing MAP among those tested. The final QIM scheme consists of 13 parameters and a maximum demerit score of 25. The maximum storage time was found to be 13 d at 4 °C for MAP with 60% CO2 and 40% N2. Storage at 2 °C did not substantially improve sensory parameter scores, while storage under temperature abuse (8 °C) drastically accelerated the rate of increase of QI scores and reduced the maximum storage time to 6 d.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ji-Young; Hong, Song-You; Sunny Lim, Kyo-Sun
The sensitivity of a cumulus parameterization scheme (CPS) to the representation of precipitation production is examined. To do this, the parameter that determines the fraction of cloud condensate converted to precipitation in the simplified Arakawa-Schubert (SAS) convection scheme is modified following the results from a cloud-resolving simulation. While the original conversion parameter is assumed to be constant, the revised parameter includes a temperature dependency above the freezing level, which leads to less production of frozen precipitating condensate with height. The revised CPS has been evaluated for a heavy rainfall event over Korea as well as medium-range forecasts using the Global/Regional Integrated Model system (GRIMs). The inefficient conversion of cloud condensate to convective precipitation at colder temperatures generally leads to a decrease in precipitation, especially in the category of heavy rainfall. The resultant increase of detrained moisture induces moistening and cooling at the top of clouds. A statistical evaluation of the medium-range forecasts with the revised precipitation conversion parameter shows an overall improvement of the forecast skill in precipitation and large-scale fields, indicating the importance of a more realistic representation of microphysical processes in CPSs.
NASA Astrophysics Data System (ADS)
Toyokuni, G.; Takenaka, H.
2007-12-01
We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical means. Despite the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters), calculated by volume harmonic averaging of elastic moduli and volume arithmetic averaging of density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most of the methods used for synthetic seismogram calculation today benefit from the standard Earth models, such as PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on an arbitrary spatial grid in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is open for use in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
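A hedged numerical sketch of the effective-parameter idea described above: within a grid cell straddling a material discontinuity, the effective elastic modulus is the volume harmonic average and the effective density the volume arithmetic average. The two-layer cell and material values below are illustrative placeholders.

```python
# Effective grid parameters for a cell containing two materials (illustrative).
def effective_parameters(frac1, mu1, rho1, mu2, rho2):
    """frac1 is the volume fraction of material 1 in the cell."""
    frac2 = 1.0 - frac1
    mu_eff = 1.0 / (frac1 / mu1 + frac2 / mu2)   # volume harmonic average
    rho_eff = frac1 * rho1 + frac2 * rho2        # volume arithmetic average
    return mu_eff, rho_eff

# Cell 30% crust-like, 70% mantle-like (placeholder moduli in Pa, densities
# in kg/m^3).
print(effective_parameters(0.3, 44e9, 2900.0, 68e9, 3300.0))
```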
Index files for Belle II - very small skim containers
NASA Astrophysics Data System (ADS)
Sevior, Martin; Bloomfield, Tristan; Kuhr, Thomas; Ueda, I.; Miyake, H.; Hara, T.
2017-10-01
The Belle II experiment[1] employs the root file format[2] for recording data and is investigating the use of “index-files” to reduce the size of data skims. These files contain pointers to the location of interesting events within the total Belle II data set and reduce the size of data skims by 2 orders of magnitude. We implement this scheme on the Belle II grid by recording the parent file metadata and the event location within the parent file. While the scheme works, it is substantially slower than a normal sequential read of standard skim files using default root file parameters. We investigate the performance of the scheme by adjusting the “splitLevel” and “autoflushsize” parameters of the root files in the parent data files.
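A hedged sketch of the index-file idea described above (all names are hypothetical, not the Belle II grid API): a skim is stored as (parent file, event number) pairs, and a reader groups the pointers by parent so each parent file is opened once and only the selected events are fetched.

```python
# Reading an event-pointer skim index (illustrative, hypothetical file names).
from collections import defaultdict

skim_index = [
    ("parent_0001.root", 12), ("parent_0001.root", 847),
    ("parent_0007.root", 3),  ("parent_0001.root", 1022),
]

by_parent = defaultdict(list)
for parent, event in skim_index:
    by_parent[parent].append(event)

for parent, events in by_parent.items():
    # In a real reader: open the parent file once, then seek to each
    # stored event location instead of scanning the file sequentially.
    print(f"{parent}: read events {sorted(events)}")
```

The scattered seeks inside the parent files are also why this approach can be slower than a sequential read of a dedicated skim file, as noted above.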
Discriminative Cooperative Networks for Detecting Phase Transitions
NASA Astrophysics Data System (ADS)
Liu, Ye-Hua; van Nieuwenburg, Evert P. L.
2018-04-01
The classification of states of matter and their corresponding phase transitions is a special kind of machine-learning task, where physical data allow for the analysis of new algorithms, which have not been considered in the general computer-science setting so far. Here we introduce an unsupervised machine-learning scheme for detecting phase transitions with a pair of discriminative cooperative networks (DCNs). In this scheme, a guesser network and a learner network cooperate to detect phase transitions from fully unlabeled data. The new scheme is efficient enough for dealing with phase diagrams in two-dimensional parameter spaces, where we can utilize an active contour model—the snake—from computer vision to host the two networks. The snake, with a DCN "brain," moves and learns actively in the parameter space, and locates phase boundaries automatically.
Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.
Hu, Sudeng; Wang, Hanli; Kwong, Sam
2012-04-01
In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Q(p)) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Q(p) clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Q(p) clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
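A minimal sketch of the clipping idea (illustrative bounds and thresholds, not the paper's derivation): the rate-control Qp for each frame is clipped to a window around the previous frame's Qp, with the window widened when the frame complexity ratio indicates a scene change, so quality stays smooth while the buffer remains safe.

```python
# Qp clipping around the previous frame's Qp (illustrative parameters).
def clip_qp(qp_rc, qp_prev, complexity_ratio, base_window=2, max_window=6):
    """Clip the rate-control Qp to limit frame-to-frame quality jumps."""
    window = base_window if complexity_ratio < 1.5 else max_window
    lo, hi = qp_prev - window, qp_prev + window
    return max(lo, min(hi, qp_rc))

print(clip_qp(qp_rc=38, qp_prev=30, complexity_ratio=1.1))  # 32: steady scene
print(clip_qp(qp_rc=38, qp_prev=30, complexity_ratio=2.0))  # 36: scene change
```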
A Proposed Change to ITU-R Recommendation 681
NASA Technical Reports Server (NTRS)
Davarian, F.
1996-01-01
Recommendation 681 of the International Telecommunications Union (ITU) provides five models for the prediction of propagation effects on land mobile satellite links: empirical roadside shadowing (ERS), attenuation frequency scaling, fade duration distribution, non-fade duration distribution, and fading due to multipath. Because the above prediction models have been empirically derived using a limited amount of data, these schemes work only for restricted ranges of link parameters. With the first two models, for example, the frequency and elevation angle parameters are restricted to 0.8 to 2.7 GHz and 20 to 60 degrees, respectively. Recently measured data have enabled us to enhance the range of the first two schemes. Moreover, for convenience, they have been combined into a single scheme named the extended empirical roadside shadowing (EERS) model.
NASA Astrophysics Data System (ADS)
Campbell, Lucy J.; Shepherd, Theodore G.
2005-12-01
This study examines the effect of combining equatorial planetary wave drag and gravity wave drag in a one-dimensional zonal mean model of the quasi-biennial oscillation (QBO). Several different combinations of planetary wave and gravity wave drag schemes are considered in the investigations, with the aim being to assess which aspects of the different schemes affect the nature of the modeled QBO. Results show that it is possible to generate a realistic-looking QBO with various combinations of drag from the two types of waves, but there are some constraints on the wave input spectra and amplitudes. For example, if the phase speeds of the gravity waves in the input spectrum are large relative to those of the equatorial planetary waves, critical level absorption of the equatorial planetary waves may occur. The resulting mean-wind oscillation, in that case, is driven almost exclusively by the gravity wave drag, with only a small contribution from the planetary waves at low levels. With an appropriate choice of wave input parameters, it is possible to obtain a QBO with a realistic period and to which both types of waves contribute. This is the regime in which the terrestrial QBO appears to reside. There may also be constraints on the initial strength of the wind shear, and these are similar to the constraints that apply when gravity wave drag is used without any planetary wave drag.In recent years, it has been observed that, in order to simulate the QBO accurately, general circulation models require parameterized gravity wave drag, in addition to the drag from resolved planetary-scale waves, and that even if the planetary wave amplitudes are incorrect, the gravity wave drag can be adjusted to compensate. This study provides a basis for knowing that such a compensation is possible.
NASA Astrophysics Data System (ADS)
Guillemot, G.; Avettand-Fènoël, M.-N.; Iosta, A.; Foct, J.
2011-01-01
The hot-dip galvanizing process is a widely used and efficient way to protect steel from corrosion. We propose to control the microstructure of zinc grains by investigating the relevant process parameters. In order to improve the texture of this coating, we model the grain nucleation and growth processes and simulate the development of the zinc solid phase. A coupling scheme model has been applied to this end. This model improves a previous two-dimensional model of the solidification process. It couples a cellular automaton (CA) approach and a finite element (FE) method. The CA grid and FE mesh are superimposed on the same domain. The grain development is simulated at the micro-scale on the CA grid. A nucleation law is defined using a Gaussian probability and a random set of nucleating cells. A crystallographic orientation is defined for each one with a choice of Euler angles (Ψ, θ, φ). A small growing shape is then associated with each cell in the mushy domain, and a dendrite tip kinetics is defined using the model of Kurz [2]. The six directions of the basal plane and the two perpendicular directions develop in each mushy cell. During each time step, cell temperature and solid fraction are determined at the micro-scale using the enthalpy conservation relation, and the variations are reassigned at the macro-scale. This coupling scheme model makes it possible to simulate the three-dimensional growth kinetics of zinc grains in a two-dimensional approach. Grain structure evolutions for various cooling times have been simulated. The final grain structure has been compared with EBSD measurements. We show that the preferential growth of dendrite arms in the basal plane of zinc grains is correctly predicted. The described coupling scheme model could be applied to simulate other products or manufacturing processes. It constitutes an approach gathering both micro- and macro-scale models.
Marketing Secondary Schools to Parents--Some Lessons from the Research on Parental Choice.
ERIC Educational Resources Information Center
Smedley, Don
1995-01-01
Reviews the literature on parental choice and suggests implications for the marketing of secondary schools in England. The parameters of parental choice may change as schools become more active at marketing and parents become more sophisticated in their choosing strategies. However, schools may become increasingly disenchanted with the competitive…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Kyo-Sun; Hong, Song You; Yoon, Jin-Ho
2014-10-01
The most recent version of the Simplified Arakawa-Schubert (SAS) cumulus scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) (GFS SAS) has been implemented into the Weather Research and Forecasting (WRF) model, with the triggering condition and convective mass flux modified to depend on the model's horizontal grid spacing. The East Asian Summer Monsoon of 2006, from June to August, is selected to evaluate the performance of the modified GFS SAS scheme. Simulated monsoon rainfall with the modified GFS SAS scheme shows better agreement with observations than the original GFS SAS scheme. The original GFS SAS scheme simulates a similar ratio of subgrid-scale precipitation, which is calculated from the cumulus scheme, to total precipitation regardless of the model's horizontal grid spacing. This is counter-intuitive because the portion of resolved clouds in a grid box should increase as the model grid spacing decreases. This counter-intuitive behavior of the original GFS SAS scheme is alleviated by the modified GFS SAS scheme. Further, three different cumulus schemes (Grell and Freitas, Kain and Fritsch, and Betts-Miller-Janjic) are chosen to investigate the role of horizontal resolution in simulated monsoon rainfall. The performance of high-resolution modeling is not always enhanced as the spatial resolution becomes higher. Even though the improvement in the probability density function of rain rate and in longwave fluxes at higher resolution is robust regardless of the choice of cumulus parameterization scheme, the overall skill score for surface rainfall does not increase monotonically with spatial resolution.
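The abstract does not give the actual formula for the grid-spacing-dependent modification, but a sketch of the general idea might look as follows; the linear damping form and the 10 km reference scale are assumptions made purely for illustration.

```python
# Hedged sketch: one plausible way to make a parameterized convective mass
# flux depend on horizontal grid spacing, so the subgrid scheme hands over
# to resolved convection as resolution increases. Functional form and the
# reference scale are invented, not the scheme's actual formula.
def scaled_mass_flux(mass_flux, dx_km, dx_ref_km=10.0):
    """Damp the parameterized mass flux as grid spacing drops below dx_ref_km."""
    factor = min(1.0, dx_km / dx_ref_km)
    return factor * mass_flux

print(scaled_mass_flux(0.02, dx_km=3.0))   # reduced flux at 3-km spacing
print(scaled_mass_flux(0.02, dx_km=25.0))  # unchanged at coarse resolution
```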
Simulation of the West African Monsoon using the MIT Regional Climate Model
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Gianotti, Rebecca L.; Eltahir, Elfatih A. B.
2013-04-01
We test the performance of the MIT Regional Climate Model (MRCM) in simulating the West African Monsoon. MRCM introduces several improvements over Regional Climate Model version 3 (RegCM3), including coupling of the Integrated Biosphere Simulator (IBIS) land surface scheme, a new albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Using MRCM, we carried out a series of experiments implementing two different land surface schemes (IBIS and BATS) and three convection schemes (Grell with the Fritsch-Chappell closure, standard Emanuel, and modified Emanuel including the new convective cloud scheme). Our analysis primarily focuses on comparing precipitation characteristics, the surface energy balance, and large-scale circulations against various observations. We document a significant sensitivity of the West African monsoon simulation to the choice of land surface and convection schemes. In spite of several deficiencies, the simulation combining the IBIS and modified Emanuel schemes shows the best performance, reflected in a marked improvement of precipitation in terms of spatial distribution and monsoon features. In particular, the coupling of IBIS leads to representations of the surface energy balance and its partitioning that are consistent with observations. The major components of the surface energy budget (including radiation fluxes) in the IBIS simulations are therefore in better agreement with observations than those from our BATS simulation or from previous similar studies (e.g., Steiner et al., 2009), both qualitatively and quantitatively. The IBIS simulations also reasonably reproduce the vertically stratified structure of the atmospheric circulation, with three major components: the westerly monsoon flow, the African Easterly Jet (AEJ), and the Tropical Easterly Jet (TEJ). In addition, since the modified Emanuel scheme tends to reduce the precipitation amount, it improves precipitation over regions suffering from a systematic wet bias.
Conservative and bounded volume-of-fluid advection on unstructured grids
NASA Astrophysics Data System (ADS)
Ivey, Christopher B.; Moin, Parviz
2017-12-01
This paper presents a novel Eulerian-Lagrangian piecewise-linear interface calculation (PLIC) volume-of-fluid (VOF) advection method, which is three-dimensional, unsplit, and discretely conservative and bounded. The approach is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh constructed from non-convex polyhedra. The proposed advection algorithm satisfies conservation and boundedness of the liquid volume fraction irrespective of the underlying flux polyhedron geometry, in contrast to contemporary unsplit VOF schemes that prescribe topologically complicated flux polyhedron geometries in an effort to satisfy conservation. Instead of prescribing complicated flux-polyhedron geometries, which are prone to topological failures, our VOF advection scheme, the non-intersecting flux polyhedron advection (NIFPA) method, builds the flux polyhedron iteratively such that its intersection with neighboring flux polyhedra, and any other unavailable volume, is empty and its total volume matches the calculated flux volume. During each iteration, a candidate nominal flux polyhedron is extruded using an iteration-dependent scalar. The candidate is subsequently intersected with the volume guaranteed available to it at the time of the flux calculation to generate the candidate flux polyhedron. The difference between the volume of the candidate flux polyhedron and the actual flux volume is used to set the extrusion for the next iteration. The choice of nominal flux polyhedron affects the cost and accuracy of the scheme; however, it does not affect the method's underlying conservation and boundedness. As such, various robust nominal flux polyhedra are proposed and tested using canonical periodic kinematic test cases: Zalesak's disk and two- and three-dimensional deformation. The tests are conducted on the median duals of quadrilateral and triangular primal meshes in two dimensions, and on the median duals of hexahedral, wedge, and tetrahedral primal meshes in three dimensions. Comparisons are made with the adaptation of a conventional unsplit VOF advection scheme to our collocated node-based flow solver. Depending on the choice of nominal flux polyhedron, the NIFPA scheme achieved accuracies ranging from zeroth to second order and calculation times that differed by orders of magnitude. For the nominal flux polyhedra that demonstrated second-order accuracy on all tests and meshes, the NIFPA method's cost was comparable to that of the traditional, topologically complex, second-order accurate VOF advection scheme.
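A sketch of the iterative construction may clarify the NIFPA idea: a scalar extrusion parameter is adjusted, here with a secant update, until the clipped candidate polyhedron has the target flux volume. The functions `extrude` and `clipped_volume` are hypothetical stand-ins for the paper's geometric kernels, not its actual API.

```python
# Hedged sketch: drive the volume mismatch of the clipped candidate
# polyhedron to zero by secant iteration on the extrusion scalar s.
def solve_extrusion(target_volume, extrude, clipped_volume,
                    s0=0.5, s1=1.0, tol=1e-12, max_iter=50):
    """Find s such that clipped_volume(extrude(s)) == target_volume."""
    f0 = clipped_volume(extrude(s0)) - target_volume
    for _ in range(max_iter):
        f1 = clipped_volume(extrude(s1)) - target_volume
        if abs(f1) < tol:
            return s1
        # secant update; the mismatch at this iteration sets the next extrusion
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
    raise RuntimeError("extrusion iteration did not converge")

# Toy usage: pretend the clipped volume grows linearly with s.
s = solve_extrusion(0.3, extrude=lambda s: s, clipped_volume=lambda v: 0.6 * v)
print(s)  # 0.5 for this linear toy problem
```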
Filtration, haze and foam characteristics of fermented wort mediated by yeast strain.
Douglas, P; Meneses, F J; Jiranek, V
2006-01-01
To investigate the influence of the choice of yeast strain on the haze, shelf life, filterability and foam quality characteristics of fermented products. Twelve strains were used to ferment a chemically defined wort and hopped ale or stout wort. Fermented products were assessed for foam using the Rudin apparatus, and for filterability and haze characteristics using the European Brewing Convention methods, to reveal differences in these parameters as a consequence of the choice of yeast strain and growth medium. Under the conditions used, the choice of strain of Saccharomyces cerevisiae effecting the primary fermentation has an impact on all of the parameters investigated, most notably when the fermentation medium is devoid of macromolecular material. The filtration of fermented products has a large cost implication for many brewers and wine makers, and the haze of the resulting filtrate is a key quality criterion. Also of importance to the quality of beer and some wines is the foaming and head retention of these beverages. The foam characteristics, filterability and potential for haze formation in a fermented product have long been known to be dependent on the raw materials used, as well as other production parameters. The choice of Saccharomyces cerevisiae strain used to ferment has itself been shown here to influence these parameters.
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds of an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true-belief and false-belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze direction differ from choice predictions yet reflect second-order perspective taking. PMID:27853440
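To make the model family concrete, here is a minimal sketch of the log-posterior for a Bayesian logistic mixed model with a fixed task effect and subject-specific random intercepts; the data shapes, priors, and parameter values are invented for illustration and do not reproduce the paper's model.

```python
# Hedged sketch of a Bayesian logistic mixed model: binary responses,
# fixed effects (intercept, task) plus per-subject random intercepts.
import numpy as np

def log_posterior(beta, b, y, task, subject, sigma_b=1.0):
    """beta: fixed effects; b: per-subject random intercepts."""
    eta = beta[0] + beta[1] * task + b[subject]          # linear predictor
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))     # Bernoulli log-likelihood
    logprior = -0.5 * np.sum(beta**2) / 10.0**2          # weak N(0, 10^2) on beta
    logprior += -0.5 * np.sum(b**2) / sigma_b**2         # N(0, sigma_b^2) on b
    return loglik + logprior

# Toy data: 45 subjects, a TB/FB task indicator, binary predictions.
rng = np.random.default_rng(1)
subject = np.repeat(np.arange(45), 12)
task = rng.integers(0, 2, size=subject.size)
y = rng.integers(0, 2, size=subject.size)
lp = log_posterior(np.array([0.2, 0.5]), rng.normal(size=45), y, task, subject)
```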
2009-01-01
Background Discrete choice experiments (DCEs) allow systematic assessment of preferences by asking respondents to choose between scenarios. We conducted a labelled discrete choice experiment with realistic choices to investigate patients' trade-offs between the expected health gains and the burden of testing in surveillance of Barrett esophagus (BE). Methods Fifteen choice scenarios were selected based on 2 attributes: 1) type of test (endoscopy and two less burdensome fictitious tests); 2) frequency of surveillance. Each test-frequency combination was associated with its own realistic decrease in the risk of dying from esophageal adenocarcinoma. A conditional logit model was fitted. Results Of 297 eligible patients (155 with BE and 142 with non-specific upper GI symptoms), 247 completed the questionnaire (84%). Patients preferred surveillance to no surveillance. Current surveillance schemes of once every 1–2 years were amongst the most preferred alternatives. Alternatives offering higher health gains were preferred over those offering lower health gains, except when test frequencies exceeded once a year. For similar health gains, patients preferred the video-capsule over the saliva swab and least preferred endoscopy. Conclusion This first example of a labelled DCE using realistic scenarios in a healthcare context shows that such experiments are feasible. A comparison of labelled and unlabelled designs, taking into account setting and research question, is recommended. PMID:19454022
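For readers unfamiliar with the model, a minimal sketch of a conditional logit log-likelihood over choice sets follows; the attribute coding and the synthetic data are assumptions, with only the model family taken from the abstract.

```python
# Hedged sketch of the conditional logit: each respondent picks one of
# several alternatives; utility is linear in the alternative's attributes.
import numpy as np

def conditional_logit_loglik(beta, X, chosen):
    """X: (n_sets, n_alternatives, n_attributes); chosen: index per choice set."""
    v = X @ beta                                          # utilities
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return logp[np.arange(len(chosen)), chosen].sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 15, 2))      # 100 choice sets, 15 scenarios, 2 attributes
chosen = rng.integers(0, 15, size=100)
print(conditional_logit_loglik(np.array([0.3, -0.1]), X, chosen))
```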
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
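A minimal sketch of the event-scale fitting problem, assuming the common power-law form -dQ/dt = aQ^b and an ordinary least-squares fit in log-log space (one of several methodological choices of the kind the paper compares):

```python
# Hedged sketch: fit log(-dQ/dt) = log(a) + b*log(Q) to a single recession
# event of daily streamflow. The synthetic event and the OLS choice are
# illustrative; the paper's point is that such choices matter.
import numpy as np

def fit_recession(Q):
    """Return (a, b) for -dQ/dt = a * Q**b from one recession event."""
    dQdt = np.diff(Q)                       # per-day change; negative on recession
    Qmid = 0.5 * (Q[1:] + Q[:-1])
    mask = dQdt < 0                         # keep strictly receding steps
    b, loga = np.polyfit(np.log(Qmid[mask]), np.log(-dQdt[mask]), 1)
    return np.exp(loga), b

t = np.arange(30.0)
Q = (0.5 * t + 2.0) ** (-1.0)               # exact solution for a = 0.5, b = 2
a, b = fit_recession(Q)                     # recovers approximately (0.5, 2.0)
```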
Data-Gathering Scheme Using AUVs in Large-Scale Underwater Sensor Networks: A Multihop Approach
Khan, Jawaad Ullah; Cho, Ho-Shin
2016-01-01
In this paper, we propose a data-gathering scheme for hierarchical underwater sensor networks, where multiple Autonomous Underwater Vehicles (AUVs) are deployed over large-scale coverage areas. The deployed AUVs constitute an intermittently connected multihop network through inter-AUV synchronization (in this paper, synchronization means an interconnection between nodes for communication) for forwarding data to the designated sink. In such a scenario, the performance of the multihop communication depends upon the synchronization among the vehicles. The mobility parameters of the vehicles vary continuously because of the constantly changing underwater currents. The variations in the AUV mobility parameters reduce the inter-AUV synchronization frequency contributing to delays in the multihop communication. The proposed scheme improves the AUV synchronization frequency by permitting neighboring AUVs to share their status information via a pre-selected node called an agent-node at the static layer of the network. We evaluate the proposed scheme in terms of the AUV synchronization frequency, vertical delay (node→AUV), horizontal delay (AUV→AUV), end-to-end delay, and the packet loss ratio. Simulation results show that the proposed scheme significantly reduces the aforementioned delays without the synchronization time-out process employed in conventional works. PMID:27706042
Computing the multifractal spectrum from time series: an algorithmic approach.
Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E
2009-12-01
We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.
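As an illustration of the fitting idea, the sketch below fits a four-parameter analytic profile that vanishes at alpha_min and alpha_max and peaks in between; this functional form is an assumption chosen for illustration and is not necessarily the one derived in the paper.

```python
# Hedged sketch: fit a smooth four-parameter f(alpha) profile to
# (alpha, f) samples with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def f_alpha(alpha, a_min, a_max, g1, g2):
    """Profile vanishing at a_min and a_max, normalized to peak at 1."""
    x = np.clip((alpha - a_min) / (a_max - a_min), 1e-12, 1 - 1e-12)
    peak = (g1 / (g1 + g2)) ** g1 * (g2 / (g1 + g2)) ** g2
    return (x ** g1) * ((1 - x) ** g2) / peak

alpha = np.linspace(0.55, 1.45, 40)
rng = np.random.default_rng(3)
f_obs = f_alpha(alpha, 0.5, 1.5, 1.2, 0.8) + 0.01 * rng.normal(size=40)
popt, _ = curve_fit(f_alpha, alpha, f_obs, p0=[0.4, 1.6, 1.0, 1.0])
```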
Sensitivity of the simulation of tropical cyclone size to microphysics schemes
NASA Astrophysics Data System (ADS)
Chan, Kelvin T. F.; Chan, Johnny C. L.
2016-09-01
The sensitivity of the simulation of tropical cyclone (TC) size to microphysics schemes is studied using the Advanced Hurricane Weather Research and Forecasting Model (WRF). Six TCs during the 2013 western North Pacific typhoon season and three mainstream microphysics schemes, Ferrier (FER), WRF Single-Moment 5-class (WSM5), and WRF Single-Moment 6-class (WSM6), are investigated. The results consistently show that the simulated TC track is not sensitive to the choice of microphysics scheme early in the simulation, especially over the open ocean. However, the sensitivity is much greater for TC intensity and inner-core size. In general, the TC intensity and size simulated using the WSM5 and WSM6 schemes are respectively higher and larger than those simulated using the FER scheme, which likely results from more diabatic heating being generated outside the eyewall in rainbands. More diabatic heating in rainbands gives stronger inflow in the lower troposphere and stronger outflow in the upper troposphere, with stronger upward motion outside the eyewall. The lower-tropospheric inflow transports absolute angular momentum inward to spin up the tangential wind predominantly near the eyewall, leading to increases in TC intensity and size (especially the inner-core size). In addition, the inclusion of graupel microphysics processes (as in WSM6) may not have a significant impact on the simulation of TC track, intensity, and size.
Patient choice in opt-in, active choice, and opt-out HIV screening: randomized clinical trial
Dow, William H; Kaplan, Beth C
2016-01-01
Study question What is the effect of default test offers—opt-in, opt-out, and active choice—on the likelihood of acceptance of an HIV test among patients receiving care in an emergency department? Methods This was a randomized clinical trial conducted in the emergency department of an urban teaching hospital and regional trauma center. Patients aged 13-64 years were randomized to opt-in, opt-out, and active choice HIV test offers. The primary outcome was HIV test acceptance percentage. The Denver Risk Score was used to categorize patients as being at low, intermediate, or high risk of HIV infection. Study answer and limitations 38.0% (611/1607) of patients in the opt-in testing group accepted an HIV test, compared with 51.3% (815/1628) in the active choice arm (difference 13.3%, 95% confidence interval 9.8% to 16.7%) and 65.9% (1031/1565) in the opt-out arm (difference 27.9%, 24.4% to 31.3%). Compared with active choice testing, opt-out testing led to a 14.6 (11.1 to 18.1) percentage point increase in test acceptance. Patients identified as being at intermediate and high risk were more likely to accept testing than were those at low risk in all arms (difference 6.4% (3.4% to 9.3%) for intermediate and 8.3% (3.3% to 13.4%) for high risk). The opt-out effect was significantly smaller among those reporting high risk behaviors, but the active choice effect did not significantly vary by level of reported risk behavior. Patients consented to inclusion in the study after being offered an HIV test, and inclusion varied slightly by treatment assignment. The study took place at a single county hospital in a city that is somewhat unique with respect to HIV testing; although the test acceptance percentages themselves might vary, a different pattern for opt-in versus active choice versus opt-out test schemes would not be expected. What this paper adds Active choice is a distinct test regimen, with test acceptance patterns that may best approximate patients’ true preferences. Opt-out regimens can substantially increase HIV testing, and opt-in schemes may reduce testing, compared with active choice testing. Funding, competing interests, data sharing This study was supported by grant NIA 1RC4AG039078 from the National Institute on Aging. The full dataset is available from the corresponding author. Consent for data sharing was not obtained, but the data are anonymized and risk of identification is low. Trial registration Clinical trials NCT01377857. PMID:26786744
Dynamic Self-Locking of an OEO Containing a VCSEL
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry; Matsko, Andrey; Yu, Nan; Savchenkov, Anatoliy; Maleki, Lute
2009-01-01
A method of dynamic self-locking has been demonstrated to be effective as a means of stabilizing the wavelength of light emitted by a vertical-cavity surface-emitting laser (VCSEL) that is an active element in the frequency-control loop of an optoelectronic oscillator (OEO) designed to implement an atomic clock based on an electromagnetically-induced-transparency (EIT) resonance. This scheme can be considered an alternative to the one described in Optical Injection Locking of a VCSEL in an OEO (NPO-43454), NASA Tech Briefs, Vol. 33, No. 7 (July 2009), page 33. Both schemes are expected to enable the development of small, low-power, high-stability atomic clocks that would be suitable for use in applications involving precise navigation and/or communication. To recapitulate from the cited prior article: In one essential aspect of operation of an OEO of the type described above, a microwave modulation signal is coupled into the VCSEL. Heretofore, it has been well known that the wavelength of light emitted by a VCSEL depends on its temperature and drive current, necessitating thorough stabilization of these operational parameters. Recently, it was discovered that the wavelength also depends on the microwave power coupled into the VCSEL. This concludes the background information. From the perspective that led to the conception of the optical injection-locking scheme described in the cited prior article, the variation of the VCSEL wavelength with the microwave power circulating in the frequency-control loop is regarded as a disadvantage and optical injection locking is a solution of the problem of stabilizing the wavelength in the presence of uncontrolled fluctuations in the microwave power. The present scheme for dynamic self-locking emerges from a different perspective, in which the dependence of VCSEL wavelength on microwave power is regarded as an advantageous phenomenon that can be exploited as a means of controlling the wavelength. The figure schematically depicts an atomic-clock OEO of the type in question, wherein (1) the light from the VCSEL is used to excite an EIT resonance in selected atoms in a gas cell (e.g., 87Rb atoms in a low-pressure mixture of Ar and Ne) and (2) the power supplied to the VCSEL is modulated by a microwave signal that includes components at beat frequencies among the VCSEL wavelength and modulation sidebands. As the VCSEL wavelength changes, it moves closer to or farther from a nearby absorption spectral line, and the optical power transmitted through the cell (and thus the loop gain) changes accordingly. A change in the loop gain causes a change in the microwave power and, thus, in the VCSEL wavelength. It is possible to choose a set of design and operational parameters (most importantly, the electronic part of the loop gain) such that the OEO stabilizes itself in the sense that an increase in circulating microwave power causes the VCSEL wavelength to change in a direction that results in an increase in optical absorption and thus a decrease in circulating microwave power. Typically, such an appropriate choice of operational parameters involves setting the nominal VCSEL wavelength to a point on the shorter-wavelength wing of an absorption spectral line.
Numerical scheme approximating solution and parameters in a beam equation
NASA Astrophysics Data System (ADS)
Ferdinand, Robert R.
2003-12-01
We present a mathematical model which describes vibration in a metallic beam about its equilibrium position. This model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse method procedure which involves the minimization of a least-squares cost functional. Numerical results are presented and future work to be done is discussed.
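A minimal sketch of the inverse-method step follows: a least-squares cost functional is minimized over the parameters, with a hypothetical `simulate_beam` standing in for the finite-element Galerkin forward solver described in the abstract.

```python
# Hedged sketch: parameter estimation by minimizing a least-squares cost
# J(q) = sum_i |u(t_i; q) - u_obs_i|^2 between simulated and observed data.
import numpy as np
from scipy.optimize import minimize

def cost(q, t_obs, u_obs, forward):
    u_model = forward(q, t_obs)
    return np.sum((u_model - u_obs) ** 2)

# Toy forward model (stand-in for the FE Galerkin solver): a damped
# oscillation with unknown damping q[0] and frequency q[1].
def simulate_beam(q, t):
    return np.exp(-q[0] * t) * np.cos(q[1] * t)

t_obs = np.linspace(0.0, 5.0, 50)
u_obs = simulate_beam([0.3, 2.0], t_obs)                 # synthetic "data"
res = minimize(cost, x0=[0.1, 1.5], args=(t_obs, u_obs, simulate_beam))
print(res.x)                                             # recovers ~(0.3, 2.0)
```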
DFTB Parameters for the Periodic Table: Part 1, Electronic Structure.
Wahiduzzaman, Mohammad; Oliveira, Augusto F; Philipsen, Pier; Zhechkov, Lyuben; van Lenthe, Erik; Witek, Henryk A; Heine, Thomas
2013-09-10
A parametrization scheme for the electronic part of the density-functional based tight-binding (DFTB) method that covers the periodic table is presented. A semiautomatic parametrization scheme has been developed that uses Kohn-Sham energies and band structure curvatures of real and fictitious homoatomic crystal structures as reference data. A confinement potential is used to tighten the Kohn-Sham orbitals, which includes two free parameters that are used to optimize the performance of the method. The method is tested on more than 100 systems and shows excellent overall performance.
High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables 'energy stable' modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
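For orientation, the sketch below implements the classical fifth-order WENO weight construction on a five-point stencil; ESWENO replaces exactly these weight functions with modified ones that guarantee energy stability and formal consistency, and those formulas are not reproduced here.

```python
# Hedged sketch: classical (Jiang-Shu) fifth-order WENO nonlinear weights
# from smoothness indicators on the three candidate sub-stencils.
import numpy as np

def weno5_weights(f, eps=1e-6):
    """f: five point values f[i-2..i+2]; returns nonlinear weights (w0, w1, w2)."""
    b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 1/4*(f[0]-4*f[1]+3*f[2])**2
    b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 1/4*(f[1]-f[3])**2
    b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 1/4*(3*f[2]-4*f[3]+f[4])**2
    d = np.array([0.1, 0.6, 0.3])                  # ideal (linear) weights
    a = d / (eps + np.array([b0, b1, b2]))**2
    return a / a.sum()

# Near a discontinuity, stencils crossing the jump are demoted:
print(weno5_weights(np.array([1.0, 1.0, 1.0, 0.0, 0.0])))
```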
Cryptanalysis of Chatterjee-Sarkar Hierarchical Identity-Based Encryption Scheme at PKC 06
NASA Astrophysics Data System (ADS)
Park, Jong Hwan; Lee, Dong Hoon
In 2006, Chatterjee and Sarkar proposed a hierarchical identity-based encryption (HIBE) scheme which can support an unbounded number of identity levels. This property is particularly useful in providing forward secrecy by embedding time components within hierarchical identities. In this paper we show that their scheme does not provide the claimed property. Our analysis shows that if the number of identity levels becomes larger than the value of a fixed public parameter, an unintended receiver can reconstruct a new valid ciphertext and decrypt the ciphertext using his or her own private key. The analysis is similarly applied to a multi-receiver identity-based encryption scheme presented as an application of Chatterjee and Sarkar's HIBE scheme.
Regional climate modeling over the Maritime Continent: Assessment of RegCM3-BATS1e and RegCM3-IBIS
NASA Astrophysics Data System (ADS)
Gianotti, R. L.; Zhang, D.; Eltahir, E. A.
2010-12-01
Despite its importance to global rainfall and circulation processes, the Maritime Continent remains a region that is poorly simulated by climate models. Relatively few studies have been undertaken using a model with fine enough resolution to capture the small-scale spatial heterogeneity of this region and associated land-atmosphere interactions. These studies have shown that even regional climate models (RCMs) struggle to reproduce the climate of this region, particularly the diurnal cycle of rainfall. This study builds on previous work by undertaking a more thorough evaluation of RCM performance in simulating the timing and intensity of rainfall over the Maritime Continent, with identification of major sources of error. An assessment was conducted of the Regional Climate Model Version 3 (RegCM3) used in a coupled system with two land surface schemes: Biosphere Atmosphere Transfer System Version 1e (BATS1e) and Integrated Biosphere Simulator (IBIS). The model’s performance in simulating precipitation was evaluated against the 3-hourly TRMM 3B42 product, with some validation provided of this TRMM product against ground station meteorological data. It is found that the model suffers from three major errors in the rainfall histogram: underestimation of the frequency of dry periods, overestimation of the frequency of low intensity rainfall, and underestimation of the frequency of high intensity rainfall. Additionally, the model shows error in the timing of the diurnal rainfall peak, particularly over land surfaces. These four errors were largely insensitive to the choice of boundary conditions, convective parameterization scheme or land surface scheme. The presence of a wet or dry bias in the simulated volumes of rainfall was, however, dependent on the choice of convection scheme and boundary conditions. This study also showed that the coupled model system has significant error in overestimation of latent heat flux and evapotranspiration from the land surface, and specifically overestimation of interception loss with concurrent underestimation of transpiration, irrespective of the land surface scheme used. Discussion of the origin of these errors is provided, with some suggestions for improvement.
Application of multi-grid methods for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Demuren, A. O.
1989-01-01
The application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems is discussed. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line-, or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be procured through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.
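A minimal two-grid correction cycle for a 1D Poisson problem gives the flavor of the approach; the full FAS-FMG machinery for the nonlinear Navier-Stokes equations additionally carries the full approximation on coarse grids, which this linear sketch omits.

```python
# Hedged sketch: two-grid cycle (smooth, restrict residual, coarse solve,
# prolong correction, smooth) for -u'' = f on [0, 1] with u(0) = u(1) = 0.
import numpy as np

def relax(u, f, h, sweeps=3):
    for _ in range(sweeps):                        # Gauss-Seidel smoothing
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
    return u

def two_grid(u, f, h):
    u = relax(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)   # residual
    rc = r[::2].copy()                                          # restrict
    ec = relax(np.zeros_like(rc), rc, 2*h, sweeps=50)           # coarse "solve"
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolong
    return relax(u + e, f, h)

n = 65; h = 1.0 / (n - 1)
f = np.ones(n)
u = two_grid(np.zeros(n), f, h)
```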
Ultrashort polarization-tailored bichromatic fields
NASA Astrophysics Data System (ADS)
Kerbstadt, Stefanie; Englert, Lars; Bayer, Tim; Wollenhaupt, Matthias
2017-06-01
We present a novel concept for the generation of ultrashort polarization-shaped bichromatic laser fields. The scheme utilizes a 4f polarization pulse shaper based on a liquid crystal spatial light modulator for independent amplitude and phase modulation of femtosecond laser pulses. By choice of either a conventional (p) or a composite (p-s) polarizer in the Fourier plane, the shaper setup enables the generation of parallel linearly and orthogonally linearly polarized bichromatic fields. Additional use of a λ/4 wave plate behind the setup yields co-rotating and counter-rotating circularly polarized bichromatic fields. The scheme allows independent control of the spectral amplitude, phase and polarization profile of the output fields, offering an enormous versatility of bichromatic waveforms.
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, B.; Polizzi, E.
2013-05-01
The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H(ψ)ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with respect to the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it is shown that our approach can outperform traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
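To illustrate the problem class, the sketch below solves a toy nonlinear eigenvector problem by the plain SCF mixing iteration that the FEAST-based approach is designed to outperform; the Hamiltonian and mixing factor are invented for illustration.

```python
# Hedged sketch: SCF with linear mixing for a toy nonlinear eigenproblem
# H(psi) psi = E psi, where H depends on the occupied state's density.
import numpy as np

n = 20
H0 = -np.eye(n, k=1) - np.eye(n, k=-1)           # fixed one-particle part
psi = np.ones(n) / np.sqrt(n)                    # initial guess

for it in range(200):
    H = H0 + np.diag(np.abs(psi) ** 2)           # density-dependent potential
    E, V = np.linalg.eigh(H)
    psi_new = V[:, 0]                            # lowest state
    if psi_new @ psi < 0:                        # fix eigenvector sign ambiguity
        psi_new = -psi_new
    if np.linalg.norm(psi_new - psi) < 1e-10:
        break
    psi = 0.5 * psi + 0.5 * psi_new              # linear mixing
    psi /= np.linalg.norm(psi)
```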
Bernstein, Diana N.; Neelin, J. David
2016-04-28
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics, with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
Neural correlates of value, risk, and risk aversion contributing to decision making under risk.
Christopoulos, George I; Tobler, Philippe N; Bossaerts, Peter; Dolan, Raymond J; Schultz, Wolfram
2009-10-07
Decision making under risk is central to human behavior. Economic decision theory suggests that value, risk, and risk aversion influence choice behavior. Although previous studies identified neural correlates of decision parameters, the contribution of these correlates to actual choices is unknown. In two different experiments, participants chose between risky and safe options. We identified discrete blood oxygen level-dependent (BOLD) correlates of value and risk in the ventral striatum and anterior cingulate, respectively. Notably, increasing inferior frontal gyrus activity to low risk and safe options correlated with higher risk aversion. Importantly, the combination of these BOLD responses effectively decoded the behavioral choice. Striatal value and cingulate risk responses increased the probability of a risky choice, whereas inferior frontal gyrus responses showed the inverse relationship. These findings suggest that the BOLD correlates of decision factors are appropriate for an ideal observer to detect behavioral choices. More generally, these biological data contribute to the validity of the theoretical decision parameters for actual decisions under risk.
Efficient Construction of Mesostate Networks from Molecular Dynamics Trajectories.
Vitalis, Andreas; Caflisch, Amedeo
2012-03-13
The coarse-graining of data from molecular simulations yields conformational space networks that may be used for predicting the system's long time scale behavior, to discover structural pathways connecting free energy basins in the system, or simply to represent accessible phase space regions of interest and their connectivities in a two-dimensional plot. In this contribution, we present a tree-based algorithm to partition conformations of biomolecules into sets of similar microstates, i.e., to coarse-grain trajectory data into mesostates. On account of utilizing an architecture similar to that of established tree-based algorithms, the proposed scheme operates in near-linear time with data set size. We derive expressions needed for the fast evaluation of mesostate properties and distances when employing typical choices for measures of similarity between microstates. Using both a pedagogically useful and a real-world application, the algorithm is shown to be robust with respect to tree height, which in addition to mesostate threshold size is the main adjustable parameter. It is demonstrated that the derived mesostate networks can preserve information regarding the free energy basins and barriers by which the system is characterized.
Convective Propagation Characteristics Using a Simple Representation of Convective Organization
NASA Astrophysics Data System (ADS)
Neale, R. B.; Mapes, B. E.
2016-12-01
Observed equatorial wave propagation is intimately linked to convective organization and its coupling to features of the larger-scale flow. In this talk we use a simple 4-level model to accommodate the vertical modes of a mass-flux convection scheme (shallow, mid-level, and deep). Two paradigms of convection are used to represent convective processes: one with only random (unorganized) diagnosed fluctuations of convective properties, and one with organized fluctuations of convective properties that are amplified by pre-existing convection and have an explicit moistening impact on the local convecting environment. We show a series of model simulations in single-column, 2D, and 3D configurations, where the role of convective organization in wave propagation is shown to be fundamental. For the optimal choice of parameters linking organization to the local atmospheric state, a broad array of convective wave propagation emerges. Interestingly, the key characteristics of the propagating modes are low-level moistening, followed by deep convection, followed by mature 'large-scale' heating. This organization structure appears to hold across timescales from 5-day wave disturbances to MJO-like wave propagation.
Matveev, Alexei V; Rösch, Notker
2008-06-28
We suggest an approximate relativistic model for economical all-electron calculations on molecular systems that exploits an atomic ansatz for the relativistic projection transformation. With such a choice, the projection transformation matrix is by definition both transferable and independent of the geometry. The formulation is flexible with regard to the level at which the projection transformation is approximated; we employ the free-particle Foldy-Wouthuysen and the second-order Douglas-Kroll-Hess variants. The (atomic) infinite-order decoupling scheme shows little effect on structural parameters in scalar-relativistic calculations; also, the use of a screened nuclear potential in the definition of the projection transformation shows hardly any effect in the context of the present work. Applications to structural and energetic parameters of various systems (the diatomics AuH, AuCl, and Au2; two structural isomers of Ir4; and the uranyl dication UO2(2+) solvated by 3-6 water ligands) show that the atomic approximation to the conventional second-order Douglas-Kroll-Hess projection (ADKH) transformation yields highly accurate results at substantial computational savings, in particular when calculating energy derivatives of larger systems. The size-dependence of the intrinsic error of the ADKH method in extended systems of heavy elements is analyzed for the atomization energies of Pd(n) clusters (n=116).
Black Hole Formation in Failing Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
O'Connor, Evan; Ott, Christian D.
2011-04-01
We present results of a systematic study of failing core-collapse supernovae and the formation of stellar-mass black holes (BHs). Using our open-source general-relativistic 1.5D code GR1D equipped with a three-species neutrino leakage/heating scheme and over 100 presupernova models, we study the effects of the choice of nuclear equation of state (EOS), zero-age main sequence (ZAMS) mass and metallicity, rotation, and mass-loss prescription on BH formation. We find that the outcome, for a given EOS, can be estimated, to first order, by a single parameter, the compactness of the stellar core at bounce. By comparing protoneutron star (PNS) structure at the onset of gravitational instability with solutions of the Tolman-Oppenheimer-Volkoff equations, we find that thermal pressure support in the outer PNS core is responsible for raising the maximum PNS mass by up to 25% above the cold NS value. By artificially increasing neutrino heating, we find the critical neutrino heating efficiency required for exploding a given progenitor structure and connect these findings with ZAMS conditions, establishing, albeit approximately, for the first time based on actual collapse simulations, the mapping between ZAMS parameters and the outcome of core collapse. We also study the effect of progenitor rotation and find that the dimensionless spin of nascent BHs may be robustly limited below a* = Jc/GM² = 1 by the appearance of nonaxisymmetric rotational instabilities.
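For reference, the compactness parameter mentioned above is conventionally defined as follows (this is the standard O'Connor-Ott form, stated here as an assumption since the abstract does not spell it out; M is typically taken as 2.5):

```latex
% Bounce compactness of the stellar core, evaluated at core bounce:
\xi_M = \left.\frac{M / M_\odot}{R(M_{\mathrm{bary}} = M)/1000\,\mathrm{km}}\right|_{t = t_{\mathrm{bounce}}}
```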
Indirect adaptive output feedback control of a biorobotic AUV using pectoral-like mechanical fins.
Naik, Mugdha S; Singh, Sahjendra N; Mittal, Rajat
2009-06-01
This paper treats the question of servoregulation of autonomous underwater vehicles (AUVs) in the yaw plane using pectoral-like mechanical fins. The fins attached to the vehicle have oscillatory swaying and yawing motion. The bias angle of the angular motion of the fin is used for the purpose of control. Of course, the design approach considered here is applicable to AUVs with other choices of fin oscillation patterns that produce periodic forces and moments. It is assumed that the vehicle parameters and hydrodynamic coefficients, as well as the fin forces and moments, are unknown. For trajectory control of the yaw angle, a sampled-data indirect adaptive control system using output (yaw angle) feedback is derived. The control system has a modular structure, which includes a parameter identifier and a stabilizer. For the control law derivation, an internal model of the exosignals (a reference signal, constant or ramp, and a constant disturbance) is included. Unlike a direct adaptive control scheme, the derived control law is applicable to minimum- as well as nonminimum-phase biorobotic AUVs (BAUVs). This is important because, for most fin locations on the vehicle, the model is nonminimum phase. In the closed-loop system, the yaw angle trajectory tracking error converges to zero and the remaining state variables remain bounded. Simulation results are presented which show that the derived modular control system accomplishes precise set-point yaw angle control and turning maneuvers in spite of uncertainties in the system parameters, using only yaw angle feedback.
A surface analysis nudging scheme coupling atmospheric and land surface thermodynamic parameters has been implemented into WRF v3.8 (latest version) for use with retrospective weather and climate simulations, as well as for applications in air quality, hydrology, and ecosystem mo...
Finite difference methods for the solution of unsteady potential flows
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1982-01-01
Various problems which are confronted in the development of an unsteady finite difference potential code are reviewed mainly in the context of what is done for a typical small disturbance and full potential method. The issues discussed include choice of equations, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three dimensional rotor calculations, are demonstrated.
ERIC Educational Resources Information Center
Griggs, Clive
2009-01-01
In the early 1980s the Conservative Administration introduced legislation to promote private personal pension plans for public sector workers. An army of commission-driven sales staff from the financial services industry sought to persuade teachers and others to abandon their inflation-proof pension schemes for those offered by private companies.…
Cai, Wenli; Lee, June-Goo; Fikry, Karim; Yoshida, Hiroyuki; Novelline, Robert; de Moya, Marc
2013-01-01
It is commonly believed that the size of a pneumothorax is an important determinant of treatment decisions, in particular regarding whether chest tube drainage (CTD) is required. However, volumetric quantification of pneumothoraces has not routinely been performed in clinics. In this paper, we introduce an automated computer-aided volumetry (CAV) scheme for quantifying the volume of pneumothoraces in chest multi-detector CT (MDCT) images. Moreover, we investigated the impact of accurate pneumothorax volume on the performance of decision-making regarding CTD in the management of traumatic pneumothoraces. For this purpose, an occurrence frequency map was calculated for quantitative analysis of the importance of each clinical parameter in the decision-making regarding CTD, using a computer simulation of decision-making based on a genetic algorithm (GA) and a support vector machine (SVM). A total of 14 clinical parameters, including the volume of pneumothorax calculated by our CAV scheme, were collected as parameters available for decision-making. The results showed that volume was the dominant parameter in decision-making regarding CTD, with an occurrence frequency value of 1.00. The results also indicated that the inclusion of volume provided the best performance, which was statistically significant compared to the other tests in which volume was excluded from the clinical parameters. This study provides scientific evidence for the application of a CAV scheme for MDCT volumetric quantification of pneumothoraces in the management of clinically stable chest trauma patients with traumatic pneumothorax. PMID:22560899
The anatomy of choice: dopamine and decision-making
Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J.
2014-01-01
This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses—and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making. PMID:25267823
ERIC Educational Resources Information Center
Carnegie, Jacqueline A.
2017-01-01
Summative evaluation for large classes of first- and second-year undergraduate courses often involves the use of multiple choice question (MCQ) exams in order to provide timely feedback. Several versions of those exams are often prepared via computer-based question scrambling in an effort to deter cheating. An important parameter to consider when…
ERIC Educational Resources Information Center
Stein, Jeffrey S.; Pinkston, Jonathan W.; Brewer, Adam T.; Francisco, Monica T.; Madden, Gregory J.
2012-01-01
Lewis rats have been shown to make more impulsive choices than Fischer 344 rats in discrete trial choice procedures that arrange fixed (i.e., nontitrating) reinforcement parameters. However, nontitrating procedures yield only gross estimates of preference, as choice measures in animal subjects are rarely graded at the level of the individual…
Image processing via level set curvature flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malladi, R.; Sethian, J.A.
We present a controlled image smoothing and enhancement method based on a curvature flow interpretation of the geometric heat equation. Compared to existing techniques, the model has several distinct advantages. (i) It contains just one enhancement parameter. (ii) The scheme naturally inherits a stopping criterion from the image; continued application of the scheme produces no further change. (iii) The method is one of the fastest possible schemes based on a curvature-controlled approach.
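A minimal sketch of the underlying evolution, I_t = κ|∇I| discretized with central differences, is given below; the paper's single enhancement parameter and its image-derived stopping behavior are not reproduced, so this is the bare geometric heat flow.

```python
# Hedged sketch: one explicit step of curvature flow on an image,
# I_t = kappa * |grad I| = (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / (Ix^2 + Iy^2).
import numpy as np

def curvature_flow_step(I, dt=0.1, eps=1e-8):
    Ix = 0.5 * (np.roll(I, -1, 1) - np.roll(I, 1, 1))
    Iy = 0.5 * (np.roll(I, -1, 0) - np.roll(I, 1, 0))
    Ixx = np.roll(I, -1, 1) - 2*I + np.roll(I, 1, 1)
    Iyy = np.roll(I, -1, 0) - 2*I + np.roll(I, 1, 0)
    Ixy = 0.25 * (np.roll(np.roll(I, -1, 0), -1, 1) - np.roll(np.roll(I, -1, 0), 1, 1)
                  - np.roll(np.roll(I, 1, 0), -1, 1) + np.roll(np.roll(I, 1, 0), 1, 1))
    num = Ixx * Iy**2 - 2*Ix*Iy*Ixy + Iyy * Ix**2
    return I + dt * num / (Ix**2 + Iy**2 + eps)

rng = np.random.default_rng(4)
I = rng.normal(size=(64, 64))          # noisy test image
for _ in range(50):                    # repeated steps smooth level-set curves
    I = curvature_flow_step(I)
```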
NASA Astrophysics Data System (ADS)
Pötz, Walter
2017-11-01
A single-cone finite-difference lattice scheme is developed for the (2+1)-dimensional Dirac equation in presence of general electromagnetic textures. The latter is represented on a (2+1)-dimensional staggered grid using a second-order-accurate finite difference scheme. A Peierls-Schwinger substitution to the wave function is used to introduce the electromagnetic (vector) potential into the Dirac equation. Thereby, the single-cone energy dispersion and gauge invariance are carried over from the continuum to the lattice formulation. Conservation laws and stability properties of the formal scheme are identified by comparison with the scheme for zero vector potential. The placement of magnetization terms is inferred from consistency with the one for the vector potential. Based on this formal scheme, several numerical schemes are proposed and tested. Elementary examples for single-fermion transport in the presence of in-plane magnetization are given, using material parameters typical for topological insulator surfaces.
Reverse engineering of a Hamiltonian by designing the evolution operators
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Wu, Qi-Cheng; Huang, Bi-Hua; Xia, Yan; Song, Jie
2016-07-01
We propose an effective and flexible scheme for reverse engineering of a Hamiltonian by designing the evolution operators to eliminate the terms of the Hamiltonian that are hard to realize in practice. Different from transitionless quantum driving (TQD), the present scheme focuses on only one or a subset of the moving states in a D-dimensional (D ≥ 3) system. Numerical simulation shows that the present scheme not only reproduces the results of TQD, but also has more free parameters, which make it more flexible. An example is given of using this scheme to realize population transfer for a Rydberg atom. The influences of various decoherence processes are discussed by numerical simulation, and the results show that the scheme is fast and robust against decoherence and operational imperfection. Therefore, this scheme may be used to construct a Hamiltonian that can be realized in experiments.
Niemi, Jarkko K; Heikkilä, Jaakko
2011-06-01
The participation of agricultural producers in financing losses caused by livestock epidemics has been debated in many countries. One of the issues raised is how reluctant producers are to participate voluntarily in the financing of disease losses before an outbreak occurs. This study contributes to the literature by examining whether disease losses should be financed through pre- or post-outbreak premiums or their combination. A Monte Carlo simulation was employed to illustrate the costs of financing two diseases of different profiles. The profiles differed in the probability with which damage occurs and in the average damage per event. Three hypothetical financing schemes were compared based on their ability to reduce utility losses for risk-neutral and risk-averse producer groups. The schemes were examined in a dynamic setting where premiums depended on the compensation history of the sector. If producers choose the preferred financing scheme based on utility losses, the results suggest that the timing of the premiums, the transaction costs of the scheme, the degree of risk aversion of the producer, and the level and volatility of premiums affect the choice of the financing scheme. Copyright © 2011 Elsevier B.V. All rights reserved.
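The comparison logic can be illustrated with a toy Monte Carlo in the spirit of the abstract. All numbers (outbreak probability, damage scale, risk aversion) are invented for illustration and do not come from the study; a CARA certainty-equivalent cost stands in for the paper's utility-loss criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_losses(p_outbreak, mean_damage, years=20, n_paths=10_000):
    """Annual epidemic losses: Bernoulli occurrence times exponential damage."""
    occurs = rng.random((n_paths, years)) < p_outbreak
    return occurs * rng.exponential(mean_damage, (n_paths, years))

def certainty_equivalent(payments, risk_aversion=0.002):
    """CARA certainty-equivalent cost of a random total payment stream."""
    total = payments.sum(axis=1)
    return np.log(np.mean(np.exp(risk_aversion * total))) / risk_aversion

losses = simulate_losses(p_outbreak=0.05, mean_damage=100.0)
# Scheme A: flat pre-outbreak premium equal to the expected annual loss.
pre_outbreak = np.full_like(losses, 0.05 * 100.0)
# Scheme B: post-outbreak levy that recovers each year's realized loss.
print("pre-outbreak CE cost: ", certainty_equivalent(pre_outbreak))
print("post-outbreak CE cost:", certainty_equivalent(losses))
# For a risk-averse producer the volatile ex-post levy carries a higher
# certainty-equivalent cost than a flat ex-ante premium of equal mean.
```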
NASA Astrophysics Data System (ADS)
Schroeder, Sascha Thorsten; Costa, Ana; Obé, Elisabeth
In recent years, fuel cell based micro-combined heat and power (mCHP) has received increasing attention due to its potential contribution to European energy policy goals, i.e., sustainability, competitiveness and security of supply. Besides technical advances, the regulatory framework and ownership structures are of crucial importance in order to achieve greater diffusion of the technology in residential applications. This paper analyses the interplay of policy and ownership structures for the future deployment of mCHP, considering three country cases: Denmark, France and Portugal. Firstly, the implications of different kinds of support schemes for investment risk and the diffusion of a technology are explained conceptually. Secondly, ownership arrangements are addressed. Then, a cross-country comparison of present support schemes for mCHP and competing technologies discusses the national implementation of European legislation in Denmark, France and Portugal. Finally, the resulting implications of ownership arrangements for the choice of support scheme are explained. From a conceptual point of view, investment support, feed-in tariffs and price premiums are the most appropriate schemes for fuel cell mCHP; this finding can serve as a basis for improved analysis of operational strategies. The interaction of this plethora of elements necessitates careful balancing from a private- and socio-economic point of view.
NASA Astrophysics Data System (ADS)
Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.
2018-03-01
Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterization schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent up- and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of up- and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
A Scalar Product Model for the Multidimensional Scaling of Choice
ERIC Educational Resources Information Center
Bechtel, Gordon G.; And Others
1971-01-01
Contains a solution for the multidimensional scaling of pairwise choice when individuals are represented as dimensional weights. The analysis supplies an exact least squares solution and estimates of group unscalability parameters. (DG)
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving model physics parameterizations.
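For orientation, variance-based first-order sensitivity indices of the kind reported above can be estimated with a Saltelli-style sampling scheme. The sketch below uses the standard Ishigami benchmark function in place of the WRF parameters, which cannot be reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Ishigami function, a standard sensitivity-analysis benchmark."""
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

n, k = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, k))
fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))     # total output variance

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # A with column i taken from B
    S_i = np.mean(fB * (model(ABi) - fA)) / V   # Saltelli-style estimator
    print(f"first-order index S_{i + 1} = {S_i:.3f}")
```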
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
This work develops a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an attractive technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated approach to finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
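A rough sketch of the pattern described above: wrap a damped LSQR solve in a simplex (Nelder-Mead) search over the regularization parameter. The Jacobian, the data, and the selection criterion here are all stand-ins; the paper's actual criterion differs.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(2)
J = rng.normal(size=(200, 400))      # stand-in for the DOT Jacobian
d = rng.normal(size=200)             # stand-in for the measurement misfit

def criterion(log_lam):
    """Quantity to minimize over the regularization parameter; a crude
    L-curve-corner surrogate is used here purely for illustration."""
    lam = 10.0 ** float(log_lam[0])
    x = lsqr(J, d, damp=lam)[0]      # Tikhonov-damped LSQR solve
    return np.linalg.norm(J @ x - d) * np.linalg.norm(x)

# Simplex (Nelder-Mead) search over log10(lambda).
fit = minimize(criterion, x0=[-2.0], method="Nelder-Mead")
print("chosen regularization parameter:", 10.0 ** fit.x[0])
```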
Scheduled Relaxation Jacobi method: Improvements and applications
NASA Astrophysics Data System (ADS)
Adsuara, J. E.; Cordero-Carrión, I.; Cerdá-Durán, P.; Aloy, M. A.
2016-09-01
Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever-growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical in combining simplicity and efficiency, that has recently been introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to cut down the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential system of equations from which they result becomes notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm, and we provide parameters for SRJ schemes with up to 15 levels and resolutions of up to 2^15 points per dimension, allowing for acceleration factors larger than several hundred with respect to the Jacobi method for typical resolutions and, in some high-resolution cases, close to 1000. Most of the success in finding optimal SRJ schemes with more than 10 levels is based on an analytic reduction of the complexity of the previously mentioned system of equations. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.
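The flavor of an SRJ scheme can be conveyed with a toy 1-D Poisson problem: weighted Jacobi sweeps whose relaxation factors follow a repeating schedule that mixes strong over-relaxation with under-relaxed damping sweeps. The two-level schedule below is hand-picked for illustration; the paper's optimal parameters come from solving the algebraic-differential system it describes.

```python
import numpy as np

def weighted_jacobi_sweep(u, f, h, omega):
    """One weighted-Jacobi sweep for u'' = f with homogeneous Dirichlet BCs."""
    u_new = u.copy()
    u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
        u[:-2] + u[2:] - h**2 * f[1:-1])
    return u_new

n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(np.pi * x)                  # exact solution: -sin(pi x)/pi^2
u = np.zeros(n)

# Illustrative two-level schedule: one heavily over-relaxed sweep followed
# by sixteen under-relaxed sweeps that damp the amplified high modes.
schedule = [34.0] + [0.8] * 16
for _ in range(1000):
    for omega in schedule:
        u = weighted_jacobi_sweep(u, f, h, omega)

print("max error:", np.max(np.abs(u + np.sin(np.pi * x) / np.pi**2)))
```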
Explicit evaluation of discontinuities in 2-D unsteady flows solved by the method of characteristics
NASA Astrophysics Data System (ADS)
Osnaghi, C.
When shock waves appear in the numerical solution of flows, a choice is necessary between shock capturing techniques, possible when the equations are written in conservative form, and shock fitting techniques. If the second is preferred, e.g. in order to obtain better definition and a more physical description of the shock evolution in time, the method of characteristics is advantageous in the vicinity of the shock, and it seems natural to use this method everywhere. This choice requires improving the efficiency of the numerical scheme in order to produce competitive codes while preserving accuracy and flexibility, which are intrinsic features of the method: this is the goal of the present work.
Active Thermal Extraction and Temperature Sensing of Near-field Thermal Radiation
Ding, D.; Kim, T.; Minnich, A. J.
2016-09-06
Recently, we proposed an active thermal extraction (ATX) scheme that enables thermally populated surface phonon polaritons to escape into the far-field. The concept is based on a fluorescence upconversion process that also occurs in laser cooling of solids (LCS). Here, we present a generalized analysis of our scheme using the theoretical framework for LCS. We show that both LCS and ATX can be described with the same mathematical formalism by replacing the electron-phonon coupling parameter in LCS with the electron-photon coupling parameter in ATX. Using this framework, we compare the ideal efficiency and power extracted for the two schemes and examine the parasitic loss mechanisms. As a result, this work advances the application of ATX to manipulate near-field thermal radiation for applications such as temperature sensing and active radiative cooling.
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Likewise, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimates occurred in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and for crops whose cycles partly coincide with the rainy season (garlic, winter cereals and olive). It is concluded that, in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results within 1% of those obtained by providing spatially specific values for about 800 parcels.
NASA Technical Reports Server (NTRS)
Zhao, Fang; Veldkamp, Ted I. E.; Frieler, Katja; Schewe, Jacob; Ostberg, Sebastian; Willner, Sven; Schauberger, Bernhard; Gosling, Simon N.; Schmied, Hannes Muller; Portmann, Felix T.;
2017-01-01
Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge, which is crucial in flood simulations, has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a (Inter-Sectoral Impact Model Intercomparison Project phase 2a) project. The runoff simulations were used as input for the global river routing model CaMa-Flood (Catchment-based Macro-scale Floodplain). The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM, both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC (Global Runoff Data Centre) stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about two-thirds of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shi-bing (E-mail: wang-shibing@dlut.edu.cn); Wang, Xing-yuan (E-mail: wangxy@dlut.edu.cn); Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024
With comprehensive consideration of generalized synchronization, combination synchronization and adaptive control, this paper investigates a novel adaptive generalized combination complex synchronization (AGCCS) scheme for different real and complex nonlinear systems with unknown parameters. On the basis of Lyapunov stability theory and adaptive control, an AGCCS controller and parameter update laws are derived to achieve synchronization and parameter identification of two real drive systems and a complex response system, as well as two complex drive systems and a real response system. Two simulation examples, namely, AGCCS for chaotic real Lorenz and Chen systems driving a hyperchaotic complex Lü system, and hyperchaotic complex Lorenz and Chen systems driving a real chaotic Lü system, are presented to verify the feasibility and effectiveness of the proposed scheme.
Experimental determination of J-Q in the two-parameter characterization of fracture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, S.; Chiang, F.P.
1995-11-01
It is well recognized that using a single parameter to characterize crack tip deformation is no longer adequate if constraint is present. Several two-parameter characterization schemes have been proposed: the J-T approach, the J-Q approach of Shih et al., and the J-Q approach of Sharma and Aravas. The authors propose a scheme to measure the J and Q of the J-Q theory of Sharma and Aravas. They find that with the addition of the Q term the experimentally measured U-field displacement component agrees well with the theoretical prediction. The agreement increases as the crack tip constraint increases. The results for a SEN and a CN specimen are presented.
Choice of phase in the CS and IOS approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, R.F.
1982-04-01
With the recognition that the angular momentum representations of unit position and momentum directional states must have different but uniquely related phases, the previously presented expression of the scattering amplitude in terms of IOS angle-dependent phase shifts must be modified. This resolves a major disagreement between IOS and close-coupled degeneracy-averaged differential cross sections. It is found that the phase factors appearing in the differential cross section have nothing to do with any particular choice of decoupling parameter. As a consequence, the differential cross section is relatively insensitive to the choice of CS decoupling parameter. The phase relations obtained are also in agreement with those deduced from the Born approximation.
Schreiner, J A; Latacz-Lohmann, U
2015-11-01
This paper investigates farmers' willingness to participate in a genetically modified organism (GMO)-free milk production scheme offered by some German dairy companies. The empirical analysis is based upon discrete choice experiments with 151 dairy farmers from 2 regions in Germany. A conditional logit estimation reveals a strong positive effect of the price premium on offer. Reliable feed monitoring and free technical support increase the likelihood of scheme adoption, the latter, however, only in farms that have been receiving technical support in other fields. By contrast, any interference with the entrepreneurial autonomy of farmers, through pre-arranged feed procurement or prescriptive advice on the part of the dairy company, lowers acceptance probabilities. Farmers' attitudes toward cultivation of genetically modified soy, their assessment of the market potential of GMO-free milk and future feed prices were found to be significant determinants of adoption, as were farmer age, educational status, and current feeding regimens. Respondents requested on average a mark-up of 0.80 eurocents per kilogram of milk to accept a contract. Comparison of the estimates for the 2 regions suggests that farmers in northern Germany are, on average, more likely to convert to genetically modified-free production; however, farmers in the south are, ceteris paribus, more responsive to an increase in the price premium offered. A latent class model reveals significant differences in the valuation of scheme attributes between 2 latent classes of adopters and nonadopters. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
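The estimation machinery referenced above (conditional logit over choice tasks) reduces to maximizing a softmax log-likelihood. A self-contained sketch on synthetic data follows; the attributes and coefficients are invented, not those of the study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Synthetic choice tasks: n tasks, J alternatives, K hypothetical attributes
# (e.g. price premium, feed monitoring, technical support).
n, J, K = 500, 3, 3
X = rng.normal(size=(n, J, K))
beta_true = np.array([1.2, 0.5, -0.3])
y = (X @ beta_true + rng.gumbel(size=(n, J))).argmax(axis=1)

def neg_loglik(beta):
    """Conditional-logit negative log-likelihood."""
    v = X @ beta                            # systematic utilities
    v = v - v.max(axis=1, keepdims=True)    # numerical stabilization
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n), y].sum()

fit = minimize(neg_loglik, x0=np.zeros(K), method="BFGS")
print("estimated attribute weights:", fit.x)   # should approach beta_true
```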
Key considerations in designing a speech brain-computer interface.
Bocquelet, Florent; Hueber, Thomas; Girin, Laurent; Chabardès, Stéphan; Yvert, Blaise
2016-11-01
Restoring communication in case of aphasia is a key challenge for neurotechnologies. To this end, brain-computer strategies can be envisioned to allow artificial speech synthesis from the continuous decoding of neural signals underlying speech imagination. Such speech brain-computer interfaces do not exist yet, and their design should consider three key choices: the choice of appropriate brain regions to record neural activity from, the choice of an appropriate recording technique, and the choice of a neural decoding scheme in association with an appropriate speech synthesis method. These key considerations are discussed here in light of (1) the current understanding of the functional neuroanatomy of cortical areas underlying overt and covert speech production, (2) the available literature making use of a variety of brain recording techniques to better characterize and address the challenge of decoding cortical speech signals, and (3) the different speech synthesis approaches that can be considered depending on the level of speech representation (phonetic, acoustic or articulatory) envisioned to be decoded at the core of a speech BCI paradigm. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Rothendler, James A; Rose, Adam J; Reisman, Joel I; Berlowitz, Dan R; Kazis, Lewis E
2012-01-01
While developed for managing individuals with atrial fibrillation, risk stratification schemes for stroke, such as CHADS2, may be useful in population-based studies, including those assessing process of care. We investigated how certain decisions in identifying diagnoses from administrative data affect the apparent prevalence of CHADS2-associated diagnoses and the distribution of scores. Two sets of ICD-9 codes (more restrictive/more inclusive) were defined for each CHADS2-associated diagnosis. For stroke/transient ischemic attack (TIA), the more restrictive set was applied only to inpatient data. We varied the number of years (1-3) searched for relevant codes and, except for stroke/TIA, the number of instances (1 vs. 2) in which diagnoses were required to appear. The impact of these choices on apparent disease prevalence varied by type of choice and condition, but was often substantial. Choices resulting in substantial changes in prevalence also tended to be associated with more substantial effects on the distribution of CHADS2 scores. PMID:22937488
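For concreteness, the scoring and the kind of code-identification choices being varied can be sketched as follows; the record layout and thresholds are hypothetical, and the study's ICD-9 code sets are not reproduced here.

```python
def chads2(chf, hypertension, age_ge_75, diabetes, stroke_tia):
    """CHADS2: one point each for CHF, hypertension, age >= 75 and diabetes;
    two points for prior stroke/TIA."""
    return (int(chf) + int(hypertension) + int(age_ge_75)
            + int(diabetes) + 2 * int(stroke_tia))

def has_diagnosis(records, code_set, lookback_years=2, min_instances=1):
    """Flag a condition from administrative data under one set of choices.
    `records` is a hypothetical list of (icd9_code, years_before_index)
    tuples; varying `code_set`, `lookback_years` and `min_instances`
    mirrors the kinds of decisions examined in the study."""
    hits = [c for c, t in records if c in code_set and t <= lookback_years]
    return len(hits) >= min_instances
```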
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1996-01-01
This report describes the basic design considerations for perspective air traffic control displays. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing developments on automated viewing parameter setting (AVPS) schemes. Two distinct modes of MVPS operation are considered, both of which utilize manipulation pointers embedded in the three-dimensional scene: (1) direct manipulation of the viewing parameters -- in this mode the manipulation pointers act like the control-input device through which the viewing parameter changes are made. Some of the parameters are rate controlled, and others position controlled. This mode is intended for making fast, iterative small changes in the parameters. (2) Indirect manipulation of the viewing parameters -- this mode is intended primarily for introducing large, predetermined changes in the parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the spatial layouts of the new and the old viewing parameter settings, a feature which contributes to preventing spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system, through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer-generated scene. The proposed, continued research efforts will deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the air traffic control scene can be viewed for a given traffic situation. They determine whether a change in viewing parameter setting is required and determine the dynamic path along which the change to the new viewing parameter setting should take place.
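A small sketch of the 'slewing' transition described above: a velocity profile shaped as a half-period sinusoid under an exponential envelope, integrated and rescaled so the viewing parameter lands exactly on its commanded value. The damping constant and duration below are illustrative, not the report's values.

```python
import numpy as np

def slew(delta, duration, damping=2.0, steps=200):
    """Transition of size `delta`: the velocity is a half-period sinusoid
    under an exponential envelope, so motion starts and ends at rest; the
    displacement is rescaled to land exactly on the commanded change."""
    t = np.linspace(0.0, duration, steps)
    v = np.exp(-damping * t / duration) * np.sin(np.pi * t / duration)
    pos = np.cumsum(v) * (t[1] - t[0])       # integrate velocity
    return t, delta * pos / pos[-1]          # normalized position history

# e.g. rotate the viewing azimuth by 45 degrees over 2 seconds
t, azimuth = slew(delta=45.0, duration=2.0)
```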
A Systematic Methodology for Constructing High-Order Energy-Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2008-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter (AIAA 2008-2876, 2008) was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
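For readers unfamiliar with the weight functions being modified, the classic fifth-order WENO machinery (Jiang-Shu smoothness indicators and nonlinear weights) is sketched below; ESWENO replaces these weight functions with ones satisfying the consistency and stability properties listed above, which are not reproduced here.

```python
import numpy as np

def weno5_weights(v, eps=1e-6):
    """Classic Jiang-Shu smoothness indicators and nonlinear weights for the
    three 3-point substencils of a 5-point stencil v = (v0, ..., v4)."""
    b0 = 13/12 * (v[0] - 2*v[1] + v[2])**2 + 0.25 * (v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12 * (v[1] - 2*v[2] + v[3])**2 + 0.25 * (v[1] - v[3])**2
    b2 = 13/12 * (v[2] - 2*v[3] + v[4])**2 + 0.25 * (3*v[2] - 4*v[3] + v[4])**2
    d = np.array([0.1, 0.6, 0.3])            # ideal (linear) weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    return alpha / alpha.sum()

# Smooth data: the weights approach the ideal weights (fifth-order accuracy).
print(weno5_weights(np.array([1.0, 1.1, 1.2, 1.3, 1.4])))
# A discontinuity in a substencil drives that substencil's weight toward 0.
print(weno5_weights(np.array([1.0, 1.1, 1.2, 5.0, 5.1])))
```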
A Systematic Methodology for Constructing High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter [1] was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
Nagy-Soper subtraction scheme for multiparton final states
NASA Astrophysics Data System (ADS)
Chung, Cheng-Han; Robens, Tania
2013-04-01
In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication presenting the setup of the scheme was restricted to cases with maximally two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three-jet production at lepton colliders at next-to-leading order and present results for the differential C-parameter distribution.
Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan S.; ...
2016-04-25
Improving our ability to estimate the parameters that control water and heat fluxes in the shallow subsurface is particularly important due to their strong control on recharge, evaporation and biogeochemical processes. The objectives of this study are to develop and test a new inversion scheme to simultaneously estimate subsurface hydrological, thermal and petrophysical parameters using hydrological, thermal and electrical resistivity tomography (ERT) data. The inversion scheme, which is based on a nonisothermal, multiphase hydrological model, provides the desired subsurface property estimates at high spatiotemporal resolution. A particularly novel aspect of the inversion scheme is the explicit incorporation of the dependence of the subsurface electrical resistivity on both moisture and temperature. The scheme was applied to synthetic case studies, as well as to real datasets that were autonomously collected at a biogeochemical field study site in Rifle, Colorado. At the Rifle site, the coupled hydrological-thermal-geophysical inversion approach predicted the matric potential, temperature and apparent resistivity well, with Nash-Sutcliffe efficiencies greater than 0.92. Synthetic studies found that neglecting the subsurface temperature variability, and its effect on the electrical resistivity in the hydrogeophysical inversion, may lead to an incorrect estimation of the hydrological parameters. The approach is expected to be especially useful for the increasing number of studies that are taking advantage of autonomously collected ERT and soil measurements to explore complex terrestrial system dynamics.
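The moisture-temperature dependence at the heart of the scheme can be sketched with a standard petrophysical forward model: an Archie-type saturation law combined with the common ~2% per degree Celsius correction of fluid conductivity. The parameter values below are illustrative, not those calibrated at the Rifle site.

```python
def bulk_resistivity(sat, temp_c, porosity=0.35, rho_w25=20.0, m=1.5, n=2.0):
    """Archie-type bulk resistivity (ohm-m) as a function of water saturation
    and temperature. The ~2%/degC fluid-conductivity correction is a standard
    empirical form; all parameter values here are illustrative."""
    rho_w = rho_w25 / (1.0 + 0.02 * (temp_c - 25.0))   # temperature effect
    return rho_w * porosity ** (-m) * sat ** (-n)      # moisture effect

# Ignoring the temperature term would misattribute seasonal resistivity
# changes to moisture, biasing the hydrological parameter estimates.
print(bulk_resistivity(sat=0.6, temp_c=10.0))
```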
Decentralized digital adaptive control of robot motion
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A decentralized model reference adaptive scheme is developed for digital control of robot manipulators. The adaptation laws are derived using hyperstability theory, which guarantees asymptotic trajectory tracking despite gross robot parameter variations. The control scheme has a decentralized structure in the sense that each local controller receives only its joint angle measurement to produce its joint torque. The independent joint controllers have simple structures and can be programmed using a very simple and computationally fast algorithm. As a result, the scheme is suitable for real-time motion control.
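A toy illustration of the decentralized idea for a single joint: the controller sees only its own joint measurement, tracks a first-order reference model, and adapts one gain to an unknown inertia. This is a gradient-type sketch with invented numbers, not the hyperstability-based laws of the report.

```python
import numpy as np

dt, a_m = 1e-3, 20.0        # sample time; reference model x_dot = a_m*(cmd - x)
gamma = 50.0                # adaptation gain (illustrative)
inertia = 2.0               # true joint inertia, unknown to the controller
theta = 0.5                 # adaptive feedforward gain, ideally -> inertia
vel = vel_m = 0.0
cmd = 1.0                   # commanded joint velocity

for _ in range(5000):
    u = theta * a_m * (cmd - vel)       # torque uses only this joint's data
    vel += dt * u / inertia             # true joint dynamics
    vel_m += dt * a_m * (cmd - vel_m)   # reference model response
    err = vel - vel_m                   # local tracking error
    theta -= dt * gamma * err * a_m * (cmd - vel)   # gradient adaptation law

print(f"tracking error {err:.2e}, adapted gain {theta:.2f}")
```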
NASA Technical Reports Server (NTRS)
Lakshmanan, Balakrishnan; Tiwari, Surendra N.
1992-01-01
A robust, discontinuity-resolving TVD MacCormack scheme containing no dependent parameters requiring adjustment is presently used to investigate the 3D separation of wing/body junction flows at supersonic speeds. Many production codes employing MacCormack schemes can be adapted to use this method. A numerical simulation of laminar supersonic junction flow is found to yield improved separation location predictions, as well as the axial velocity profiles in the separated flow region.
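For context, the predictor-corrector MacCormack step that such TVD variants build on is easy to state; a minimal sketch for the inviscid Burgers equation on a periodic grid is below (the TVD flux-limiting stage of the scheme is not shown).

```python
import numpy as np

def maccormack_step(u, dt, dx):
    """Predictor-corrector MacCormack step for u_t + (u^2/2)_x = 0,
    periodic in x; the TVD variant adds a limiter stage on top of this."""
    f = 0.5 * u**2
    up = u - dt / dx * (np.roll(f, -1) - f)           # predictor: forward diff
    fp = 0.5 * up**2
    return 0.5 * (u + up - dt / dx * (fp - np.roll(fp, 1)))  # corrector: backward

n = 200
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = 1.0 + 0.5 * np.sin(x)
for _ in range(100):                                  # still pre-shock at t = 1
    u = maccormack_step(u, dt=0.01, dx=x[1] - x[0])
```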
NASA Astrophysics Data System (ADS)
Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.
2006-12-01
We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Lastly, the benchmarked test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
NASA Astrophysics Data System (ADS)
Ugon, B.; Nandong, J.; Zang, Z.
2017-06-01
The presence of unstable dead-time systems in process plants often poses a daunting challenge for the design of standard PID controllers, which are intended not only to provide closed-loop stability but also to give good overall performance and robustness. In this paper, we conduct a stability analysis of a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in the process industries. Based on the necessary and sufficient Routh-Hurwitz stability criteria, we establish several stability regions which enclose the P/PID parameter values that guarantee closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed for the purpose of obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.
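The notion of a stability region in controller-parameter space can be illustrated with a single-loop PI example: approximate the dead time by a first-order Padé term so the closed loop has a polynomial characteristic equation, then test each (Kp, Ki) pair. Checking that all polynomial roots lie in the left half-plane is equivalent to the Routh-Hurwitz conditions for a cubic; all plant numbers below are invented, and the paper's double-loop structure is not reproduced.

```python
import numpy as np

# Unstable first-order plant with dead time, G(s) = K e^{-Ls} / (tau*s - 1),
# under PI control C(s) = Kp + Ki/s, with e^{-Ls} ~ (1 - Ls/2)/(1 + Ls/2).
K, tau, L = 1.0, 1.0, 0.2

def is_stable(kp, ki):
    """Closed-loop characteristic polynomial coefficients (cubic), then a
    left-half-plane root check, equivalent to the Routh-Hurwitz test."""
    poly = [tau * L / 2,
            tau - L / 2 - K * kp * L / 2,
            -1.0 + K * (kp - ki * L / 2),
            K * ki]
    return np.all(np.real(np.roots(poly)) < 0)

kps = np.linspace(0.0, 12.0, 121)
kis = np.linspace(0.0, 12.0, 121)
region = [(kp, ki) for kp in kps for ki in kis if is_stable(kp, ki)]
print(f"{len(region)} of {kps.size * kis.size} (Kp, Ki) pairs are stable")
```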
Entanglement of remote material qubits through nonexciting interaction with single photons
NASA Astrophysics Data System (ADS)
Li, Gang; Zhang, Pengfei; Zhang, Tiancai
2018-05-01
We propose a scheme to entangle multiple material qubits through interaction with single photons via nonexciting processes associated with strongly coupled systems. The basic idea is based on the material-state-dependent reflection and transmission of the input photons. Thus, the material qubits in several systems can be entangled when one photon interacts with each system in cascade and the photon paths are mixed by the photon detection. Because the interaction does not excite the material qubits, their state is preserved, which makes it possible to purify the entangled states by using more photons under realistic, imperfect parameters. It also allows the scheme to be scaled up directly to entangle more qubits. A detailed analysis of the fidelity and success probability of the scheme is presented in the framework of a strongly coupled system based on an optical Fabry-Pérot cavity. It is shown that a two-qubit entangled state with fidelity above 0.99 can be achieved with only two photons by using currently feasible experimental parameters. Our scheme can also be directly implemented in other strongly coupled systems.
A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks
Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.
2017-01-01
This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding protocol layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results for the false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented for different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false probability, demonstrating its usefulness for detecting cross-layer attacks. PMID:28555023
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data can serve as the training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements, including single-molecule time-series trajectories.
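The first step of the scheme, building an MSM from discretized trajectories, is a simple count-and-normalize operation; a minimal sketch follows (the experimental refinement step, which is the paper's contribution, is not shown, and the random trajectory is a stand-in for real MD data).

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Maximum-likelihood MSM from a discrete state trajectory: count
    transitions at the given lag time and row-normalize."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# The stationary distribution (left eigenvector of T for eigenvalue 1) gives
# the equilibrium populations that experimental data would then refine.
dtraj = np.random.default_rng(4).integers(0, 3, size=10_000)
T = estimate_msm(dtraj, n_states=3)
w, V = np.linalg.eig(T.T)
pi = np.real(V[:, np.argmax(np.real(w))])
print("equilibrium populations:", pi / pi.sum())
```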
Talib, R; Ali, O; Arshad, F; Kadir, K A
1997-06-01
A study was undertaken in FELDA (Federal Land Development Authority) resettlement scheme areas in Pahang, Malaysia, to determine the effectiveness of group dietary counselling in motivating diabetic patients to achieve good dietary habits, and weight and diabetes control. Sixty-one non-insulin dependent diabetes mellitus (NIDDM) patients were randomly assigned to either the experimental or the control group. The experimental group received six sessions of group dietary counselling over 5 months, and the control group received a mass-media diabetes-educational program during the same period. The one-hour group dietary counselling sessions discussed general knowledge of diabetes, food groups for meal planning, the importance of dietary fibre-rich foods, types of fat in food, exercise and weight control. The experimental group met monthly with a dietitian as a counsellor. Effectiveness was assessed by improvement in food choice, and decline in percentage glycated haemoglobin (total HbA1) or body mass index (BMI). Measurements were made at a baseline visit, every two months during the six-month program, and six months afterwards. Patients in the experimental group improved their food choices, resulting in a healthier diet high in unrefined carbohydrates and dietary fibre-rich foods, and low in fat. There were significant reductions in their percentage total HbA1 levels and BMI following the counselling sessions, which decreased further six months after the program compared with patients in the control group. Thus group dietary counselling is effective in motivating NIDDM patients to achieve better food choice, and related weight and glycaemic control, in a Malaysian setting.
Machín, Leandro; Aschemann-Witzel, Jessica; Curutchet, María Rosa; Giménez, Ana; Ares, Gastón
2018-02-01
The inclusion of more attention-grabbing and easily interpretable front-of-pack (FOP) nutrition information is one of the public policies that can be implemented to empower consumers to identify unhealthful food products and to make more informed food choices. The aim of the present work was to evaluate the influence of two FOP nutrition labelling schemes - the traffic light labelling and the warning scheme - on consumer food purchases when facing a health goal. The study was conducted with 1182 people from Montevideo (Uruguay), recruited using a Facebook advertisement. Participants were randomly allocated to one of three between-subjects experimental conditions: (i) a control condition with no FOP nutrition information, (ii) FOP nutrition information using a modified version of the traffic light system including information about calorie, saturated fat, sugars and sodium content per portion, and (iii) FOP nutrition information using the Chilean warning system including separate signs for high calorie, saturated fat, sugars and sodium content. Respondents were asked to imagine that they had to purchase food in order to prepare a healthy dinner for themselves and their family, using the website of an online grocery store. Results showed that FOP nutrition information effectively improved the average healthfulness of participants' choices compared to the control condition, both in terms of the average nutritional composition of the purchased products and expenditure in specific product categories. No relevant differences between the effect of the traffic light and the warning system were found. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lang, Jun
2012-01-30
In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
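The three-pass pattern underlying the scheme is worth spelling out: any commuting, invertible transform lets two parties exchange a secret with no shared keys, because locks can be applied and removed in either order. The toy below uses modular exponentiation over a prime (the classic Shamir three-pass instantiation) in place of the paper's MPFRFT, and the message and modulus are illustrative.

```python
from random import randrange
from math import gcd

p = 2**127 - 1                       # a Mersenne prime; illustrative size

def keypair():
    """Random lock/unlock exponent pair: e*d = 1 mod (p-1)."""
    while True:
        e = randrange(3, p - 1)
        if gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

m = 123456789                        # the secret payload, encoded as an int
eA, dA = keypair()                   # sender's private lock/unlock pair
eB, dB = keypair()                   # receiver's private lock/unlock pair
pass1 = pow(m, eA, p)                # pass 1: A locks and sends
pass2 = pow(pass1, eB, p)            # pass 2: B adds a lock and returns
pass3 = pow(pass2, dA, p)            # pass 3: A removes its lock and sends
assert pow(pass3, dB, p) == m        # B removes its lock: message recovered
```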