NASA Astrophysics Data System (ADS)
Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.
2018-01-01
We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to the order O(α_s^2). The problem of scheme dependence of the D-function and the NSVZ-like equation is briefly discussed.
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results. Then a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results and the detail preservation and noise suppression of L1 regularization.
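The correction loop described above can be sketched in a few lines. Below is a minimal, illustrative Python version that pairs a Landweber-type gradient step (standing in for the paper's Tikhonov iteration) with 3-D DCT low-pass filtering and an L1 shrinkage step; the function name and the cutoff and weight parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(x, t):
    # proximal step for the L1 term: shrink values toward zero by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fmt_iterative_correction(A, y, shape, n_iter=50, cutoff=0.3, l1_weight=1e-3):
    """Gradient (Landweber) updates interleaved with 3-D DCT low-pass
    filtering and an L1 shrinkage step (illustration only)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe Landweber step size
    x = np.zeros(A.shape[1])
    # low-pass mask in the DCT domain: keep the lowest `cutoff` fraction
    grids = np.meshgrid(*[np.arange(n) / n for n in shape], indexing="ij")
    mask = (sum(g ** 2 for g in grids) <= cutoff ** 2).astype(float)
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))        # data-fidelity update
        vol = idctn(dctn(x.reshape(shape), norm="ortho") * mask, norm="ortho")
        x = soft_threshold(vol.ravel(), l1_weight)  # sparsity constraint step
    return x.reshape(shape)
```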
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
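The per-iteration GCV selection can be illustrated with the classical SVD-based formula for Tikhonov problems. The sketch below is a generic implementation for a dense system, not the paper's TV restoration code; it assumes the convention min ||Ax - b||^2 + lam^2 ||x||^2.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Return (score, lam, x) minimizing the GCV function
    G(lam) = m * ||A x_lam - b||^2 / trace(I - A_lam)^2 for the Tikhonov
    problem min ||A x - b||^2 + lam^2 ||x||^2, via one SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res0 = b @ b - beta @ beta            # residual outside the range of A
    m = A.shape[0]
    best = None
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam ** 2)              # filter factors
        residual = np.sum(((1.0 - f) * beta) ** 2) + res0
        trace = m - np.sum(f)                         # trace(I - influence matrix)
        score = m * residual / trace ** 2
        if best is None or score < best[0]:
            coef = np.divide(f * beta, s, out=np.zeros_like(s), where=s > 0)
            best = (score, lam, Vt.T @ coef)
    return best
```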
Hessian-based norm regularization for image restoration with biomedical applications.
Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael
2012-03-01
We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, and rotation and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-squares type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
NASA Astrophysics Data System (ADS)
Lu, Shih-Yuan; Yen, Yi-Ming
2002-02-01
A first-passage scheme is devised to determine the overall rate constant of suspensions under the non-diffusion-limited condition. The original first-passage scheme developed for diffusion-limited processes is modified to account for the finite incorporation rate at the inclusion surface by using a concept of the nonzero survival probability of the diffusing entity at entity-inclusion encounters. This nonzero survival probability is obtained from solving a relevant boundary value problem. The new first-passage scheme is validated by an excellent agreement between overall rate constant results from the present development and from an accurate boundary collocation calculation for the three common spherical arrays [J. Chem. Phys. 109, 4985 (1998)], namely simple cubic, body-centered cubic, and face-centered cubic arrays, for a wide range of P and f. Here, P is a dimensionless quantity characterizing the relative rate of diffusion versus surface incorporation, and f is the volume fraction of the inclusion. The scheme is further applied to random spherical suspensions and to investigate the effect of inclusion coagulation on overall rate constants. It is found that randomness in inclusion arrangement tends to lower the overall rate constant for f up to the near close-packing value of the regular arrays because of the inclusion screening effect. This screening effect becomes stronger for regular arrays when f is near and above the close-packing value of the regular arrays, and consequently the overall rate constant of the random array exceeds that of the regular array. Inclusion coagulation also induces the inclusion screening effect and leads to lower overall rate constants.
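The role of the nonzero survival probability at encounters can be demonstrated with a toy Monte Carlo, sketched below under strong simplifications (a single sphere, crude step-rejection "reflection", a finite escape radius); it is not the paper's first-passage scheme, only an illustration of partial reaction at entity-inclusion encounters.

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_fraction(a=1.0, b=2.0, r_max=10.0, q_survive=0.5,
                     n_walkers=5000, dt=1e-2, max_steps=20000):
    """Brownian walkers released at radius b around a sphere of radius a
    react at each encounter with probability (1 - q_survive); walkers
    beyond r_max count as escaped (toy estimate of the capture rate)."""
    sigma = np.sqrt(2.0 * dt)                     # rms step, unit diffusivity
    pos = np.zeros((n_walkers, 3)); pos[:, 0] = b
    alive = np.ones(n_walkers, dtype=bool)
    captured = np.zeros(n_walkers, dtype=bool)
    for _ in range(max_steps):
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            break
        trial = pos[idx] + sigma * rng.standard_normal((idx.size, 3))
        r = np.linalg.norm(trial, axis=1)
        hit = r < a                               # entity-inclusion encounters
        react = hit & (rng.random(idx.size) >= q_survive)
        captured[idx[react]] = True
        alive[idx[react]] = False
        ok = ~hit                                 # survivors keep old position
        pos[idx[ok]] = trial[ok]
        alive[idx[ok & (r > r_max)]] = False      # escaped walkers
    return captured.mean()

# Sanity check: with q_survive = 0 (diffusion-limited) the capture fraction
# should approach a/b = 0.5, up to the finite-r_max truncation.
print(capture_fraction(q_survive=0.0))
```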
Numerical simulation of a shear-thinning fluid through packed spheres
NASA Astrophysics Data System (ADS)
Liu, Hai Long; Moon, Jong Sin; Hwang, Wook Ryol
2012-12-01
Flow behaviors of a non-Newtonian fluid in spherical microstructures have been studied by direct numerical simulation. The flow of a shear-thinning (power-law) fluid through both regularly and randomly packed spheres has been numerically investigated in a representative unit cell with tri-periodic boundary conditions, employing a rigorous three-dimensional finite-element scheme combined with fictitious-domain mortar-element methods. The present scheme has been validated against literature results for classical spherical packing problems. The flow mobility of regular packing structures, including simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC), as well as randomly packed spheres, has been investigated quantitatively by considering the amount of shear-thinning, the pressure gradient, and the porosity as parameters. Furthermore, the mechanism leading to the main flow path in a highly shear-thinning fluid through randomly packed spheres has been discussed.
NASA Astrophysics Data System (ADS)
Kazantsev, A. E.; Shakhmanov, V. Yu.; Stepanyantz, K. V.
2018-04-01
We investigate a recently proposed new form of the exact NSVZ β-function, which relates the β-function to the anomalous dimensions of the quantum gauge superfield, of the Faddeev-Popov ghosts, and of the chiral matter superfields. Namely, for the general renormalizable N = 1 supersymmetric gauge theory, regularized by higher covariant derivatives, the sum of all three-loop contributions to the β-function containing the Yukawa couplings is compared with the corresponding two-loop contributions to the anomalous dimensions of the quantum superfields. It is demonstrated that for the considered terms both new and original forms of the NSVZ relation are valid independently of the subtraction scheme if the renormalization group functions are defined in terms of the bare couplings. This result is obtained from the equality relating the loop integrals, which, in turn, follows from the factorization of the integrals for the β-function into integrals of double total derivatives. For the renormalization group functions defined in terms of the renormalized couplings we verify that the NSVZ scheme is obtained with the higher covariant derivative regularization supplemented by the subtraction scheme in which only powers of ln Λ/μ are included into the renormalization constants.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color image channel independently, thereby ignoring the interchannel correlation present in the color images. In view of this, a unified regularization scheme for images is developed to recover the edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
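The L-curve corner selection mentioned above can be sketched as follows: sweep the regularization parameter, record the log residual norm against the log solution norm, and pick the point of maximum curvature. This generic SVD-based sketch (not the SXRIS GSVD implementation) assumes the convention min ||Ax - b||^2 + lam^2 ||x||^2.

```python
import numpy as np

def l_curve_corner(A, b, lambdas):
    """Pick the Tikhonov parameter at the corner of the L-curve, i.e. the
    maximum-curvature point of (log ||Ax - b||, log ||x||)."""
    lambdas = np.asarray(lambdas, dtype=float)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    rho, eta = np.empty(lambdas.size), np.empty(lambdas.size)
    for i, lam in enumerate(lambdas):
        f = s ** 2 / (s ** 2 + lam ** 2)          # Tikhonov filter factors
        coef = np.divide(f * beta, s, out=np.zeros_like(s), where=s > 0)
        rho[i] = np.log(np.linalg.norm((1.0 - f) * beta) + 1e-30)
        eta[i] = np.log(np.linalg.norm(coef) + 1e-30)
    t = np.log(lambdas)
    dr, de = np.gradient(rho, t), np.gradient(eta, t)
    ddr, dde = np.gradient(dr, t), np.gradient(de, t)
    kappa = (dr * dde - ddr * de) / (dr ** 2 + de ** 2) ** 1.5
    return lambdas[np.argmax(kappa)]
```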
Effective field theory dimensional regularization
NASA Astrophysics Data System (ADS)
Lehmann, Dirk; Prézeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
Dynamic coupling of subsurface and seepage flows solved within a regularized partition formulation
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Erhel, J.
2017-11-01
Hillslope response to precipitation is characterized by sharp transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Locally, the transition between these two regimes is triggered by soil saturation. Here we develop an integrative approach to simultaneously solve the subsurface flow, locate the potential fully saturated areas and deduce the generated saturation excess overland flow. This approach combines the different dynamics and transitions in a single partition formulation using discontinuous functions. We propose to regularize the system of partial differential equations and to use classic spatial and temporal discretization schemes. We illustrate our methodology on the 1D hillslope storage Boussinesq equations (Troch et al., 2003). We first validate the numerical scheme on previous numerical experiments without saturation excess overland flow. Then we apply our model to a test case with dynamic transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Our results show that the discretization respects mass balance both locally and globally and converges when the mesh or time step is refined. Moreover, the regularization parameter can be taken small enough to ensure accuracy without suffering from numerical artefacts. Applied to several hundred realistic hillslope cases from the western side of France (Brittany), the developed method appears to be robust and efficient.
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
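For context, the density-matrix regularization that the paper improves upon is commonly implemented by flooring the eigenvalues of the single-particle density matrix before inversion. A minimal sketch of that standard practice follows (the eps value is an assumed placeholder; the paper's coefficient-tensor regularization is a different, improved scheme).

```python
import numpy as np

def regularized_inverse_density(rho, eps=1e-8):
    """Invert a single-particle density matrix whose eigenvalues may vanish,
    using the customary MCTDH-style floor lam -> lam + eps*exp(-lam/eps)
    applied in the eigenbasis (a sketch of the usual practice)."""
    lam, V = np.linalg.eigh(rho)               # rho is Hermitian
    lam_reg = lam + eps * np.exp(-lam / eps)   # unoccupied orbitals get ~eps
    return (V / lam_reg) @ V.conj().T          # V diag(1/lam_reg) V^dagger

# Example: a 3x3 density with one unoccupied natural orbital
rho = np.diag([0.7, 0.3, 0.0])
print(regularized_inverse_density(rho))
```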
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
Constrained H1-regularization schemes for diffeomorphic image registration
Mang, Andreas; Biros, George
2017-01-01
We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient.
On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
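The iteratively reweighted norm idea used for the L1-norm data term can be sketched generically: each sweep solves a weighted least-squares problem whose weights shrink the influence of large residuals (e.g., electrodes with lost signal). This is an illustrative template, not the authors' exact algorithm.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    """Approximately solve min_x ||A x - b||_1 by iteratively reweighted
    least squares: each pass solves a weighted L2 problem with weights
    1/max(|residual|, eps), mimicking an L1 data term."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain L2 start
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # large residuals downweighted
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

Because the weights decay as 1/|r_i|, a grossly corrupted measurement contributes almost nothing after a few sweeps, which is consistent with the robustness the study reports.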
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.
1991-01-01
A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
Finite-Difference Lattice Boltzmann Scheme for High-Speed Compressible Flow: Two-Dimensional Case
NASA Astrophysics Data System (ADS)
Gan, Yan-Biao; Xu, Ai-Guo; Zhang, Guang-Cai; Zhang, Ping; Zhang, Lei; Li, Ying-Jun
2008-07-01
Lattice Boltzmann (LB) modeling of high-speed compressible flows has long been attempted by various authors. One common weakness of most previous models is the instability problem when the Mach number of the flow is large. In this paper we present a finite-difference LB model, which works for flows with flexible ratios of specific heats and a wide range of Mach number, from 0 to 30 or higher. Besides the discrete-velocity model by Watari [Physica A 382 (2007) 502], a modified Lax–Wendroff finite difference scheme and an artificial viscosity are introduced. The combination of the finite-difference scheme and the added artificial viscosity must strike a balance between numerical stability and accuracy. The proposed model is validated by recovering results of some well-known benchmark tests: shock tubes and shock reflections. The new model may be used to track shock waves and/or to study the non-equilibrium procedure in the transition between the regular and Mach reflections of shock waves, etc.
Proper time regularization and the QCD chiral phase transition
Cui, Zhu-Fang; Zhang, Jin-Li; Zong, Hong-Shi
2017-01-01
We study the QCD chiral phase transition at finite temperature and finite quark chemical potential within the two flavor Nambu–Jona-Lasinio (NJL) model, where a generalization of the proper-time regularization scheme is motivated and implemented. We find that in the chiral limit the whole transition line in the phase diagram is of second order, whereas for finite quark masses a crossover is observed. Moreover, if we take into account the influence of the quark condensate on the coupling strength (which also provides a possible way of how the effective coupling varies with temperature and quark chemical potential), it is found that a critical end point (CEP) may appear. These findings differ substantially from other NJL results which use alternative regularization schemes; some explanation and discussion are given at the end. This indicates that the regularization scheme can have a dramatic impact on the study of the QCD phase transition within the NJL model.
γ5 in the four-dimensional helicity scheme
NASA Astrophysics Data System (ADS)
Gnendiger, C.; Signer, A.
2018-05-01
We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.
Characterizing the functional MRI response using Tikhonov regularization.
Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E
2007-09-20
The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ2-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data.
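The interplay between smoothing and residual whitening can be illustrated with a simple penalized least-squares smoother in place of the B-spline expansion; the second-difference penalty and the lag-1 autocorrelation criterion below are assumptions chosen to keep the sketch short, not the paper's exact criteria.

```python
import numpy as np

def smooth_tikhonov(y, lam):
    """Penalized least-squares smoother: x = argmin ||x - y||^2 + lam ||D2 x||^2,
    a rough stand-in for the regularized B-spline fit."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)          # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)

def lag1_autocorr(r):
    r = r - r.mean()
    return (r[:-1] @ r[1:]) / (r @ r)

# Whitening-style selection: increase lam until the residual lag-1
# autocorrelation is pushed toward zero (illustrative criterion only).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(6.0 * t) + 0.1 * rng.standard_normal(200)
for lam in [0.1, 1.0, 10.0, 100.0]:
    x = smooth_tikhonov(y, lam)
    print(lam, round(lag1_autocorr(y - x), 3))
```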
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By the introduction of the Tikhonov regularization (TR) methodology, in this paper a loss function that emphasizes the robustness of the estimation and the low rank property of the imaging targets is put forward to convert the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
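The split Bregman update loop that inspires the paper's solver has a compact generic form for an L1-regularized least-squares problem; the sketch below shows that template only (the paper's loss additionally emphasizes robustness and the low-rank property of the targets, which this sketch omits).

```python
import numpy as np

def split_bregman_l1(A, y, mu=10.0, beta=1.0, n_iter=100):
    """Split Bregman iteration for min_x mu/2 ||A x - y||^2 + ||x||_1."""
    n = A.shape[1]
    M = mu * (A.T @ A) + beta * np.eye(n)       # fixed system matrix
    Aty = A.T @ y
    x = np.zeros(n); d = np.zeros(n); b = np.zeros(n)
    for _ in range(n_iter):
        # quadratic subproblem for x
        x = np.linalg.solve(M, mu * Aty + beta * (d - b))
        # shrinkage (soft-threshold) subproblem for d
        d = np.sign(x + b) * np.maximum(np.abs(x + b) - 1.0 / beta, 0.0)
        b = b + x - d                           # Bregman variable update
    return x
```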
NASA Astrophysics Data System (ADS)
Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven
2018-07-01
We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on these topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
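As a point of reference for the time discretization, a minimal semi-implicit Fourier-spectral stepper for the isotropic Cahn-Hilliard equation is sketched below; it treats the stiff biharmonic term implicitly with an assumed stabilization constant S, and is deliberately simpler than the paper's convex-splitting nonlinear multigrid scheme with adaptive mesh refinement.

```python
import numpy as np

def cahn_hilliard_2d(n=128, eps=0.05, dt=1e-4, steps=200, S=2.0, seed=0):
    """Semi-implicit Fourier-spectral stepper for
    u_t = lap(u^3 - u - eps^2 lap u) on a periodic unit square,
    with the nonlinearity explicit and a linear stabilization term S."""
    rng = np.random.default_rng(seed)
    u = 0.1 * rng.standard_normal((n, n))            # random initial mixture
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    denom = 1.0 + dt * S * k2 + dt * eps ** 2 * k2 ** 2
    for _ in range(steps):
        nl = np.fft.fft2(u ** 3 - u)                 # explicit nonlinearity
        u_hat = (np.fft.fft2(u) * (1.0 + dt * S * k2) - dt * k2 * nl) / denom
        u = np.real(np.fft.ifft2(u_hat))
    return u
```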
Reputation-Based Secure Sensor Localization in Wireless Sensor Networks
He, Jingsha; Xu, Jing; Zhu, Xingye; Zhang, Yuqiang; Zhang, Ting; Fu, Wanqing
2014-01-01
Location information of sensor nodes in wireless sensor networks (WSNs) is very important, for it makes information that is collected and reported by the sensor nodes spatially meaningful for applications. Since most current sensor localization schemes rely on location information that is provided by beacon nodes for the regular sensor nodes to locate themselves, the accuracy of localization depends on the accuracy of location information from the beacon nodes. Therefore, the security and reliability of the beacon nodes become critical in the localization of regular sensor nodes. In this paper, we propose a reputation-based security scheme for sensor localization to improve the security and the accuracy of sensor localization in hostile or untrusted environments. In our proposed scheme, the reputation of each beacon node is evaluated based on a reputation evaluation model so that regular sensor nodes can get credible location information from highly reputable beacon nodes to accomplish localization. We also perform a set of simulation experiments to demonstrate the effectiveness of the proposed reputation-based security scheme. Our simulation results show that the proposed security scheme can enhance the security and, hence, improve the accuracy of sensor localization in hostile or untrusted environments.
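The idea of weighting beacon information by reputation can be sketched with a beta-reputation score feeding a weighted, linearized trilateration; both the reputation model and the weighting below are illustrative assumptions, not the paper's exact evaluation model.

```python
import numpy as np

def beta_reputation(successes, failures):
    """Beta-reputation score E[Beta(s+1, f+1)] per beacon (generic stand-in)."""
    return (successes + 1.0) / (successes + failures + 2.0)

def weighted_trilateration(anchors, dists, w):
    """Linearized least-squares localization: subtract the last range
    equation to remove the quadratic term, then solve weighted LS so that
    low-reputation beacons contribute less."""
    xn, yn = anchors[-1]
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (dists[-1] ** 2 - dists[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - xn ** 2 - yn ** 2)
    Wa = A * w[:-1, None]
    return np.linalg.solve(A.T @ Wa, Wa.T @ b)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true_pos, axis=1)
d[1] += 3.0                                    # beacon 1 reports a bad range
w = beta_reputation(np.array([9.0, 1.0, 9.0, 9.0]), np.array([1.0, 9.0, 1.0, 1.0]))
print(weighted_trilateration(anchors, d, w))   # close to (3, 4) despite the liar
```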
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal ℓp-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order α ∈ (0, 2), α ≠ 1, in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
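The backward Euler convolution quadrature analyzed in the paper can be sketched for the scalar model problem D^alpha u = -u (Caputo derivative, u(0) = u0); the weights are the Taylor coefficients of (1 - z)^alpha, and the step counts below are arbitrary choices for illustration.

```python
import numpy as np

def bdf1_cq_weights(alpha, n):
    """Convolution quadrature weights: coefficients of (1 - z)^alpha."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (j - 1 - alpha) / j
    return c

def solve_fractional_decay(alpha=0.5, T=1.0, n=200, u0=1.0):
    """Backward-Euler CQ for the Caputo model problem D^alpha u = -u."""
    tau = T / n
    c = bdf1_cq_weights(alpha, n)
    u = np.empty(n + 1); u[0] = u0
    scale = tau ** (-alpha)
    for m in range(1, n + 1):
        # history sum over c_j (u_{m-j} - u0); the u0 shift gives Caputo
        hist = np.dot(c[1:m + 1], u[m - 1::-1] - u0)
        u[m] = scale * (c[0] * u0 - hist) / (scale * c[0] + 1.0)
    return u

print(solve_fractional_decay()[-1])   # ~ Mittag-Leffler decay E_alpha(-T^alpha)
```

Setting alpha = 1 recovers the classical backward Euler update u_m = u_{m-1}/(1 + tau), a quick consistency check on the weights.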
Biswas, Samir Kumar; Kanhirodan, Rajan; Vasu, Ram Mohan; Roy, Debasish
2011-08-01
We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with the pseudotime integration appears to yield not only faster convergence but also a muted sensitivity to the regularization parameters, which include the pseudotime step size for integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data.
Dimension-5 C P -odd operators: QCD mixing and renormalization
Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; ...
2015-12-23
Here, we study the off-shell mixing and renormalization of flavor-diagonal dimension-five T- and P-odd operators involving quarks, gluons, and photons, including quark electric dipole and chromoelectric dipole operators. Furthermore, we present the renormalization matrix to one loop in the MS-bar scheme. We also provide a definition of the quark chromoelectric dipole operator in a regularization-independent momentum-subtraction scheme suitable for nonperturbative lattice calculations and present the matching coefficients with the MS-bar scheme to one loop in perturbation theory, using both the naïve dimensional regularization and 't Hooft–Veltman prescriptions for γ5.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein's unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction.
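The Monte-Carlo SURE recipe reduces, for plain denoising, to a single random-probe divergence estimate. The sketch below tunes a soft-threshold denoiser this way; the paper's version uses a weighted k-space error for undersampled, complex-valued MRI, which this toy omits.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def monte_carlo_sure(y, sigma, recon, eps=1e-3, seed=0):
    """SURE = ||f(y) - y||^2 - N sigma^2 + 2 sigma^2 div f(y), with the
    divergence of the black-box reconstruction f estimated by one probe:
    div f(y) ~ n . (f(y + eps n) - f(y)) / eps."""
    rng = np.random.default_rng(seed)
    n = rng.standard_normal(y.shape)
    fy = recon(y)
    div = n.ravel() @ ((recon(y + eps * n) - fy).ravel()) / eps
    return np.sum((fy - y) ** 2) - y.size * sigma ** 2 + 2.0 * sigma ** 2 * div

# Tune a soft-threshold denoiser on noisy data without using the truth:
rng = np.random.default_rng(1)
x_true = np.zeros(500); x_true[::50] = 5.0
sigma = 1.0
y = x_true + sigma * rng.standard_normal(500)
for lam in [0.5, 1.0, 2.0, 3.0]:
    sure = monte_carlo_sure(y, sigma, lambda z: soft(z, lam))
    mse = np.sum((soft(y, lam) - x_true) ** 2)     # oracle, for comparison only
    print(lam, round(sure, 1), round(mse, 1))
```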
Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids
NASA Astrophysics Data System (ADS)
Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.
2017-12-01
Stabilized gradient-based methods have been proved to be efficient for inverse problems. Based on these methods, setting the gradient close to zero can effectively minimize the objective function. Thus the gradient of the objective function determines the inversion results. By analyzing the cause of poor depth resolution in gradient-based gravity inversion methods, we find that imposing a depth weighting functional in the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1 (a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient can improve the depth resolution more efficiently, is independent of the regularization parameter, and the effect of the regularization term is not weakened when depth increases. Besides, the fuzzy c-means clustering method and a smooth operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1 (b)). Acknowledgement: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
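The depth-weighting idea that motivates the new gradient can be illustrated on a toy kernel: dividing the misfit gradient by squared Li-Oldenburg-type weights boosts updates to deep cells. The kernel, weights, and line search below are illustrative assumptions, not the paper's weighted-model-vector scheme on unstructured grids.

```python
import numpy as np

def depth_weighting(z, z0=1.0, beta=2.0):
    """Li-Oldenburg-type weights w(z) = (z + z0)^(-beta/2)."""
    return (z + z0) ** (-beta / 2.0)

# toy 1D column of cells with an exponentially decaying sensitivity kernel
z = np.linspace(0.5, 10.0, 40)                        # cell depths
G = np.exp(-np.outer(np.linspace(0.0, 3.0, 25), z))   # rows decay with depth
m_true = np.zeros(40); m_true[30] = 1.0               # deep source
d = G @ m_true
w2 = depth_weighting(z) ** 2
m = np.zeros(40)
for _ in range(200):                                  # preconditioned descent
    g = G.T @ (G @ m - d)                             # conventional gradient
    p = g / w2                                        # depth-compensated direction
    alpha = (g @ p) / ((G @ p) @ (G @ p) + 1e-30)     # exact line search
    m -= alpha * p
```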
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al., 2015b formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to and sometimes better than the established propagation-based technique.
Adiabatic regularization for gauge fields and the conformal anomaly
NASA Astrophysics Data System (ADS)
Chu, Chong-Sun; Koyama, Yoji
2017-03-01
Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way using Wentzel-Kramers-Brillouin-type (WKB-type) solutions as for the scalar fields. As an application of the adiabatic method, we compute the trace of the energy momentum tensor and reproduce the known result for the conformal anomaly obtained by the other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang-Mills theory, inflation, and cosmology.
On the convergence of nonconvex minimization methods for image recovery.
Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei
2015-05-01
Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN
NASA Astrophysics Data System (ADS)
Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri
2017-03-01
Triangular surfaces are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the mesh geometry is often very dense, it is necessary to remesh the object to reduce its complexity and to improve the mesh quality (connectivity regularity). In this paper, we review the main semi-regular remeshing methods of the state of the art, given that semi-regular remeshing is mainly relevant for wavelet-based compression. We then present our remeshing method based on trust-region spherical geometry images, which provides a good 3D mesh compression scheme used to deform 3D meshes based on a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression capabilities with a minimal set of features used to obtain a good 3D deformation scheme.
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
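A smoothed version of such an Lq-Lp objective makes the gradient needed by a quasi-Newton solver straightforward; the eps-smoothing below is an assumption to keep the objective differentiable at zero and is not taken from the paper.

```python
import numpy as np

def lq_lp_gradient(x, A, b, q=1.5, p=1.0, lam=1e-2, eps=1e-8):
    """Gradient of the smoothed objective
    J(x) = sum_i (r_i^2 + eps)^(q/2) + lam * sum_j (x_j^2 + eps)^(p/2),
    with r = A x - b; eps keeps J differentiable so quasi-Newton methods
    (the paper uses lm-BFGS) apply for any q, p >= 1."""
    r = A @ x - b
    grad_data = A.T @ (q * (r ** 2 + eps) ** (q / 2.0 - 1.0) * r)
    grad_reg = lam * p * (x ** 2 + eps) ** (p / 2.0 - 1.0) * x
    return grad_data + grad_reg

# e.g. pass as jac to scipy.optimize.minimize(..., method="L-BFGS-B")
```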
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm^-1, and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm^-1. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution.
Sun, Jin; Kelbert, Anna; Egbert, G.D.
2015-01-01
Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.
NASA Astrophysics Data System (ADS)
Geng, Weihua; Zhao, Shan
2017-12-01
We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintains the same second order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere on which the analytical solutions are available and on a series of proteins with various sizes.
A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.
Hughes, Alec; Hynynen, Kullervo
2016-12-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
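The Tikhonov-regularized focusing step itself can be sketched as a damped least-squares solve for complex element drives against prescribed pressures at control points; the free-field Green's function model, geometry, and frequency below are illustrative assumptions, not the authors' array.

```python
import numpy as np

def focusing_weights(elem_pos, ctrl_pts, target_p, k, lam=1e-2):
    """Tikhonov-regularized least-squares drives:
    u = argmin ||A u - p||^2 + lam^2 ||u||^2, where A maps element
    amplitudes to control-point pressures via free-field Green's functions.
    Prescribing p over a small tilted patch sketches the focus-rotation idea;
    lam trades focal quality against array efficiency."""
    d = np.linalg.norm(ctrl_pts[:, None, :] - elem_pos[None, :, :], axis=2)
    A = np.exp(1j * k * d) / (4.0 * np.pi * d)       # Green's function matrix
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam ** 2 * np.eye(n),
                           A.conj().T @ target_p)

# 64 random elements focusing unit pressure on a short tilted line segment
rng = np.random.default_rng(0)
elems = rng.uniform(-0.05, 0.05, (64, 3)); elems[:, 2] = 0.0
t = np.linspace(-1.0, 1.0, 7)[:, None]
ctrl = np.array([0.0, 0.0, 0.08]) + t * np.array([0.002, 0.0, 0.002])
u = focusing_weights(elems, ctrl, np.ones(7, complex), k=2 * np.pi * 1e6 / 1500)
```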
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almeida, Leandro G.; Sturm, Christian
2010-09-01
Light quark masses can be determined through lattice simulations in regularization invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used in order to convert these quark masses from an RI/MOM scheme to the MS-bar scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_γμ schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For n_f = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with the insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.
Stimulated Deep Neural Network for Speech Recognition
2016-09-08
making network regularization and robust adaptation challenging. Stimulated training has recently been proposed to address this problem by encouraging...potential to improve regularization and adaptation. This paper investigates stimulated training of DNNs for both of these options. These schemes take
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared with typical benchmark schemes.
NASA Astrophysics Data System (ADS)
Ravat, D.; Purucker, M.; Olsen, N.; Finlay, C.
2017-12-01
We derive new models of the lunar crustal magnetic field at the lunar surface with data from the Lunar Prospector (LP) and SELENE/Kaguya (K) satellites using a global set of 35820 1° equal area monopoles (O'Brien and Parker, 1994; Olsen et al., 2017). The resulting fields have similar features to surface fields obtained by Tsunakawa et al. (2015) using 230 subset regions, and the primary differences are due to our stringent data selection (see below). The use of monopoles allows closer spacing than dipoles with a lesser amount of regularization and moderate cluster computer resources. We use the scheme of iteratively reweighted least-squares inversion to compute the initial model. Then the amplitudes of these monopoles are determined by minimizing the misfit to the components together with the global average of |Br| at the ellipsoid surface (i.e. applying an L1 model regularization of Br). In a final step we transform the point-source representation to a spherical harmonic expansion. We extract high quality data segments using a processing scheme based on internal/external dipole field removal, low order polynomial removal, and a new processing scheme called Joint Equivalent Source Cross-validation. In the cross-validation procedure we analyze the fit of modeled components to data in 10° latitudinal segments from an inversion of triplets of nearby passes to a single set of dipoles along the passes. We evaluate the fit using four criteria in each segment: correlation coefficient, amplitude ratio, RMS of the misfit, and standard deviation of the field values themselves. We fine-tune the criteria to the choice we would have made in visually retaining pass segments, and this yields a global dataset of more than 2.87 million (× 3 components) points at altitudes <60 km. The selected Lunar Prospector and Kaguya magnetic data independently show similar features and statistics for altitudes, observed and modeled components, and their misfit (number of observation locations: LP 1.8 million and K 1.07 million, × 3 components). We use these data to make a regional assessment of key magnetic features on the Moon (including impacts and swirls), the depth of magnetization of regional sources, and source parameters of isolated anomalies.
ATHENA 3D: A finite element code for ultrasonic wave propagation
NASA Astrophysics Data System (ADS)
Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.
2014-04-01
The understanding of wave propagation phenomena requires the use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming. However, advances in computing processor speed and memory allow them to be more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving elastodynamic equations in the calculation zone expressed in terms of stress and particle velocities. A distinctive feature of the code is that the calculation domain is discretized on a regular Cartesian 3D mesh, while a defect of complex geometry can be described on a separate (2D) mesh through the fictitious domains method. This combines the speed of regular-mesh computation with the capability of modelling arbitrarily shaped defects. Furthermore, time evolution uses a quasi-explicit scheme, so only small local linear systems have to be solved. The final step to reduce the computation time relies on the fact that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. The performance in terms of calculation time is also presented for both local-computer and computation-cluster use.
An improved snow scheme for the ECMWF land surface model: Description and offline validation
Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder
2010-01-01
A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...
Adaptive Finite Element Modeling Techniques for the Poisson-Boltzmann Equation
Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Youngcheng; Zhu, Yunrong
2011-01-01
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm are also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541
NASA Technical Reports Server (NTRS)
Aires, F.; Chedin, A.; Scott, N. A.; Rossow, W. B.; Hansen, James E. (Technical Monitor)
2001-01-01
In this paper, a fast atmospheric and surface temperature retrieval algorithm is developed for the high-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. This algorithm is constructed on the basis of a neural network technique that has been regularized by introduction of a priori information. The performance of the resulting fast and accurate inverse radiative transfer model is presented for a large diversified dataset of radiosonde atmospheres including rare events. Two configurations are considered: a tropical-airmass specialized scheme and an all-air-masses scheme.
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can also serve as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization, or avoiding the renormalization of ɛ-scalars in dimensional reduction.
Program summary
Program title: MSSMdreg2dred.mod
Catalogue identifier: AEKR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: LGPL-License [1]
No. of lines in distributed program, including test data, etc.: 7600
No. of bytes in distributed program, including test data, etc.: 197 629
Distribution format: tar.gz
Programming language: Mathematica, FeynArts
Computer: Any, capable of running Mathematica and FeynArts
Operating system: Any, with running Mathematica, FeynArts installation
Classification: 4.4, 5, 11.1
Subprograms used: ADOW_v1_0 (FeynArts), CPC 140 (2001) 418
Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
Running time: A few seconds to generate typical Feynman graphs with FeynArts.
Spatially multiplexed interferometric microscopy with partially coherent illumination
NASA Astrophysics Data System (ADS)
Picazo-Bueno, José Ángel; Zalevsky, Zeev; García, Javier; Ferreira, Carlos; Micó, Vicente
2016-10-01
We have recently reported on a simple, low-cost, and highly stable way to convert a standard microscope into a holographic one [Opt. Express 22, 14929 (2014)]. The method, named spatially multiplexed interferometric microscopy (SMIM), proposes an off-axis holographic architecture implemented onto a regular (nonholographic) microscope with minimal modifications: the use of coherent illumination and a properly placed and selected one-dimensional diffraction grating. In this contribution, we report on the implementation of partially (temporally reduced) coherent illumination in SMIM as a way to improve quantitative phase imaging. The use of low-coherence sources forces the application of a phase-shifting algorithm instead of off-axis holographic recording to recover the sample's phase information, but improves phase reconstruction owing to the reduction of coherence noise. In addition, a less restrictive field-of-view limitation (1/2 instead of the 1/3 of our previously reported scheme) is implemented. The proposed modification is experimentally validated in a regular Olympus BX-60 upright microscope considering a wide range of samples (resolution test, microbeads, swine sperm cells, red blood cells, and prostate cancer cells).
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
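The alternation the abstract describes, mixture-model estimation interleaved with a variable mean-and-variance Tikhonov step, can be sketched generically. The toy below uses a dense linear forward operator A and a two-class Gaussian mixture; all names and parameters are illustrative stand-ins, not the DOT model.

```python
import numpy as np

def em_tikhonov(A, y, n_outer=10, alpha=1.0):
    """Alternate mixture-model estimation and variable-mean Tikhonov updates."""
    n = A.shape[1]
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    mu = np.array([x.min(), x.max()])             # crude two-class initialization
    var = np.full(2, x.var() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(n_outer):
        # E-step: class responsibilities for each pixel value
        ll = -0.5 * (x[:, None] - mu) ** 2 / var - 0.5 * np.log(var) + np.log(pi)
        r = np.exp(ll - ll.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: refresh mixture parameters
        nk = r.sum(axis=0) + 1e-8
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / n
        # Reconstruction step: Tikhonov with per-pixel prior mean and variance
        m = (r * mu).sum(axis=1)                  # soft prior mean per pixel
        v = (r * var).sum(axis=1)                 # soft prior variance per pixel
        W = np.diag(alpha / v)
        x = np.linalg.solve(A.T @ A + W, A.T @ y + W @ m)
    return x, mu, var

rng = np.random.default_rng(2)
A = rng.standard_normal((300, 100))
x_true = np.where(rng.random(100) < 0.3, 1.0, 0.1)   # two-class "image"
y = A @ x_true + 0.05 * rng.standard_normal(300)
x_rec, mu, var = em_tikhonov(A, y)
print(np.round(mu, 2))                            # recovered class means
```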
MIB Galerkin method for elliptic interface problems.
Xia, Kelin; Zhan, Meng; Wei, Guo-Wei
2014-12-15
Material interfaces are omnipresent in real-world structures and devices. Mathematical modeling of material interfaces often leads to elliptic partial differential equations (PDEs) with discontinuous coefficients and singular sources, which are commonly called elliptic interface problems. The development of high-order numerical schemes for elliptic interface problems has become a well-defined field in applied and computational mathematics and has attracted much attention in the past decades. Despite significant advances, challenges remain in the construction of high-order schemes for nonsmooth interfaces, i.e., interfaces with geometric singularities, such as tips, cusps and sharp edges. The challenge of geometric singularities is amplified when they are associated with low solution regularities, e.g., tip-geometry effects in many fields. The present work introduces a matched interface and boundary (MIB) Galerkin method for solving two-dimensional (2D) elliptic PDEs with complex interfaces, geometric singularities and low solution regularities. Cartesian-grid-based triangular elements are employed to avoid the time-consuming mesh generation procedure. Consequently, the interface cuts through elements. To ensure the continuity of classic basis functions across the interface, two sets of overlapping elements, called MIB elements, are defined near the interface. As a result, differentiation can be computed near the interface as if there were no interface. Interpolation functions are constructed on MIB element spaces to smoothly extend function values across the interface. A set of lowest-order interface jump conditions is enforced on the interface, which, in turn, determines the interpolation functions. The performance of the proposed MIB Galerkin finite element method is validated by numerical experiments with a wide range of interface geometries, geometric singularities, low-regularity solutions and grid resolutions. Extensive numerical studies confirm the designed second-order convergence of the MIB Galerkin method in the L∞ and L2 errors. Some of the best results are obtained in the present work when the interface is C1 or Lipschitz continuous and the solution is C2 continuous.
NASA Astrophysics Data System (ADS)
Kazakova, E. I.; Medvedev, A. N.; Kolomytseva, A. O.; Demina, M. I.
2017-11-01
The paper presents a mathematical model of blasting-scheme management in the presence of random disturbances. Based on the lemmas and theorems proved, a stable control functional is formulated. A universal classification of blasting schemes is developed. The main classification attributes are suggested: the orientation, in plan, of the rows of charging wells relative to the rock block; the presence of cuts in the blasting schemes; the separation of the well series into elements; and the sequence of blasting. The periodic regularity of the transition from one short-delay blasting scheme to another is proved.
Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews
2016-06-01
The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme in recovering dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually improved reconstructions compared with the other schemes. The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal blurring and spatial blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. The comparisons of the tidal volume changes also show that the BCS scheme has less temporal blurring than the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free-breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole-lung coverage (16 slices), were achieved using the BCS scheme.
The Capra Research Program for Modelling Extreme Mass Ratio Inspirals
NASA Astrophysics Data System (ADS)
Thornburg, Jonathan
2011-02-01
Suppose a small compact object (black hole or neutron star) of mass m orbits a large black hole of mass M ≫ m. This system emits gravitational waves (GWs) that have a radiation-reaction effect on the particle's motion. EMRIs (extreme-mass-ratio inspirals) of this type will be important GW sources for LISA. Fully analyzing these GWs, and detecting weaker sources also present in the LISA data stream, will require highly accurate EMRI GW templates. In this article I outline the "Capra" research program to try to model EMRIs and calculate their GWs ab initio, assuming only that m ≪ M and that the Einstein equations hold. Because m ≪ M the timescale for the particle's orbit to shrink is too long for a practical direct numerical integration of the Einstein equations, and because this orbit may be deep in the large black hole's strong-field region, a post-Newtonian approximation would be inaccurate. Instead, we treat the EMRI spacetime as a perturbation of the large black hole's "background" (Schwarzschild or Kerr) spacetime and use the methods of black-hole perturbation theory, expanding in the small parameter m/M. The particle's motion can be described either as the result of a radiation-reaction "self-force" acting in the background spacetime or as geodesic motion in a perturbed spacetime. Several different lines of reasoning lead to the (same) basic O(m/M) "MiSaTaQuWa" equations of motion for the particle. In particular, the MiSaTaQuWa equations can be derived by modelling the particle as either a point particle or a small Schwarzschild black hole. The latter is conceptually elegant, but the former is technically much simpler and (surprisingly for a nonlinear field theory such as general relativity) still yields correct results. If the small body is modelled as a point particle, its field is singular along the particle worldline, so it is difficult to formulate a meaningful "perturbation" theory or equations of motion there. Detweiler and Whiting found an elegant decomposition of the particle's metric perturbation into a singular part which is spherically symmetric at the particle and a regular part which is smooth (and non-symmetric) at the particle. If we assume that the singular part (being spherically symmetric at the particle) exerts no force on the particle, then the MiSaTaQuWa equations follow immediately. The MiSaTaQuWa equations involve gradients of a (curved-spacetime) Green function, integrated over the particle's entire past worldline. These expressions are not amenable to direct use in practical computations. By carefully analysing the singularity structure of each term in a spherical-harmonic expansion of the particle's field, Barack and Ori found that the self-force can be written as an infinite sum of modes, each of which can be calculated by (numerically) solving a set of wave equations in 1+1 dimensions, summing the gradients of the resulting fields at the particle position, and then subtracting certain analytically-calculable "regularization parameters". This "mode-sum" regularization scheme has been the basis for much further research, including explicit numerical calculations of the self-force in a variety of situations, initially for Schwarzschild spacetime and more recently extending to Kerr spacetime. Recently Barack and Golbourn developed an alternative "m-mode" regularization scheme. This regularizes the physical metric perturbation by subtracting from it a suitable "puncture function" approximation to the Detweiler-Whiting singular field.
The residual is then decomposed into a Fourier sum over azimuthal (e^{imϕ}) modes, and the resulting equations are solved numerically in 2+1 dimensions. Vega and Detweiler have developed a related scheme that uses the same puncture-function regularization but then solves the regularized perturbation equation numerically in 3+1 dimensions, avoiding a mode-sum decomposition entirely. A number of research projects are now using these puncture-function regularization schemes, particularly for calculations in Kerr spacetime. Most Capra research to date has used first-order perturbation theory, with the particle moving on a fixed (usually geodesic) worldline. Much current research is devoted to generalizing this to allow the particle worldline to be perturbed by the self-force, and to obtaining approximation schemes which remain valid over long (EMRI-inspiral) timescales. To obtain the very high accuracies needed to fully exploit LISA's observations of the strongest EMRIs, second-order perturbation theory will probably also be needed; both this and long-time approximations remain frontiers for future Capra research.
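For readers unfamiliar with mode-sum regularization, the bookkeeping is compact: each multipole of the full force is finite, and the analytically known divergent pieces are subtracted mode by mode before summing. The sketch below fabricates modes with the expected large-l behavior; the parameters A, B, and C are placeholders, not the published values.

```python
import numpy as np

def mode_sum_force(F_full, A, B, C):
    """Mode-sum regularization bookkeeping.

    F_full[l] is the finite full-force contribution of multipole l; the
    divergent large-l behavior A*(l+1/2) + B + C/(l+1/2) is subtracted
    mode by mode before summing, following the Barack-Ori prescription.
    """
    L = np.arange(len(F_full)) + 0.5
    return np.sum(F_full - (A * L + B + C / L))

# Fabricated demonstration: modes with the expected growth plus a 1/L^2 tail
L = np.arange(40) + 0.5
A, B, C = 0.8, -0.3, 0.05                 # hypothetical regularization parameters
F_full = A * L + B + C / L + 1e-3 / L**2  # the residual is the "physical" part
print(mode_sum_force(F_full, A, B, C))    # approximates the sum of the 1/L^2 tail
```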
Study of X(5568) in a unitary coupled-channel approximation of BK̄ and Bsπ
NASA Astrophysics Data System (ADS)
Sun, Bao-Xi; Dong, Fang-Yong; Pang, Jing-Long
2017-07-01
The potential of the B meson and the pseudoscalar meson is constructed up to the next-to-leading-order Lagrangian, and then the BK̄ and Bsπ interaction is studied in the unitary coupled-channel approximation. A resonant state with a mass of about 5568 MeV and J^P = 0^+ is generated dynamically, which can be associated with the X(5568) state recently announced by the D0 Collaboration. The mass and the decay width of this resonant state depend on the regularization scale in the dimensional regularization scheme, or the maximum momentum in the momentum cutoff regularization scheme. The scattering amplitude of the vector B meson and the pseudoscalar meson is calculated, and an axial-vector state with a mass near 5620 MeV and J^P = 1^+ is produced. Their partners in the charm sector are also discussed.
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Couvin, David; Bernheim, Aude; Toffano-Nioche, Claire; Touchon, Marie; Michalik, Juraj; Néron, Bertrand; C Rocha, Eduardo P; Vergnaud, Gilles; Gautheret, Daniel; Pourcel, Christine
2018-05-22
CRISPR (clustered regularly interspaced short palindromic repeats) arrays and their associated (Cas) proteins confer on bacteria and archaea adaptive immunity against exogenous mobile genetic elements, such as phages or plasmids. CRISPRCasFinder allows the identification of both CRISPR arrays and Cas proteins. The program includes: (i) an improved CRISPR array detection tool facilitating expert validation based on a rating system, (ii) prediction of CRISPR orientation and (iii) a Cas protein detection and typing tool updated to match the latest classification scheme of these systems. CRISPRCasFinder can either be used online or as a standalone tool compatible with the Linux operating system. All third-party software packages employed by the program are freely available. CRISPRCasFinder is available at https://crisprcas.i2bc.paris-saclay.fr.
NASA Astrophysics Data System (ADS)
Zhao, Xia; Wang, Guang-xin
2008-12-01
Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, and speckle is its inherent defect, which badly affects the interpretation and recognition of SAR targets. Conventional methods of removing speckle usually operate on real-valued SAR images and smooth the edges of the images while suppressing the speckle. Moreover, conventional methods lose the image phase information. Removing speckle while simultaneously enhancing targets and edges remains a challenge. To suppress the speckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method for complex SAR images is presented, which is based on prior knowledge of the targets and the edges. Because the cost function is non-quadratic, non-convex, and complex, a half-quadratic variational regularization is used to construct a new cost function, which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model, and the selection of the model parameters are studied carefully. In the end, we validate the method using real SAR data. Theoretical analysis and experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.
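Half-quadratic regularization, generically, introduces a weight per gradient so that the nonconvex edge-preserving penalty becomes quadratic in the image for fixed weights; alternate optimization then interleaves weight updates with exact linear solves. Below is a real-valued 1-D denoising sketch under that general scheme (the paper's complex SAR formulation and target/edge priors are not reproduced).

```python
import numpy as np

def half_quadratic_denoise(f, lam=5.0, n_iter=30):
    """Alternate optimization for min ||u - f||^2 + lam * sum(phi(Du)),
    with the edge-preserving penalty phi(t) = log(1 + t^2).

    For fixed weights w = phi'(t) / (2 t) = 1 / (1 + t^2) the problem is
    quadratic in u and solved exactly; w is then refreshed from Du.
    """
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # forward-difference matrix
    u = f.copy()
    for _ in range(n_iter):
        t = D @ u
        w = 1.0 / (1.0 + t ** 2)                  # half-quadratic edge weights
        u = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), f)
    return u

rng = np.random.default_rng(3)
signal = np.repeat([0.0, 1.0, 0.3], 50)           # piecewise-constant truth
noisy = signal + 0.1 * rng.standard_normal(150)
clean = half_quadratic_denoise(noisy)             # edges survive the smoothing
```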
Primal-dual and forward gradient implementation for quantitative susceptibility mapping.
Kee, Youngwook; Deh, Kofi; Dimov, Alexey; Spincemaille, Pascal; Wang, Yi
2017-12-01
To investigate the computational aspects of the prior term in quantitative susceptibility mapping (QSM) by (i) comparing the Gauss-Newton conjugate gradient (GNCG) algorithm that uses numerical conditioning (i.e., modifies the prior term) with a primal-dual (PD) formulation that avoids this, and (ii) carrying out a comparison between a central and a forward difference scheme for the discretization of the prior term. A spatially continuous formulation of the regularized QSM inversion problem and its PD formulation were derived. The Chambolle-Pock algorithm for PD was implemented and its convergence behavior was compared with that of GNCG for the original QSM. Forward and central difference schemes were compared in terms of the presence of checkerboard artifacts. All methods were tested and validated on a gadolinium phantom, ex vivo brain blocks, and in vivo brain MRI data with respect to COSMOS. The PD approach provided a faster convergence rate than GNCG. The GNCG convergence rate slowed considerably with smaller (more accurate) values of the conditioning parameter. Using a forward difference suppressed the checkerboard artifacts in QSM, as compared with the central difference. The accuracy of PD and GNCG was validated based on excellent correlation with COSMOS. The PD approach with forward difference for the gradient showed improved convergence and accuracy over the GNCG method using central difference. Magn Reson Med 78:2416-2427, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
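The reported advantage of the forward difference over the central difference can be seen from the stencils alone: a one-pixel checkerboard lies in the null space of the central difference, so a central-difference prior cannot penalize it. A small self-contained check, independent of any QSM code:

```python
import numpy as np

def d_central(u):
    """Periodic central difference (u[i+1] - u[i-1]) / 2."""
    return (np.roll(u, -1) - np.roll(u, 1)) / 2.0

def d_forward(u):
    """Periodic forward difference u[i+1] - u[i]."""
    return np.roll(u, -1) - u

checker = np.array([1.0, -1.0] * 8)               # one-pixel checkerboard
print(np.abs(d_central(checker)).max())           # 0.0: invisible to the prior
print(np.abs(d_forward(checker)).max())           # 2.0: penalized by the prior
```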
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.
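For context, the sketch below implements not the paper's fourth-order 4-4 scheme but the second-order Crank-Nicolson scheme it generalizes, applied to linear advection, to show the structure such implicit schemes share: each time step reduces to one banded linear solve. The grid, speed, and boundary handling are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_advection(u, a, dx, dt, n_steps):
    """Crank-Nicolson for u_t + a u_x = 0 with fixed (Dirichlet) ends.

    Each step solves the tridiagonal system (I + r D) u_new = (I - r D) u_old,
    with D the central-difference matrix and r = a dt / (4 dx); the paper's
    4-4 schemes extend this idea to fourth order on a five-point stencil.
    """
    n = len(u)
    r = a * dt / (4.0 * dx)
    ab = np.zeros((3, n))                          # banded storage of I + r D
    ab[0, 1:] = r                                  # superdiagonal
    ab[1, :] = 1.0                                 # main diagonal
    ab[2, :-1] = -r                                # subdiagonal
    ab[0, 1] = 0.0; ab[2, -2] = 0.0                # identity rows at both ends
    for _ in range(n_steps):
        rhs = u.copy()
        rhs[1:-1] -= r * (u[2:] - u[:-2])          # (I - r D) u_old
        u = solve_banded((1, 1), ab, rhs)
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)               # Gaussian pulse
u1 = crank_nicolson_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.002, n_steps=100)
```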
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
Designing a Syntax-Based Retrieval System for Supporting Language Learning
ERIC Educational Resources Information Center
Tsao, Nai-Lung; Kuo, Chin-Hwa; Wible, David; Hung, Tsung-Fu
2009-01-01
In this paper, we propose a syntax-based text retrieval system for on-line language learning and use a fast regular expression search engine as its main component. Regular expression searches provide more scalable querying and search results than keyword-based searches. However, without a well-designed index scheme, the execution time of regular…
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article proposes an original strategy for solving hydrodynamic flows. In the introduction, the motivations for this strategy are developed. It aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, usually based on implicit incompressible formulations, a fully explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The proposed method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which is another original feature of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
A second order derivative scheme based on Bregman algorithm class
NASA Astrophysics Data System (ADS)
Campagna, Rosanna; Crisci, Serena; Cuomo, Salvatore; Galletti, Ardelio; Marcellino, Livia
2016-10-01
The algorithms based on Bregman iterative regularization are known for efficiently solving convex constrained optimization problems. In this paper, we introduce a second-order derivative scheme for the class of Bregman algorithms. Its convergence and stability properties are investigated by means of numerical evidence. Moreover, we apply the proposed scheme to an isotropic Total Variation (TV) problem arising in Magnetic Resonance Imaging (MRI) denoising. Experimental results confirm that our algorithm has good performance in terms of denoising quality, effectiveness and robustness.
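As a point of reference for the Bregman algorithm class, the simplest member is the linearized Bregman iteration for sparse recovery, sketched below on a toy compressed-sensing problem. This is a generic sketch of the class, not the authors' second-order derivative scheme.

```python
import numpy as np

def linearized_bregman(A, b, lam=2.0, n_iter=500):
    """Linearized Bregman iteration for sparse recovery (min ||x||_1, Ax = b)."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2        # step from the spectral norm
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)                     # gradient step on the residual
        x = delta * np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [2.0, -1.5, 1.0]
b = A @ x_true
print(np.round(linearized_bregman(A, b)[[5, 50, 120]], 2))  # approximate recovery
```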
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross-validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
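One classic pitfall in this family is performing feature selection on the full dataset before the leave-one-out loop. The following small Monte Carlo (illustrative numbers, not the paper's experiments) shows how pure-noise features then yield optimistically biased accuracy:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, k = 40, 1000, 10                # cases, pure-noise features, features kept
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, n)             # labels independent of X: true accuracy 0.5

def loo_accuracy(select_inside):
    if not select_inside:             # the pitfall: select on the full dataset
        score = np.abs(X[y == 0].mean(0) - X[y == 1].mean(0))
        cols = np.argsort(score)[-k:]
    correct = 0
    for i in range(n):
        tr = np.arange(n) != i
        if select_inside:             # honest: select on training folds only
            score = np.abs(X[tr][y[tr] == 0].mean(0) - X[tr][y[tr] == 1].mean(0))
            cols = np.argsort(score)[-k:]
        m0 = X[tr][:, cols][y[tr] == 0].mean(0)    # nearest-mean classifier
        m1 = X[tr][:, cols][y[tr] == 1].mean(0)
        pred = np.linalg.norm(X[i, cols] - m0) > np.linalg.norm(X[i, cols] - m1)
        correct += int(pred == y[i])
    return correct / n

print("selection outside LOO (biased):", loo_accuracy(False))
print("selection inside LOO (honest): ", loo_accuracy(True))
```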
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes (two node-averaging schemes, with and without clipping, and four schemes that employ different stencils for LSQ gradient reconstruction). The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted LSQ gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
Sparse spikes super-resolution on thin grids II: the continuous basis pursuit
NASA Astrophysics Data System (ADS)
Duval, Vincent; Peyré, Gabriel
2017-09-01
This article analyzes the performance of the continuous basis pursuit (C-BP) method for sparse super-resolution. The C-BP has been recently proposed by Ekanadham, Tranchina and Simoncelli as a refined discretization scheme for the recovery of spikes in inverse-problem regularization. One of the most well-known discretization schemes is the basis pursuit (BP, also known as the Lasso)…
Parallel discrete-event simulation schemes with heterogeneous processing elements.
Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung
2014-07-01
To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model is the model for the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme). The Family model is the model for the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered. In the first kind, the computing capacities of the PEs are not much different, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on complex networks shows synchronizability and scalability regardless of the kind of PEs. The EW scheme never shows synchronizability for a random configuration of PEs of the first kind. However, by regularizing the arrangement of PEs of the first kind, the EW scheme can be made to show synchronizability. In contrast, the EW scheme never shows synchronizability for any configuration of PEs of the second kind.
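The conservative update rule behind such schemes is simple to state: a PE may process its next event only if its local virtual time does not exceed those of its interaction neighbors. A minimal ring-topology sketch with identical PEs follows (heterogeneous capacities would replace the common increment distribution with per-PE ones):

```python
import numpy as np

def conservative_pdes(n_pe=1000, n_sweeps=500, seed=6):
    """Virtual-time surface growth for a conservative PDES scheme on a ring.

    A processing element (PE) advances its local virtual time (LVT) only
    when its LVT does not exceed either neighbor's, which preserves
    causality without rollbacks; the mean fraction of advancing PEs is
    the scheme's utilization.
    """
    rng = np.random.default_rng(seed)
    lvt = np.zeros(n_pe)
    util = []
    for _ in range(n_sweeps):
        left, right = np.roll(lvt, 1), np.roll(lvt, -1)
        active = (lvt <= left) & (lvt <= right)    # causality condition
        lvt[active] += rng.exponential(1.0, active.sum())
        util.append(active.mean())
    return np.mean(util[n_sweeps // 2:])           # steady-state utilization

print(conservative_pdes())                         # roughly 0.25 on a 1D ring
```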
Representation of viruses in the remediated PDB archive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawson, Catherine L., E-mail: cathy.lawson@rutgers.edu; Dutta, Shuchismita; Westbrook, John D.
2008-08-01
A new data model for PDB entries of viruses and other biological assemblies with regular noncrystallographic symmetry is described. A new scheme has been devised to represent viruses and other biological assemblies with regular noncrystallographic symmetry in the Protein Data Bank (PDB). The scheme describes existing and anticipated PDB entries of this type using generalized descriptions of deposited and experimental coordinate frames, symmetry and frame transformations. A simplified notation has been adopted to express the symmetry generation of assemblies from deposited coordinates and matrix operations describing the required point, helical or crystallographic symmetry. Complete correct information for building full assemblies, subassemblies and crystal asymmetric units of all virus entries is now available in the remediated PDB archive.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn
Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher-order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and is then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a chosen AR method into a residual image that is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality relative to both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
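A generic PFBS loop has the two-step structure described above: a forward (gradient) step on the data-fidelity term followed by a backward (proximal) step on the regularizer. The sketch below uses plain least-squares fidelity and an l1 prox as stand-ins for the paper's filtered fidelity and TV denoiser:

```python
import numpy as np

def pfbs(A, b, lam, n_iter=200):
    """Proximal forward-backward splitting for 0.5||Ax - b||^2 + lam ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))         # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward step
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((80, 120))
b = A @ (rng.random(120) < 0.1).astype(float)      # sparse toy ground truth
x_hat = pfbs(A, b, lam=0.1)
```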
Group-regularized individual prediction: theory and application to pain.
Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D
2017-01-15
Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker (in this case, the Neurologic Pain Signature, NPS) improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
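The variance-based weighting that the theory motivates reduces, in its simplest form, to precision weighting of the two predictions. A schematic with hypothetical error variances (illustrative stand-ins, not the NPS pipeline):

```python
import numpy as np

def grip_combine(pred_pop, pred_ind, var_pop, var_ind):
    """Precision-weighted combination of population and individual predictions."""
    w_pop = (1.0 / var_pop) / (1.0 / var_pop + 1.0 / var_ind)
    return w_pop * pred_pop + (1.0 - w_pop) * pred_ind

# Hypothetical trial-by-trial pain predictions for one subject
rng = np.random.default_rng(8)
truth = rng.uniform(0.0, 10.0, 50)
pred_pop = truth + rng.normal(0.0, 1.0, 50)        # stable population biomarker
pred_ind = truth + rng.normal(0.0, 2.5, 50)        # noisier individual map
combo = grip_combine(pred_pop, pred_ind, var_pop=1.0, var_ind=2.5 ** 2)
for name, p in [("pop", pred_pop), ("ind", pred_ind), ("GRIP", combo)]:
    print(name, np.round(np.mean((p - truth) ** 2), 2))   # GRIP has lowest MSE
```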
Structure-Function Network Mapping and Its Assessment via Persistent Homology
2017-01-01
Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514x2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127
Computer-aided classification of breast masses using contrast-enhanced digital mammograms
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Aghaei, Faranak; Heidari, Morteza; Wu, Teresa; Patel, Bhavika; Zheng, Bin
2018-02-01
By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a new promising imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme for CEDM images to classify malignant and benign breast masses. A CEDM dataset consisting of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was initially computed. Last, four multilayer perceptron-based machine learning classifiers, integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method, were built to classify mass regions depicted on LE and DES images, respectively. Initially, when the CAD scheme was applied to the original segmentation of DES and LE images, the areas under the ROC curves were 0.7585+/-0.0526 and 0.7534+/-0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC value of the CAD scheme significantly increased to 0.8477+/-0.0376 (p<0.01). Since DES images eliminate the overlapping effect of dense breast tissue on lesions, segmentation accuracy was significantly improved compared with regular mammograms. The study demonstrated that computer-aided classification of breast masses using CEDM images yielded higher performance.
Sensor validation and fusion for gas turbine vibration monitoring
NASA Astrophysics Data System (ADS)
Yan, Weizhong; Goebel, Kai F.
2003-08-01
Vibration monitoring is an important practice throughout regular operation of gas turbine power systems and, even more so, during characterization tests. Vibration monitoring relies on accurate and reliable sensor readings. To obtain accurate readings, sensors are placed such that the signal is maximized. In the case of characterization tests, strain gauges are placed at the location of vibration modes on blades inside the gas turbine. Due to the prevailing harsh environment, these sensors have a limited life and decaying accuracy, both of which impair vibration assessment. At the same time, bandwidth limitations may restrict data transmission, which in turn limits the number of sensors that can be used for assessment. Knowing the sensor status (normal or faulty), and more importantly, knowing the true vibration level of the system at all times is essential for successful gas turbine vibration monitoring. This paper investigates a dynamic sensor validation and system health reasoning scheme that addresses the issues outlined above by considering only the information required to reliably assess system health status. In particular, if abnormal system health is suspected or if the primary sensor is determined to be faulted, information from available "sibling" sensors is dynamically integrated. A confidence value expresses the complex interactions of sensor health and system health, their reliabilities, conflicting information, and the resulting health assessment. Effectiveness of the scheme in achieving accurate and reliable vibration evaluation is then demonstrated using a combination of simulated data and a small sample of real-world application data where the vibration of compressor blades during a real-time characterization test of a new gas turbine power system is monitored.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilcher, Levi F
Model Validation and Site Characterization for Early Deployment Marine and Hydrokinetic Energy Sites and Establishment of Wave Classification Scheme, a presentation from the Water Power Technologies Office Peer Review, FY14-FY16.
PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging
NASA Astrophysics Data System (ADS)
Naghibzadeh, Shahrzad; van der Veen, Alle-Jan
2018-06-01
Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
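Right prior-conditioning can be sketched compactly: solve min_z ||b - M R z|| with an early-stopped Krylov solver and map back x = R z, where the diagonal of R is built from a beamformed (dirty) image. Everything below (the operator M, data b, and prior) is a toy stand-in, not the PRIFIRA implementation:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def prior_conditioned_solve(M, b, dirty, n_iter=50):
    """Right prior-conditioned least squares: min_z ||b - M R z||, x = R z.

    R = diag(dirty image) embodies the prior; early-stopped LSQR on the
    prior-conditioned system then acts as the regularizer.
    """
    R = np.diag(np.maximum(dirty, 1e-8))           # prior-conditioner
    z = lsqr(M @ R, b, iter_lim=n_iter)[0]         # early stopping regularizes
    return R @ z

rng = np.random.default_rng(9)
M = rng.standard_normal((80, 120))                 # toy measurement operator
x_true = np.maximum(rng.standard_normal(120), 0.0) # nonnegative "sky"
b = M @ x_true + 0.01 * rng.standard_normal(80)
dirty = np.abs(M.T @ b)                            # beamformed image as prior
x_hat = prior_conditioned_solve(M, b, dirty)
```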
Application of the Organic Synthetic Designs to Astrobiology
NASA Astrophysics Data System (ADS)
Kolb, V. M.
2009-12-01
In this paper we propose a synthetic route to the heterocyclic compounds and the insoluble material found on meteorites. Our synthetic scheme involves the reaction of sugars and amino acids, the so-called Maillard reaction. We developed this scheme based on a combined analysis of regular (forward) and retrosynthetic organic synthesis principles. The merits of these synthetic methods for prebiotic design are addressed.
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements on analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was achieved through clearly defined standard operating procedures. During evaluation of the methods, the major interest was determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
NASA Astrophysics Data System (ADS)
Benfenati, A.; La Camera, A.; Carbillet, M.
2016-02-01
Aims: High-dynamic range images of astrophysical objects are difficult to restore because very bright point-wise sources are surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account while, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise seeks the minimum of a functional composed of the generalized Kullback-Leibler divergence and a regularization functional; the latter is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. This scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler divergence and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' positions are exactly known, this scheme provides very satisfactory results. In the case of inexact knowledge of the sources' positions, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
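In symbols (a schematic form, with H the imaging operator, b the background, y the observed counts and R the regularizer), the classical variational scheme for Poisson noise reads

\min_{x \ge 0}\; \mathrm{KL}(Hx + b,\, y) + \beta\, R(x), \qquad \mathrm{KL}(z, y) = \sum_i \left( y_i \ln\tfrac{y_i}{z_i} + z_i - y_i \right),

and the inexact Bregman procedure replaces R(x) by an inexact Bregman distance D_R^{\varepsilon}(x, x_k), whose inexactness level \varepsilon is the quantity kept under control.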
Scalar self-force on eccentric geodesics in Schwarzschild spacetime: A time-domain computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Roland
2007-06-15
We calculate the self-force acting on a particle with scalar charge moving on a generic geodesic around a Schwarzschild black hole. This calculation requires an accurate computation of the retarded scalar field produced by the moving charge; this is done numerically with the help of a fourth-order convergent finite-difference scheme formulated in the time domain. The calculation also requires a regularization procedure, because the retarded field is singular on the particle's world line; this is handled mode-by-mode via the mode-sum regularization scheme first introduced by Barack and Ori. This paper presents the numerical method, various numerical tests, and a sample of results for mildly eccentric orbits as well as "zoom-whirl" orbits.
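For reference, the Barack-Ori mode-sum scheme takes the standard form (a sketch; the regularization parameters A_\alpha, B_\alpha, C_\alpha, D_\alpha follow from a local expansion of the singular field near the world line):

F_\alpha^{\mathrm{self}} = \sum_{l=0}^{\infty} \left[ F_{\alpha,l}^{\mathrm{ret}} - A_\alpha \left(l + \tfrac12\right) - B_\alpha - \frac{C_\alpha}{l + \tfrac12} \right] - D_\alpha,

where F_{\alpha,l}^{\mathrm{ret}} are the multipole contributions of the numerically computed retarded field evaluated at the particle.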
Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2016-11-01
We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instance as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
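For orientation, a classical projection step for incompressible flow has the following template (the paper's semi-implicit approximate projection differs in detail, so this is only the generic scheme):

u^{*} = u^{n} + \Delta t \left[ -(u^{n}\cdot\nabla)u^{n} + \nu \nabla^{2} u^{n} \right], \qquad \nabla^{2} p = \frac{\nabla\cdot u^{*}}{\Delta t}, \qquad u^{n+1} = u^{*} - \Delta t\, \nabla p,

so that \nabla\cdot u^{n+1} = 0 up to discretization error.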
Two-level schemes for the advection equation
NASA Astrophysics Data System (ADS)
Vabishchevich, Petr N.
2018-06-01
The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems, it is necessary for the discrete solution to inherit the main properties of conservatism and monotonicity. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms; this advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are implicit schemes of the second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by the numerical results of a model two-dimensional problem.
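Concretely, the symmetric (half-sum) advection operator reads

\mathcal{A}u = \tfrac12 \left[ \nabla\cdot(\mathbf{v}\,u) + \mathbf{v}\cdot\nabla u \right],

and integration by parts (with vanishing boundary terms) gives (\mathcal{A}u, u) = 0, i.e. skew-symmetry, which is the property underlying the conservation of the solution norm.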
2011-07-01
10%. These results demonstrate that the IOP-based BRDF correction scheme (which is composed of the R model along with the IOP retrieval ... distribution was averaged over 10 min. 5. Validation of the IOP-Based BRDF Correction Scheme: The IOP-based BRDF correction scheme is applied to both ... oceanic and coastal waters were very consistent qualitatively and quantitatively, and thus validate the IOP-based BRDF correction system, at least
High-quality compressive ghost imaging
NASA Astrophysics Data System (ADS)
Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun
2018-04-01
We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps, instead of solving a single minimization problem. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.
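The decomposition described above — a gradient step, a constraint projection, and a denoising pass — can be sketched as follows. The Gaussian filter stands in for the guided filter of the paper, and the measurement matrix, step size and iteration count are illustrative (Python):

import numpy as np
from scipy.ndimage import gaussian_filter

def projected_landweber(A, y, shape, beta, iters=100):
    """Landweber step + non-negativity projection + denoising pass.
    Convergence of the gradient step needs beta <= 2 / ||A||_2**2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + beta * A.T @ (y - A @ x)    # Landweber gradient step
        x = np.clip(x, 0.0, None)           # project onto x >= 0
        x = gaussian_filter(x.reshape(shape), 0.5).ravel()  # denoise
    return x.reshape(shape)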
Regularization in Orbital Mechanics; Theory and Practice
NASA Astrophysics Data System (ADS)
Roa, Javier
2017-09-01
Regularized equations of motion can improve numerical integration for the propagation of orbits, and simplify the treatment of mission design problems. This monograph discusses standard techniques and recent research in the area. While each scheme is derived analytically, its accuracy is investigated numerically. Algebraic and topological aspects of the formulations are studied, as well as their application to practical scenarios such as spacecraft relative motion and new low-thrust trajectories.
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas
2015-12-01
Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available, and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end-to-end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
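One schematic way to pose such a regularized GEP (not the paper's exact model) is trace minimization over the generalized Stiefel manifold:

\min_{V \in \mathbb{R}^{n\times k}} \; \operatorname{tr}\left( V^{\top} A V \right) + \gamma\, R(V) \quad \text{s.t.} \quad V^{\top} B V = I_k,

which reduces to the classical problem A v = \lambda B v when \gamma = 0; the nonsmooth regularizer R carries the side information.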
NASA Astrophysics Data System (ADS)
Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with the whole energy window, the MECT images reconstructed by analytical approaches often suffer from poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme that incorporates the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor at every point in the MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm can achieve more gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.
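Schematically, the PWLS-STV estimate solves

\hat{x} = \arg\min_{x \ge 0} \; (y - A x)^{\top} \Sigma^{-1} (y - A x) + \beta\, R_{\mathrm{STV}}(x),

where A is the system matrix, \Sigma the diagonal weighting matrix estimated from the measured data, and R_{\mathrm{STV}} penalizes the eigenvalues of the structure tensor (notation assumed for illustration).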
Testing & Validating: 3D Seismic Travel Time Tomography (Detailed Shallow Subsurface Imaging)
NASA Astrophysics Data System (ADS)
Marti, David; Marzan, Ignacio; Alvarez-Marron, Joaquina; Carbonell, Ramon
2016-04-01
A detailed, fully three-dimensional P-wave seismic velocity model was constrained by a high-resolution seismic tomography experiment. A regular and dense grid of shots and receivers was used to image a 500x500x200 m volume of the shallow subsurface. Ten GEODEs, providing a 240-channel recording system, and a 250 kg weight drop were used for the acquisition. The recording geometry consisted of a 10x20 m geophone grid spacing and a 20x20 m staggered source spacing, for a total of 1200 receivers and 676 source points. The study area is located within the Iberian Meseta, in Villar de Cañas (Cuenca, Spain). The lithological/geological target was a Neogene sedimentary sequence formed, from bottom to top, by a transition from gypsum to siltstones. The main objectives were to resolve the underground structure (contacts/discontinuities) and to constrain the 3D geometry of the lithology (possible cavities, faults/fractures). These targets were achieved by mapping the 3D distribution of the physical properties (P-wave velocity). The regularly spaced, dense acquisition grid forced us to acquire the survey in different stages and under a variety of weather conditions; therefore, careful quality control was required. More than half a million first arrivals were inverted to provide a 3D Vp velocity model that reached depths of 120 m in the areas with the highest ray coverage. An extended borehole campaign, which included borehole geophysical measurements in some wells, provided unique tight constraints on the lithology and a validation scheme for the tomographic results. The final image reveals a laterally variable structure consisting of four different lithological units. In this methodological validation test, travel-time tomography demonstrates a high capacity for imaging in detail the lithological contrasts of complex structures located at very shallow depths.
Highly accurate fast lung CT registration
NASA Astrophysics Data System (ADS)
Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd
2013-03-01
Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
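The normalized gradient fields (NGF) distance referenced above is commonly written as

\mathrm{NGF}(R, T) = \int_{\Omega} 1 - \left( \frac{\langle \nabla R, \nabla T\rangle}{\|\nabla R\|_{\varepsilon}\, \|\nabla T\|_{\varepsilon}} \right)^{2} \mathrm{d}x, \qquad \|\nabla I\|_{\varepsilon} = \sqrt{\|\nabla I\|^{2} + \varepsilon^{2}},

which rewards alignment of image gradients regardless of their magnitude; the edge parameter \varepsilon suppresses the influence of noise-level gradients (standard form, shown for reference).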
Implementation of non-axisymmetric mesh system in the gyrokinetic PIC code (XGC) for Stellarators
NASA Astrophysics Data System (ADS)
Moritaka, Toseo; Hager, Robert; Cole, Micheal; Chang, Choong-Seock; Lazerson, Samuel; Ku, Seung-Hoe; Ishiguro, Seiji
2017-10-01
Gyrokinetic simulation is a powerful tool to investigate turbulent and neoclassical transport based on the first principles of plasma kinetics. The gyrokinetic PIC code XGC has been developed for integrated simulations that cover the entire region of tokamaks. Complicated field-line and boundary structures must be taken into account to demonstrate edge plasma dynamics under the influence of the X-point and vessel components. XGC employs a gyrokinetic Poisson solver on an unstructured triangle mesh to deal with this difficulty. We introduce numerical schemes newly developed for XGC simulation in non-axisymmetric stellarator geometry. Triangle meshes in each poloidal plane are defined by the PEST poloidal angle in the VMEC equilibrium so that they have the same regular structure in the straight-field-line coordinate. The electric charge of a marker particle is distributed to the triangles specified by the field-following projection to the neighboring poloidal planes. 3D spline interpolation on a cylindrical mesh is also used to obtain the equilibrium magnetic field at the particle position. These schemes capture the anisotropic plasma dynamics and the resulting potential structure with high accuracy. The triangle meshes can smoothly connect to unstructured meshes in the edge region. We will present a validation test in the core region of the Large Helical Device and discuss future challenges toward edge simulations.
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) in cases with a sufficient number of observations and good error specification, the impact of the IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
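For reference, the CRPS for a forecast CDF F and a verifying observation y is

\mathrm{CRPS}(F, y) = \int_{-\infty}^{\infty} \left[ F(x) - \mathbf{1}\{x \ge y\} \right]^{2} \mathrm{d}x = \mathbb{E}|X - y| - \tfrac12\, \mathbb{E}|X - X'|,

with X, X' independent draws from F; the second form is the one usually evaluated from a finite ensemble.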
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
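The underlying BGK model replaces the full Boltzmann collision integral with relaxation toward a local equilibrium,

\frac{\partial f}{\partial t} + \mathbf{u}\cdot\nabla_{\mathbf{x}} f = \frac{g - f}{\tau},

where g is the local Maxwellian and \tau the particle collision time; the point of the improved scheme is that the Navier-Stokes solution remains accurate regardless of the ratio between \tau and the time step.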
Natarajan, Logesh Kumar; Wu, Sean F
2012-06-01
This paper presents helpful guidelines and strategies for reconstructing the vibro-acoustic quantities on a highly non-spherical surface using Helmholtz equation least squares (HELS). This study highlights that a computationally simple code based on the spherical wave functions can produce an accurate reconstruction of the acoustic pressure and normal surface velocity on planar surfaces. The key is to select the optimal origin of the coordinate system behind the planar surface, choose a target structural wavelength to be reconstructed, set an appropriate stand-off distance and microphone spacing, use a hybrid regularization scheme to determine the optimal number of expansion functions, etc. The reconstructed vibro-acoustic quantities are validated rigorously via experiments by comparing the reconstructed normal surface velocity spectra and distributions with benchmark data obtained by scanning a laser vibrometer over the plate surface. Results confirm that following the proposed guidelines and strategies ensures accuracy in reconstructing the normal surface velocity up to the target structural wavelength, and produces much more satisfactory results than a straight application of the original HELS formulations. The experimental validation, on a baffled square plate, was conducted inside a fully anechoic chamber.
Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae
2009-01-01
Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim will be generated. However, any unidentified malicious node in the network could send faulty anomaly and intrusion claims about legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.
A robust and effective smart-card-based remote user authentication mechanism using hash function.
Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit
2014-01-01
In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and, for mutual authentication, the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using passwords, biometrics, and smart cards have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we propose a new robust and effective password-based remote user authentication scheme using a smart card. Our scheme is efficient because it uses only an efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently supports the password change phase, which is always performed locally and correctly without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication and computational overheads, security, and the features it provides.
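As a toy illustration of the hash-and-XOR style of computation such schemes rely on (an invented miniature, not the authors' protocol; all names and values are placeholders), consider:

import hashlib, os

def h(*parts: bytes) -> bytes:
    """One-way hash used for all protocol values."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Registration: the server masks the password verifier with a value
# derived from its long-term secret and stores it on the smart card.
server_secret = os.urandom(32)
user_id, password = b"alice", b"correct horse battery"
card_value = xor(h(user_id, password), h(user_id, server_secret))

# Login: the server unmasks the card value with its secret; the user
# recomputes the verifier from the typed password. A fresh nonce makes
# each session's proof unique, giving replay resistance.
nonce = os.urandom(16)
server_view = xor(card_value, h(user_id, server_secret))
assert h(server_view, nonce) == h(h(user_id, password), nonce)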
A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Kim, Hyun Dae; Liu, Nan-Suey
1992-01-01
A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
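TVD schemes of this family typically enforce the total-variation bound through a slope limiter; the classical minmod limiter is one standard choice (shown for reference, not necessarily the limiter used in this code):

\operatorname{minmod}(a, b) = \begin{cases} \operatorname{sgn}(a)\,\min(|a|, |b|), & ab > 0,\\ 0, & ab \le 0, \end{cases}

which reduces the reconstruction slope near extrema and so prevents the creation of new oscillations.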
Range-Separated Brueckner Coupled Cluster Doubles Theory
NASA Astrophysics Data System (ADS)
Shepherd, James J.; Henderson, Thomas M.; Scuseria, Gustavo E.
2014-04-01
We introduce a range-separation approximation to coupled cluster doubles (CCD) theory that successfully overcomes limitations of regular CCD when applied to the uniform electron gas. We combine the short-range ladder channel with the long-range ring channel in the presence of a Brueckner renormalized one-body interaction and obtain ground-state energies with an accuracy of 0.001 a.u./electron across a wide range of density regimes. Our scheme is particularly useful in the low-density and strongly correlated regimes, where regular CCD has serious drawbacks. Moreover, we cure the infamous overcorrelation of approaches based on ring diagrams (i.e., the particle-hole random phase approximation). Our energies are further shown to have appropriate basis set and thermodynamic limit convergence, and overall this scheme promises energetic properties for realistic periodic and extended systems which existing methods do not possess.
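The range separation referred to here is conventionally realized with the error-function split of the Coulomb interaction,

\frac{1}{r_{12}} = \frac{\operatorname{erfc}(\mu\, r_{12})}{r_{12}} + \frac{\operatorname{erf}(\mu\, r_{12})}{r_{12}},

with the first (short-range) piece treated by the ladder channel and the second (long-range) piece by the ring channel, and \mu setting the separation length scale (a schematic assignment following the abstract).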
One-loop corrections to light cone wave functions: The dipole picture DIS cross section
NASA Astrophysics Data System (ADS)
Hänninen, H.; Lappi, T.; Paatelainen, R.
2018-06-01
We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with d-dimensional tensors arising from loop integrals, in a way that can be fully automatized. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading order accuracy. Our results, obtained using the four dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
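The hybrid L1/L2 inversion step can be sketched with a simple iterative soft-thresholding (ISTA-style) solver. The linear forward operator G mapping distributed volume change to surface displacement, the data vector, and all parameters below are placeholders, not the paper's actual Green's functions (Python):

import numpy as np

def hybrid_l1_l2(G, d, lam1=0.1, lam2=0.1, iters=500):
    """ISTA-style solver for
    min_m 0.5*||G m - d||^2 + lam1*||m||_1 + 0.5*lam2*||m||^2,
    promoting sparse (well-localized) volume-change models."""
    step = 1.0 / (np.linalg.norm(G, 2) ** 2 + lam2)
    m = np.zeros(G.shape[1])
    for _ in range(iters):
        g = G.T @ (G @ m - d) + lam2 * m            # smooth-part gradient
        m = m - step * g
        m = np.sign(m) * np.maximum(np.abs(m) - step * lam1, 0.0)
    return m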
Wavelet domain image restoration with adaptive edge-preserving regularization.
Belge, M; Kilmer, M E; Miller, E L
2000-01-01
In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
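The basic wavelet-domain mechanism — shrink detail coefficients, keep the coarse approximation — can be sketched with PyWavelets. The single global threshold below is a simplification; the paper's point is precisely that the regularization strength can be adapted across scales and orientations (Python):

import pywt

def wavelet_regularize(img, wavelet="db2", level=3, t=0.05):
    """Soft-threshold the detail coefficients of a 2-D image."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                       # coarse approximation kept
    for details in coeffs[1:]:              # (cH, cV, cD) per scale
        out.append(tuple(pywt.threshold(d, t, mode="soft")
                         for d in details))
    return pywt.waverec2(out, wavelet)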
Generalization Analysis of Fredholm Kernel Regularized Classifiers.
Gong, Tieliang; Xu, Zongben; Chen, Hong
2017-07-01
Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate can achieve [Formula: see text] ([Formula: see text] is the number of labeled samples) in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.
NASA Astrophysics Data System (ADS)
Meinke, I.
2003-04-01
A new method is presented to validate cloud parametrization schemes in numerical atmospheric models with satellite data from scanning radiometers. This method is applied to the regional atmospheric model HRM (High Resolution Regional Model) using satellite data from ISCCP (International Satellite Cloud Climatology Project). The limited reliability of former validations created the need for a new validation method: up to now, differences between simulated and measured cloud properties have mostly been declared deficiencies of the cloud parametrization scheme without further investigation, while other uncertainties connected with the model or with the measurements have not been taken into account. Changes to the cloud parametrization scheme based on such validations might therefore not be realistic. The new method estimates the uncertainties of the model and of the measurements. Criteria for comparisons of simulated and measured data are derived to localize deficiencies in the model. For a better specification of these deficiencies, simulated clouds are classified according to their parametrization. With this classification, the localized model deficiencies are allocated to a certain parametrization scheme. Applying this method to the regional model HRM, the quality of forecast cloud properties is estimated in detail. The overestimation of simulated clouds at low emissivity heights, especially during the night, is localized as a model deficiency caused by subscale cloudiness. As the simulation of subscale clouds in the HRM is described by a relative humidity parametrization, these deficiencies are connected with this parametrization.
Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F
2017-10-01
One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system-level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M T1 = 68.8%; M T2 = 73.9%), and were poor at the descriptor level (M T1 = 58.5%; M T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M T1 = 73.9%; M T2 = 75.3%); however, they were not consistently acceptable at the descriptor level (M T1 = 67.6%; M T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed.
Chanteloup, Francoise; Lenton, Simon; Fetherston, James; Barratt, Monica J
2005-07-01
The effect on the cannabis market is one area of interest in the evaluation of the new 'prohibition with civil penalties' scheme for minor cannabis offences in WA. One goal of the scheme is to reduce the proportion of cannabis consumed that is supplied by large-scale suppliers who may also supply other drugs. As part of the pre-change phase of the evaluation, 100 regular (at least weekly) cannabis users were given a qualitative and quantitative interview covering knowledge of and attitudes towards cannabis law, personal cannabis use, market factors, experience with the justice system, and the impact of the legislative change. Some 85% of those who commented identified the changes as having little impact on their cannabis use. Some 89% of the 70 who intended to cultivate cannabis once the CIN scheme was introduced said they would grow cannabis within the two-plant non-hydroponic limit eligible for an infringement notice under the new law. Only 15% believed an increase in self-supply would undermine the large-scale suppliers of cannabis in the market and allow some cannabis users to distance themselves from its unsavoury aspects. Only 11% said they would enter, or re-enter, the cannabis market as sellers as a result of the scheme's introduction. Most respondents who commented believed that the impact of the legislative changes on the cannabis market would be negligible; the extent to which this happens will be addressed in the post-change phase of this research. Part of the challenge in assessing the impact of the CIN scheme on the cannabis market is that the market is distinctly heterogeneous.
Stratospheric water vapour in the vicinity of the Arctic polar vortex
NASA Astrophysics Data System (ADS)
Maturilli, M.; Fierli, F.; Yushkov, V.; Lukyanov, A.; Khaykin, S.; Hauchecorne, A.
2006-07-01
The stratospheric water vapour mixing ratio inside, outside, and at the edge of the polar vortex was accurately measured by the FLASH-B Lyman-alpha hygrometer during the LAUTLOS campaign in Sodankylä, Finland, in January and February 2004. The retrieved H2O profiles give a detailed view of the Arctic lower-stratospheric water vapour distribution and provide a valuable dataset for the validation of model and satellite data. Analysing the measurements with the semi-Lagrangian advection model MIMOSA, water vapour profiles typical of the polar vortex's interior and exterior have been identified, and laminae in the observed profiles have been correlated with filamentary structures in the potential vorticity field. Applying the validated MIMOSA transport scheme to specific humidity fields from operational ECMWF analyses, large discrepancies from the observed profiles arise. Although MIMOSA is able to reproduce weak water vapour filaments and improves the shape of the profiles compared to the operational ECMWF analyses, both models exhibit a dry bias of about 1 ppmv in the lower stratosphere above 400 K, corresponding to a relative difference from the measurements on the order of 20%. The large dry bias in the analysis representation of stratospheric water vapour in the Arctic implies the need for future regular measurements of water vapour in the polar stratosphere to allow the validation and improvement of climate models.
Lee, Sun Mi; Katz, Matthew H G; Liu, Li; Sundar, Manonmani; Wang, Hua; Varadhachary, Gauri R; Wolff, Robert A; Lee, Jeffrey E; Maitra, Anirban; Fleming, Jason B; Rashid, Asif; Wang, Huamin
2016-12-01
Neoadjuvant therapy has been increasingly used to treat patients with potentially resectable pancreatic ductal adenocarcinoma (PDAC). Although the College of American Pathologists (CAP) grading scheme for tumor response in posttherapy specimens has been used, its clinical significance has not been validated. Previously, we proposed a 3-tier histologic tumor regression grading (HTRG) scheme (HTRG 0, no viable tumor; HTRG 1, <5% viable tumor cells; HTRG 2, ≥5% viable tumor cells) and showed that the 3-tier HTRG scheme correlated with prognosis. In this study, we sought to validate our proposed HTRG scheme in a new cohort of 167 consecutive PDAC patients who completed neoadjuvant therapy and pancreaticoduodenectomy. We found that patients with HTRG 0 or 1 were associated with a lower frequency of lymph node metastasis (P=0.004) and recurrence (P=0.01), lower ypT (P<0.001) and AJCC stage (P<0.001), longer disease-free survival (DFS, P=0.004) and overall survival (OS, P=0.02) than those with HTRG 2. However, there was no difference in either DFS or OS between the groups with CAP grade 2 and those with CAP grade 3 (P>0.05). In multivariate analysis, HTRG grade 0 or 1 was an independent prognostic factor for better DFS (P=0.03), but not OS. Therefore we validated the proposed HTRG scheme from our previous study. The proposed HTRG scheme is simple and easy to apply in practice by pathologists and might be used as a successful surrogate for longer DFS in patients with potentially resectable PDAC who completed neoadjuvant therapy and surgery.
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
2015-03-31
The FD scheme is only consistent for classical solutions of the PDE; for this reason, we implement the method of singularity subtraction as a means for ... regularity due to the boundary conditions. ... In the present work, we develop a high-order numerical method for solving linear elliptic PDEs with well-behaved variable coefficients on ...
Effects of high-frequency damping on iterative convergence of implicit viscous solver
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko
2017-11-01
This paper discusses the effects of high-frequency damping on the iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls the high-frequency damping: zero damping with α = 0, and larger damping for larger α > 0. Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. On both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and α = 4/3 are suitable values for robust and efficient computations, with α = 4/3 recommended for the diffusion equation, for which it achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-free Newton-Krylov solver, with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner, is recommended for practical computations, as it provides robust and efficient convergence for a wide range of α.
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
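A minimal stand-in for this kind of automatic selection pairs LSQR's damping parameter with the discrepancy principle: scan λ from large to small and stop once the residual reaches the estimated noise level. Matrix, data and noise estimate are placeholders (Python):

import numpy as np
from scipy.sparse.linalg import lsqr

def pick_tikhonov_lambda(A, y, noise_level, lams):
    """Return (lambda, x) for the first Tikhonov solution whose
    residual drops to the noise level (discrepancy principle)."""
    best = None
    for lam in sorted(lams, reverse=True):
        x = lsqr(A, y, damp=lam)[0]          # damped LSQR = Tikhonov
        best = (lam, x)
        if np.linalg.norm(A @ x - y) <= noise_level:
            break
    return best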
On the regularization for nonlinear tomographic absorption spectroscopy
NASA Astrophysics Data System (ADS)
Dai, Jinghang; Yu, Tao; Xu, Lijun; Cai, Weiwei
2018-02-01
Tomographic absorption spectroscopy (TAS) has attracted increased research effort recently due to developments in both hardware and new imaging concepts such as nonlinear tomography and compressed sensing. Nonlinear TAS is one of the emerging modalities based on the concept of nonlinear tomography and has been successfully demonstrated both numerically and experimentally. However, all previous demonstrations were realized using only two orthogonal projections, simply for ease of implementation. In this work, we examine the performance of nonlinear TAS using other beam arrangements and test the effectiveness of the beam optimization technique that was developed for linear TAS. In addition, so far only the smoothness prior has been adopted in nonlinear TAS; other useful priors, such as sparseness and model-based priors, have not yet been investigated. This work shows how these priors can be implemented and included in the reconstruction process. Regularization through a Bayesian formulation is introduced specifically for this purpose, and a method for determining a proper regularization factor is proposed. Comparative studies performed with different beam arrangements and regularization schemes on a few representative phantoms suggest that the beam optimization method developed for linear TAS also works for its nonlinear counterpart, and that the regularization scheme should be selected according to the a priori information available under the specific application scenario so as to achieve the best reconstruction fidelity. Though this work is conducted in the context of nonlinear TAS, it also provides useful insights for other tomographic modalities.
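In the Bayesian formulation alluded to, the maximum a posteriori estimate with a Gaussian noise model reduces to a regularized least-squares problem (schematic; F is the nonlinear forward model):

\hat{x} = \arg\min_{x} \; \frac{1}{2\sigma^{2}} \left\| F(x) - y \right\|_{2}^{2} + \gamma\, R(x),

where the prior — smoothness, sparseness, or a model-based prior — enters through R, and \gamma is the regularization factor whose determination the paper addresses.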
Medical image enhancement using resolution synthesis
NASA Astrophysics Data System (ADS)
Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.
2011-03-01
We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high-quality image scanned at a high dose level. Image enhancement is achieved by predicting the high-quality image with classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image detail. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
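The regularized predictor design can be pictured with the closed-form ridge solution; the training setup below (low-dose patches as rows of X, matching high-dose center pixels as targets t) is an illustrative reading of the classification-based regression, not the paper's exact pipeline (Python):

import numpy as np

def ridge_fit(X, t, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^{-1} X^T t."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ t)

# One predictor would be fit per class; at test time a patch is
# classified, then enhanced with its class's ridge predictor.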
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
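The second-order dissipative flow referred to is of the form

\ddot{x}(t) + \eta\, \dot{x}(t) = -\nabla J(x(t)),

in contrast to the first-order flow \dot{x} = -\nabla J(x); one simple damped symplectic-Euler discretization (a sketch, the paper's damped symplectic scheme may differ in detail) reads

v_{k+1} = (1 - \eta \Delta t)\, v_{k} - \Delta t\, \nabla J(x_{k}), \qquad x_{k+1} = x_{k} + \Delta t\, v_{k+1}.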
Zhang, Yuanke; Lu, Hongbing; Rong, Junyan; Meng, Jing; Shang, Junliang; Ren, Pinghong; Zhang, Junying
2017-09-01
Low-dose CT (LDCT) techniques can reduce the x-ray radiation exposure to patients at the cost of degraded images with severe noise and artifacts. Non-local means (NLM) filtering has shown its potential in improving LDCT image quality. However, most current NLM-based approaches employ a weighted average operation directly on all neighboring pixels with a fixed filtering parameter throughout the NLM filtering process, ignoring the non-stationary noise nature of LDCT images. In this paper, an adaptive NLM filtering scheme on local principal neighborhoods (PC-NLM) is proposed for structure-preserving noise/artifact reduction in LDCT images. Instead of using neighboring patches directly, the PC-NLM scheme first applies principal component analysis (PCA) to the local neighboring patches of the target patch to decompose them into uncorrelated principal components (PCs); NLM filtering is then used to regularize each PC of the target patch, and finally the regularized components are transformed back to the image domain to obtain the target patch. In particular, the filtering parameter is estimated adaptively from the local noise level of the neighborhood as well as the signal-to-noise ratio (SNR) of the corresponding PC, which guarantees "weaker" NLM filtering on PCs with higher SNR and "stronger" filtering on PCs with lower SNR. The PC-NLM procedure is performed iteratively several times for better removal of noise and artifacts, and an adaptive iteration strategy is developed to reduce the computational load by determining whether a patch should be processed in the next round of PC-NLM filtering. The effectiveness of the presented PC-NLM algorithm is validated by experimental phantom studies and clinical studies. The results show that it can achieve promising gains over some state-of-the-art methods in terms of artifact suppression and structure preservation. With the use of PCA on local neighborhoods to extract principal structural components, together with adaptive NLM filtering on the PCs of the target patch, the proposed PC-NLM method shows its efficacy in preserving fine anatomical structures and suppressing noise/artifacts in LDCT images.
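A toy version of one PC-NLM step on a stack of vectorized neighboring patches clarifies the mechanics; the paper's adaptive, per-component filtering parameter is reduced to a single constant h here (Python):

import numpy as np

def pc_nlm_patch(patches, target_idx, h=0.1):
    """PCA on the local patch stack, then NLM-style weighted
    averaging of the target patch in the PC domain."""
    mean = patches.mean(axis=0)
    P = patches - mean
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    coeffs = P @ Vt.T                        # every patch in PC basis
    d2 = np.sum((coeffs - coeffs[target_idx]) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)                 # NLM similarity weights
    filtered = (w[:, None] * coeffs).sum(axis=0) / w.sum()
    return filtered @ Vt + mean              # back to image domain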
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Kumar, Neeraj
2015-11-01
In the last few years, numerous remote user authentication and session key agreement schemes have been put forward for Telecare Medical Information Systems, in which the patient and medical server exchange medical information over the Internet. We have found that most of these schemes are not usable in practical applications due to known security weaknesses. It is also worth noting that an unrestricted number of patients log in to a single medical server across the globe; the computation and maintenance overhead would therefore be high, and the server might fail to provide services. In this article, we design a medical system architecture and a standard mutual authentication scheme for a single medical server, in which the patient can securely exchange medical data with the doctor(s) via a trusted central medical server over any insecure network. We then explore the security of the scheme and its resilience to attacks. Moreover, we formally validated the proposed scheme through simulation using the Automated Validation of Internet Security Protocols and Applications (AVISPA) software, whose outcomes confirm that the scheme is protected against active and passive attacks. The performance comparison demonstrates that the proposed scheme has lower communication cost than the existing schemes in the literature, while its computation cost is nearly equal to that of the existing schemes. The proposed scheme is not only efficient against different security attacks, but also provides efficient login, mutual authentication, session key agreement and verification, and password update phases, along with password recovery.
Unconditionally secure commitment in position-based quantum cryptography.
Nadeem, Muhammad
2014-10-27
A new commitment scheme based on position verification and non-local quantum correlations is presented here for the first time in the literature. The only credentials for unconditional security are the position of the committer and the non-local correlations generated; the receiver neither has any pre-shared data with the committer nor requires trusted and authenticated quantum/classical channels between himself and the committer. In the proposed scheme, the receiver trusts the commitment only if the scheme itself verifies the position of the committer and validates her commitment through non-local quantum correlations in a single round. The position-based commitment scheme binds the committer to reveal a valid commitment within the allocated time and guarantees that the receiver will not be able to get information about the commitment unless the committer reveals it. The scheme works for the commitment of both bits and qubits and is equally secure against the committer and the receiver, as well as against any third party who may have an interest in destroying the commitment. Our proposed scheme is unconditionally secure in general and evades the Mayers and Lo-Chau attacks in particular.
NASA Technical Reports Server (NTRS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-01-01
In this paper, we present recent enhancements of the Open National Combustion Code (OpenNCC) and apply OpenNCC to model a realistic combustor configuration, the Energy Efficient Engine (E3). First, we perform a series of validation tests for the newly implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up), achieving good agreement with the analytical/experimental data of the validation tests. In the steady-state E3 cold-flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference between the flow fields calculated by the two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and that it predicts much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could show different flame-holding predictions and overall flame structures.
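For reference, AUSM-family schemes build the interface flux from polynomial splittings of the Mach number of this classical form (the AUSM+-up variant adds pressure- and velocity-diffusion terms for low-speed flows):

\mathcal{M}^{\pm}(M) = \begin{cases} \pm\tfrac14 (M \pm 1)^{2}, & |M| \le 1,\\ \tfrac12 \left( M \pm |M| \right), & |M| > 1, \end{cases}

with the convective flux upwinded according to the interface Mach number assembled from \mathcal{M}^{+} and \mathcal{M}^{-}.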
Attack and improvements of fair quantum blind signature schemes
NASA Astrophysics Data System (ADS)
Zou, Xiangfu; Qiu, Daowen
2013-06-01
Blind signature schemes allow users to obtain a signature on a message while the signer learns neither the message nor the resulting signature. Blind signatures have therefore been used to realize cryptographic protocols providing anonymity of some participants, such as secure electronic payment systems and electronic voting systems. A fair blind signature is a form of blind signature in which the anonymity can be removed with the help of a trusted entity when this is required for legal reasons. Recently, a fair quantum blind signature scheme was proposed and thought to be secure. In this paper, we first point out that there exists a new attack on fair quantum blind signature schemes. The attack shows that, if any sender has intercepted any valid signature, he or she can counterfeit a valid signature for any message and cannot be traced via the counterfeited blind signature. We then construct a fair quantum blind signature scheme by improving the existing one. The proposed scheme can resist the preceding attack. Furthermore, we demonstrate the security of the proposed fair quantum blind signature scheme and compare it with the other one.
Numbers and functions in quantum field theory
NASA Astrophysics Data System (ADS)
Schnetz, Oliver
2018-04-01
We review recent results in the theory of numbers and single-valued functions on the complex plane which arise in quantum field theory. These results are the basis for a new approach to high-loop-order calculations. As concrete examples, we provide scheme-independent counterterms of primitive log-divergent graphs in ϕ^4 theory up to eight loops and the renormalization functions β, γ, γ_m of dimensionally regularized ϕ^4 theory in the minimal subtraction scheme up to seven loops.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor; Trócsányi, Zoltán
2008-08-01
In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-19
...-AA48 Traffic Separation Schemes: In the Strait of Juan de Fuca and Its Approaches; in Puget Sound and..., the Coast Guard codifies traffic separation schemes in the Strait of Juan de Fuca and its approaches.... These traffic separation schemes (TSSs) were validated by a Port Access Route Study (PARS) conducted...
NASA Astrophysics Data System (ADS)
Kataev, A. L.; Molokoedov, V. S.
2017-12-01
The analytical $\mathcal{O}(a_s^4)$ perturbative QCD expression for the flavour non-singlet contribution to the Bjorken polarized sum rule is obtained in the gauge-dependent miniMOM scheme, which is widely used at present. For the three values of the gauge parameter considered, namely ξ = 0 (Landau gauge), ξ = -1 (anti-Feynman gauge) and ξ = -3 (Stefanis-Mikhailov gauge), the scheme-dependent coefficients are considerably smaller than the gauge-independent $\overline{\text{MS}}$ results. It is found that the fundamental property of the factorization of the QCD renormalization group β-function in the generalized Crewther relation, which is valid in the gauge-invariant $\overline{\text{MS}}$ scheme up to the $\mathcal{O}(a_s^4)$ level at least, unexpectedly holds at the same level in the miniMOM scheme for ξ = 0, and in part for ξ = -1 and ξ = -3.
Estimation of actual evapotranspiration in the Nagqu river basin of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zou, Mijun; Zhong, Lei; Ma, Yaoming; Hu, Yuanyuan; Feng, Lu
2018-05-01
As a critical component of the energy and water cycle, terrestrial actual evapotranspiration (ET) can be influenced by many factors. This study was mainly devoted to providing accurate and continuous estimations of actual ET for the Tibetan Plateau (TP) and analyzing the effects of its impact factors. In this study, summer observational data from the Coordinated Enhanced Observing Period (CEOP) Asia-Australia Monsoon Project (CAMP) on the Tibetan Plateau (CAMP/Tibet) for 2003 to 2004 were selected to determine actual ET and investigate its relationship with energy, hydrological, and dynamical parameters. Multiple-layer air temperature, relative humidity, net radiation flux, wind speed, precipitation, and soil moisture were used to estimate actual ET. The regression model simulation results were validated with independent data retrieved using the combinatory method. The results suggested that significant correlations exist between actual ET and hydro-meteorological parameters in the surface layer of the Nagqu river basin, among which the most important factors are energy-related elements (net radiation flux and air temperature). The results also suggested that the eventual effect of precipitation and the two-layer wind speed difference on ET depends on whether their positive or negative feedback processes play the more important role. The multivariate linear regression method provided reliable estimations of actual ET; thus, 6-parameter simplified schemes and 14-parameter regular schemes were established.
Force-controlled absorption in a fully-nonlinear numerical wave tank
NASA Astrophysics Data System (ADS)
Spinneken, Johannes; Christou, Marios; Swan, Chris
2014-09-01
An active control methodology for the absorption of water waves in a numerical wave tank is introduced. This methodology is based upon a force-feedback technique which has previously been shown to be very effective in physical wave tanks. Unlike other methods, a priori knowledge of the wave conditions in the tank is not required; the absorption controller is designed to respond automatically to a wide range of wave conditions. In comparison to numerical sponge layers, effective wave absorption is achieved on the boundary, thereby minimising the spatial extent of the numerical wave tank. In contrast to the imposition of radiation conditions, the scheme is inherently capable of absorbing irregular waves. Most importantly, simultaneous generation and absorption can be achieved. This is an important advance when considering the inclusion of reflective bodies within the numerical wave tank. In designing the absorption controller, an infinite impulse response filter is adopted, thereby eliminating the problem of non-causality in the controller optimisation. Two alternative controllers are considered, both implemented in a fully-nonlinear wave tank based on a multiple-flux boundary element scheme. To simplify the problem under consideration, the present analysis is limited to water waves propagating in a two-dimensional domain. The paper presents an extensive numerical validation which demonstrates the success of the method for a wide range of wave conditions including regular, focused and random waves. The numerical investigation also highlights some of the limitations of the method, particularly in simultaneously generating and absorbing large-amplitude or highly-nonlinear waves. The findings of the present numerical study are directly applicable to related fields where optimum absorption is sought; these include physical wavemaking, wave power absorption and a wide range of numerical wave tank schemes.
Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization
NASA Astrophysics Data System (ADS)
Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna
2014-12-01
We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields the resulting renormalization method is always applicable; we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.
Renormalized stress-energy tensor for stationary black holes
NASA Astrophysics Data System (ADS)
Levi, Adam
2017-01-01
We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t-splitting variant of the method, which was first presented for $\langle \phi^2 \rangle_{\text{ren}}$, to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example for regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on the Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.
NASA Astrophysics Data System (ADS)
Britt, S.; Tsynkov, S.; Turkel, E.
2018-02-01
We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the strength of an area pollutant source is a relevant issue for the atmospheric environment, and it characterizes an inverse problem of atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed by the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, in which the objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
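A compact illustration of the regularized branch of this comparison: the sketch below selects the Tikhonov regularization parameter by the L-curve technique, assuming a linear source-receptor model y ≈ G s with transition matrix G. It is a minimal sketch only; the abstract's actual regularizer is second-order maximum entropy, whereas this example uses the plain zeroth-order Tikhonov term, and all names are illustrative.

```python
import numpy as np

def l_curve_tikhonov(G, y, lambdas):
    """For each candidate lambda, solve min ||G s - y||^2 + lambda^2 ||s||^2
    via the SVD, record residual and solution norms, and return the lambda
    at the corner (maximum curvature) of the log-log L-curve."""
    U, sv, Vt = np.linalg.svd(G, full_matrices=False)
    beta = U.T @ y
    rho, eta = [], []
    for lam in lambdas:
        f = sv**2 / (sv**2 + lam**2)            # Tikhonov filter factors
        s = Vt.T @ (f * beta / sv)              # regularized solution
        rho.append(np.linalg.norm(G @ s - y))   # residual norm
        eta.append(np.linalg.norm(s))           # solution norm
    x, z = np.log(rho), np.log(eta)
    dx, dz = np.gradient(x), np.gradient(z)
    d2x, d2z = np.gradient(dx), np.gradient(dz)
    kappa = (dx * d2z - dz * d2x) / (dx**2 + dz**2) ** 1.5  # curvature
    return lambdas[int(np.argmax(kappa))]
```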
Ray, Nilanjan
2011-10-01
Fluid motion estimation from time-sequenced images is a significant image analysis task. Its application is widespread in experimental fluidics research and many related areas such as biomedical engineering and atmospheric sciences. In this paper, we present a novel flow computation framework to estimate the flow velocity vectors from two consecutive image frames. In an energy minimization-based flow computation, we propose a novel data fidelity term, which: 1) can accommodate various measures, such as cross-correlation or the sum of absolute or squared differences of pixel intensities between image patches; 2) has a global mechanism to control the adverse effect of outliers arising from motion discontinuities and the proximity of image borders; and 3) can go hand-in-hand with various spatial smoothness terms. Further, the proposed data term and related regularization schemes are applicable to both dense and sparse flow vector estimation. We validate these claims by numerical experiments on benchmark flow data sets. © 2011 IEEE
Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.
2011-01-01
We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626
Three-dimensional boundary layer calculation by a characteristic method
NASA Technical Reports Server (NTRS)
Houdeville, R.
1992-01-01
A numerical method for solving the three-dimensional boundary layer equations for bodies of arbitrary shape is presented. In laminar flows, the application domain extends from incompressible to hypersonic flows with the assumption of chemical equilibrium. For turbulent boundary layers, the application domain is limited by the validity of the mixing length model used. In order to respect the hyperbolic nature of the equations reduced to first-order partial derivative terms, the momentum equations are discretized along the local streamlines using the osculating tangent plane at each node of the body-fitted coordinate system. With this original approach, it is possible to avoid the use of generalized coordinates, and therefore it is not necessary to impose an extra hypothesis about the regularity of the mesh in which the boundary conditions are given. By doing so, it is possible to limit, and sometimes to suppress, the pre-treatment of the data coming from an inviscid calculation. Although the proposed scheme is only semi-implicit, the method remains numerically very efficient.
Properties of Solutions to the Irving-Mullineux Oscillator Equation
NASA Astrophysics Data System (ADS)
Mickens, Ronald E.
2002-10-01
A nonlinear differential equation is given in the book by Irving and Mullineux to model certain oscillatory phenomena.^1 They use a regular perturbation method^2 to obtain a first approximation to the assumed periodic solution. However, their result is not uniformly valid, and this means that the obtained solution is not periodic because of the presence of secular terms. We show that their way of proceeding is not only incorrect, but that in fact the actual solution to this differential equation is a damped oscillatory function. Our proof uses the method of averaging^2,3 and the qualitative theory of differential equations for two-dimensional systems. A nonstandard finite-difference scheme is used to calculate numerical solutions for the trajectories in phase space. References: ^1J. Irving and N. Mullineux, Mathematics in Physics and Engineering (Academic, 1959); section 14.1. ^2R. E. Mickens, Nonlinear Oscillations (Cambridge University Press, 1981). ^3D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations (Oxford, 1987).
Modeling of DNA-Mediated Self-Assembly from Anisotropic Nanoparticles: A Molecular Dynamics Study
NASA Astrophysics Data System (ADS)
Millan, Jaime; Girard, Martin; Brodin, Jeffrey; O'Brien, Matt; Mirkin, Chad; Olvera de La Cruz, Monica
The programmable selectivity of DNA recognition constitutes an elegant scheme to self-assemble a rich variety of superlattices from versatile nanoscale building blocks, where the natural interactions between building blocks are traded for complementary DNA hybridization interactions. Recently, we introduced and validated a scale-accurate coarse-grained model for a molecular dynamics approach that captures the dynamic nature of DNA hybridization events and reproduces the experimentally observed crystallization behavior of various mixtures of spherical DNA-modified nanoparticles. Here, we have extended this model to robustly reproduce the assembly of nanoparticles with the anisotropic shapes observed experimentally. In particular, we are interested in two different particle types: (i) regular shapes, namely the cubic and octahedral polyhedra commonly observed in gold nanoparticles, and (ii) irregular shapes akin to those exhibited by enzymes. Anisotropy in shape can provide an analog to the atomic orbitals exhibited by conventional atomic crystals. We present results for the assembly of enzymes or anisotropic nanoparticles and the co-assembly of enzymes and nanoparticles.
Efficient implicit LES method for the simulation of turbulent cavitating flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan
2016-07-01
We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test-cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.
Synthesis of MCMC and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows one to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate the hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
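The alternation in scheme V can be made concrete with a toy example. The sketch below is illustrative only: a one-parameter linear reservoir stands in for the hydrological model, an AR(1) coefficient is fitted on its residuals, and the two are recalibrated in turn; the grid search and all names are assumptions, not the authors' implementation.

```python
import numpy as np

def backfit(rain, q_obs, n_iter=10):
    """Alternate between (1) fitting the AR(1) error coefficient phi on the
    residuals of the current hydrological model and (2) recalibrating the
    hydrological parameter k so that model-plus-AR-correction fits best."""
    def simulate(k):
        q = np.zeros(len(rain))
        for t in range(1, len(rain)):
            q[t] = (1 - k) * q[t - 1] + k * rain[t]   # toy linear reservoir
        return q

    k, phi = 0.5, 0.0
    for _ in range(n_iter):
        e = q_obs - simulate(k)                       # step 1: AR(1) fit
        phi = e[1:] @ e[:-1] / (e[:-1] @ e[:-1])
        def loss(kk):                                 # step 2: recalibrate k
            r = q_obs - simulate(kk)
            return np.sum((r[1:] - phi * r[:-1]) ** 2)
        k = min(np.linspace(0.01, 0.99, 99), key=loss)
    return k, phi
```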
Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction
2016-01-01
Moulder, William F.; Krieger, James D.; Maurais-Galejs, Denise T.; Huy...
...described and validated experimentally with the formation of high quality microwave images. It is further shown that the scheme is more than two orders of... scheme (wherein transmitters and receivers are co-located), which requires NTNR transmit-receive elements to achieve the same sampling. The second...
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation, i.e., it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better reproduced features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified parameterization of the entrainment rate, i.e., the proposed parameterization suppressed an excessive increase of entrainment, thus suppressing an excessive increase of low-level clouds.
Matching the quasiparton distribution in a momentum subtraction scheme
NASA Astrophysics Data System (ADS)
Stewart, Iain W.; Zhao, Yong
2018-03-01
The quasiparton distribution is a spatial correlation of quarks or gluons along the z direction in a moving nucleon which enables direct lattice calculations of parton distribution functions. It can be defined with a nonperturbative renormalization in a regularization independent momentum subtraction scheme (RI/MOM), which can then be perturbatively related to the collinear parton distribution in the $\overline{\text{MS}}$ scheme. Here we carry out a direct matching from the RI/MOM scheme for the quasi-PDF to the $\overline{\text{MS}}$ PDF, determining the non-singlet quark matching coefficient at next-to-leading order in perturbation theory. We find that the RI/MOM matching coefficient is insensitive to the ultraviolet region of the convolution integral, exhibits improved perturbative convergence when converting between the quasi-PDF and PDF, and is consistent with a quasi-PDF that vanishes in the unphysical region as the proton momentum $P_z \to \infty$, unlike other schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturm, C.; Soni, A.; Aoki, Y.
2009-07-01
We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the $\overline{\text{MS}}$ scheme and can be used to convert results obtained in lattice calculations into the $\overline{\text{MS}}$ scheme. Such a symmetric subtraction point involves nonexceptional momenta implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operator, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.
An irregular lattice method for elastic wave propagation
NASA Astrophysics Data System (ADS)
O'Brien, Gareth S.; Bean, Christopher J.
2011-12-01
Lattice methods are a class of numerical schemes which represent a medium as a connection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's law including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D or cubic in 3-D. Here, we present a method for isotropic elastic wave propagation where we can remove this lattice restriction. The methodology is outlined and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is compared with an analytical solution for wave propagation in an infinite homogeneous body and with a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane wave analysis, showing that the scheme is more dispersive than a regular lattice method. Therefore, the computational costs of using an irregular lattice are higher. However, by removing the regular lattice structure, the anisotropic nature of fracture propagation in such methods can be removed.
A geometric level set model for ultrasounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarti, A.; Malladi, R.
We propose a partial differential equation (PDE) for filtering and segmentation of echocardiographic images based on a geometry-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers, which regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D+time, 3D, and 3D+time echocardiographic images are shown.
Boundary-element modelling of dynamics in external poroviscoelastic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.
2018-04-01
A problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot's theory of poroelasticity and the elastic-viscoelastic correspondence principle. Viscoelastic models such as Kelvin-Voigt, the standard linear solid, and a model with a weakly singular kernel are considered. The boundary fields are studied with the help of the boundary element method, using the direct approach. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and a Radau stepping scheme.
Apparently noninvariant terms of nonlinear sigma models in lattice perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harada, Koji; Hattori, Nozomu; Kubo, Hirofumi
2009-03-15
Apparently noninvariant terms (ANTs) that appear in loop diagrams for nonlinear sigma models are revisited in lattice perturbation theory. The calculations have been done mostly with dimensional regularization so far. In order to establish that the existence of ANTs is independent of the regularization scheme, and of the potential ambiguities in the definition of the Jacobian of the change of integration variables from group elements to 'pion' fields, we employ lattice regularization, in which everything (including the Jacobian) is well defined. We show explicitly that lattice perturbation theory produces ANTs in the four-point functions of the pion fields at one loop and that the Jacobian does not play an important role in generating ANTs.
NASA Astrophysics Data System (ADS)
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency-selective fading because of the increase of the symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of the sub-carriers and severely degrades the system performance. To alleviate the detrimental effect of ICI, there is a need for ICI mitigation within one OFDM symbol. We propose an iterative inter-carrier interference (ICI) estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, the ICI is not treated as additional additive white Gaussian noise (AWGN). The effect of inter-carrier interference (ICI) and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
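As a hedged stand-in for the estimation step, the sketch below solves a regularized least-squares problem for the channel from pilot observations. The paper's regularized constrained total least squares additionally accounts for the perturbation of the data matrix itself (the ICI/ISI effect described above); the pilot matrix P and ridge parameter lam here are assumptions for illustration only.

```python
import numpy as np

def regularized_channel_estimate(Y, P, lam=1e-2):
    """Solve the regularized normal equations (P^H P + lam I) h = P^H Y,
    a simple ridge-regularized stand-in for the RCTLS channel estimate."""
    n = P.shape[1]
    return np.linalg.solve(P.conj().T @ P + lam * np.eye(n), P.conj().T @ Y)
```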
A color-coded vision scheme for robotics
NASA Technical Reports Server (NTRS)
Johnson, Kelley Tina
1991-01-01
Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
Tachyon field in loop quantum cosmology: An example of traversable singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Lifang; Zhu Jianyang
2009-06-15
Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region. But LQC has an ambiguity about the quantization scheme. Recently, the authors in [Phys. Rev. D 77, 124008 (2008)] proposed a new quantization scheme. Similar to others, this new quantization scheme also replaces the big bang singularity with the quantum bounce. More interestingly, it introduces a quantum singularity, which is traversable. We investigate this novel dynamics quantitatively with a tachyon scalar field, which gives us a concrete example. Our result shows that our universe can evolve through the quantum singularity regularly, which is different from the classical big bang singularity. So this singularity is only a weak singularity.
[PICS: pharmaceutical inspection cooperation scheme].
Morénas, J
2009-01-01
The pharmaceutical inspection cooperation scheme (PICS) is a structure comprising 34 participating authorities worldwide (October 2008). It was created in 1995 on the basis of the pharmaceutical inspection convention (PIC), established by the European free trade association (EFTA) in 1970. The scheme has several goals: to be an internationally recognised body in the field of good manufacturing practices (GMP) and to train inspectors (by way of an annual seminar and expert circles devoted notably to active pharmaceutical ingredients [API], quality risk management and computerized systems, useful for writing inspection aide-memoires). PICS also promotes high standards among GMP inspectorates (through regular crossed audits) and provides a forum for technical exchanges between inspectors, and between inspectors and the pharmaceutical industry.
Asynchronous discrete event schemes for PDEs
NASA Astrophysics Data System (ADS)
Stone, D.; Geiger, S.; Lord, G. J.
2017-08-01
A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
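The event-driven principle can be sketched for one-dimensional diffusion: a fixed quantum of mass crosses the face with the shortest waiting time, inversely proportional to the flux on that face, and only the two adjacent cells change. This is a minimal sketch assuming that flux-linked event-time rule; the fresh scan in place of a priority queue and all names are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

def async_diffusion(m0, D, dx, t_end, dm=1e-3):
    """March a 1-D diffusion problem event by event: on each event a mass
    quantum dm crosses the interior face with the shortest waiting time
    dm / |flux|, moving downhill; only that face's two cells are updated."""
    m = m0.astype(float).copy()
    t = 0.0
    while True:
        flux = D * (m[:-1] - m[1:]) / dx            # flux on interior faces
        waits = np.full(len(flux), np.inf)
        nz = flux != 0
        waits[nz] = dm / np.abs(flux[nz])           # event time per face
        i = int(np.argmin(waits))
        if not np.isfinite(waits[i]) or t + waits[i] > t_end:
            break                                    # equilibrium or horizon
        t += waits[i]
        s = np.sign(flux[i])
        m[i] -= s * dm                               # one quantum moves
        m[i + 1] += s * dm
    return m
```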
Matching by linear programming and successive convexification.
Jiang, Hao; Drew, Mark S; Li, Ze-Nian
2007-06-01
We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.
Validation of the use of synthetic imagery for camouflage effectiveness assessment
NASA Astrophysics Data System (ADS)
Newman, Sarah; Gilmore, Marilyn A.; Moorhead, Ian R.; Filbee, David R.
2002-08-01
CAMEO-SIM was developed as a laboratory method to assess the effectiveness of aircraft camouflage schemes. It is a physically accurate synthetic image generator, rendering in any waveband between 0.4 and 14 microns. Camouflage schemes are assessed by displaying imagery to observers under controlled laboratory conditions or by analyzing the digital image and calculating the contrast statistics between the target and background. Code verification has taken place during development. However, validation of CAMEO-SIM is essential to ensure that the imagery produced is suitable to be used for camouflage effectiveness assessment. Real world characteristics are inherently variable, so exact pixel to pixel correlation is unnecessary. For camouflage effectiveness assessment it is more important to be confident that the comparative effects of different schemes are correct, but prediction of detection ranges is also desirable. Several different tests have been undertaken to validate CAMEO-SIM for the purpose of assessing camouflage effectiveness. Simple scenes have been modeled and measured. Thermal and visual properties of the synthetic and real scenes have been compared. This paper describes the validation tests and discusses the suitability of CAMEO-SIM for camouflage assessment.
The Development and Validation of a New Land Surface Model for Regional and Global Climate Modeling
NASA Astrophysics Data System (ADS)
Lynch-Stieglitz, Marc
1995-11-01
A new land-surface scheme intended for use in mesoscale and global climate models has been developed and validated. The ground scheme consists of 6 soil layers. Diffusion and a modified tipping bucket model govern heat and water flow respectively. A 3 layer snow model has been incorporated into a modified BEST vegetation scheme. TOPMODEL equations and Digital Elevation Model data are used to generate baseflow which supports lowland saturated zones. Soil moisture heterogeneity represented by saturated lowlands subsequently impacts watershed evapotranspiration, the partitioning of surface fluxes, and the development of the storm hydrograph. Five years of meteorological and hydrological data from the Sleepers river watershed located in the eastern highlands of Vermont where winter snow cover is significant were then used to drive and validate the new scheme. Site validation data were sufficient to evaluate model performance with regard to various aspects of the watershed water balance, including snowpack growth/ablation, the spring snowmelt hydrograph, storm hydrographs, and the seasonal development of watershed evapotranspiration and soil moisture. By including topographic effects, not only are the main spring hydrographs and individual storm hydrographs adequately resolved, but the mechanisms generating runoff are consistent with current views of hydrologic processes. The seasonal movement of the mean water table depth and the saturated area of the watershed are consistent with site data and the overall model hydroclimatology, including the surface fluxes, seems reasonable.
NASA Astrophysics Data System (ADS)
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first order in space and time and extended using a second-order MUSCL reconstruction in space and Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions are exhibited on the regularizing terms to ensure the scheme's linear stability at first and second order in time and space. The other main result is the consistency with the asymptotics reached at small and large time scales in low Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties tend to provide a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment of fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump over a non-trivial topography, and a final experiment of slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
A Quantum Proxy Blind Signature Scheme Based on Genuine Five-Qubit Entangled State
NASA Astrophysics Data System (ADS)
Zeng, Chuan; Zhang, Jian-Zhong; Xie, Shu-Cui
2017-06-01
In this paper, a quantum proxy blind signature scheme based on controlled quantum teleportation is proposed. This scheme uses a genuine five-qubit entangled state as the quantum channel and adopts the classical Vernam algorithm to blind the message. We use the physical characteristics of quantum mechanics to implement delegation, signature and verification. Security analysis shows that our scheme is valid and satisfies the properties of a proxy blind signature, such as blindness, verifiability, unforgeability and undeniability.
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
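For reference, the traditional baseline in that comparison can be written down directly. The sketch below evaluates the generalized cross-validation functional for a Tikhonov-regularized linearized reconstruction step, assuming a Jacobian (sensitivity) matrix J and data residual y; it illustrates the GCV competitor, not the proposed MRM-based method.

```python
import numpy as np

def gcv_lambda(J, y, lambdas):
    """Return the lambda minimizing GCV(lam) =
    ||J x_lam - y||^2 / trace(I - J J_lam^+)^2, computed via the SVD."""
    U, sv, Vt = np.linalg.svd(J, full_matrices=False)
    beta = U.T @ y
    m = len(y)
    out_of_range = np.linalg.norm(y) ** 2 - np.linalg.norm(beta) ** 2
    best, best_lam = np.inf, None
    for lam in lambdas:
        f = sv**2 / (sv**2 + lam**2)                # filter factors
        resid = np.linalg.norm((1 - f) * beta) ** 2 + out_of_range
        g = resid / (m - np.sum(f)) ** 2            # GCV functional
        if g < best:
            best, best_lam = g, lam
    return best_lam
```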
Armand, P; Deeg, H J; Kim, H T; Lee, H; Armistead, P; de Lima, M; Gupta, V; Soiffer, R J
2010-05-01
Cytogenetics is an important prognostic factor for patients with myelodysplastic syndromes (MDS). However, existing cytogenetics grouping schemes are based on patients treated with supportive care, and may not be optimal for patients undergoing allo-SCT. We proposed earlier an SCT-specific cytogenetics grouping scheme for patients with MDS and AML arising from MDS, based on an analysis of patients transplanted at the Dana-Farber Cancer Institute/Brigham and Women's Hospital. Under this scheme, abnormalities of chromosome 7 and complex karyotype are considered adverse risk, whereas all others are considered standard risk. In this retrospective study, we validated this scheme on an independent multicenter cohort of 546 patients. Adverse cytogenetics was the strongest prognostic factor for outcome in this cohort. The 4-year relapse-free survival and OS were 42 and 46%, respectively, in the standard-risk group, vs 21 and 23% in the adverse group (P<0.0001 for both comparisons). This grouping scheme retained its prognostic significance irrespective of patient age, disease type, earlier leukemogenic therapy and conditioning intensity. Therapy-related disease was not associated with increased mortality in this cohort, after taking cytogenetics into account. We propose that this SCT-specific cytogenetics grouping scheme be used for patients with MDS or AML arising from MDS who are considering or undergoing SCT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturm, C.; Almeida, L.
2010-04-26
Light quark masses can be determined through lattice simulations in regularization invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used in order to convert these quark masses from an RI/MOM scheme to the $\overline{\text{MS}}$ scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM$_{\gamma_\mu}$ schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For $n_f = 3$ quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with the insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid, and they play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing the relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. The second-order accuracy of the scheme is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from the splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed by the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
On Classification in the Study of Failure, and a Challenge to Classifiers
NASA Technical Reports Server (NTRS)
Wasson, Kimberly S.
2003-01-01
Classification schemes are abundant in the literature of failure. They serve a number of purposes, some more successfully than others. We examine several classification schemes constructed for various purposes relating to failure and its investigation, and discuss their values and limits. The analysis results in a continuum of uses for classification schemes, which suggests that the value of certain properties of these schemes depends on the goals a classification is designed to forward. The contrast in the value of different properties for different uses highlights a particular shortcoming: we argue that while humans are good at developing one kind of scheme, dynamic, flexible classifications used for exploratory purposes, we are not so good at developing another: static, rigid classifications used to trap and organize data for specific analytic goals. Our lack of a strong foundation for developing valid instantiations of the latter impedes progress toward a number of investigative goals. This shortcoming and its consequences pose a challenge to researchers in the study of failure: to develop new methods for constructing and validating static classification schemes of demonstrable value in promoting the goals of investigations. We note current productive activity in this area, and outline foundations for more.
2012-01-01
Background The robust identification of isotope patterns originating from peptides being analyzed through mass spectrometry (MS) is often significantly hampered by noise artifacts and the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data might be interpreted within a given biological context. Results We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns. Conclusions We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform (http://bioconductor.fhcrc.org/help/bioc-views/devel/bioc/html/IPPD.html). PMID:23137144
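The core fitting step has a compact expression with an off-the-shelf non-negative least-squares solver. The sketch below is a simplified stand-in, assuming a template matrix whose columns are isotope patterns sampled on the spectrum's m/z grid; the thresholding mirrors the feature-selection strategy described above, while the package's full pipeline (including the least-absolute-deviation variant) is richer.

```python
import numpy as np
from scipy.optimize import nnls

def pick_patterns(spectrum, templates, threshold=0.1):
    """Fit the raw spectrum as a non-negative combination of isotope-pattern
    templates (columns of `templates`), then keep only the coefficients
    above `threshold` as detected patterns."""
    coeffs, _ = nnls(templates, spectrum)
    picked = np.flatnonzero(coeffs > threshold)
    return picked, coeffs[picked]
```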
NASA Astrophysics Data System (ADS)
Islam, Atiq; Iftekharuddin, Khan M.; Ogg, Robert J.; Laningham, Fred H.; Sivakumar, Bhuvaneswari
2008-03-01
In this paper, we characterize the tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumors because of the prevalence of such tumors in pediatric patients. Due to their varying appearance in MRI, we propose to model the tumor texture with a multi-fractal process, such as multi-fractional Brownian motion (mBm). In mBm, the time-varying Holder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed mathematical framework for mBm in two dimensions and propose a novel algorithm to estimate the multi-fractal structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet-based multi-fractal feature, along with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area (PTPSA) method, are fused in segmenting PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images, respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image features. We experimentally show the discriminating power of our novel multi-fractal texture feature, along with the intensity and fractal features, in automated tumor segmentation and statistical prediction. To evaluate the performance of our tumor prediction scheme, we obtain ROCs and demonstrate how sharply the curves reach a specificity of 1.0 while sacrificing minimal sensitivity. Experimental results show the effectiveness of our proposed techniques in automatic detection of PF tumors in pediatric MRIs.
Educational Supervision Appropriate for Psychiatry Trainee's Needs
ERIC Educational Resources Information Center
Rele, Kiran; Tarrant, C. Jane
2010-01-01
Objective: The authors studied the regularity and content of supervision sessions in one of the U.K. postgraduate psychiatric training schemes (Mid-Trent). Methods: A questionnaire sent to psychiatry trainees assessed the timing and duration of supervision, content and protection of supervision time, and overall quality of supervision. The authors…
A spatially adaptive total variation regularization method for electrical resistance tomography
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2015-12-01
The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
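The spatial feature indicator at the heart of the scheme is easy to state in code. The sketch below computes the difference curvature from image second derivatives and maps it to a [0, 1] weight, large at edges and small in flat regions; the normalization is an illustrative assumption and may differ from the paper's exact mapping of the indicator to the regularization term and factor.

```python
import numpy as np

def difference_curvature(u, eps=1e-8):
    """D = | |u_nn| - |u_tt| |, with u_nn and u_tt the second derivatives
    along and across the local gradient direction; D is large at edges
    and small in both flat and noisy regions."""
    uy, ux = np.gradient(u)
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    g2 = ux**2 + uy**2 + eps
    u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
    u_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
    return np.abs(np.abs(u_nn) - np.abs(u_tt))

def satv_weight(u):
    """Normalize D to [0, 1]: weights near 1 select the TV-like term
    (edges), weights near 0 the Tikhonov-like term (flat regions)."""
    d = difference_curvature(u)
    return d / (d.max() + 1e-12)
```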
Step to improve neural cryptography against flipping attacks.
Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold
2004-12-01
Synchronization of neural networks by mutual learning has been demonstrated to be a possible means for constructing a key exchange protocol over a public channel. However, the neural cryptography schemes presented so far are not sufficiently secure under the regular flipping attack (RFA) and are completely insecure under the majority flipping attack (MFA). We propose a scheme that splits the mutual information and the training process to improve the security of the neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of a brute force attack (BFA), and the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial in L. Moreover, we analyze the security under an advanced flipping attack.
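For context, the plain mutual-learning protocol that such schemes build on can be sketched with tree parity machines. This minimal illustration uses the standard Hebbian rule with bounded integer weights and updates only when the outputs agree; it is not the proposed split-information scheme, and the parameters K, N, L are just conventional choices.

```python
import numpy as np

def tpm_output(W, x):
    """Tree parity machine: K hidden units of N inputs, integer weights
    bounded by L; the output tau is the product of the hidden signs."""
    sigma = np.sign(np.sum(W * x, axis=1))
    sigma[sigma == 0] = -1
    return int(np.prod(sigma)), sigma

def synchronize(K=3, N=100, L=3, seed=0):
    """Mutual learning: both parties see the same random input, exchange
    outputs, and apply the Hebbian update (only on units whose hidden bit
    matches tau) whenever their outputs agree, until the weights coincide."""
    rng = np.random.default_rng(seed)
    A = rng.integers(-L, L + 1, (K, N))
    B = rng.integers(-L, L + 1, (K, N))
    steps = 0
    while not np.array_equal(A, B):
        x = rng.choice([-1, 1], (K, N))          # public random input
        tA, sA = tpm_output(A, x)
        tB, sB = tpm_output(B, x)
        if tA == tB:
            for W, s in ((A, sA), (B, sB)):
                mask = s == tA                   # units matching tau
                W[mask] = np.clip(W[mask] + x[mask] * tA, -L, L)
        steps += 1
    return A, steps
```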
Weak Galerkin method for the Biot’s consolidation model
Hu, Xiaozhe; Mu, Lin; Ye, Xiu
2017-08-23
In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides a stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.
Damageable contact between an elastic body and a rigid foundation
NASA Astrophysics Data System (ADS)
Campo, M.; Fernández, J. R.; Silva, A.
2009-02-01
In this work, the contact problem between an elastic body and a rigid obstacle is studied, including the development of material damage which results from internal compression or tension. The variational problem is formulated as a first-kind variational inequality for the displacements coupled with a parabolic partial differential equation for the damage field. The existence of a unique local weak solution is stated. Then, a fully discrete scheme is introduced using the finite element method to approximate the spatial variable and an Euler scheme to discretize the time derivatives. Error estimates are derived on the approximate solutions, from which the linear convergence of the algorithm is deduced under suitable regularity conditions. Finally, three two-dimensional numerical simulations are performed to demonstrate the accuracy and the behaviour of the scheme.
Systolic array processing of the sequential decoding algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Yao, K.
1989-01-01
A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
NASA Astrophysics Data System (ADS)
Peckerar, Martin C.; Marrian, Christie R.
1995-05-01
Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
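A minimal sketch of the idea of regularized gradient descent with non-negative doses follows: a quadratic penalty stands in for the paper's regularizers, and projection enforces physically meaningful (non-negative) doses, which is one mathematically sound way to avoid the 'negative dose' problem. The matrix K, step size and λ are illustrative assumptions.

```python
import numpy as np

def regularized_dose(K, target, lam=1e-2, step=None, iters=500):
    """Projected gradient descent for min ||K d - target||^2 + lam ||d||^2,
    subject to d >= 0. K is the proximity (blur) matrix mapping doses to
    deposited exposure; the projection keeps every dose physical."""
    d = np.zeros(K.shape[1])
    if step is None:
        step = 1.0 / (np.linalg.norm(K, 2) ** 2 + lam)  # safe step size
    for _ in range(iters):
        grad = K.T @ (K @ d - target) + lam * d
        d = np.maximum(d - step * grad, 0.0)            # non-negativity projection
    return d
```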
Representation of viruses in the remediated PDB archive
Lawson, Catherine L.; Dutta, Shuchismita; Westbrook, John D.; Henrick, Kim; Berman, Helen M.
2008-01-01
A new scheme has been devised to represent viruses and other biological assemblies with regular noncrystallographic symmetry in the Protein Data Bank (PDB). The scheme describes existing and anticipated PDB entries of this type using generalized descriptions of deposited and experimental coordinate frames, symmetry and frame transformations. A simplified notation has been adopted to express the symmetry generation of assemblies from deposited coordinates and matrix operations describing the required point, helical or crystallographic symmetry. Complete correct information for building full assemblies, subassemblies and crystal asymmetric units of all virus entries is now available in the remediated PDB archive. PMID:18645236
Generalized Sheet Transition Condition FDTD Simulation of Metasurface
NASA Astrophysics Data System (ADS)
Vahabzadeh, Yousef; Chamanara, Nima; Caloz, Christophe
2018-01-01
We propose an FDTD scheme based on Generalized Sheet Transition Conditions (GSTCs) for the simulation of polychromatic, nonlinear and space-time varying metasurfaces. This scheme consists in placing the metasurface at a virtual nodal plane introduced between regular nodes of the staggered Yee grid and inserting the fields determined by the GSTCs in this plane into the standard FDTD algorithm. The resulting update equations are an elegant generalization of the standard FDTD equations. Indeed, in the limiting case of a null surface susceptibility ($\chi_\text{surf}=0$), they reduce to the latter; a further limiting case corresponds to a time-invariant metasurface.
An improved cylindrical FDTD method and its application to field-tissue interaction study in MRI.
Chi, Jieru; Liu, Feng; Xia, Ling; Shao, Tingting; Mason, David G; Crozier, Stuart
2010-01-01
This paper presents a three-dimensional finite-difference time-domain (FDTD) scheme in cylindrical coordinates with an improved algorithm for accommodating the numerical singularity associated with the polar axis. The regularization of this singularity problem is based entirely on Ampère's law. The proposed algorithm is detailed and verified against a problem with a known solution obtained from a commercial electromagnetic simulation package. The numerical scheme is also illustrated by modeling high-frequency RF field-human body interactions in MRI. The results demonstrate the accuracy and capability of the proposed algorithm.
Optimizing methods for linking cinematic features to fMRI data.
Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia
2015-04-15
One of the challenges of naturalistic neurosciences using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than of story-driven films, new methods need to be developed for the analysis of less story-driven contents. To optimize the linkage between our fMRI data, collected during viewing of a deliberately non-narrative silent film 'At Land' by Maya Deren (1944), and its annotated content, we combined elastic-net regularization with model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time series were fitted with the time series of a total of 36 binary-valued and one real-valued tactile annotation of film features. Elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression in order to avoid over-fitting due to the multicollinearity of the regressors; the results were compared against both partial least-squares (PLS) regression and un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found statistically significant correlation between the annotation model and 9 out of 40 ICs. The regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs. Along with the ISC ranking methods, our regression analysis proved to be a feasible method for ordering the ICs based on their functional relevance to the annotated cinematic features. In contrast to hypothesis-driven manual pre-selection and observation of individual regressors, which is biased by choice, the novelty of our method is in applying a data-driven approach to all content features simultaneously. We found the combination of regularized regression and ICA especially useful when analyzing fMRI data obtained using a non-narrative movie stimulus with a large set of complex and correlated features. Copyright © 2015. Published by Elsevier Inc.
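The core regression step can be sketched with scikit-learn's cross-validated elastic net. The synthetic design matrix below is a stand-in for the 37 annotation regressors and the response a stand-in for one IC or ROI time series; the actual fMRI pipeline is of course far richer.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# X: (time points x 37) annotation regressors; y: one component time series.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 37))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=240)

# Cross-validated elastic net guards against over-fitting from collinear
# regressors while keeping a sparse, interpretable coefficient vector.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
print("alpha:", model.alpha_, "l1_ratio:", model.l1_ratio_)
print("selected regressors:", np.flatnonzero(model.coef_))
```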
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorbahn, Martin; Jaeger, Sebastian; Department of Physics and Astronomy, University of Sussex, Falmer, Brighton BN1 9QH
2010-12-01
We compute the conversion factors needed to obtain the $\overline{MS}$ and renormalization-group-invariant (RGI) up, down, and strange quark masses at next-to-next-to-leading order from the corresponding parameters renormalized in the recently proposed RI/SMOM and RI/SMOM$_{\gamma_\mu}$ renormalization schemes. This is important for obtaining the $\overline{MS}$ masses with the best possible precision from numerical lattice QCD simulations, because the customary RI(')/MOM scheme is afflicted with large irreducible uncertainties both on the lattice and in perturbation theory. We find that the smallness of the known one-loop matching coefficients is accompanied by even smaller two-loop contributions. From a study of residual scale dependences, we estimate the resulting perturbative uncertainty on the light-quark masses to be about 2% in the RI/SMOM scheme and about 3% in the RI/SMOM$_{\gamma_\mu}$ scheme. Our conversion factors are given in fully analytic form, for general covariant gauge and renormalization point. We provide expressions for the associated anomalous dimensions.
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
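One way to read a multiplicative regularization is sketched below: for the functional ||Af − b||² · ||f||², setting the gradient to zero yields Tikhonov-like normal equations whose effective parameter λ_k = ||Af_k − b||²/||f_k||² is recomputed from the current iterate, so no regularization parameter is fixed beforehand. This is a minimal sketch of the principle under that simple choice of penalty; the paper's functional and stopping rule differ in detail.

```python
import numpy as np

def multiplicative_reg(A, b, iters=20):
    """Minimize ||A f - b||^2 * ||f||^2 (a multiplicative, not additive, penalty).
    The stationarity condition is (A^T A + lam*I) f = A^T b with
    lam = ||A f - b||^2 / ||f||^2, so lam adjusts itself each iteration."""
    f = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized initial guess
    for _ in range(iters):
        lam = np.linalg.norm(A @ f - b) ** 2 / (np.linalg.norm(f) ** 2 + 1e-30)
        f = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return f
```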
ERIC Educational Resources Information Center
Lovin, LouAnn H.; Stevens, Alexis L.; Siegfried, John; Wilkins, Jesse L. M.; Norton, Anderson
2018-01-01
In an effort to expand our knowledge base pertaining to pre-K-8 prospective teachers' understanding of fractions, the present study was designed to extend the work on fractions schemes and operations to this population. One purpose of our study was to validate the fractions schemes and operations hierarchy with the pre-K-8 prospective teacher…
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1987-01-01
The validity of the modified equation stability analysis introduced by Warming and Hyett was investigated. It is shown that the procedure used in the derivation of the modified equation is flawed and generally leads to invalid results. Moreover, the interpretation of the modified equation as the exact partial differential equation solved by a finite-difference method generally cannot be justified even if spatial periodicity is assumed. For a two-level scheme, due to a series of mathematical quirks, the connection between the modified equation approach and the von Neumann method established by Warming and Hyett turns out to be correct despite its questionable original derivation. However, this connection is only partially valid for a scheme involving more than two time levels. In the von Neumann analysis, the complex error multiplication factor associated with a wave number generally has (L-1) roots for an L-level scheme. It is shown that the modified equation provides information about only one of these roots.
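The multiplicity of amplification-factor roots is easy to exhibit. For the three-level (L = 3) leapfrog scheme applied to linear advection, the von Neumann substitution gives a quadratic in g, i.e. L − 1 = 2 roots, computed below with NumPy; the scheme and Courant number are illustrative choices, not taken from the paper.

```python
import numpy as np

# Leapfrog for u_t + a u_x = 0: u^{n+1} = u^{n-1} - nu*(u_{j+1}^n - u_{j-1}^n).
# Substituting u_j^n = g^n e^{i j theta} gives g^2 + 2i*nu*sin(theta)*g - 1 = 0.
nu = 0.8  # Courant number a*dt/dx
for theta in np.linspace(0.1, np.pi, 5):
    roots = np.roots([1.0, 2j * nu * np.sin(theta), -1.0])
    print(f"theta = {theta:.2f}   |g| = {np.abs(roots).round(6)}")
# For |nu| <= 1 both roots satisfy |g| = 1: one is the physical mode that the
# modified equation tracks, the other the spurious computational mode it misses.
```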
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
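A minimal 1-D instance of graph Laplacian regularized denoising follows: a chain graph with Gaussian intensity-difference edge weights (a crude locally adaptive metric) and a closed-form solve of (I + λL)x = y. The parameters σ and λ are illustrative; the paper's optimal metric space and the iterative diffusion view are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def graph_laplacian_denoise(y, sigma=0.1, lam=4.0):
    """Denoise a 1-D signal y by solving min_x ||x - y||^2 + lam * x^T L x,
    where L is the Laplacian of a chain graph whose edge weights come from a
    Gaussian kernel on neighbouring intensity differences."""
    n = len(y)
    w = np.exp(-((y[1:] - y[:-1]) ** 2) / (2 * sigma**2))   # edge weights
    W = sp.diags(w, 1, shape=(n, n))
    W = W + W.T                                             # symmetric adjacency
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W     # combinatorial Laplacian
    return spla.spsolve(sp.identity(n, format="csc") + lam * L.tocsc(), y)

noisy = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.default_rng(0).normal(size=100)
print(graph_laplacian_denoise(noisy)[45:55].round(3))
```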
Improta, Roberto; Vitagliano, Luigi; Esposito, Luciana
2015-11-01
The elucidation of the mutual influence between peptide bond geometry and local conformation has important implications for protein structure refinement, validation, and prediction. To gain insights into the structural determinants and the energetic contributions associated with protein/peptide backbone plasticity, we here report an extensive analysis of the variability of the peptide bond angles by combining statistical analyses of protein structures and quantum mechanics calculations on small model peptide systems. Our analyses demonstrate that all the backbone bond angles strongly depend on the peptide conformation and unveil the existence of regular trends as a function of ψ and/or φ. The excellent agreement of the quantum mechanics calculations with the statistical surveys of protein structures validates the computational scheme employed here and demonstrates that the valence geometry of the protein/peptide backbone is primarily dictated by local interactions. Notably, for the first time we show that the position of the H(α) hydrogen atom, which is an important parameter in NMR structural studies, is also dependent on the local conformation. Most of the observed trends may be satisfactorily explained by invoking steric repulsive interactions; in some specific cases the valence bond variability is also influenced by hydrogen-bond-like interactions. Moreover, we can provide a reliable estimate of the energies involved in the interplay between geometry and conformations. © 2015 Wiley Periodicals, Inc.
Validity of portfolio assessment: which qualities determine ratings?
Driessen, Erik W; Overeem, Karlijn; van Tartwijk, Jan; van der Vleuten, Cees P M; Muijtjens, Arno M M
2006-09-01
The portfolio is becoming increasingly accepted as a valuable tool for learning and assessment. The validity of portfolio assessment, however, may suffer from bias due to irrelevant qualities, such as lay-out and writing style. We examined the possible effects of such qualities in a portfolio programme aimed at stimulating Year 1 medical students to reflect on their professional and personal development. In later curricular years, this portfolio is also used to judge clinical competence. We developed an instrument, the Portfolio Analysis Scoring Inventory, to examine the impact of form and content aspects on portfolio assessment. The Inventory consists of 15 items derived from interviews with experienced mentors, the literature, and the criteria for reflective competence used in the regular portfolio assessment procedure. Forty portfolios, selected from 231 portfolios for which ratings from the regular assessment procedure were available, were rated by 2 researchers, independently, using the Inventory. Regression analysis was used to estimate the correlation between the ratings from the regular assessment and those resulting from the Inventory items. Inter-rater agreement ranged from 0.46 to 0.87. The strongest predictor of the variance in the regular ratings was 'quality of reflection' (R = 0.80; R² = 66%). No further items accounted for a significant proportion of variance. Irrelevant items, such as writing style and lay-out, had negligible effects. The absence of an impact of irrelevant criteria appears to support the validity of the portfolio assessment procedure. Further studies should examine the portfolio's validity for the assessment of clinical competence.
DOT National Transportation Integrated Search
2015-02-01
Although the freeway travel time data has been validated extensively in recent years, the quality of arterial travel time data is not well known. This project presents a comprehensive validation scheme for arterial travel time data based on GPS...
NASA Astrophysics Data System (ADS)
Garkusha, A. V.; Kataev, A. L.; Molokoedov, V. S.
2018-02-01
The problem of the scheme and gauge dependence of the factorization property of the renormalization group β-function in the SU($N_c$) QCD generalized Crewther relation (GCR), which connects the flavor non-singlet contributions to the Adler and Bjorken polarized sum rule functions, is investigated at the $O(a_s^4)$ level of perturbation theory. It is known that in the gauge-invariant $\overline{MS}$-scheme this property holds in the QCD GCR at least at this order. To study whether this factorization property is true in all gauge-invariant schemes, we consider the MS-like schemes in QCD and the QED-limit of the GCR in the $\overline{MS}$-scheme and in two other gauge-independent subtraction schemes, namely in the momentum MOM and the on-shell OS schemes. In these schemes we confirm the existence of the β-function factorization in the QCD and QED variants of the GCR. The problem of the possible β-factorization in gauge-dependent renormalization schemes in QCD is then studied. To investigate this problem we consider the gauge non-invariant mMOM and MOMgggg schemes. We demonstrate that in the mMOM scheme at the $O(a_s^3)$ level the β-factorization is valid for three values of the gauge parameter ξ only, namely for ξ = -3, -1 and ξ = 0. At the $O(a_s^4)$ order of PT it remains valid only in the case of the Landau gauge ξ = 0. The consideration of these two gauge-dependent schemes for the QCD GCR allows us to conclude that the factorization of the RG β-function will always be implemented in any MOM-like renormalization scheme with linear covariant gauge at ξ = 0 and ξ = -3 at the $O(a_s^3)$ approximation. It is demonstrated that if the factorization property for the MS-like schemes is true in all orders of PT, as theoretically indicated in several works on the subject, then the factorization will also occur in an arbitrary MOM-like scheme in the Landau gauge in all orders of perturbation theory as well.
Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry
2012-08-01
To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded by administering the scheme on The Observer XT 8.0 system. Two visualizations of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027) predicted a child receiving the intervention. The SABICS is a unique system for recording interactions between dental nurses and 3-5-year-old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings. Its development procedure may be helpful for the development of other similar coding schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Impact of WRF model PBL schemes on air quality simulations over Catalonia, Spain.
Banks, R F; Baldasano, J M
2016-12-01
Here we analyze the impact of four planetary boundary-layer (PBL) parametrization schemes from the Weather Research and Forecasting (WRF) numerical weather prediction model on simulations of meteorological variables and predicted pollutant concentrations from an air quality forecast system (AQFS). The current setup of the Spanish operational AQFS, CALIOPE, is composed of the WRF-ARW V3.5.1 meteorological model tied to the Yonsei University (YSU) PBL scheme, the HERMES v2 emissions model, the CMAQ V5.0.2 chemical transport model, and dust outputs from BSC-DREAM8bv2. We test the performance of the YSU scheme against the Asymmetric Convective Model Version 2 (ACM2), Mellor-Yamada-Janjic (MYJ), and Bougeault-Lacarrère (BouLac) schemes. The one-day diagnostic case study is selected to represent the most frequent synoptic condition in the northeast Iberian Peninsula during spring 2015: regional recirculations. It is shown that the ACM2 PBL scheme performs well with daytime PBL height, as validated against estimates retrieved using a micro-pulse lidar system (mean bias = -0.11 km). In turn, the BouLac scheme showed WRF-simulated air and dew point temperature closer to METAR surface meteorological observations. Results are more ambiguous when simulated pollutant concentrations from CMAQ are validated against urban, suburban, and rural background network stations. The ACM2 scheme showed the lowest mean bias (-0.96 μg m⁻³) with respect to surface ozone at urban stations, while the YSU scheme performed best with simulated nitrogen dioxide (-6.48 μg m⁻³). The poorest results were with simulated particulate matter, with similar results found for all schemes tested. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Maltese, A.; Capodici, F.; Ciraolo, G.; La Loggia, G.
2015-10-01
The temporal availability of actual grape evapotranspiration is an emerging issue, since vineyard farms are increasingly converted from rainfed to irrigated agricultural systems. The manuscript aims to verify the accuracy of actual evapotranspiration retrieval coupling a single-source energy balance approach and two different temporal upscaling schemes. The first scheme tests the temporal upscaling of the main input variables, namely the NDVI, albedo and LST; the second scheme tests the temporal upscaling of the energy balance output, the actual evapotranspiration. The temporal upscaling schemes were implemented on: i) airborne remote sensing data acquired monthly during a whole irrigation season over a Sicilian vineyard; ii) low-resolution MODIS products released daily or weekly; iii) meteorological data acquired by standard gauge stations. Daily MODIS LST products (MOD11A1) were disaggregated using the DisTrad model, 8-day black- and white-sky albedo products (MCD43A) allowed modeling the total albedo, and 8-day NDVI products (MOD13Q1) were modeled using the Fisher approach. Results were validated both in time and space. The temporal validation was carried out using the actual evapotranspiration measured in situ by a flux tower through the eddy covariance technique. The spatial validation involved airborne images acquired at different times from June to September 2008. The results test whether the upscaling of the energy balance input or output data performs better.
NASA Astrophysics Data System (ADS)
Yaparova, N.
2017-10-01
We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material such as specific heat, thermal conductivity and material density depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface inside the cylinder, but it is impossible to measure the temperature on axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and technical diagnostics of operating equipment. The mathematical model of heating is represented as nonlinear parabolic PDE with the unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose the numerical method based on using of finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and error level of the measurement results. To obtain the experimental temperature error estimates, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2016-04-01
In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
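The probabilistic metric used above, the continuous ranked probability score, has a simple empirical form for an ensemble: CRPS = E|X − y| − ½E|X − X′|. A sketch for one forecast/observation pair follows, with a synthetic 60-member ensemble matching the experiment size; the data are invented for illustration.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble against a scalar observation:
    CRPS = E|X - y| - 0.5 * E|X - X'|; smaller is better. The first term
    measures accuracy, the second corrects for ensemble spread."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

ens = np.random.default_rng(0).normal(0.2, 1.0, 60)  # 60 members, as above
print("CRPS:", crps_ensemble(ens, 0.0))
```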
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long-baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired from radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
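A plausible sketch of a compactness-favoring quadratic regularizer in the spirit of the "soft support constraint" is a radially weighted quadratic penalty, shown below. The Gaussian weight profile and its width are assumptions, not the authors' exact functional; the point is that, being quadratic, its gradient is trivial inside any of the optimization schemes mentioned.

```python
import numpy as np

def soft_support_penalty(img, width=0.25):
    """Quadratic penalty R(x) = sum_r w(r) |x_r|^2 with a weight that grows
    with distance from the field-of-view centre, so flux far from the centre
    is penalized and compact objects are favoured. Gradient is 2 * w * img."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r2 = (xx - nx / 2.0) ** 2 + (yy - ny / 2.0) ** 2
    w = 1.0 - np.exp(-r2 / (2.0 * (width * min(nx, ny)) ** 2))  # ~0 centre, ->1 edge
    return float(np.sum(w * img**2))
```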
NASA Astrophysics Data System (ADS)
Mahéo, Laurent; Grolleau, Vincent; Rio, Gérard
2009-11-01
To deal with dynamic and wave propagation problems, dissipative methods are often used to reduce the effects of the spurious oscillations induced by the spatial and time discretization procedures. Among the many dissipative methods available, the Tchamwa-Wielgosz (TW) explicit scheme is particularly useful because it damps out the spurious oscillations occurring in the highest frequency domain. The theoretical study performed here shows that the TW scheme is decentered to the right, and that the damping can be attributed to a nodal displacement perturbation. The FEM study carried out using instantaneous 1-D and 3-D compression loads shows that it is useful to display the damping versus the number of time steps in order to obtain a constant damping efficiency whatever the size of element used for the regular meshing. A study on the responses obtained with irregular meshes shows that the TW scheme is only slightly sensitive to the spatial discretization procedure used. To cite this article: L. Mahéo et al., C. R. Mecanique 337 (2009).
Data traffic reduction schemes for sparse Cholesky factorizations
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1988-01-01
Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^α (α ≤ 1) processors is shown to be O(n^(1+α/2)). It is O(n^(3/2)) when n^α (α ≥ 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^(3/2)). The partitioning employed within the scheme allows a better utilization of the data accessed from shared memory than that of previously published methods.
Design and evaluation of nonverbal sound-based input for those with motor handicapped.
Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong
2013-03-01
Most personal computing interfaces rely on the users' ability to use hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate Windows' Graphical User Interface by producing hums and fricatives through regular microphones. Hierarchically arranged menus are utilized so that only a minimal number of different actions is required at a time. The proposed scheme was found to be accurate and capable of responding promptly compared to other sound-based schemes. Being able to select from multiple item-selecting modes helps reduce the average time needed for completing tasks in the test scenarios to almost half the time needed when the tasks were performed solely through cursor movements. Still, improvements in helping users select the most appropriate modes for desired tasks should improve the overall usability of the proposed scheme.
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
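A toy encoder for the serial chain described (accumulator, repetition, accumulator) is sketched below over GF(2). Interleavers and puncturing, which the real construction needs, are omitted for brevity, and the repeat factor q = 3 is an arbitrary choice.

```python
import numpy as np

def accumulate(bits):
    """Rate-1 accumulator: running XOR, i.e. a 1/(1+D) convolution in GF(2)."""
    return np.bitwise_xor.accumulate(bits)

def ara_encode(info, q=3):
    """Toy Accumulate-Repeat-Accumulate chain: outer accumulator, regular
    repeat-q middle code, inner accumulator (no interleaving or puncturing)."""
    outer = accumulate(info)
    repeated = np.repeat(outer, q)     # regular repetition code
    return accumulate(repeated)        # inner accumulator

print(ara_encode(np.array([1, 0, 1, 1], dtype=np.uint8)))
```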
Global Static Indexing for Real-Time Exploration of Very Large Regular Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pascucci, V; Frank, R
2001-07-23
In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048^3 grid (8 giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192^3 grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
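A classic locality-preserving reordering of a regular grid is the Morton (Z-order) index, sketched below. The paper's own hierarchical ordering differs in detail, but the bit-interleaving idea conveys how a reordering can keep the nodes touched by a slice contiguous on disk and so minimize out-of-core I/O.

```python
def morton3d(x, y, z, bits=11):
    """Interleave the bits of (x, y, z) into a single Morton (Z-order) index.
    Nearby grid nodes get nearby indices, improving disk and cache locality."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Nodes of a 2048^3 grid need 11 bits per axis, hence 33 bits per index.
print(morton3d(1023, 512, 7))
```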
Hexagonal Pixels and Indexing Scheme for Binary Images
NASA Technical Reports Server (NTRS)
Johnson, Gordon G.
2004-01-01
A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, sample onto the hexagonal grid again, and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, rotate the rectangular-grid image by ±45° before sampling to check for line segments at angular intervals of 15°.
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, therefore avoiding user or production jobs to be sent to problematic sites.
Scoring Rubric Development: Validity and Reliability.
ERIC Educational Resources Information Center
Moskal, Barbara M.; Leydens, Jon A.
2000-01-01
Provides clear definitions of the terms "validity" and "reliability" in the context of developing scoring rubrics and illustrates these definitions through examples. Also clarifies how validity and reliability may be addressed in the development of scoring rubrics, defined as descriptive scoring schemes developed to guide the analysis of the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, K; Bostani, M; McNitt-Gray, M
2015-06-15
Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph ("topogram") and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself ("actual-topo") and (b) from a simulated topogram ("sim-topo") derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods ("actual-topo" and "sim-topo"). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes ("actual-topo" and "sim-topo") were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not. Funding Support: NIH Grant R01-EB017095; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski; Disclosures - Cynthia McCollough: Research Grant, Siemens Healthcare.
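The conversion of attenuation data into a longitudinal TCM profile can be caricatured as follows: sum the topogram attenuation per z position and scale the tube current as a power of the relative attenuation. The exponent and normalization below are illustrative assumptions, not Siemens' actual modulation rule or the validated method of this abstract.

```python
import numpy as np

def estimate_tcm(topogram, mAs_ref=200.0, alpha=0.5):
    """Estimate a z-axis tube-current profile from a topogram whose rows are
    z positions and whose values are attenuation. Current is scaled as a
    power of the attenuation relative to its mean (alpha is an assumption)."""
    A = topogram.sum(axis=1)        # total attenuation per z position
    rel = A / A.mean()
    return mAs_ref * rel**alpha     # modulated mAs along z

topo = np.random.default_rng(0).uniform(0.5, 2.0, (400, 512))  # synthetic topogram
print(estimate_tcm(topo)[:5].round(1))
```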
Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou
2011-06-01
As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Pérez-Beteta, Julián; Molina-García, David; Ortiz-Alhambra, José A; Fernández-Romero, Antonio; Luque, Belén; Arregui, Elena; Calvo, Manuel; Borrás, José M; Meléndez, Bárbara; Rodríguez de Lope, Ángel; Moreno de la Presa, Raquel; Iglesias Bayo, Lidia; Barcia, Juan A; Martino, Juan; Velásquez, Carlos; Asenjo, Beatriz; Benavides, Manuel; Herruzo, Ismael; Revert, Antonio; Arana, Estanislao; Pérez-García, Víctor M
2018-07-01
Purpose To evaluate the prognostic and predictive value of surface-derived imaging biomarkers obtained from contrast material-enhanced volumetric T1-weighted pretreatment magnetic resonance (MR) imaging sequences in patients with glioblastoma multiforme. Materials and Methods A discovery cohort from five local institutions (165 patients; mean age, 62 years ± 12 [standard deviation]; 43% women and 57% men) and an independent validation cohort (51 patients; mean age, 60 years ± 12; 39% women and 61% men) from The Cancer Imaging Archive with volumetric T1-weighted pretreatment contrast-enhanced MR imaging sequences were included in the study. Clinical variables such as age, treatment, and survival were collected. After tumor segmentation and image processing, tumor surface regularity, measuring how much the tumor surface deviates from a sphere of the same volume, was obtained. Kaplan-Meier, Cox proportional hazards, correlations, and concordance indexes were used to compare variables and patient subgroups. Results Surface regularity was a powerful predictor of survival in the discovery (P = .005, hazard ratio [HR] = 1.61) and validation groups (P = .05, HR = 1.84). Multivariate analysis selected age and surface regularity as significant variables in a combined prognostic model (P < .001, HR = 3.05). The model achieved concordance indexes of 0.76 and 0.74 for the discovery and validation cohorts, respectively. Tumor surface regularity was a predictor of survival for patients who underwent complete resection (P = .01, HR = 1.90). Tumors with irregular surfaces did not benefit from total over subtotal resections (P = .57, HR = 1.17), but those with regular surfaces did (P = .004, HR = 2.07). Conclusion The surface regularity obtained from high-resolution contrast-enhanced pretreatment volumetric T1-weighted MR images is a predictor of survival in patients with glioblastoma. It may help in classifying patients for surgery. © RSNA, 2018 Online supplemental material is available for this article.
Optimization study on multiple train formation scheme of urban rail transit
NASA Astrophysics Data System (ADS)
Xia, Xiaomei; Ding, Yong; Wen, Xin
2018-05-01
The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the characteristics of an unevenly distributed passenger flow, but research on this aspect is still not sufficiently developed. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with the minimum passenger cost and operation cost as objectives, and the operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed marshalling operation model, the overall costs are reduced by 9.24% and 4.43%, respectively. This result not only confirms the validity of the model but also illustrates the advantages of the multiple train formation scheme.
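The ideal point method used to solve the bi-objective model can be sketched as follows with invented cost functions: evaluate candidate (formation, frequency) plans, form the ideal point from the per-objective minima, and pick the plan closest to it after normalization. All numbers and cost models are assumptions for illustration only.

```python
import numpy as np

plans = [(m, f) for m in (3, 4, 6) for f in (10, 15, 20, 25)]  # cars, trains/h
demand = 12000  # passengers/h (assumed)

def costs(m, f):
    op = 50.0 * m * f                              # operation cost ~ car-trips
    load = demand / (m * f * 250.0)                # load factor (250 pax/car)
    pax = 30.0 / f * demand / 60 + 800.0 * max(load - 1, 0) ** 2  # wait + crowding
    return op, pax

C = np.array([costs(*p) for p in plans])
ideal = C.min(axis=0)                              # ideal point: per-objective minima
dist = np.linalg.norm((C - ideal) / np.ptp(C, axis=0), axis=1)
print("chosen plan (cars, trains/h):", plans[int(np.argmin(dist))])
```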
Finite-element lattice Boltzmann simulations of contact line dynamics
NASA Astrophysics Data System (ADS)
Matin, Rastin; Misztal, Marek Krzysztof; Hernández-García, Anier; Mathiesen, Joachim
2018-01-01
The lattice Boltzmann method has become one of the standard techniques for simulating a wide range of fluid flows. However, the intrinsic coupling of momentum and space discretization restricts the traditional lattice Boltzmann method to regular lattices. Alternative off-lattice Boltzmann schemes exist for both single- and multiphase flows that decouple the velocity discretization from the underlying spatial grid. The current study extends the applicability of these off-lattice methods by introducing a finite element formulation that enables simulating contact line dynamics for partially wetting fluids. This work exemplifies the implementation of the scheme and furthermore presents benchmark experiments that show the scheme reduces spurious currents at the liquid-vapor interface by at least two orders of magnitude compared to a nodal implementation and allows for predicting the equilibrium states accurately in the range of moderate contact angles.
A novel quantum group signature scheme without using entangled states
NASA Astrophysics Data System (ADS)
Xu, Guang-Bao; Zhang, Ke-Jia
2015-07-01
In this paper, we propose a novel quantum group signature scheme. It enables a signer to sign a message on behalf of the group without the help of the group manager (the arbitrator), which is different from previous schemes. In addition, a signature can be verified again if its signer disavows having generated it. We analyze the validity and the security of the proposed signature scheme. Moreover, we discuss the advantages and disadvantages of the new scheme and the existing ones. The results show that our scheme satisfies all the characteristics of a group signature and has more advantages than the previous ones. Like its classical counterpart, our scheme can be used in many application scenarios, such as e-government and e-business.
Microbiological Validation of the IVGEN System
NASA Technical Reports Server (NTRS)
Porter, David A.
2013-01-01
The principal purpose of this report is to describe a validation process that can be performed in part on the ground prior to launch, and in space for the IVGEN system. The general approach taken is derived from standard pharmaceutical industry validation schemes modified to fit the special requirements of in-space usage.
Das, Ashok Kumar; Bruhadeshwar, Bezawada
2013-10-01
Recently, Lee and Liu proposed an efficient password-based authentication and key agreement scheme using smart cards for the telecare medicine information system [J. Med. Syst. (2013) 37:9933]. In this paper, we show that although their scheme is efficient, it still has two security weaknesses: (1) design flaws in the authentication phase and (2) design flaws in the password change phase. In order to withstand the flaws found in Lee-Liu's scheme, we propose an improvement of their scheme. Our improved scheme also keeps the original merits of Lee-Liu's scheme. We show that our scheme is efficient compared to Lee-Liu's scheme. Further, through security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our scheme is secure against passive and active attacks.
A note on the regularity of solutions of infinite dimensional Riccati equations
NASA Technical Reports Server (NTRS)
Burns, John A.; King, Belinda B.
1994-01-01
This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.
NASA Astrophysics Data System (ADS)
Sulyok, G.
2017-07-01
Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient independent from UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
A Secure and Efficient Threshold Group Signature Scheme
NASA Astrophysics Data System (ADS)
Zhang, Yansheng; Wang, Xueming; Qiu, Gege
The paper presents a secure and efficient threshold group signature scheme aimed at two problems of current threshold group signature schemes: conspiracy attacks and inefficiency. The scheme proposed in this paper adopts the strategy of separating from the group a designated clerk who is responsible for collecting and authenticating each individual signature. The designated clerk does not participate in the distribution of the group secret key and has his own public and private keys; after collecting the signatures, the designated clerk signs part of the information of the threshold group signature. Thus the verifier has to verify the signature of the group after validating the signature of the designated clerk. The scheme is finally proved to be secure against conspiracy attacks and is shown to be more efficient by comparison with other schemes.
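The threshold aspect of such schemes typically rests on (t, n) secret sharing. A minimal Shamir sharing sketch over a prime field is given below as background; the paper's actual signature, clerk and verification layers are not reproduced, and the prime and parameters are illustrative.

```python
import random

P = 2**127 - 1  # a Mersenne prime defining the field GF(P)

def share(secret, t, n):
    """Split a secret into n shares so that any t reconstruct it:
    evaluate a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```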
Strange quark condensate in the nucleon in 2 + 1 flavor QCD.
Toussaint, D; Freeman, W
2009-09-18
We calculate the "strange quark content of the nucleon,"
Contextuality as a Resource for Models of Quantum Computation with Qubits
NASA Astrophysics Data System (ADS)
Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert
2017-09-01
A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.
Intercomparison of land-surface parameterizations launched
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Dickinson, R. E.
One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating the land surface process parameterizations used in climate models. There is not necessarily a unique set of parameters to be used. Different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the performance of the available (and proposed) schemes is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single "best" scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.
Friberg, Leif; Gasparini, Alessandro; Carrero, Juan Jesus
2018-04-01
Information about renal function is important for drug safety studies using administrative health databases. However, serum creatinine values are seldom available in these registries. Our aim was to develop and test a simple scheme for stratification of renal function without access to laboratory test results. Our scheme uses registry data about diagnoses, contacts, dialysis and drug use. We validated the scheme in the Stockholm CREAtinine Measurements (SCREAM) project using information on approximately 1.1 million individuals residing in Stockholm County who underwent calibrated creatinine testing during 2006-11, linked with data about health care contacts and filled drug prescriptions. Estimated glomerular filtration rate (eGFR) was calculated with the CKD-EPI formula and used as the gold standard for validation of the scheme. When the scheme classified patients as having eGFR <30 mL/min/1.73 m², it was correct in 93.5% of cases. The specificity of the scheme was close to 100% in all age groups. The sensitivity was poor, ranging from 68.2% in the youngest age quartile down to 10.7% in the oldest age quartile. Age-related decline in renal function makes a large proportion of elderly patients fall into the chronic kidney disease (CKD) range without receiving CKD diagnoses, as this is often seen as part of normal ageing. In the absence of renal function tests, our scheme may be of value for identifying patients with moderate and severe CKD on the basis of diagnostic and prescription data for use in studies of large healthcare databases.
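The headline figures above are standard confusion-matrix statistics. The following minimal sketch, with synthetic stand-ins for the registry flags and the SCREAM eGFR gold standard (all names and rates are hypothetical), shows how the reported specificity, sensitivity and "correct when flagged" proportion would be computed:

```python
import numpy as np

# Synthetic stand-ins: egfr mimics CKD-EPI values (the gold standard);
# scheme_flag mimics a registry-based classifier with high specificity
# but limited sensitivity, as described in the abstract.
rng = np.random.default_rng(0)
egfr = rng.uniform(5, 120, size=10_000)              # mL/min/1.73 m^2
truth = egfr < 30                                    # true moderate/severe CKD
noise = rng.random(egfr.size)
scheme_flag = (truth & (noise < 0.4)) | (~truth & (noise < 0.005))

tp = np.sum(scheme_flag & truth)
fp = np.sum(scheme_flag & ~truth)
fn = np.sum(~scheme_flag & truth)
tn = np.sum(~scheme_flag & ~truth)

sensitivity = tp / (tp + fn)     # fraction of true CKD cases the scheme catches
specificity = tn / (tn + fp)     # fraction of non-CKD patients correctly cleared
ppv = tp / (tp + fp)             # analogue of the paper's "correct in 93.5%" figure
print(f"sens={sensitivity:.3f}  spec={specificity:.3f}  ppv={ppv:.3f}")
```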
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D synthetic cross-well ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
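As a hedged illustration of the covariance-based construction described above (the cell positions, exponential correlation model and correlation lengths below are all invented for the sketch; the paper's actual operator construction may differ in detail), one can build a regularization operator W with W^T W = C^{-1} from an eigendecomposition of the covariance matrix C:

```python
import numpy as np

# Hypothetical cell centres of an irregular 2-D mesh (n cells).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(200, 2))

# Exponential correlation model encoding a priori structure; correlation
# lengths lx > ly mimic a horizontally continuous (layered) geology.
lx, ly = 30.0, 5.0
dx = (xy[:, None, 0] - xy[None, :, 0]) / lx
dy = (xy[:, None, 1] - xy[None, :, 1]) / ly
C = np.exp(-np.sqrt(dx**2 + dy**2))

# Eigendecomposition of the covariance matrix; W = C^{-1/2} penalizes
# model variations that are unlikely under the assumed statistics, so
# ||W m||^2 can serve as the geostatistical regularization term.
w, V = np.linalg.eigh(C)
w = np.clip(w, 1e-10, None)          # guard small eigenvalues against round-off
W = (V / np.sqrt(w)) @ V.T           # symmetric inverse square root of C
```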
Avoiding Treatment Interruptions: What Role Do Australian Community Pharmacists Play?
Abukres, Salem Hasn; Hoti, Kreshnik; Hughes, Jeffery David
2016-01-01
Objective To explore the reported practice of Australian community pharmacists when dealing with medication supply requests in the absence of a valid prescription. Methods A self-administered questionnaire was posted to 1490 randomly selected community pharmacies across all Australian states and territories. This sample was estimated to be 20% of all Australian community pharmacies. Results Three hundred eighty-five pharmacists participated in the study; the response rate achieved was 27.9% (there were 111 undelivered questionnaires). Respondents indicated that they were more likely to provide medications without a valid prescription to regular customers than to non-regular customers (p<0.0001). However, supply was also influenced by the type of prescription and the medication requested. In the case of prescription type (Standard, Authority or Private), this relates to the complexity/probability of obtaining a valid prescription from the prescriber at a later date (i.e., supply with an anticipated prescription). Decisions to supply or not supply related to medication type were more complex. In some cases, including medications with potential for abuse, the practice and/or the method of supply varied significantly according to the age and gender of the pharmacist and pharmacy location (p<0.05). Conclusions Although being a regular customer does not guarantee a supply, the results of this study reinforce the importance for patients of having a regular pharmacy, where pharmacists are more likely to continue medication supply when patients present without a valid prescription. We suggest that more flexible legislation be implemented to allow pharmacists to continue supplying medication when obtaining a prescription is not practical. PMID:27170997
Validation of Microphysical Schemes in a CRM Using TRMM Satellite
NASA Astrophysics Data System (ADS)
Li, X.; Tao, W.; Matsui, T.; Liu, C.; Masunaga, H.
2007-12-01
The microphysical scheme in the Goddard Cumulus Ensemble (GCE) model has been the most heavily developed component in the past decade. The cloud-resolving model now has microphysical schemes ranging from the original Lin-type bulk scheme, to improved bulk schemes, to a two-moment scheme, to a detailed bin spectral scheme. Even with the most sophisticated bin scheme, many uncertainties still exist, especially in ice-phase microphysics. In this study, we take advantage of the long-term TRMM observations, especially the cloud profiles observed by the precipitation radar (PR), to validate microphysical schemes in simulations of mesoscale convective systems (MCSs). Two contrasting cases are studied: a midlatitude summertime continental MCS with leading convection and a trailing stratiform region, and an oceanic MCS in the tropical western Pacific. The simulated cloud structures and particle sizes are fed into a forward radiative transfer model to simulate the TRMM satellite sensors, i.e., the PR, the TRMM microwave imager (TMI) and the visible and infrared scanner (VIRS). MCS cases that match the structure and strength of the simulated systems over the 10-year period are used to construct statistics for the different sensors. These statistics are then compared with the synthetic satellite data obtained from the forward radiative transfer calculations. It is found that the GCE model simulates the contrasts between the continental and oceanic cases reasonably well, with less ice scattering in the oceanic case than in the continental case. However, the simulated ice-scattering signals for both PR and TMI are generally stronger than the observations, especially for the bulk scheme and at the upper levels of the stratiform region, indicating larger, denser snow/graupel particles at these levels. Adjusting the microphysical schemes in the GCE model according to the observations, especially the 3D cloud structure observed by the TRMM PR, results in much better agreement.
Regular Class Participation System (RCPS). A Final Report.
ERIC Educational Resources Information Center
Ferguson, Dianne L.; And Others
The Regular Class Participation System (RCPS) project attempted to develop, implement, and validate a system for placing and maintaining students with severe disabilities in general education classrooms, with a particular emphasis on achieving both social and learning outcomes for students. A teacher-based planning strategy was developed and…
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
NASA Astrophysics Data System (ADS)
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out for the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage there from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone remains a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of the development and intensification of the STC to the various parameterization schemes. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained once the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. Again, the combinations including the Tiedtke cumulus scheme were the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
Time-Delayed Two-Step Selective Laser Photodamage of Dye-Biomolecule Complexes
NASA Astrophysics Data System (ADS)
Andreoni, A.; Cubeddu, R.; de Silvestri, S.; Laporta, P.; Svelto, O.
1980-08-01
A scheme is proposed for laser-selective photodamage of biological molecules, based on time-delayed two-step photoionization of a dye molecule bound to the biomolecule. The validity of the scheme is experimentally demonstrated in the case of the dye Proflavine, bound to synthetic polynucleotides.
ERIC Educational Resources Information Center
Jones, Daniel; Monsen, Jeremy; Franey, John
2013-01-01
This paper explores how educational psychologists working in a training/consultative way can enable teachers to manage challenging pupil behaviour more effectively. It sets out a rationale which encourages schools to embrace a group based teacher peer-support system as part of regular school development. It then explores the usefulness of the…
The Tanda: A Practice at the Intersection of Mathematics, Culture, and Financial Goals
ERIC Educational Resources Information Center
Martin, Lee; Goldman, Shelley; Jimenez, Osvaldo
2009-01-01
We present an analysis and discussion of the "tanda," a multiperson pooled credit and savings scheme (a rotating credit association or RCA), as described by two informants from Mexican immigrant communities in California. In the tanda, participants contribute regularly to a common fund which is distributed to participants on a rotating…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirillov, A. A.; Savelova, E. P., E-mail: ka98@mail.ru
The problem of free-particle scattering on virtual wormholes is considered. It is shown that, for all types of relativistic fields, this scattering leads to the appearance of additional very heavy particles, which play the role of auxiliary fields in the invariant scheme of Pauli–Villars regularization. A nonlinear correction that describes the back reaction of particles to the vacuum distribution of virtual wormholes is also obtained.
NASA Astrophysics Data System (ADS)
Schachtschneider, R.; Rother, M.; Lesur, V.
2013-12-01
We introduce a method that enables us to account for existing correlations between Gauss coefficients in core field modelling. The information about the correlations is obtained from a highly accurate field model based on CHAMP data, e.g. the GRIMM-3 model. We compute the covariance matrices of the geomagnetic field, the secular variation and the secular acceleration up to degree 18 and use these in the regularization scheme of the core field inversion. For testing our method we followed two different approaches, applying it to two different synthetic satellite data sets. The first is a short data set with a time span of only three months. Here we test how the information about correlations helps to obtain an accurate model when only very little information is available. The second data set is a large one covering several years. In this case, besides reducing the residuals in general, we focus on improving the model near the boundaries of the data set, where the acceleration is generally more difficult to handle. In both cases the obtained covariance matrices are included in the damping scheme of the regularization. That way, information from scales that could otherwise not be resolved by the data can be extracted. We show that by using this technique we are able to improve the models of the field and the secular variation for both the short and the long-term data sets, compared to approaches using more conventional regularization techniques.
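A minimal sketch of the kind of covariance-based damping described above (the sizes, design matrix and toy correlation model are all hypothetical; the actual inversion is far more elaborate): the conventional norm damping λ‖m‖² is replaced by λ mᵀC⁻¹m, with C the prior covariance of the coefficients.

```python
import numpy as np

# Hypothetical damped least-squares inversion for coefficients m from data d.
rng = np.random.default_rng(2)
n_data, n_coef = 500, 80
G = rng.normal(size=(n_data, n_coef))       # design matrix (data kernel)
d = rng.normal(size=n_data)                 # observations

# Toy prior covariance with correlated coefficients; in the paper's setting
# this would come from an accurate reference model such as GRIMM-3.
idx = np.arange(n_coef)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
lam = 1.0                                   # damping parameter

# Minimize ||G m - d||^2 + lam * m^T C^{-1} m  (correlated damping).
m = np.linalg.solve(G.T @ G + lam * np.linalg.inv(C), G.T @ d)
```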
Maintaining quality in the UK breast screening program
NASA Astrophysics Data System (ADS)
Gale, Alastair
2010-02-01
Breast screening in the UK has been implemented for over 20 years; annually, nearly two million women are now screened, with an estimated 1,400 lives saved. Nationally, some 700 individuals interpret screening mammograms in almost 110 screening centres. Currently, women aged 50 to 70 are invited for screening every three years, and by 2012 this age range will increase to 47-73 years. There is a rapid ongoing transition from film mammograms to full field digital mammography, such that in 2010 every screening centre will be partly digital. An early, and long running, concern has been how to ensure the highest quality of image interpretation across the UK, an issue heightened by the use of a three-year screening interval. To partly address this question, a self-assessment scheme was developed in 1988 and subsequently implemented nationally in the UK as a virtually mandatory activity. The scheme is detailed from its beginnings, through its various developments, to its current incarnation and future plans. This encompasses both radiological changes (single-view screening, two-view screening, mammographic film and full field digital mammography) and design changes (cases reported by means of form filling, PDA, tablet PC, iPhone and the internet). The scheme provides a rich data source which is regularly studied to examine different aspects of radiological performance. Overall it aids screening radiologists by giving them regular access to a range of difficult exemplar cases together with feedback on their performance as compared to their peers.
WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS
MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN
2013-01-01
Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibility in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high-order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both the L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving interface problems in which the solution possesses a singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical schemes in the literature are of order O(h) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution. PMID:24072935
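Convergence orders such as those quoted above are typically estimated from errors on successively refined meshes; a small sketch (the error and mesh-size values are invented for illustration):

```python
import math

# Observed order of accuracy from errors on two meshes:
#   p = log(e_coarse / e_fine) / log(h_coarse / h_fine)
h_coarse, h_fine = 0.1, 0.05
e_coarse, e_fine = 3.2e-3, 9.1e-4          # e.g. L-infinity errors
p = math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)
print(f"observed order: {p:.2f}")          # ~1.81, in the O(h^1.75)-O(h^2) range
```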
DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2014-01-01
Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and of turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.
NASA Astrophysics Data System (ADS)
Chen, Wen-Yuan; Liu, Chen-Chung
2006-01-01
The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit in a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations; (b) SBLCA utilizes a blind watermarking scheme that does not require the original image in the watermark extraction process; (c) SBLCA uses slice-based shuffling to transform the regular image into a hashed state without remembering the state before shuffling; and finally, (d) SBLCA has enough embeddable space that every 64 pixels of the binary image can accommodate a secret bit. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.
Cryptanalysis of Chatterjee-Sarkar Hierarchical Identity-Based Encryption Scheme at PKC 06
NASA Astrophysics Data System (ADS)
Park, Jong Hwan; Lee, Dong Hoon
In 2006, Chatterjee and Sarkar proposed a hierarchical identity-based encryption (HIBE) scheme which can support an unbounded number of identity levels. This property is particularly useful in providing forward secrecy by embedding time components within hierarchical identities. In this paper we show that their scheme does not provide the claimed property. Our analysis shows that if the number of identity levels becomes larger than the value of a fixed public parameter, an unintended receiver can reconstruct a new valid ciphertext and decrypt the ciphertext using his or her own private key. The analysis is similarly applied to a multi-receiver identity-based encryption scheme presented as an application of Chatterjee and Sarkar's HIBE scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofeng, E-mail: xfyang@math.sc.edu; Han, Daozhi, E-mail: djhan@iu.edu
2017-02-01
In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formula (BDF2) and the second order Crank–Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We rigorously prove that all three schemes are unconditionally energy stable. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
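For a feel of the "one linear solve per time step" structure of such schemes, here is a minimal 1-D pseudo-spectral sketch of a first-order semi-implicit step for the phase field crystal equation phi_t = Dxx[(1+Dxx)^2 phi + phi^3 - eps*phi]. All parameters are illustrative, and the paper's schemes, with their stabilization terms and energy-stability proofs, differ in detail:

```python
import numpy as np

# First-order step: the constant-coefficient linear operator is treated
# implicitly, the nonlinearity explicitly. In Fourier space each mode
# decouples, so the "linear elliptic solve" is a pointwise division.
N, L, eps, dt = 256, 32 * np.pi, 0.25, 0.05
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
phi = 0.07 + 0.02 * np.cos(x)                # perturbed constant state

lin = k**2 * (1 - k**2) ** 2                 # symbol of -Dxx(1+Dxx)^2
for _ in range(2000):
    nl_hat = np.fft.fft(phi**3 - eps * phi)  # explicit nonlinear part
    phi_hat = (np.fft.fft(phi) - dt * k**2 * nl_hat) / (1 + dt * lin)
    phi = np.fft.ifft(phi_hat).real
```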
NASA Astrophysics Data System (ADS)
Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook
2018-02-01
To obtain a high channel boosting potential and reduce program disturbance in channel-stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme can be achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. The simulations reveal that the program disturbance characteristics are effectively improved by the proposed scheme.
X-ray tests of a two-dimensional stigmatic imaging scheme with variable magnifications
Lu, J.; Bitter, M.; Hill, K. W.; ...
2014-07-22
A two-dimensional stigmatic x-ray imaging scheme, consisting of two spherically bent crystals, one concave and one convex, was recently proposed [M. Bitter et al., Rev. Sci. Instrum. 83, 10E527 (2012)]. The Bragg angles and the radii of curvature of the two crystals of this imaging scheme are matched to eliminate the astigmatism and to satisfy the Bragg condition across both crystal surfaces for a given x-ray energy. In this paper, we consider more general configurations of this imaging scheme, which allow us to vary the magnification for a given pair of crystals and x-ray energy. The stigmatic imaging scheme has been validated for the first time by imaging x-rays generated by a micro-focus x-ray source with a source size of 8.4 μm, as validated by knife-edge measurements. Results are presented from imaging the tungsten Lα1 emission at 8.3976 keV, using a convex Si-422 crystal and a concave Si-533 crystal with 2d-spacings of 2.21707 Å and 1.65635 Å and radii of curvature of 500 ± 1 mm and 823 ± 1 mm, respectively, showing a spatial resolution of 54.9 μm. Finally, this imaging scheme is expected to be of interest for the two-dimensional imaging of laser-produced plasmas.
NASA Technical Reports Server (NTRS)
Yaron, I.
1974-01-01
Steady state heat or mass transfer in concentrated ensembles of drops, bubbles or solid spheres in uniform, slow viscous motion, is investigated. Convective effects at small Peclet numbers are taken into account by expanding the nondimensional temperature or concentration in powers of the Peclet number. Uniformly valid solutions are obtained, which reflect the effects of dispersed phase content and rate of internal circulation within the fluid particles. The dependence of the range of Peclet and Reynolds numbers, for which regular expansions are valid, on particle concentration is discussed.
Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*
Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.
2011-01-01
This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one, based on the assumption that the locally weighted combination varies with respect to both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on the training-set labels. A closed-form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in the literature, and we empirically show that it significantly improves on the performance of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to existing weak-segmenter combination strategies on a hippocampal data set. PMID:22003748
Quantum properties of supersymmetric theories regularized by higher covariant derivatives
NASA Astrophysics Data System (ADS)
Stepanyantz, Konstantin
2018-02-01
We investigate quantum corrections in N = 1 non-Abelian supersymmetric gauge theories regularized by higher covariant derivatives. In particular, with the help of the Slavnov-Taylor identities, we prove that the vertices with two ghost legs and one leg of the quantum gauge superfield are finite in all orders. This non-renormalization theorem is confirmed by an explicit one-loop calculation. With the help of this theorem we rewrite the exact NSVZ β-function in the form of a relation between the β-function and the anomalous dimensions of the matter superfields, of the quantum gauge superfield, and of the Faddeev-Popov ghosts. Such a relation has a simple qualitative interpretation and allows us to suggest a prescription producing the NSVZ scheme in all loops for theories regularized by higher derivatives. This prescription is verified by an explicit three-loop calculation for the terms quartic in the Yukawa couplings.
NASA Astrophysics Data System (ADS)
Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain
2017-11-01
The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both these ambiguities from first principles by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.
Fluid-structure interaction with the entropic lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Dorschner, B.; Chikatamarla, S. S.; Karlin, I. V.
2018-02-01
We propose a fluid-structure interaction (FSI) scheme using the entropic multi-relaxation time lattice Boltzmann (KBC) model for the fluid domain in combination with a nonlinear finite element solver for the structural part. We show the validity of the proposed scheme for various challenging setups by comparison to literature data. Beyond validation, we extend the KBC model to multiphase flows and couple it with a finite element method (FEM) solver. Robustness and viability of the entropic multi-relaxation time model for complex FSI applications is shown by simulations of droplet impact on elastic superhydrophobic surfaces.
Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems
NASA Astrophysics Data System (ADS)
Leuschner, Matthias; Fritzen, Felix
2017-11-01
Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
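The computational kernel referred to above, a discrete periodic convolution with a global ansatz function evaluated via the FFT, can be sketched in a few lines (the Laplace-type Green operator and grid below are generic stand-ins, not the FANS operator itself):

```python
import numpy as np

# Discrete periodic convolution of a grid field with a global kernel,
# computed in O(N^2 log N) via the 2-D FFT.
N = 64
rng = np.random.default_rng(4)
field = rng.normal(size=(N, N))            # e.g. a source/flux field on the grid

k = np.fft.fftfreq(N) * N                  # integer wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                             # avoid division by zero at k = 0
green_hat = 1.0 / k2                       # Fourier symbol of the kernel
green_hat[0, 0] = 0.0                      # project out the mean (zero mode)

conv = np.fft.ifft2(np.fft.fft2(field) * green_hat).real
```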
Flowers, Natalie L
2010-01-01
CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.
DPOD2014: a new DORIS extension of ITRF2014 for Precise Orbit Determination
NASA Astrophysics Data System (ADS)
Moreaux, G.; Willis, P.; Lemoine, F. G.; Zelensky, N. P.
2016-12-01
As one of the tracking systems used to determine the orbits of altimeter mission satellites (such as TOPEX/Poseidon, Envisat, Jason-1/2/3 and CryoSat-2), the DORIS tracking station positions provide a fundamental reference for the estimation of precise orbits and so, by extension, for the quality of the altimeter data and derived products. Therefore, the time evolution of the positions of both the existing and the newest DORIS stations must be precisely modeled and regularly updated. To satisfy operational requirements for precise orbit determination and routine delivery of geodetic products, the International DORIS Service (IDS) maintains the so-called DPOD solutions, which can be seen as extensions of the latest available ITRF solution from the International Earth Rotation and Reference Systems Service (IERS). In mid-2016, the IDS agreed to change the processing strategy of the DPOD solution. The new solution from the IDS Combination Center (CC) consists of a DORIS cumulative position and velocity solution using the latest IDS combined weekly solutions. The first objective of this study is to describe the new DPOD elaboration scheme and to show the IDS CC internal validation steps. The second is to present the external validation process carried out by an external team before the new DPOD is made available to all users. The elaboration and validation procedures are illustrated by the presentation of the first version of DPOD2014 (the ITRF2014 DORIS extension), with a focus on the update of the positions and velocities of two DORIS sites: Everest (after the M7.8 Gorkha earthquake in April 2015) and Thule (Greenland).
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, from which the global minimum CV error of CS-SVM can be found. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
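For contrast, the grid-search baseline that CV-SES is compared against can be sketched as follows (a hedged illustration using scikit-learn's SVC, where the two cost-sensitivity parameters are realized as C and a positive-class weight; the dataset and grid values are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Cost-sensitive SVM has two regularization parameters, so the classical
# baseline searches the CV error over a 2-D grid rather than a 1-D path.
X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)
param_grid = {
    "C": np.logspace(-2, 2, 9),
    "class_weight": [{0: 1, 1: w} for w in (1, 2, 5, 10)],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print(search.best_params_, "CV error:", 1 - search.best_score_)
```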
Liang, Yunlei; Du, Zhijiang; Sun, Lining
2017-01-01
The tendon-driven mechanism, which uses cables and pulleys to transmit power, is adopted by many surgical robots. However, backlash hysteresis objectively exists in cable-pulley mechanisms, and this nonlinear problem is a great challenge for precise position control during surgical procedures. Previous studies mainly focused on the transmission characteristics of cable-driven systems and constructed transmission models under particular assumptions to solve the nonlinear problems. However, these approaches are limited because the modeling process is complex and the transmission models lack general applicability. This paper presents a novel position compensation control scheme to reduce the impact of backlash hysteresis on the positioning accuracy of surgical robots' end-effectors. A position compensation scheme using a support vector machine, based on feedforward control, is presented to reduce the position tracking error. To validate the proposed approach, experiments are conducted on our cable-pulley system and comparative experiments are carried out. The results show remarkable improvements in reducing the positioning error when the proposed scheme is used. PMID:28974011
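A hedged sketch of the feedforward idea (entirely synthetic data and hypothetical variable names; the paper's actual model inputs and training procedure are not specified here): learn the backlash-induced tracking error as a function of the commanded state, then subtract the prediction from the command.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic commanded trajectory and a backlash-like, direction-dependent error.
t = np.linspace(0, 6 * np.pi, 400)
cmd = np.sin(t)                                  # commanded joint position
direction = np.sign(np.gradient(cmd) + 1e-12)    # motion direction drives hysteresis
rng = np.random.default_rng(5)
err = 0.05 * direction + 0.01 * rng.standard_normal(t.size)

# Learn error = f(position, direction) with support vector regression.
X = np.column_stack([cmd, direction])
model = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X, err)

# Feedforward compensation: send (desired - predicted error) to the actuator.
desired, moving_up = 0.3, 1.0
compensated_cmd = desired - model.predict([[desired, moving_up]])[0]
```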
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Public-key quantum digital signature scheme with one-time pad private-key
NASA Astrophysics Data System (ADS)
Chen, Feng-Lin; Liu, Wan-Fang; Chen, Su-Gen; Wang, Zhi-Hua
2018-01-01
A quantum digital signature scheme based on a public-key quantum cryptosystem is proposed for the first time. In the scheme, the verification public key is derived from the signer's identity information (such as an e-mail address), following the idea of identity-based encryption, and the signature private key is generated by a one-time pad (OTP) protocol. The public/private key pair consists of classical bits, but the signature cipher consists of quantum bits. After the signer announces the public key and generates the final quantum signature, each verifier can publicly verify whether the signature is valid using the public key and the quantum digital digest. Analysis shows that the proposed scheme satisfies non-repudiation and unforgeability. The information-theoretic security of the scheme is ensured by quantum indistinguishability and the OTP protocol. Being based on a public-key cryptosystem, the proposed scheme is easier to realize under current technical conditions than other quantum signature schemes.
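The classical one-time-pad step the scheme relies on is easy to make concrete (a generic sketch, not the paper's protocol; the key handling here is purely illustrative): XOR with a uniformly random, never-reused pad of the same length is information-theoretically secure.

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    # XOR each byte with the pad; the pad must be random, secret, as long as
    # the data, and never reused -- the classical OTP conditions.
    assert len(pad) == len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

key_material = b"signature private key material"
pad = secrets.token_bytes(len(key_material))
cipher = otp(key_material, pad)
assert otp(cipher, pad) == key_material      # XOR is its own inverse
```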
NASA Astrophysics Data System (ADS)
Rehman, Asad; Ali, Ishtiaq; Qamar, Shamsul
An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses the upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves the contact discontinuities more effectively and preserves the positivity of flow variables in low density flows. Several case studies are considered and the results of upwind CE/SE are compared with the solutions of central upwind scheme. The numerical results show better performance of the upwind CE/SE method as compared to the central upwind scheme.
NASA Astrophysics Data System (ADS)
Li, Changgang; Sun, Yanli; Yu, Yawei
2017-05-01
Under-frequency load shedding (UFLS) is an important measure to counter the frequency drop caused by load-generation imbalance. In existing schemes, loads are shed by relays in a discontinuous way, which is the major cause of under-shedding and over-shedding. With the application of power electronics technology, some loads can be controlled continuously, and it is possible to improve UFLS with such continuous loads. This paper proposes a UFLS scheme that sheds load continuously: the load shedding amount is proportional to the frequency deviation before the frequency reaches its minimum during the transient process. The feasibility of the proposed scheme is analysed with an analytical system frequency response model. The impacts of governor droop, system inertia and the frequency threshold on the performance of the proposed UFLS scheme are discussed. Case studies validate the proposed scheme by comparing it with conventional UFLS schemes.
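A toy uniform-frequency simulation (all parameters hypothetical, units deliberately loose) illustrates the idea of shedding load in proportion to the frequency deviation below a threshold, instead of in relay-triggered blocks:

```python
# Toy single-bus frequency model: 2H * df/dt = -dP + p_shed - D * f,
# where f is the frequency deviation and p_shed is the continuously
# controlled load shedding, proportional to the deviation below f_thr.
H, D, dP = 5.0, 1.0, 0.2        # inertia, load damping, generation deficit (pu)
f_thr, K = -0.2, 2.0            # shedding threshold (Hz) and proportional gain
dt, T = 0.01, 20.0

f, trace = 0.0, []
for _ in range(int(T / dt)):
    p_shed = K * max(0.0, f_thr - f)    # zero above threshold, then proportional
    f += dt * (-dP + p_shed - D * f) / (2 * H)
    trace.append(f)
print(f"frequency nadir: {min(trace):.3f} Hz below nominal")
```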
ERIC Educational Resources Information Center
Caputo, Andrea; Langher, Viviana
2015-01-01
This article describes the development and initial validation of the Collaboration and Support for Inclusive Teaching, a measure of perceived support in special education teachers regarding the degree of collaboration with regular teachers for inclusive practice at school. The scale was validated on a sample of 276 special education teachers…
ERIC Educational Resources Information Center
Menzies, Holly M.; Lane, Kathleen Lynne
2012-01-01
In this study the authors examined the psychometric properties of the "Student Risk Screening Scale" (SRSS), including predictive validity in terms of student outcomes in behavioral and academic domains. The school, a diverse, suburban school in Southern California, administered the SRSS at three time points as part of regular school…
NASA Astrophysics Data System (ADS)
Qiang, Wei
2011-12-01
We describe a sampling scheme for two-dimensional (2D) solid-state NMR experiments which can readily be applied to sensitivity-limited samples. The sampling scheme utilizes a continuous, non-uniform sampling profile for the indirect dimension, i.e., the acquisition number decreases as a function of the evolution time (t1) in the indirect dimension. For a beta-amyloid (Aβ) fibril sample, we observed an overall 40-50% signal enhancement as measured by cross-peak volume, while the cross-peak linewidths remained comparable to those obtained with regular sampling and processing strategies. The linear and Gaussian decay functions for the acquisition numbers result in similar percentage increases in signal. In addition, we demonstrate that this sampling approach can be applied with different dipolar recoupling approaches such as radiofrequency-assisted diffusion (RAD) and finite-pulse radio-frequency-driven recoupling (fpRFDR). This sampling scheme is especially suitable for sensitivity-limited samples which require long signal averaging for each t1 point, for instance biological membrane proteins where only a small fraction of the sample is isotopically labeled.
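The two decay profiles mentioned above are simple to generate (a sketch; the point counts, bounds and Gaussian width are invented, not the paper's values):

```python
import numpy as np

# Acquisition-number profiles over the indirect dimension: many transients at
# early t1 increments (strong signal), fewer at late increments (weak signal).
n_t1, n_max, n_min = 128, 64, 8
t1 = np.arange(n_t1)

linear = np.round(n_max - (n_max - n_min) * t1 / (n_t1 - 1)).astype(int)
sigma = n_t1 / 2.0
gaussian = np.round(
    n_min + (n_max - n_min) * np.exp(-(t1**2) / (2 * sigma**2))
).astype(int)

# A fair comparison with regular sampling would match the total experiment
# time, i.e. fix profile.sum() across schemes.
print(linear.sum(), gaussian.sum())
```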
Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav
2014-10-01
Advances in network technology provide new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS is exposed to various attacks because its services are provided over public networks. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory, and it enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that it is vulnerable to a denial-of-service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. We then propose a new chaotic map-based anonymous user authentication scheme for TMIS that overcomes the weaknesses of Jiang et al.'s scheme while retaining its original merits. We also show that our scheme is secure against various known attacks, including the attacks found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.
Pulse design for multilevel systems by utilizing Lie transforms
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan
2018-03-01
We put forward a scheme to design pulses for manipulating multilevel systems with Lie transforms. A formula for reverse-constructing a control Hamiltonian is given and applied to pulse design in three- and four-level systems as examples. To demonstrate the validity of the scheme, we perform numerical simulations, which show that population transfers for cascaded three-level and N-type four-level Rydberg atoms can be completed with high fidelity. The scheme may therefore benefit quantum information tasks based on multilevel systems.
A Novel Quantum Blind Signature Scheme with Four-Particle Cluster States
NASA Astrophysics Data System (ADS)
Fan, Ling
2016-03-01
In an arbitrated quantum signature scheme, the signer signs the message and the receiver verifies the signature's validity with the assistance of the arbitrator. We present an arbitrated quantum blind signature scheme based on measuring four-particle cluster states and coding. By using the special properties of four-particle cluster states, we can not only ensure the security of the quantum signature but also guarantee the anonymity of the message owner. The scheme has wide application to e-payment systems, e-government, e-business, and so on.
Hasin, Deborah S.; Shmulewitz, Dvora; Stohl, Malka; Greenstein, Eliana; Aivadyan, Christina; Morita, Kara; Saha, Tulshi; Aharonovich, Efrat; Jung, Jeesun; Zhang, Haitao; Nunes, Edward V.; Grant, Bridget F.
2016-01-01
Background Little is known about the procedural validity of lay-administered, fully structured assessments of depressive, anxiety and post-traumatic stress (PTSD) disorders in the general population as determined by comparison with clinical reappraisal, or whether this differs between current regular substance abusers and others. We evaluated the procedural validity of the Alcohol Use Disorder and Associated Disabilities Interview Schedule, DSM-5 Version (AUDADIS-5) assessment of these disorders through clinician re-interviews. Methods Test-retest design among respondents from the National Epidemiologic Survey on Alcohol and Related Conditions-III (NESARC-III; 264 current regular substance abusers, 447 others). Clinicians blinded to AUDADIS-5 results administered the semi-structured Psychiatric Research Interview for Substance and Mental Disorders, DSM-5 version (PRISM-5). AUDADIS-5/PRISM-5 concordance was indicated by kappa (κ) for diagnoses and intraclass correlation coefficients (ICC) for dimensional measures (DSM-5 symptom or criterion counts). Results were compared between current regular substance abusers and others. Results AUDADIS-5 and PRISM-5 concordance for DSM-5 depressive disorders, anxiety disorders and PTSD was generally fair to moderate (κ = 0.24-0.59), with concordance on dimensional scales much better (ICC = 0.53-0.81). Concordance differed little between regular substance abusers and others. Conclusions AUDADIS-5/PRISM-5 concordance indicated procedural validity for the AUDADIS-5 among substance abusers and others, suggesting that AUDADIS-5 diagnoses of DSM-5 depressive, anxiety and PTSD disorders are informative measures in both groups in epidemiologic studies. The stronger concordance on dimensional measures supports the current movement towards dimensional psychopathology measures, suggesting that such measures provide important information for research in the NESARC-III and other datasets, and possibly for clinical purposes as well. PMID:25939727
Love, Seth; Chalmers, Katy; Ince, Paul; Esiri, Margaret; Attems, Johannes; Kalaria, Raj; Jellinger, Kurt; Yamada, Masahito; McCarron, Mark; Minett, Thais; Matthews, Fiona; Greenberg, Steven; Mann, David; Kehoe, Patrick Gavin
2015-01-01
In a collaboration involving 11 groups with research interests in cerebral amyloid angiopathy (CAA), we used a two-stage process to develop, and in turn validate, a new consensus protocol and scoring scheme for the assessment of CAA and associated vasculopathic abnormalities in post-mortem brain tissue. Stage one used an iterative Delphi-style survey to develop the consensus protocol. The resultant scoring scheme was tested on a series of digital images and paraffin sections that were circulated blind to a number of scorers. The scoring scheme and choice of staining methods were refined by open-forum discussion. The agreed protocol scored parenchymal and meningeal CAA on a 0-3 scale, capillary CAA as present/absent, and vasculopathy on a 0-2 scale, in the 4 cortical lobes, which were scored separately. A further assessment involving three centres was then undertaken. Neuropathologists in three centres (Bristol, Oxford and Sheffield) independently scored sections from 75 cases (25 from each centre) and high inter-rater reliability was demonstrated. Stage two used the results of the three-centre assessment to validate the protocol by investigating previously described associations between APOE genotype (previously determined) and both CAA and vasculopathy. The association of capillary CAA (with or without arteriolar CAA) with APOE ε4 was confirmed. However, APOE ε2 was also found to be a strong risk factor for the development of CAA, not only in Alzheimer's disease but also in elderly non-demented controls. Further validation of this protocol and scoring scheme is encouraged, to aid its wider adoption and to facilitate collaborative and replication studies of CAA. [This corrects the article on p. 19 in vol. 3, PMID: 24754000.]
Modified Dispersion Relations: from Black-Hole Entropy to the Cosmological Constant
NASA Astrophysics Data System (ADS)
Garattini, Remo
2012-07-01
Quantum Field Theory is plagued by divergences in the attempt to calculate physical quantities. Standard techniques of regularization and renormalization are used to keep such problems under control. In this paper we use a different scheme, based on Modified Dispersion Relations (MDR), to remove the infinities appearing in the one-loop approximation, in contrast to what happens in conventional approaches. In particular, we apply the MDR regularization to the computation of the entropy of a Schwarzschild black hole on one side, and of the Zero Point Energy (ZPE) of the graviton on the other. The graviton ZPE is connected to the cosmological constant by means of the Wheeler-DeWitt equation.
NASA Astrophysics Data System (ADS)
Yang, Chen Ning
2013-05-01
Werner Heisenberg was one of the greatest physicists of all time. When he started out as a young research worker, the world of physics was in a very confused and frustrating state, which Abraham Pais has described [1] as "It was the spring of hope, it was the winter of despair," using Charles Dickens' words in A Tale of Two Cities. People were playing a guessing game: there were from time to time great triumphs in proposing, through sheer intuition, makeshift schemes that amazingly explained some regularities in spectral physics, leading to joy. But invariably such successes would be followed by further work revealing the inconsistency or inadequacy of the new scheme, leading to despair...
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of the linearized flow equations about a steady far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular time-stepping and local time-stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, of the numerical calculation. In addition, the solution produced is smoother in the far field than when extrapolation conditions are used. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
Finite entanglement entropy of black holes
NASA Astrophysics Data System (ADS)
Giaccari, Stefano; Modesto, Leonardo; Rachwał, Lesław; Zhu, Yiwei
2018-06-01
We compute the area term contribution to black holes' entanglement entropy (using the conical technique) for a class of local or weakly non-local super-renormalizable gravitational theories coupled to matter. For the first time, we explicitly prove that all the beta functions in the proposed theory, except that of the cosmological constant, are identically zero in the cut-off regularization scheme and not only in dimensional regularization. In particular, we show that there is no divergence quadratic in the cut-off, and hence there is no contribution to the beta function of the Newton constant. As a consequence of this result, we argue that in these theories of gravity the conical entropy is a sensible definition of physical entropy; in particular, it is positive-definite and gauge independent. On top of this, the conical entropy, being expressed only in terms of the classical Newton constant, turns out to be finite and naturally coincides with the Bekenstein-Hawking entropy. Finally, we propose a theory in which the renormalization of the Newton constant is entirely due to the Standard Model matter, arguing that such a contribution does not give rise to the usual interpretational problems of conical entropy discussed in the literature.
Time cycle analysis and simulation of material flow in MOX process layout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, S.; Saraswat, A.; Danny, K.M.
The (U,Pu)O2 MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO2. The presence of a high percentage of reprocessed PuO2 necessitates the design of an optimized fuel fabrication process line that addresses both production needs and regulatory norms regarding radiological safety. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and scheduling of input powder batches. Different schemes are tested before implementation in the process line with the help of a software tool that simulates the material movement through the optimized process layout. Different material processing schemes have been devised, and the validity of the schemes is tested with the software. Schemes in which production batches meet at any glove-box location are considered invalid. A valid scheme ensures adequate spacing between the production batches and at the same time meets the production target. The software can be further improved by accurately calculating material movement times through the glove-box train; one important factor is accounting for material handling time with automation systems in place.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, J., E-mail: jlu@pppl.gov; Bitter, M.; Hill, K. W.
A two-dimensional stigmatic x-ray imaging scheme, consisting of two spherically bent crystals, one concave and one convex, was recently proposed [M. Bitter et al., Rev. Sci. Instrum. 83, 10E527 (2012)]. The Bragg angles and the radii of curvature of the two crystals of this imaging scheme are matched to eliminate the astigmatism and to satisfy the Bragg condition across both crystal surfaces for a given x-ray energy. In this paper, we consider more general configurations of this imaging scheme, which allow us to vary the magnification for a given pair of crystals and x-ray energy. The stigmatic imaging scheme has been validated for the first time by imaging x-rays generated by a micro-focus x-ray source with a source size of 8.4 μm, as confirmed by knife-edge measurements. Results are presented from imaging the tungsten Lα1 emission at 8.3976 keV, using a convex Si-422 crystal and a concave Si-533 crystal with 2d-spacings of 2.21707 Å and 1.65635 Å and radii of curvature of 500 ± 1 mm and 823 ± 1 mm, respectively, showing a spatial resolution of 54.9 μm. This imaging scheme is expected to be of interest for the two-dimensional imaging of laser-produced plasmas.
An exponential time-integrator scheme for steady and unsteady inviscid flows
NASA Astrophysics Data System (ADS)
Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili
2018-07-01
An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. An effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) scheme and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than the BDF2 scheme does, while maintaining the expected speedup at the same time. Moreover, the PCEXP scheme is shown to achieve computational efficiency comparable to that of the implicit schemes for steady flows.
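To make the predictor-corrector exponential idea concrete, the following is a minimal sketch of a closely related second-order scheme (the Cox-Matthews ETD2RK form) for a semi-discretized system dy/dt = L y + N(y). It is not the authors' PCEXP itself, and the dense expm/phi evaluations stand in for the Krylov approximations that make the approach viable for large systems.

```python
import numpy as np
from scipy.linalg import expm

def phi1(M):
    # phi1(M) = M^{-1} (e^M - I); dense evaluation, for small nonsingular M
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

def phi2(M):
    # phi2(M) = M^{-1} (phi1(M) - I)
    return np.linalg.solve(M, phi1(M) - np.eye(M.shape[0]))

def exp_pc_step(y, h, L, N):
    """One predictor-corrector exponential step for dy/dt = L @ y + N(y),
    second-order accurate in the step size h."""
    E = expm(h * L)
    y_p = E @ y + h * (phi1(h * L) @ N(y))            # exponential-Euler predictor
    return y_p + h * (phi2(h * L) @ (N(y_p) - N(y)))  # corrector
```

In a production solver, E, phi1 and phi2 are never formed explicitly; their action on vectors is approximated in a Krylov subspace, which is what keeps the cost competitive with implicit schemes.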
Bayesian Inversion of 2D Models from Airborne Transient EM Data
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Key, K.; Ray, A.
2016-12-01
The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that do not, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single 'most acceptable' model but an estimate of the posterior likelihood of the model parameters given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including the number of cells) is fully data-driven. To make the problem computationally tractable, we approximate the forward solution for each TEM sounding using a local 1D approximation, where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.
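As an illustration of the parallel-tempering ingredient only (the reversible-jump birth/death moves over Voronoi cells are omitted), here is a sketch of the standard swap move between adjacent tempered chains; log_like, betas and states are hypothetical names for the log-likelihood, the inverse-temperature ladder and the per-chain models.

```python
import numpy as np

def pt_swap(log_like, betas, states, rng):
    """Attempt one parallel-tempering swap between a random pair of
    adjacent chains; betas[k] = 1 corresponds to the target posterior."""
    i = rng.integers(len(betas) - 1)
    j = i + 1
    # Standard Metropolis acceptance ratio for tempered likelihoods
    log_alpha = (betas[i] - betas[j]) * (log_like(states[j]) - log_like(states[i]))
    if np.log(rng.random()) < log_alpha:
        states[i], states[j] = states[j], states[i]
    return states
```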
Neufeld, E; Chavannes, N; Samaras, T; Kuster, N
2007-08-07
The modeling of thermal effects, often based on the Pennes bioheat equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain more accurate solutions by increasing the grid resolution.
Saokaew, Surasak; Kanchanasuwan, Shada; Apisarnthanarak, Piyaporn; Charoensak, Aphinya; Charatcharoenwitthaya, Phunchai; Phisalprapa, Pochamana; Chaiyakunapruk, Nathorn
2017-10-01
Non-alcoholic fatty liver disease (NAFLD) can progress from simple steatosis to hepatocellular carcinoma. No tools have been developed specifically for high-risk patients. This study aimed to develop a simple risk score to predict NAFLD in patients with metabolic syndrome (MetS). A total of 509 patients with MetS were recruited. All were assessed by clinicians, with ultrasonography confirming whether they had NAFLD. Patients were randomly divided into derivation (n=400) and validation (n=109) cohorts. To develop the risk score, clinical risk indicators measured at the time of recruitment were modelled by logistic regression. Regression coefficients were transformed into item scores and added up to a total score. A risk scoring scheme was developed from five clinical predictors: BMI ≥25, AST/ALT ≥1, ALT ≥40, type 2 diabetes mellitus and central obesity. The scoring scheme was applied to the validation cohort to test its performance. The scheme discriminated NAFLD with an area under the receiver operating characteristic curve (AuROC) of 76.8% and good calibration (Hosmer-Lemeshow χ² = 4.35; P = .629). The positive likelihood ratios of NAFLD in patients with low risk (scores below 3) and high risk (scores 5 and over) were 2.32 (95% CI: 1.90-2.82) and 7.77 (95% CI: 2.47-24.47), respectively. When applied to the validation cohort, the score showed good performance with an AuROC of 76.7%, and gave 84% and 100% certainty in the low- and high-risk groups, respectively. A simple and non-invasive scoring scheme of five predictors provides good prediction indices for NAFLD in MetS patients. This scheme may help clinicians take further appropriate action. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
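Applying such an additive scheme at the point of care amounts to summing item points and reading off a risk band. In the sketch below the per-item weights are hypothetical placeholders (the published scheme assigns its own integer points to each predictor); only the five predictors and the cut-offs (below 3 low risk, 5 and over high risk) come from the abstract.

```python
def nafld_risk(bmi, ast, alt, has_t2dm, has_central_obesity,
               weights=(1, 1, 1, 1, 1)):
    """Sum item scores for the five predictors; weights are hypothetical."""
    w_bmi, w_ratio, w_alt, w_dm, w_ob = weights
    score = (w_bmi * (bmi >= 25)
             + w_ratio * (ast / alt >= 1)
             + w_alt * (alt >= 40)
             + w_dm * has_t2dm
             + w_ob * has_central_obesity)
    if score < 3:
        return score, "low risk"
    if score >= 5:
        return score, "high risk"
    return score, "intermediate risk"

print(nafld_risk(bmi=27.5, ast=50, alt=45, has_t2dm=True, has_central_obesity=True))
```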
Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1989-01-01
Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower bound proofs are given to show that, when the load is uniformly distributed, the data traffic associated with factoring an n x n dense matrix using n^α (α ≤ 2) processors is Ω(n^(2+α/2)). For n x n sparse matrices representing a √n x √n regular grid graph, the data traffic is shown to be Ω(n^(1+α/2)), α ≤ 1. Partitioning schemes that are variations of the block assignment scheme are described, and it is shown that the data traffic generated by these schemes is asymptotically optimal. The schemes allow efficient use of up to O(n²) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches the maximum values of O(n³) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes allow better utilization of the data accessed from shared memory and thus generate less data traffic than schemes based on column-wise wrap-around assignment.
2010-01-01
Background The finite volume solver Fluent (Lebanon, NH, USA) is computational fluid dynamics software employed to analyse biological mass transport in the vasculature. A principal consideration in computational modelling of blood-side mass transport is the selection of the convection-diffusion discretisation scheme. Because numerous discretisation schemes are available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration because of its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10^-10 m²/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. Average errors of 140% and 116% were demonstrated between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high-Peclet-number, convection-dominated flow conditions. Conclusion Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Either the Second-Order Upwind or QUICK discretisation scheme should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to use computationally inexpensive discretisation schemes at the cost of accuracy in the resultant species concentration. PMID:20642816
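The quoted Peclet number is easy to reproduce from Pe = UL/D. The velocity and length scales below are hypothetical placeholders (the abstract reports only D and Pe); they are chosen so that their product reproduces the quoted value.

```python
D = 3.125e-10        # species diffusivity in water, m^2/s (from the study)
U, L = 0.08, 0.01    # hypothetical: 8 cm/s characteristic velocity, 1 cm scale
Pe = U * L / D
print(f"Pe = {Pe:.3g}")  # ~2.56e6, i.e. strongly convection-dominated
```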
NASA Astrophysics Data System (ADS)
Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime
2017-09-01
After shutdown of a fission or fusion reactor, the activated structures emit decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. Such schemes are widely developed for fusion applications, and more precisely for the ITER tokamak. This paper presents the rigorous two-step scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out, and the results are in good agreement with those of the other participants.
Self-match based on polling scheme for passive optical network monitoring
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua
2018-06-01
We propose a self-matching, polling-based scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that exploits only a patchcord of specific length and two different fiber Bragg gratings with 100% reflectivity. This simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity required of the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.
Verallo-Rowell, Vermén M
2011-01-01
The validated hypoallergenic (VH) rating system was initiated in 1988 to try to objectively validate the "hypoallergenic" claim in cosmetics. Objectives: to show how the system rates cosmetic hypoallergenicity, and to compare the prevalence of cosmetic contact dermatitis (CCD) among users of regular cosmetics versus cosmetics with high VH ratings. Methods: (1) made a VH list based on top allergens from patch-test results published by the North American Contact Dermatitis Group (NACDG) and the European Surveillance System on Contact Allergies (ESSCA); (2) reviewed global regulatory, cosmetic, drug, packaging, and manufacturing practices to show how allergens may contaminate products; (3) compared cosmetic ingredient lists against the VH list to obtain the VH rating (the more allergens absent, the higher the VH rating); and (4) obtained CCD prevalence among users of regular cosmetics versus users of cosmetics with high VH ratings. Results: (1) two VH lists (1988, 2003) included only cosmetic allergens in the NACDG surveys, the third (2007) included cosmetic and potential contaminant noncosmetic allergens, and the fourth (2010) adds ESSCA patch-test surveys. (2) CCD prevalence is 0.05 to 0.12% (average, 0.08%) among users of cosmetics with high VH ratings versus 2.4 to 36.3% among users of regular cosmetics. The VH rating system is thus shown to objectively validate the hypoallergenic cosmetics claim.
Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters
NASA Astrophysics Data System (ADS)
Masullo, Alessandro; Theunissen, Raf
2016-03-01
The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance consequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector is subsequently compared. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition considering vector magnitude and angle, adopting a modified Gaussian-weighted distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuations in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
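For reference, the fixed-neighbourhood baseline that the paper renders adaptive is the normalized median test of Westerweel and Scarano; a minimal sketch on one velocity component over a regular grid follows. The paper's contribution varies the neighbourhood extent with local coherency instead of fixing it to 3×3, and modifies the residual definition.

```python
import numpy as np

def normalized_median_test(U, eps=0.1, thresh=2.0):
    """Flag outliers via the normalized median residual in a fixed 3x3
    neighbourhood; eps absorbs background noise (in pixel units)."""
    ny, nx = U.shape
    mask = np.zeros_like(U, dtype=bool)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nb = np.delete(U[j-1:j+2, i-1:i+2].ravel(), 4)  # 8 neighbours
            Um = np.median(nb)
            rm = np.median(np.abs(nb - Um))                 # median residual
            mask[j, i] = abs(U[j, i] - Um) / (rm + eps) > thresh
    return mask
```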
Outcomes of Quality Assurance: A Discussion of Knowledge, Methodology and Validity
ERIC Educational Resources Information Center
Stensaker, Bjorn
2008-01-01
A common characteristic in many quality assurance schemes around the world is their implicit and often narrowly formulated understanding of how organisational change is to take place as a result of the process. By identifying some of the underlying assumptions related to organisational change in current quality assurance schemes, the aim of this…
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-09-01
Vibration analysis for the diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time intervals between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of several signal processing techniques. The proposed scheme first represents the vibration signal in terms of the uniformly resampled angular position of the rotor shaft, using interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of the resampled vibration signal; this is followed by thresholding of the IMFs and signal reconstruction to de-noise the signal, and by envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of a rolling element bearing, developed using the bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed on a machine fault simulator (MFS) system. Some fault scenarios that could not be experimentally recreated are then generated through simulations and analyzed with the developed diagnosis scheme.
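The first stage, uniform resampling in shaft angle from interpolated instantaneous angular position, can be sketched as follows; t_marks is a hypothetical array of once-per-revolution encoder time stamps, and the EMD, IMF thresholding and envelope order-tracking stages would then operate on the returned angle-domain signal.

```python
import numpy as np

def angular_resample(t, x, t_marks, ppr=1024):
    """Resample vibration signal x(t) onto a uniform shaft-angle grid
    (ppr samples per revolution), removing speed variability."""
    theta_marks = np.arange(len(t_marks)) * 2 * np.pi   # angle at each mark
    theta = np.interp(t, t_marks, theta_marks)          # angle vs. time
    theta_u = np.arange(theta[0], theta[-1], 2 * np.pi / ppr)
    t_u = np.interp(theta_u, theta, t)                  # invert theta(t)
    return theta_u, np.interp(t_u, t, x)
```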
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry, and for the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
On the security of two remote user authentication schemes for telecare medical information systems.
Kim, Kee-Won; Lee, Jae-Dong
2014-05-01
Telecare medical information systems (TMISs) support convenient and rapid health-care services. A secure and efficient authentication scheme for a TMIS safeguards patients' electronic patient records (EPRs) and helps health-care workers and medical personnel make correct clinical decisions rapidly. Recently, Kumari et al. proposed a password-based user authentication scheme using smart cards for TMISs and claimed that the proposed scheme could resist various malicious attacks. However, we point out that their scheme is still vulnerable to lost-smart-card attacks and cannot provide forward secrecy. Subsequently, Das and Goswami proposed a secure and efficient uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. They simulated their scheme for formal security verification using the widely accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool to ensure that their scheme is secure against passive and active attacks. However, we show that their scheme is also vulnerable to smart-card loss attacks and cannot provide the forward secrecy property. The proposed cryptanalysis discourages any practical use of the two schemes under investigation and reveals some subtleties and challenges in designing this type of scheme.
Mishra, Raghavendra; Barnwal, Amit Kumar
2015-05-01
The telecare medical information system (TMIS) provides effective healthcare delivery services by employing information and communication technologies. Privacy and security are always matters of great concern in TMISs. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. It was later proved insecure against various active and passive attacks. To remedy the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password-change phases. To present an authentication scheme with pre-smart-card authentication, we propose an improved anonymous smart-card-based authentication scheme for TMISs. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, it provides efficient login and password-change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overhead with relevant schemes.
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to the design of CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL reconstruction in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information about the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict the local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond those achievable with conventional acquisition and reconstruction.
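For orientation, a detectability index of this kind is computed from the predicted local modulation transfer function T(f) and noise power spectrum S(f); in the common non-prewhitening-observer form (an assumed form here, the paper may employ a variant) it reads

\[
d'^2 = \frac{\left[\displaystyle\iint \lvert W_{\mathrm{task}}(f)\rvert^2\, T^2(f)\, \mathrm{d}f\right]^2}{\displaystyle\iint \lvert W_{\mathrm{task}}(f)\rvert^2\, T^2(f)\, S(f)\, \mathrm{d}f},
\]

where W_task is the frequency template of the imaging task; the optimizer then tunes the modulation profile and penalty weights to maximize d' at the location of interest.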
On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2012-08-01
A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking larger time steps than the original one. Following the modified strategy, LTS TVD versions of Yee's upwind TVD scheme and the Yee-Roe-Davis symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time-splitting procedure, and the associated boundary condition treatment suitable for the LTS schemes is also imposed. Numerical experiments on Sod's shock tube problem and on inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies of the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single-time-step schemes, especially for CFL numbers ranging from 1.0 to 4.0.
Das, Ashok Kumar; Goswami, Adrijit
2014-06-01
Recently, Awasthi and Srivastava proposed a novel biometric remote user authentication scheme for the telecare medicine information system (TMIS) with nonce. Their scheme is very efficient, as it is based on an efficient chaotic one-way hash function and bitwise XOR operations. In this paper, we first analyze Awasthi-Srivastava's scheme and then show that it has several drawbacks: (1) an incorrect password change phase, (2) failure to preserve the user anonymity property, (3) failure to establish a secret session key between a legal user and the server, (4) failure to protect against strong replay attacks, and (5) lack of rigorous formal security analysis. We then propose a novel and secure biometric-based remote user authentication scheme that withstands the security flaws found in Awasthi-Srivastava's scheme and enhances the features required of an ideal user authentication scheme. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that it is secure against passive and active attacks, including replay and man-in-the-middle attacks. Our scheme is also efficient compared to Awasthi-Srivastava's scheme.
Continuum limit of Bk from 2+1 flavor domain wall QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, A.; T. Izubuchi, et al.
2011-07-01
We determine the neutral kaon mixing matrix element B_K in the continuum limit with 2+1 flavors of domain wall fermions, using the Iwasaki gauge action at two different lattice spacings. These lattice fermions have near-exact chiral symmetry and therefore avoid artificial lattice operator mixing. We introduce a significant improvement to the conventional nonperturbative renormalization (NPR) method, in which the bare matrix elements are renormalized nonperturbatively in the regularization-invariant momentum scheme (RI-MOM) and are then converted into the MS-bar scheme using continuum perturbation theory. In addition to RI-MOM, we introduce and implement four nonexceptional intermediate momentum schemes that suppress infrared nonperturbative uncertainties in the renormalization procedure. We compute the conversion factors relating the matrix elements in this family of regularization-invariant symmetric momentum schemes (RI-SMOM) and MS-bar at one-loop order. Comparison of the results obtained using these different intermediate schemes allows for a more reliable estimate of the unknown higher-order contributions, and hence for a correspondingly more robust estimate of the systematic error. We also apply a recently proposed approach in which twisted boundary conditions are used to control the Symanzik expansion for off-shell vertex functions, leading to better control of the renormalization in the continuum limit. We control chiral extrapolation errors by considering both the next-to-leading-order SU(2) chiral effective theory and an analytic mass expansion. We obtain B_K^MS-bar(3 GeV) = 0.529(5)_stat(15)_χ(2)_FV(11)_NPR. This corresponds to B̂_K^RGI = 0.749(7)_stat(21)_χ(3)_FV(15)_NPR. Adding all sources of error in quadrature, we obtain B̂_K^RGI = 0.749(27)_combined, with an overall combined error of 3.6%.
Validation of an Instrument and Testing Protocol for Measuring the Combinatorial Analysis Schema.
ERIC Educational Resources Information Center
Staver, John R.; Harty, Harold
1979-01-01
Designs a testing situation to examine the presence of combinatorial analysis, to establish construct validity in the use of an instrument, Combinatorial Analysis Behavior Observation Scheme (CABOS), and to investigate the presence of the schema in young adolescents. (Author/GA)
Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).
Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K
2013-02-01
We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS were examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents, and this complex information can be obtained relatively quickly.
Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts
Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.
2013-01-01
To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
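To see why the circulant-plus-mask splitting pays off, note that the AL subproblems reduce to linear systems diagonalized by the FFT. A minimal sketch of such a non-iterative solve under a periodic-convolution blur follows (variable names are illustrative, not the authors' code):

```python
import numpy as np

def circulant_solve(rhs, otf, mu):
    """Solve (H^T H + mu I) x = rhs in O(n log n), where H is periodic
    convolution with optical transfer function otf (the 2-D DFT of the
    padded PSF); the masking operator enters only through cheap separate
    updates in the alternating minimization."""
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (np.abs(otf) ** 2 + mu)))
```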
NASA Astrophysics Data System (ADS)
Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei
2018-05-01
Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. The application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further advanced by the development of an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify depth accurately or provide quantitative information. Here, we present an imaging scheme to retrieve depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied to endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection and then a mathematical model for characterizing luminescent light propagation from the tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combining the third-order simplified spherical harmonics approximation, diffusion, and radiosity equations to ensure both accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, demonstrating that it can provide a satisfactory balance between imaging accuracy and computational burden.
Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization
NASA Astrophysics Data System (ADS)
Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin
2005-02-01
This paper proposes a novel scheme for a 10 Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error-detection and error-correction bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme adds a code process module. The SDH/SONET frames in the transmit direction are processed as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08 Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels; all channels are data channels. During this process, the parity bytes and CRC bytes are generated in a similar way to OIF-VSR4-01.0 and stored in the code process module. (3) The code process module regularly conveys the additional parity bytes and CRC bytes to all 12 data channels. (4) After 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process proceeds approximately in the reverse order of the transmit process. By applying this scheme to a 10 Gb/s VSR system, the frame size is reduced from 15552×12 bytes to 14040×12 bytes, and the system redundancy is reduced considerably.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2009-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Results from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils, with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or by modifying the scheme stencil to reflect the direction of strong coupling.
[Valuating public health in some zoos in Colombia. Phase 1: designing and validating instruments].
Agudelo-Suárez, Angela N; Villamil-Jiménez, Luis C
2009-10-01
To design and validate instruments for identifying public health problems in some zoological parks in Colombia, thereby allowing public health management to be evaluated. Four instruments were designed and validated with the participation of five zoos. The instruments were validated for appearance, content, sensitivity to change and reliability, and their usefulness was determined. An evaluation scale was created assigning a maximum of 400 points, with the following intervals: 350-400 points for good public health management, 100-349 points for fair management and 0-99 points for deficient management. The instruments were applied to the five zoos as part of the validation, forming a baseline for future evaluation of public health in them. Four valid and useful instruments were obtained for evaluating public health in zoos in Colombia. The five zoos presented fair public health management. The baseline obtained when validating the instruments led to identifying strengths and weaknesses regarding public health management in the zoos. The instruments evaluated public health management both generally and specifically; they allowed zoos in Colombia to be diagnosed, identified, quantified and scored in terms of public health. The baseline provided a starting point for making comparisons and enabling future follow-up of public health in Colombian zoos.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2010-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes (a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction). The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Tests from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils, with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or by modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that the accuracies of the node-centered and the best cell-centered schemes are comparable at an equivalent number of degrees of freedom.
First Prismatic Building Model Reconstruction from Tomosar Point Clouds
NASA Astrophysics Data System (ADS)
Sun, Y.; Shahzad, M.; Zhu, X.
2016-06-01
This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models from spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and removal of the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. The coarse outline of each roof segment is then reconstructed and later refined using quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (a convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.
Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals
Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu
2012-01-01
Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) from bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time-, frequency- and time-frequency-domain features of bearing accelerometer sensor signals. SR is a regression framework for efficient regularized subspace learning and feature extraction, and it uses the least squares method to obtain the best projection direction rather than computing an eigen-decomposition of a dense feature matrix, which also gives it an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signal data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structural information about different bearing faults and severities, and the proposed feature extraction scheme is demonstrated to have an advantage over other, similar approaches. PMID:23202017
NASA Astrophysics Data System (ADS)
Liu, J.; Suo, X. M.; Zhou, S. S.; Meng, S. Q.; Chen, S. S.; Mu, H. P.
2016-12-01
The tracking of the migration of the ice frontal surface is crucial for understanding the underlying physical mechanisms in freezing soil. Owing to its distinct advantages, including non-invasive sensing, high safety, low cost and high data-acquisition speed, electrical capacitance tomography (ECT) is considered a promising visualization measurement method. In this paper, the ECT method is used to visualize the migration of the ice frontal surface in freezing soil. With the main motivation of improving imaging quality, a loss function with multiple regularizers that incorporate prior information related to the imaging objects is proposed to cast the ECT image reconstruction task into an optimization problem. An iteration scheme that integrates the advantages of the split Bregman iteration (SBI) method is developed to search for the optimal solution of the proposed loss function. A sensor with unclosed electrodes is designed to satisfy the requirements of practical measurements. An experimental system of one-dimensional freezing in frozen soil is constructed, and the migration of the ice frontal surface during the freezing of a wet soil sample containing five percent moisture is measured. The visualization measurement results validate the feasibility and effectiveness of the ECT visualization method.
Lukic, Luka; Santos-Victor, José; Billard, Aude
2014-04-01
We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance, where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of the workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking the similar control systems found in humans. We validate our model for visuomotor control of a humanoid robot.
ERIC Educational Resources Information Center
Shany, Michal; Share, David L.
2011-01-01
Whereas most English-language sub-typing schemes for dyslexia (e.g., Castles & Coltheart, 1993) have focused on reading accuracy for words varying in regularity, such an approach may have limited utility for reading disability sub-typing beyond English, in which fluency rather than accuracy is the key discriminator of developmental and individual…
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term, as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of prior knowledge of the nature and location of the excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is pointed out, in particular, that properly exploiting the space-frequency characteristics of the excitation field to identify can improve the quality of the force reconstruction.
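Schematically, a multiplicative strategy of this kind replaces the usual additive trade-off J(F) = ||X - HF||² + λR(F) by a product, so no scalar λ has to be tuned; a generic form (an assumption for illustration, the authors' exact functional may differ) is

\[
\min_{F}\; J(F) \;=\; \underbrace{\lVert X - H F \rVert_2^2}_{\text{data fidelity}} \;\times\; \underbrace{\mathcal{R}_s(F)}_{\text{spatial prior}} \;\times\; \underbrace{\mathcal{R}_f(F)}_{\text{spectral prior}},
\]

where H is the transfer matrix between the excitation field F and the measured vibration data X; the relative weighting of the terms emerges from the minimization itself.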
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method with an application of the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, namely apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and true images of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
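The validation metric is straightforward to reproduce; a sketch of the normalized cross-correlation coefficient between a reconstruction and a reference phantom image:

```python
import numpy as np

def cross_correlation(rec, ref):
    """Normalized cross-correlation coefficient between a reconstructed
    image and the reference tracer distribution (1.0 = perfect match)."""
    a, b = rec - rec.mean(), ref - ref.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```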
A Novel IEEE 802.15.4e DSME MAC for Wireless Sensor Networks
Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin
2017-01-01
The IEEE 802.15.4e standard proposes the Deterministic and Synchronous Multichannel Extension (DSME) mode for wireless sensor networks (WSNs) to support industrial, commercial and health-care applications. In this paper, a new channel access scheme and beacon scheduling schemes are designed for IEEE 802.15.4e-enabled WSNs in a star topology to reduce the network discovery time and energy consumption. In addition, a new dynamic guaranteed retransmission slot allocation scheme is designed for devices with failed Guaranteed Time Slot (GTS) transmissions to reduce the retransmission delay. To evaluate our schemes, analytical models are designed to analyze the performance of the WSNs in terms of reliability, delay, throughput and energy consumption. Our schemes are validated with simulation and analytical results, and the simulation results are observed to match the analytical ones well. The evaluated results show that our schemes can improve reliability, throughput, delay and energy consumption significantly. PMID:28275216
A Gas-Kinetic Scheme for Reactive Flows
NASA Technical Reports Server (NTRS)
Lian, Yong-Sheng; Xu, Kun
1998-01-01
In this paper, the gas-kinetic BGK scheme for the compressible flow equations is extended to chemically reactive flows. The mass fraction of the unburnt gas is implemented into the gas-kinetic equation by assigning a new internal degree of freedom to the particle distribution function. The new variable can also be used to describe the fluid trajectory in nonreactive flows. Owing to the gas-kinetic BGK model, the current scheme essentially solves the Navier-Stokes chemically reactive flow equations. Numerical tests validate the accuracy and robustness of the current kinetic method.
Collar grids for intersecting geometric components within the Chimera overlapped grid scheme
NASA Technical Reports Server (NTRS)
Parks, Steven J.; Buning, Pieter G.; Chan, William M.; Steger, Joseph L.
1991-01-01
A method for overcoming problems with using the Chimera overset grid scheme in the region of intersecting geometry components is presented. A 'collar grid' resolves the intersection region and provides communication between the component grids. This approach is validated by comparing computed and experimental data for a flow about a wing/body configuration. Application of the collar grid scheme to the Orbiter fuselage and vertical tail intersection in a computation of the full Space Shuttle launch vehicle demonstrates its usefulness for simulation of flow about complex aerospace vehicles.
A Novel Quantum Blind Signature Scheme with Four-particle GHZ States
NASA Astrophysics Data System (ADS)
Fan, Ling; Zhang, Ke-Jia; Qin, Su-Juan; Guo, Fen-Zhuo
2016-02-01
In an arbitrated quantum signature scheme, the signer signs the message and the receiver verifies the signature's validity with the assistance of the arbitrator. We present an arbitrated quantum blind signature scheme using four-particle entangled Greenberger-Horne-Zeilinger (GHZ) states. By using the special relationship of four-particle GHZ states, we can not only ensure the security of the quantum signature, but also guarantee the anonymity of the message owner. The scheme has wide applications in E-payment systems, E-government, E-business, etc.
Unconditionally Secure Blind Signatures
NASA Astrophysics Data System (ADS)
Hara, Yuki; Seito, Takenobu; Shikata, Junji; Matsumoto, Tsutomu
The blind signature scheme introduced by Chaum allows a user to obtain a valid signature for a message from a signer such that the message is kept secret from the signer. Blind signature schemes have so far mainly been studied from the viewpoint of computational security. In this paper, we study blind signatures in the unconditional setting. Specifically, we newly introduce a model of unconditionally secure blind signature schemes (USBS, for short). We also propose security notions and their formalization in our model. Finally, we propose a construction method for USBS that is provably secure under our security notions.
Numerical scoring for the Classic BILAG index.
Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Isenberg, David A; Gordon, Caroline
2009-12-01
To develop an additive numerical scoring scheme for the Classic BILAG index. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0.
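A minimal sketch of the recommended additive scoring, assuming grades are supplied per system as letters (the function name and input format are illustrative):

GRADE_SCORE = {"A": 12, "B": 5, "C": 1, "D": 0, "E": 0}

def bilag_total(system_grades):
    # Sum the numeric value of each of the eight systems' grades.
    return sum(GRADE_SCORE[g] for g in system_grades)

# e.g. eight systems graded A, B, C, C, D, E, E, E -> 12 + 5 + 1 + 1 = 19
print(bilag_total(["A", "B", "C", "C", "D", "E", "E", "E"]))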
Soil Moisture Monitoring using Surface Electrical Resistivity measurements
NASA Astrophysics Data System (ADS)
Calamita, Giuseppe; Perrone, Angela; Brocca, Luca; Straface, Salvatore
2017-04-01
The relevant role played by soil moisture (SM) in global and local natural processes results in an explicit interest in its spatial and temporal estimation in the vadose zone from different scientific areas, i.e. eco-hydrology, hydrogeology, atmospheric research, and soil and plant sciences. A deeper understanding of natural processes requires the collection of data at a higher number of points and at increasingly higher spatial scales in order to validate hydrological numerical simulations. To take best advantage of Electrical Resistivity (ER) data, with their non-invasive and cost-effective properties, sequential Gaussian geostatistical simulations (sGs) can be applied to monitor the SM distribution in the soil by means of a few SM measurements and a dense, regular ER monitoring grid. With this aim, co-located SM measurements, using mobile TDR probes (MiniTrase), and ER measurements, obtained by using a four-electrode device coupled with a geo-resistivimeter (Syscal Junior), were collected during two time surveys carried out on a 200 × 60 m2 area. Data were collected at a depth of around 20 cm at more than 800 points, adopting a regular grid sampling scheme with a 5 m step, varied according to logistic and soil compaction constraints. The results of this study are robust due to the high number of measurements available for both variables, which strengthens the confidence in the estimated covariance function. Moreover, the findings obtained using sGs show that it is possible to estimate soil moisture variations in the pedological zone by means of time-lapse electrical resistivity and a few SM measurements.
Report on Pairing-based Cryptography.
Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily
2015-01-01
This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.
Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.
Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem
2018-01-01
In this paper, a hybrid heuristic scheme based on two different basis functions, i.e., the log-sigmoid and the Bernstein polynomial, with unknown parameters is used to solve nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are found to be in close agreement with both the exact solution and the solution obtained by the Haar wavelet-quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
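As a rough illustration of the trial-solution idea, the sketch below solves the toy problem u'(x) + u(x)^2 = 0, u(0) = 1 (exact solution 1/(1+x)) with a three-term log-sigmoid trial solution; scipy's differential_evolution stands in for the paper's GA-IPA hybrid, and the test equation, basis size and parameter bounds are all assumptions:

import numpy as np
from scipy.optimize import differential_evolution

xs = np.linspace(0.0, 1.0, 20)                # collocation points

def trial(params, x):
    # u(x) = 1 + x * sum_i a_i * sigmoid(b_i*x + c_i); the factor x enforces u(0)=1
    a, b, c = params.reshape(3, 3)
    return 1.0 + x * np.sum(a / (1.0 + np.exp(-(np.outer(x, b) + c))), axis=1)

def fitness(params):
    # global error: squared residual of u' + u^2 = 0 at the collocation points
    h = 1e-5
    du = (trial(params, xs + h) - trial(params, xs - h)) / (2.0 * h)
    return float(np.sum((du + trial(params, xs) ** 2) ** 2))

res = differential_evolution(fitness, bounds=[(-5, 5)] * 9, seed=0, maxiter=500)
print(fitness(res.x), np.max(np.abs(trial(res.x, xs) - 1.0 / (1.0 + xs))))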
Case studies in configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.
1989-01-01
A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.
A semi-implicit level set method for multiphase flows and fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri; Maitre, Emmanuel
2016-06-01
In this paper we present a novel semi-implicit time discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.
NASA Astrophysics Data System (ADS)
Faussurier, G.; Blancard, C.; Combis, P.; Decoster, A.; Videau, L.
2017-10-01
We present a model to calculate the electrical and thermal electronic conductivities in plasmas using the Chester-Thellung-Kubo-Greenwood approach coupled with the Kramers approximation. The divergence in photon energy at low values is eliminated using a regularization scheme with an effective energy-dependent electron-ion collision frequency. In doing so, we interpolate smoothly between the Drude-like and the Spitzer-like regularizations. The model still satisfies the well-known sum rule over the electrical conductivity. This kind of approximation also extends naturally to the average-atom model. Particular attention is paid to the Lorenz number. Its nondegenerate and degenerate limits are given, and the transition towards the Drude-like limit is proved in the Kramers approximation.
Hadron physics through asymptotic SU(3) and the chiral SU(3) x SU(3) algebra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oneda, S.; Matsuda, S.; Perlmutter, A.
From Coral Gables conference on fundamental interactions for theoretical studies; Coral Gables, Florida, USA (22 Jan 1973). See CONF-730124-. The inter-SU(3)-multiplet regularities and clues to a possible level scheme of hadrons are studied in a systematic way. The hypothesis of asymptotic SU(3) is made in the presence of GMO mass splittings with mixing, which allows information to be extracted from the chiral SU(3) x SU(3) charge algebras and from the exotic commutation relations. For the ground states the schemes obtained are compatible with those of the SU(6) x O(3) classification. Sum rules are obtained which recover most of the good results of SU(6). (LBS)
Piro, Benoit; Shi, Shihui; Reisberg, Steeve; Noël, Vincent; Anquetin, Guillaume
2016-02-29
We review here the most frequently reported targets among electrochemical immunosensors and aptasensors: antibiotics, bisphenol A, cocaine, ochratoxin A and estradiol. In each case, the immobilization procedures are described, as well as the transduction schemes and the limits of detection. It is shown that limits of detection are generally two to three orders of magnitude lower for immunosensors than for aptasensors, owing to the higher affinities of antibodies. No significant progress has been made in improving these affinities, but transduction schemes have been improved instead, which has led to a steady improvement in limits of detection of about five orders of magnitude over the last 10 years. This progress depends on the target, however.
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first-order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance is compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. The benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander
2016-11-21
Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
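A minimal sketch of such a location-based trigger decision, assuming a per-minute location stream annotated with land use and population density (the field names, the rare land-use list and the thresholds are all illustrative, not the study's actual rules):

from dataclasses import dataclass

@dataclass
class Sample:
    minute: int
    land_use: str        # e.g. "residential", "park", "forest"
    pop_density: float   # inhabitants per km^2

def should_trigger(sample, last_trigger_minute, min_gap=30,
                   rare_land_uses=("park", "forest", "water")):
    # Trigger a self-report at rarely visited land-use types or sparsely
    # populated places, but never more often than once per min_gap minutes.
    if sample.minute - last_trigger_minute < min_gap:
        return False
    return sample.land_use in rare_land_uses or sample.pop_density < 500.0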
Steerable sound transport in a 3D acoustic network
NASA Astrophysics Data System (ADS)
Xia, Bai-Zhan; Jiao, Jun-Rui; Dai, Hong-Qing; Yin, Sheng-Wen; Zheng, Sheng-Jie; Liu, Ting-Ting; Chen, Ning; Yu, De-Jie
2017-10-01
Quasi-lossless and asymmetric sound transport, which is exceedingly desirable in various modern physical systems, has almost always been based on nonlinear or angular-momentum-biasing effects with extremely high power levels and complex modulation schemes. A practical route to steerable sound transport along any arbitrary acoustic pathway, especially in a three-dimensional (3D) acoustic network, could revolutionize sound power propagation and sound communication. Here, we design an acoustic device containing a regular-tetrahedral cavity with four cylindrical waveguides. A smaller regular-tetrahedral solid in this cavity is eccentrically placed to break the spatial symmetry of the acoustic device. The numerical and experimental results show that the sound power flow can transport unimpeded between two waveguides away from the eccentric solid within a wide frequency range. Based on the quasi-lossless and asymmetric transport characteristic of the single acoustic device, we construct a 3D acoustic network, in which the sound power flow can flexibly propagate along arbitrary sound pathways defined by our acoustic devices with eccentrically placed regular-tetrahedral solids.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation of the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intracluster correlation can dramatically impact the classification error associated with LQAS analysis.
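A minimal simulation sketch in this spirit, assuming a beta-binomial model to induce intracluster correlation (the decision rule, ICC value and all other settings are illustrative, not those of the study):

import numpy as np

rng = np.random.default_rng(0)

def lqas_positive_rate(p_true, n_clusters=67, m=3, decision_rule=7,
                       icc=0.1, n_sim=10_000):
    # Fraction of simulated 67x3 surveys whose GAM case count exceeds the
    # decision rule; cluster-level prevalences are beta-distributed so that
    # observations within a cluster are correlated.
    k = (1.0 - icc) / icc
    cluster_p = rng.beta(p_true * k, (1.0 - p_true) * k,
                         size=(n_sim, n_clusters))
    cases = rng.binomial(m, cluster_p).sum(axis=1)
    return float(np.mean(cases > decision_rule))

# classification error corresponds to the positive rate when the true
# prevalence lies on the "acceptable" side of the threshold, e.g. 5%
print(lqas_positive_rate(0.05))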
Hybrid fiber links for accurate optical frequency comparison
NASA Astrophysics Data System (ADS)
Lee, Won-Kyu; Stefani, Fabio; Bercy, Anthony; Lopez, Olivier; Amy-Klein, Anne; Pottie, Paul-Eric
2017-05-01
We present the experimental demonstration of a local two-way optical frequency comparison over a 43-km-long urban fiber network without any requirement for measurement synchronization. We combined the local two-way scheme with a regular active noise compensation scheme implemented on another parallel fiber, leading to a highly reliable and robust frequency transfer. This hybrid scheme allowed us to investigate the major limiting factors of the local two-way comparison. We analyzed the contributions of the interferometers at both the local and remote locations to the phase noise of the local two-way signal. Using the ability of this setup to be injected by either a single laser or two independent lasers, we measured the contributions of the demodulated laser instabilities to the long-term instability. We show that a fractional frequency instability level of 10^-20 at 10,000 s can be obtained using this simple setup after propagation over a distance of 43 km in an urban area.
A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks
Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan
2014-01-01
Coverage pattern and deployment strategy are directly related to the optimum allocation of limited resources in wireless sensor networks, such as node energy, communication bandwidth, and computing power, and they largely determine the achievable quality of service. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. Firstly, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed, and the relationship between coverage and the sensor nodes' radius is deduced; the minimum number of sensor nodes needed to maintain full coverage of the network area is also calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is reduced effectively while the coverage rate of the monitored area is ensured using our coverage pattern and deterministic deployment scheme. PMID:25045747
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
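For a small dense problem, the GCV function for Tikhonov regularization can be evaluated directly through the SVD; this is the quantity that the Lanczos/Gauss-quadrature machinery above approximates for large problems. A minimal sketch, assuming the penalty parameterization lam^2*||x||^2 and an illustrative lambda grid:

import numpy as np

def gcv_tikhonov(A, b, lambdas):
    # GCV(lam) = ||residual||^2 / trace(I - influence matrix)^2,
    # computed exactly via the filter factors f_i = s_i^2 / (s_i^2 + lam^2).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)
        resid = np.sum(((1.0 - f) * beta) ** 2)
        scores.append(resid / (len(b) - np.sum(f)) ** 2)
    return lambdas[int(np.argmin(scores))]

# lam = gcv_tikhonov(A, b, np.logspace(-6, 1, 60)) picks the parameter that
# minimizes the GCV score, with no need to know the noise level.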
High-order asynchrony-tolerant finite difference schemes for partial differential equations
NASA Astrophysics Data System (ADS)
Aditya, Konduri; Donzis, Diego A.
2017-12-01
Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
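The setting can be emulated in a few lines: below, a standard second-order stencil for the 1-D heat equation is occasionally fed stale neighbor values at emulated PE boundaries, which is exactly the situation that degrades accuracy and that AT schemes correct with extended stencils. This sketch shows only the asynchronous setting, not the AT correction itself, and all parameters are illustrative:

import numpy as np

rng = np.random.default_rng(1)
nx, alpha, dt = 64, 1.0, 2.0e-5
dx = 1.0 / nx
u = np.sin(2 * np.pi * np.arange(nx) * dx)
u_old = u.copy()                      # previous time level, kept for stale reads

for _ in range(2000):
    u_new = u.copy()
    for i in range(nx):
        left, right = u[i - 1], u[(i + 1) % nx]
        # emulate asynchrony: at a "PE boundary" the neighbor value may
        # arrive one step late with probability 0.3
        if i % 16 == 0 and rng.random() < 0.3:
            left = u_old[i - 1]
        u_new[i] = u[i] + alpha * dt / dx**2 * (left - 2.0 * u[i] + right)
    u_old, u = u, u_new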
Implementation of a Cross-Layer Sensing Medium-Access Control Scheme.
Su, Yishan; Fu, Xiaomei; Han, Guangyao; Xu, Naishen; Jin, Zhigang
2017-04-10
In this paper, compressed sensing (CS) theory is utilized in a medium-access control (MAC) scheme for wireless sensor networks (WSNs). We propose a new cross-layer compressed sensing medium-access control (CL CS-MAC) scheme, combining the physical layer and the data link layer, where the wireless transmission in the physical layer is considered as a compression process of the requested packets in the data link layer according to CS theory. We first introduce the use of compressive complex requests to identify the exact active sensor nodes, which makes the scheme more efficient. Moreover, because the reconstruction process is executed in the complex field of the physical layer, where no bit or frame synchronization is needed, an asynchronous and random request scheme can be implemented without synchronization payload. We set up a testbed based on software-defined radio (SDR) to implement the proposed CL CS-MAC scheme practically and to demonstrate its validity. For large-scale WSNs, the simulation results show that the proposed CL CS-MAC scheme provides higher throughput and robustness than the carrier sense multiple access (CSMA) and compressed sensing medium-access control (CS-MAC) schemes.
Effects of Pump-turbine S-shaped Characteristics on Transient Behaviours: Experimental Investigation
NASA Astrophysics Data System (ADS)
Zeng, Wei; Yang, Jiandong; Hu, Jinhong; Tang, Renbo
2017-05-01
A model pumped storage station was set up and introduced in a previous paper. In the model station, the S-shaped characteristic curves were measured at the load rejection condition with the guide vanes stalling. Load rejection tests in which the guide vanes closed linearly were performed to validate the effect of the S-shaped characteristics on hydraulic transients. Load rejection experiments with different guide vane closing schemes were also performed to determine a suitable scheme considering the S-shaped characteristics. The condition of one pump turbine rejecting its load after another, defined as one-after-another (OAA) load rejection, was tested to investigate the possibility of S-shape-induced extreme draft tube pressures.
Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo
2003-08-01
In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of the material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent-circuit method.
Fault Detection for Automotive Shock Absorber
NASA Astrophysics Data System (ADS)
Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis
2015-11-01
Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be of multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
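The optimizer comparison can be reproduced in miniature with scipy on a nonconvex toy misfit; the Rosenbrock function below is only a stand-in for a waveform misfit, and the dimension and iteration budget are illustrative:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.full(50, -1.2)                       # a deliberately poor starting model
for method in ("CG", "L-BFGS-B"):
    res = minimize(rosen, x0, jac=rosen_der, method=method,
                   options={"maxiter": 10_000})
    print(f"{method:9s} misfit={res.fun:.3e} gradient evals={res.njev}")

On problems like this, the limited-memory curvature information typically lets L-BFGS reach a given misfit in fewer gradient evaluations than nonlinear CG, mirroring the savings reported above.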
Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms
NASA Astrophysics Data System (ADS)
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-04-01
Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
Analysis of an ABE Scheme with Verifiable Outsourced Decryption.
Liao, Yongjian; He, Yichuan; Li, Fagen; Jiang, Shaoquan; Zhou, Shijie
2018-01-10
Attribute-based encryption (ABE) is a popular cryptographic technology for protecting the security of users' data in cloud computing. In order to reduce its decryption cost, outsourcing the decryption of ciphertexts is an available method, which enables users to outsource a large number of decryption operations to the cloud service provider. To guarantee the correctness of transformed ciphertexts computed by the cloud server via the outsourced decryption, it is necessary to check the correctness of the outsourced decryption to ensure security for the data of users. Recently, Li et al. proposed an ABE scheme with fully verifiable outsourced decryption (ABE-VOD) for authorized and unauthorized users, which can simultaneously check the correctness of the transformed ciphertext for both of them. However, in this paper we show that their ABE-VOD scheme cannot achieve the results they claimed, such as finding all invalid ciphertexts and checking the correctness of the transformed ciphertext for the authorized user via checking it for the unauthorized user. We first construct invalid ciphertexts that can pass the validity check in the decryption algorithm, which means their "verify-then-decrypt" technique fails. Next, we show that the method of checking the validity of the outsourced decryption for authorized users via checking it for unauthorized users is not always correct: there exist invalid ciphertexts that pass the validity check for the unauthorized user but cannot pass the validity check for the authorized user.
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.
2013-09-01
Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
NASA Astrophysics Data System (ADS)
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations in order to minimize discretization errors, so as to avoid confusing numerical errors with transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence properties than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
NASA Astrophysics Data System (ADS)
Wang, Xin; Zhang, Yanqi; Zhang, Limin; Li, Jiao; Zhou, Zhongxing; Zhao, Huijuan; Gao, Feng
2016-04-01
We present a generalized strategy for direct reconstruction in pharmacokinetic diffuse fluorescence tomography (DFT) with a CT-analogous scanning mode, which can accomplish one-step reconstruction of indocyanine-green pharmacokinetic-rate images within in vivo small animals by incorporating the compartmental kinetic model into an adaptive extended Kalman filtering scheme and using an instantaneous sampling dataset. This scheme, compared with the established indirect and direct methods, eliminates the interim error of the DFT inversion and relaxes the expensive instrumental requirement of obtaining highly time-resolved data-sets of complete 360 deg projections. The scheme is validated by two-dimensional simulations for the two-compartment model and pilot phantom experiments for the one-compartment model, suggesting that the proposed method can estimate the compartmental concentrations and the pharmacokinetic rates simultaneously with fair quantitative and localization accuracy, and is well suited to cost-effective and dense-sampling instrumentation based on the highly sensitive photon counting technique.
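A minimal sketch of the filtering idea for the one-compartment case, assuming first-order elimination C' = -kC and jointly estimating the state x = (C, k) from noisy concentration samples (all numbers are illustrative; the paper's adaptive EKF additionally tunes its noise covariances online, which is omitted here):

import numpy as np

def ekf_one_compartment(z_seq, dt=0.1, q=1e-6, r=1e-2):
    x = np.array([z_seq[0], 0.1])               # initial guess for (C, k)
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])                  # only the concentration C is observed
    Q, R = q * np.eye(2), np.array([[r]])
    for z in z_seq[1:]:
        C, k = x
        x = np.array([C * np.exp(-k * dt), k])  # predict: C decays, k constant
        F = np.array([[np.exp(-k * dt), -dt * C * np.exp(-k * dt)],
                      [0.0, 1.0]])              # Jacobian of the transition
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + (K @ (z - H @ x)).ravel()       # update with the new sample
        P = (np.eye(2) - K @ H) @ P
    return x

t = np.arange(0.0, 10.0, 0.1)
z = np.exp(-0.5 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
print(ekf_one_compartment(z))                   # the k estimate should approach 0.5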
Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data
NASA Technical Reports Server (NTRS)
Deschamps, P.-Y.; Frouin, R.
1997-01-01
The investigation focuses on two key issues in satellite ocean color remote sensing, namely the presence of whitecaps on the sea surface and the validity of the aerosol models selected for the atmospheric correction of SeaWiFS data. Experiments were designed and conducted at the Scripps Institution of Oceanography to measure the optical properties of whitecaps and to study the aerosol optical properties in a typical mid-latitude coastal environment. CIMEL Electronique sunphotometers, now integrated in the AERONET network, were also deployed permanently in Bermuda and in Lanai, calibration/validation sites for SeaWiFS and MODIS. Original results were obtained on the spectral reflectance of whitecaps and on the choice of aerosol models for atmospheric correction schemes and the type of measurements that should be made to verify those schemes. Bio-optical algorithms to remotely sense primary productivity from space were also evaluated, as well as current algorithms to estimate PAR at the earth's surface.
NASA Astrophysics Data System (ADS)
Cheng, Qing; Yang, Xiaofeng; Shen, Jie
2017-07-01
In this paper, we consider numerical approximations of a hydrodynamically coupled phase-field diblock copolymer model, in which the free energy contains a kinetic potential, a gradient entropy, a Ginzburg-Landau double-well potential, and a long-range nonlocal potential. We develop a set of second-order time-marching schemes for this system using the "Invariant Energy Quadratization" approach for the double-well potential, the projection method for the Navier-Stokes equation, and a subtle implicit-explicit treatment for the stress and convective terms. The resulting schemes are linear and lead to symmetric positive definite systems at each time step, so they can be solved efficiently. We further prove that these schemes are unconditionally energy stable. Various numerical experiments are performed to validate the accuracy and energy stability of the proposed schemes.
Adaptive elimination of synchronization in coupled oscillator
NASA Astrophysics Data System (ADS)
Zhou, Shijie; Ji, Peng; Zhou, Qing; Feng, Jianfeng; Kurths, Jürgen; Lin, Wei
2017-08-01
We present here an adaptive control scheme with a feedback delay to achieve elimination of synchronization in a large population of coupled and synchronized oscillators. We validate the feasibility of this scheme not only in coupled Kuramoto oscillators with a unimodal or bimodal distribution of natural frequencies, but also in two representative models of neuronal networks, namely, the FitzHugh-Nagumo spiking oscillators and the Hindmarsh-Rose bursting oscillators. More significantly, we analytically illustrate the feasibility of the proposed scheme with a feedback delay and reveal how the exact topological form of the bimodal natural frequency distribution influences the scheme's performance. We anticipate that our developed scheme will deepen the understanding and refinement of those controllers, e.g. techniques of deep brain stimulation, that have been implemented to remedy synchronization-induced disorders such as Parkinson's disease and epilepsy.
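A rough numerical sketch of delayed-feedback desynchronization in the Kuramoto setting; the gain adaptation rule and all parameter values below are simplifications chosen only for illustration, not the paper's scheme:

import numpy as np

rng = np.random.default_rng(2)
N, K, dt, tau_steps = 200, 2.0, 0.01, 50
omega = rng.normal(0.0, 0.5, N)                  # unimodal natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)
hist = [theta.copy() for _ in range(tau_steps)]  # buffer realizing the feedback delay
gain = 0.0

for _ in range(20_000):
    z = np.exp(1j * theta).mean()                # Kuramoto order parameter
    z_tau = np.exp(1j * hist[0]).mean()          # delayed mean field
    gain = max(gain + dt * (abs(z) - 0.1), 0.0)  # adapt: grow while synchrony is high
    drive = K * abs(z) * np.sin(np.angle(z) - theta)
    control = -gain * abs(z_tau) * np.sin(np.angle(z_tau) - theta)
    theta = theta + dt * (omega + drive + control)
    hist.pop(0); hist.append(theta.copy())

print(abs(np.exp(1j * theta).mean()))            # a low value means synchrony was eliminated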
Energy efficient strategy for throughput improvement in wireless sensor networks.
Jabbar, Sohail; Minhas, Abid Ali; Imran, Muhammad; Khalid, Shehzad; Saleem, Kashif
2015-01-23
Network lifetime and throughput are among the prime concerns when designing routing protocols for wireless sensor networks (WSNs). However, most existing schemes are geared either towards prolonging network lifetime or towards improving throughput. This paper presents an energy-efficient routing scheme for throughput improvement in WSNs. The proposed scheme exploits a multilayer cluster design for energy-efficient forwarding node selection, cluster head rotation and both inter- and intra-cluster routing. To improve throughput, we rotate the role of cluster head among various nodes based on two threshold levels, which reduces the number of dropped packets. We conducted simulations in the NS2 simulator to validate the performance of the proposed scheme. Simulation results demonstrate the efficiency of the proposed scheme in terms of various metrics compared to similar approaches published in the literature.
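A minimal sketch of a two-threshold rotation rule, assuming residual node energies normalized to [0, 1] (the threshold values and the selection criterion are illustrative, not the paper's exact policy):

def next_cluster_head(nodes, soft_threshold=0.5, hard_threshold=0.2):
    # Prefer nodes above the soft energy threshold; fall back to nodes above
    # the hard threshold; a nearly depleted node is never selected, which
    # avoids packets dropped by a dying cluster head.
    candidates = [n for n in nodes if n["energy"] >= soft_threshold]
    if not candidates:
        candidates = [n for n in nodes if n["energy"] >= hard_threshold]
    return max(candidates, key=lambda n: n["energy"]) if candidates else None

nodes = [{"id": 1, "energy": 0.9}, {"id": 2, "energy": 0.4}, {"id": 3, "energy": 0.1}]
print(next_cluster_head(nodes))   # -> node 1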
Bjørkly, Stål; Moger, Tron A
2007-12-01
The Acute Project is a research project conducted on acute psychiatric admission wards in Norway. The objective is to develop and validate a structured, easy-to-use screening checklist for assessing the risk of violence in patients both during their stay in the ward and after discharge. The Preliminary Scheme 33 is a 33-item screening checklist whose content domain is inspired by the Historical-Clinical-Risk Management Scheme (HCR-20), the Brøset Violence Checklist, and eight risk factors extracted from the literature on risk assessment. The Preliminary Scheme 33 was designed and tested in two steps by a research group which includes the authors. The common aim of both steps was to develop it into a time-economical, reliable, and valid checklist. In the first step, in 2006, the predictive validity of the individual items was tested. The present work presents results from the second step, a study conducted to assess the interrater reliability of the 33 items. Eight clinicians working in an acute psychiatric unit volunteered to be raters and were trained to score the 33 items on a three-point scale in relation to 15 clinical vignettes, which contained information from 15 acute psychiatric patients' files. Analysis showed high interrater reliability for the total score, with an intraclass correlation coefficient (ICC) of .86 (95% CI: 0.74-0.94). However, a substantial proportion of the items had medium to low ICCs. Consequences of this finding for the further development of these items into a brief screen are discussed.
Data Mining in Institutional Economics Tasks
NASA Astrophysics Data System (ADS)
Kirilyuk, Igor; Kuznetsova, Anna; Senko, Oleg
2018-02-01
The paper discusses problems associated with the use of data mining tools to study discrepancies between countries with different types of institutional matrices across a variety of potential explanatory variables: climate, economic or infrastructure indicators. An approach is presented that is based on the search for statistically valid regularities describing the dependence of the institutional type on a single variable or a pair of variables. Examples of such regularities are given.
Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr
2013-02-15
The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally, in order to perform numerical experiments, we minimize, in the discrete setting, a regularized version of this functional by a fast gradient descent scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, G.A.
1982-01-01
A calculation of the nonsinglet longitudinal coefficient function of deep-inelastic scattering through order g^4 is presented, using the operator-product expansion and the renormalization group. Both ultraviolet and infrared divergences are regulated with dimensional regularization. The renormalization scheme dependence of the result is discussed along with its phenomenological application in the determination of R = σ_L/σ_T.
NASA Astrophysics Data System (ADS)
Ben Achour, Jibril; Brahma, Suddhasattwa
2018-06-01
When applying the techniques of loop quantum gravity (LQG) to symmetry-reduced gravitational systems, one first regularizes the scalar constraint using holonomy corrections, prior to quantization. In inhomogeneous systems, where a residual spatial diffeomorphism symmetry survives, such modification of the gauge generator generating time reparametrization can potentially lead to deformations or anomalies in the modified algebra of first-class constraints. When working with self-dual variables, it has already been shown that, for spherically symmetric geometry coupled to a scalar field, the holonomy-modified constraints do not generate any modifications to general covariance, as one faces in the real-variables formulation, and can thus accommodate local degrees of freedom in such inhomogeneous models. In this paper, we extend this result to Gowdy cosmologies in the self-dual Ashtekar formulation. Furthermore, we show that the introduction of a μ̄-scheme in midisuperspace models, as is required in the "improved dynamics" of LQG, is possible in the self-dual formalism while being out of reach in the current effective models using real-valued Ashtekar-Barbero variables. Our results indicate the advantages of using the self-dual variables to obtain a covariant loop regularization prior to quantization in inhomogeneous symmetry-reduced polymer models, additionally implementing the crucial μ̄-scheme, and thus a consistent semiclassical limit.
The Geant4 physics validation repository
NASA Astrophysics Data System (ADS)
Wenzel, H.; Yarba, J.; Dotti, A.
2015-12-01
The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository, which consists of a relational database storing experimental data and Geant4 test results, a Java API, and a web application. The functionality of these components and the technology choices we made are also described.
Wen, Fengtong
2013-12-01
User authentication plays an important role in protecting resources or services from being accessed by unauthorized users. In a recent paper, Das et al. proposed a secure and efficient uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. This scheme uses three factors, i.e., biometrics, password, and smart card, to protect security. It protects user privacy and is claimed to resist a range of network attacks, even if the secret information stored in the smart card is compromised. In this paper, we analyze the security of Das et al.'s scheme and show that it is in fact insecure against replay attacks, user impersonation attacks, and off-line guessing attacks. We then propose a robust uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. Compared with existing schemes, our protocol uses a different user authentication mechanism to resist replay attacks. We show that our proposed scheme provides stronger security than previous protocols. Furthermore, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes charge transport in semiconductor devices. Mathematically, it can be written as a convection-diffusion type system with a right-hand side describing relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with those of a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters, such as low-field mobility, device length, lattice temperature and voltage, on the one-space-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
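A minimal sketch of the splitting idea on a scalar toy model, with Lax-Friedrichs standing in for the CE/SE hyperbolic solver (the toy equation, solver choice, and all parameters are assumptions):

```python
# Sketch of operator splitting for u_t + f(u)_x = -(u - u_eq)/tau:
# 1) explicit hyperbolic step, 2) semi-implicit (backward Euler) relaxation.
import numpy as np

def split_step(u, dx, dt, tau, f=lambda v: 0.5 * v**2, u_eq=0.0):
    # 1) hyperbolic step (explicit Lax-Friedrichs, periodic grid)
    fu = f(u)
    u_star = 0.5 * (np.roll(u, -1) + np.roll(u, 1)) \
             - dt / (2 * dx) * (np.roll(fu, -1) - np.roll(fu, 1))
    # 2) relaxation step: (u^{n+1} - u*)/dt = -(u^{n+1} - u_eq)/tau
    return (u_star + dt / tau * u_eq) / (1 + dt / tau)
```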
Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.
Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung
2017-01-01
A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic-maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic-map-based Diffie-Hellman problem, and is proven in the real-or-random and sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, and thereby avoids the weaknesses of previous schemes.
2013-01-01
Background: Place and health researchers are increasingly interested in integrating individuals' mobility and the experience they have with multiple settings in their studies. In practice, however, few tools exist which allow for rapid and accurate gathering of detailed information on the geographic location of places where people regularly undertake activities. We describe the development and validation of a new activity location questionnaire which can be useful in accounting for multiple environmental influences in large population health investigations. Methods: To develop the questionnaire, we relied on a literature review of similar data collection tools and on results of a pilot study wherein we explored content validity, test-retest reliability, and face validity. To estimate convergent validity, we used data from a study of users of a public bicycle share program conducted in Montreal, Canada in 2011. We examined the spatial congruence between questionnaire data and data from three other sources: 1) one-week GPS tracks; 2) activity locations extracted from the GPS tracks; and 3) a prompted recall survey of locations visited during the day. Proximity and convex hull measures were used to compare questionnaire-derived data with GPS and prompted recall survey data. Results: In the sample, 75% of questionnaire-reported activity locations were located within 400 meters of an activity location recorded on the GPS track or through the prompted recall survey. Results from convex hull analyses suggested questionnaire activity locations were more concentrated in space than GPS or prompted-recall locations. Conclusions: The new questionnaire has high convergent validity and can be used to accurately collect data on regular activity spaces in terms of locations regularly visited. The methods, measures, and findings presented provide new material to further study mobility in place and health research. PMID:24025119
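A minimal sketch of the proximity measure, assuming the 400 m matching rule described above (function names are illustrative):

```python
# Sketch: fraction of questionnaire-reported activity locations lying
# within 400 m of any GPS-derived location.
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    R = 6371000.0  # Earth radius in metres
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * R * np.arcsin(np.sqrt(a))

def share_within(reported, gps, radius_m=400.0):
    """reported, gps: arrays of (lat, lon) rows; returns fraction matched."""
    hits = 0
    for lat, lon in reported:
        d = haversine_m(lat, lon, gps[:, 0], gps[:, 1])
        hits += d.min() <= radius_m
    return hits / len(reported)
```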
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
NASA Astrophysics Data System (ADS)
Britt, Darrell Steven, Jr.
Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means for restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low-order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Sam R.; Barack, Leor
2011-01-15
To model the radiative evolution of extreme mass-ratio binary inspirals (a key target of the LISA mission), the community needs efficient methods for computation of the gravitational self-force (SF) on the Kerr spacetime. Here we further develop a practical 'm-mode regularization' scheme for SF calculations, and give the details of a first implementation. The key steps in the method are (i) removal of a singular part of the perturbation field with a suitable 'puncture' to leave a sufficiently regular residual within a finite worldtube surrounding the particle's worldline, (ii) decomposition in azimuthal (m) modes, (iii) numerical evolution of the m modes in 2+1D with a finite-difference scheme, and (iv) reconstruction of the SF from the mode sum. The method relies on a judicious choice of puncture, based on the Detweiler-Whiting decomposition. We give a working definition for the 'order' of the puncture, and show how it determines the convergence rate of the m-mode sum. The dissipative piece of the SF displays an exponentially convergent mode sum, while the m-mode sum for the conservative piece converges with a power law. In the latter case, the individual modal contributions fall off at large m as m^{-n} for even n and as m^{-n+1} for odd n, where n is the puncture order. We describe an m-mode implementation with a 4th-order puncture to compute the scalar-field SF along circular geodesics on Schwarzschild. In a forthcoming companion paper we extend the calculation to the Kerr spacetime.
A multichannel amplitude and relative-phase controller for active sound quality control
NASA Astrophysics Data System (ADS)
Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.
2017-05-01
The enhancement of the sound quality of periodic disturbances for a number of listeners within an enclosure often confronts difficulties caused by cross-channel interferences, which arise from simultaneously profiling the primary sound at each error sensor. These interferences may deteriorate the original sound for each listener, which is an unacceptable result from the point of view of sound quality control. In this paper we provide experimental evidence on controlling both amplitude and relative-phase functions of stationary complex primary sounds for a number of listeners within a cavity, attaining amplifications of twice the original value, reductions on the order of 70 dB, and relative-phase shifts between ± π rad, while remaining in a free-of-interference control scenario. To accomplish such demanding control targets, we have designed a multichannel active sound profiling scheme that bases its operation on exchanging time-domain control signals among the control units during uptime. Provided the real parts of the eigenvalues of persistently excited control matrices are positive, the proposed multichannel array is able to counterbalance cross-channel interferences while attaining the control targets. Moreover, regularization of unstable control matrices does not prevent the proposed array from providing free-of-interference amplitude and relative-phase control, but the system performance is degraded as a function of the amount of regularization needed. The assessment of loudness and roughness metrics on the controlled primary sound shows that the proposed distributed control scheme noticeably outperforms current techniques, since active amplitude- and/or relative-phase-based enhancement of the auditory qualities of a primary sound no longer implies causing interference among different positions. In this regard, experimental results also confirm the effectiveness of the proposed scheme in stably enhancing the sound qualities of periodic sounds for multiple listeners within a cavity.
Novel Directional Protection Scheme for the FREEDM Smart Grid System
NASA Astrophysics Data System (ADS)
Sharma, Nitish
This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms, to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and the power-electronic-based solid state transformer adds to the complexity of the protection. The FREEDM loop system has a fault current limiter and, in addition, the solid state transformer (SST) limits the fault current to 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable. However, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as incompatible with long distances. Hence, a new protection scheme based on wireless communication is presented in this thesis. The use of wireless communication is extended to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system is used to trigger the FID (fault isolation device), which acts as an electronic circuit breaker (opening the FIDs). The trip signal must also be received and accepted by the SST, which must block its operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.
Development of a monitoring instrument to assess the performance of the Swiss primary care system.
Ebert, Sonja T; Pittet, Valérie; Cornuz, Jacques; Senn, Nicolas
2017-11-29
The Swiss health system is customer-driven, with a fee-for-service payment scheme and universal coverage. It is high-performing but expensive, and health information systems are scarcely implemented. The Swiss Primary Care Active Monitoring (SPAM) program aims to develop an instrument able to describe the performance and effectiveness of the Swiss PC system. Based on a literature review, we developed a conceptual framework and selected indicators according to their ability to reflect the Swiss PC system. A two-round modified RAND method with 24 national and international experts took place to select primary/secondary indicators (validity, clarity, agreement). A limited set of priority indicators was selected (importance, priority) in a third round. A conceptual framework covering three domains (structure, process, outcome) subdivided into twelve sections (funding, access, organisation/workflow of resources, (para-)medical training, management of knowledge, clinical/interpersonal care, health status, satisfaction of PC providers/consumers, equity) was generated. 365 indicators were pre-selected and 335 were finally retained; 56 were kept as priority indicators. Among the remaining 279, 199 were identified as primary and 80 as secondary indicators. All domains and sections are represented. The development of the SPAM program allowed the construction of a consensual instrument in a traditionally unregulated health system through a modified RAND method. The selected 56 priority indicators make the SPAM instrument a comprehensive tool supporting a better understanding of the Swiss PC system's performance and effectiveness, as well as identifying potential ways to improve quality of care. Further challenges will be to update indicators regularly and to assess validity and sensitivity to change over time.
Exploration of exposure conditions with a novel wireless detector for bedside digital radiography
NASA Astrophysics Data System (ADS)
Bosmans, Hilde; Nens, Joris; Delzenne, Louis; Marshall, Nicholas; Pauwels, Herman; De Wever, Walter; Oyen, Raymond
2012-03-01
We propose, apply and validate an optimization scheme for a new wireless CsI-based DR detector in combination with a regular mobile X-ray system for bedside imaging applications. Three different grids were tested in this combination. Signal-difference-to-noise ratio (SDNR) was investigated in two ways: using a 1 mm Cu piece in combination with different thicknesses of PMMA, and by means of the CDRAD phantom using 10 images per condition and an automated evaluation method. A figure of merit (FOM), namely SDNR^2/imparted energy, was calculated for a large range of exposure conditions, without and with grid in place. Misalignment of the grids was evaluated via the same FOM. This optimization study was validated with comparative X-ray acquisitions performed on dead bodies. An experienced radiologist scored the quality of several specific aspects of all these exposures. Signal-difference-to-noise ratios measured with the Cu method correlated well with the threshold contrasts from the CDRAD analysis (R^2 > 0.9). The analysis showed optimal FOM at detector air kerma rates as typically used in clinical practice. Lower tube voltages provide a higher FOM than higher values, but their practical use depends on the limitations of X-ray tubes, linked to patient motion artefacts. The use of high-resolution grids should be encouraged, as the FOM increases by 47% at 75 kV. The scores from the visual grading study confirmed the results obtained with the FOM. The switch to (wireless) DR technology for bedside imaging could benefit from devices to improve grid positioning or any scatter reduction technique.
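A minimal sketch of the figure of merit, assuming SDNR is computed from two regions of interest (behind and beside the Cu piece) and that the imparted energy is supplied in consistent units:

```python
# Sketch: FOM = SDNR^2 / imparted energy for one exposure condition.
import numpy as np

def fom(roi_signal, roi_background, imparted_energy):
    """roi_signal, roi_background: pixel arrays from the two ROIs;
    imparted_energy: imparted energy for the exposure (consistent units)."""
    sdnr = (roi_background.mean() - roi_signal.mean()) / roi_background.std()
    return sdnr**2 / imparted_energy
```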
Negative refraction using Raman transitions and chirality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikes, D. E.; Yavuz, D. D.
2011-11-15
We present a scheme that achieves negative refraction with low absorption in far-off resonant atomic systems. The scheme utilizes Raman resonances and does not require the simultaneous presence of an electric-dipole transition and a magnetic-dipole transition near the same wavelength. We show that two interfering Raman transitions coupled to a magnetic-dipole transition can achieve a negative index of refraction with low absorption through magnetoelectric cross-coupling. We confirm the validity of the analytical results with exact numerical simulations of the density matrix. We also discuss possible experimental implementations of the scheme in rare-earth metal atomic systems.
Siminoff, Laura A.; Step, Mary M.
2011-01-01
Many observational coding schemes have been offered to measure communication in health care settings. These schemes fall short of capturing multiple functions of communication among providers, patients, and other participants. After a brief review of observational communication coding, the authors present a comprehensive scheme for coding communication that is (a) grounded in communication theory, (b) accounts for instrumental and relational communication, and (c) captures important contextual features with tailored coding templates: the Siminoff Communication Content & Affect Program (SCCAP). To test SCCAP reliability and validity, the authors coded data from two communication studies. The SCCAP provided reliable measurement of communication variables including tailored content areas and observer ratings of speaker immediacy, affiliation, confirmation, and disconfirmation behaviors. PMID:21213170
NASA Astrophysics Data System (ADS)
Le Hardy, D.; Favennec, Y.; Rousseau, B.
2016-08-01
The 2D radiative transfer equation coupled with specular reflection boundary conditions is solved using finite element schemes. Both Discontinuous Galerkin and Streamline-Upwind Petrov-Galerkin variational formulations are fully developed. The two schemes are validated step-by-step for all involved operators (transport, scattering, reflection) using analytical formulations. Numerical comparison of the two schemes, in terms of convergence rate, reveals that the quadratic SUPG scheme proves efficient for solving such problems. This comparison constitutes the main contribution of the paper. Moreover, the solution process is accelerated using block SOR-type iterative methods, for which the optimal relaxation parameter can be determined very cheaply.
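A scalar SOR sketch conveying the iterative acceleration idea (the paper uses block SOR-type methods; omega is the relaxation parameter, whose optimal value the authors determine cheaply):

```python
# Sketch: successive over-relaxation (SOR) iteration for Ax = b.
import numpy as np

def sor(A, b, omega=1.5, tol=1e-8, max_iter=10000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # sum over already-updated and not-yet-updated entries
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol * (np.linalg.norm(x) + 1e-30):
            break
    return x
```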
ERIC Educational Resources Information Center
Koster, Marloes; Minnaert, Alexander E. M. G.; Nakken, Han; Pijl, Sip Jan; van Houten, Els J.
2011-01-01
This study addresses the convergent validity of a new teacher questionnaire to assess the social participation of students with special needs in regular primary schools. The Social Participation Questionnaire (SPQ) consists of four subscales representing four key themes of social participation: friendships/relationships, contacts/interactions,…
ERIC Educational Resources Information Center
Gao, Zan; Lee, Amelia M.; Solmon, Melinda A.; Kosma, Maria; Carson, Russell L.; Zhang, Tao; Domangue, Elizabeth; Moore, Delilah
2010-01-01
The purpose of this study was to validate physical activity time in middle school physical education as measured by pedometers in relation to a criterion measure, namely, students' accelerometer determined moderate to vigorous physical activity (MVPA). Participants were 155 sixth to eighth graders participating in regularly scheduled physical…
Development and Initial Validation of the Volition in Exercise Questionnaire (VEQ)
ERIC Educational Resources Information Center
Elsborg, P.; Wikman, J. M.; Nielsen, G.; Tolver, A.; Elbe, A.-M.
2017-01-01
The present study describes the development and validation of an instrument to measure volition in the exercise context. Volition describes an individual's self-regulatory mental processes that are responsible for taking and maintaining a desirable action (e.g., exercising regularly). The scale structure was developed in an exploratory factor…
Development and practical implications of the Exercise Resourcefulness Inventory.
Fast, Hilary V; Kennett, Deborah J
2015-05-01
To determine the validity and reliability of the Exercise Resourcefulness Inventory (ERI), designed to assess the self-regulatory strategies used to promote regular exercise. In Study 1, the inventory's relationship with other established scales in the exercise behavior change field was examined. In Study 2, the test-retest reliability and predictive validity of the ERI were established by having participants from Study 1 complete the inventory a second time. Internal consistency and convergent, discriminant, and concurrent validity were supported in both studies. The test-retest correlation of the ERI was .80. In addition, participants scoring higher on the ERI in Study 1 were more likely to be at a higher stage of change in Study 2, and greater increases in exercise resourcefulness over time were predictive of advancement to higher stages of change. The ERI is a reliable and valid measure of the self-regulatory strategies used to promote regular exercise. Facilitators may want to tailor exercise programs for individuals scoring lower in resourcefulness to prevent them from relapsing.
High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries
NASA Astrophysics Data System (ADS)
Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.
The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles ∝ (d - 3)^{-1} that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.
The use of financial incentives in Australian general practice.
Kecmanovic, Milica; Hall, Jane P
2015-05-18
To examine the uptake of financial incentive payments in general practice, and identify what types of practitioners are more likely to participate in these schemes. Analysis of data on general practitioners and GP registrars from the Medicine in Australia - Balancing Employment and Life (MABEL) longitudinal panel survey of medical practitioners in Australia, from 2008 to 2011. Income received by GPs from government incentive schemes and grants and factors associated with the likelihood of claiming such incentives. Around half of GPs reported receiving income from financial incentives in 2008, and there was a small fall in this proportion by 2011. There was considerable movement into and out of the incentives schemes, with more GPs exiting than taking up grants and payments. GPs working in larger practices with greater administrative support, GPs practising in rural areas and those who were principals or partners in practices were more likely to use grants and incentive payments. Administrative support available to GPs appears to be an increasingly important predictor of incentive use, suggesting that the administrative burden of claiming incentives is large and not always worth the effort. It is, therefore, crucial to consider such costs (especially relative to the size of the payment) when designing incentive payments. As market conditions are also likely to influence participation in incentive schemes, the impact of incentives can change over time and these schemes should be reviewed regularly.
Intelligent Power Swing Detection Scheme to Prevent False Relay Tripping Using S-Transform
NASA Astrophysics Data System (ADS)
Mohamad, Nor Z.; Abidin, Ahmad F.; Musirin, Ismail
2014-06-01
Distance relays are equipped with an out-of-step tripping scheme to ensure correct relay operation during power swings. The out-of-step condition results from an unstable power swing. It requires proper detection of the power swing to initiate a tripping signal, followed by separation of the unstable part from the rest of the power system. Distinguishing unstable swings from stable swings poses a challenging task. This paper presents an intelligent approach to detect power swings based on the S-transform signal processing tool. The proposed scheme is based on the S-transform feature of the active power at the distance relay measurement point. It is demonstrated that the proposed scheme is able to detect and discriminate unstable swings from stable swings occurring in the system. To ascertain the validity of the proposed scheme, simulations were carried out with the IEEE 39-bus system, and its performance has been compared with a wavelet-transform-based power swing detection scheme.
Central Upwind Scheme for a Compressible Two-Phase Flow Model
Ahmed, Munshoor; Saleem, M. Rehan; Zia, Saqib; Qamar, Shamsul
2015-01-01
In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative, and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme. PMID:26039242
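A first-order sketch of a central-upwind (Kurganov-type) flux on the scalar Burgers equation, illustrating the Riemann-solver-free construction (the paper applies a high-resolution variant to the five-equation system):

```python
# Sketch: one first-order central-upwind step for u_t + (u^2/2)_x = 0
# on a periodic grid; f'(u) = u for Burgers, hence a+/a- from uL, uR.
import numpy as np

def central_upwind_step(u, dx, dt):
    f = lambda v: 0.5 * v**2
    uL, uR = u, np.roll(u, -1)                           # states at j+1/2
    ap = np.maximum.reduce([uL, uR, np.zeros_like(u)])   # a+ = max(f'(uL), f'(uR), 0)
    am = np.minimum.reduce([uL, uR, np.zeros_like(u)])   # a- = min(f'(uL), f'(uR), 0)
    denom = np.where(ap - am > 0, ap - am, 1e-12)
    F = (ap * f(uL) - am * f(uR)) / denom + ap * am / denom * (uR - uL)
    return u - dt / dx * (F - np.roll(F, 1))             # conservative update
```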
Adding statistical regularity results in a global slowdown in visual search.
Vaskevich, Anna; Luria, Roy
2018-05-01
Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance.
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross-validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
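A minimal sketch of Tikhonov inversion with GCV-based selection of α, using a second-derivative operator L (a simplified stand-in for the paper's setup; the kernel K and data y are assumed given):

```python
# Sketch: solve min ||Kx - y||^2 + alpha^2 ||Lx||^2 and pick alpha by GCV.
import numpy as np

def tikhonov(K, y, L, alpha):
    A = K.T @ K + alpha**2 * (L.T @ L)
    return np.linalg.solve(A, K.T @ y)

def second_derivative_operator(m):
    L = np.zeros((m - 2, m))
    for i in range(m - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def gcv_select(K, y, L, alphas):
    n = len(y)
    best_gcv, best_alpha = np.inf, None
    for a in alphas:
        # influence matrix H maps y to the fitted data K x_alpha
        H = K @ np.linalg.solve(K.T @ K + a**2 * (L.T @ L), K.T)
        r = y - H @ y
        gcv = n * (r @ r) / np.trace(np.eye(n) - H)**2
        if gcv < best_gcv:
            best_gcv, best_alpha = gcv, a
    return best_alpha
```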
Multipole Vortex Blobs (MVB): Symplectic Geometry and Dynamics.
Holm, Darryl D; Jacobs, Henry O
2017-01-01
Vortex blob methods are typically characterized by a regularization length scale, below which the dynamics are trivial for isolated blobs. In this article, we observe that the dynamics need not be trivial if one is willing to consider distributional derivatives of Dirac delta functionals as valid vorticity distributions. More specifically, a new singular vortex theory is presented for regularized Euler fluid equations of ideal incompressible flow in the plane. We determine the conditions under which such regularized Euler fluid equations may admit vorticity singularities which are stronger than delta functions, e.g., derivatives of delta functions. We also describe the symplectic geometry associated with these augmented vortex structures, and we characterize the dynamics as Hamiltonian. Applications to the design of numerical methods similar to vortex blob methods are also discussed. Such findings illuminate the rich dynamics which occur below the regularization length scale and enlighten our perspective on the potential for regularized fluid models to capture multiscale phenomena.
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Glimpse: Sparsity based weak lensing mass-mapping tool
NASA Astrophysics Data System (ADS)
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
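A highly simplified sketch of sparsity-regularized inversion via ISTA with an orthogonal wavelet-like transform W (an assumption; Glimpse's actual algorithm, operators, and data model are more involved):

```python
# Sketch: minimize ||A x - y||^2 + lam * ||W x||_1 by iterative
# soft thresholding, with W assumed orthogonal (W.T @ W = I).
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, W, lam, iters=200):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2)**2      # 1 / Lipschitz constant
    for _ in range(iters):
        g = A.T @ (A @ x - y)                 # gradient of the data term
        x = W.T @ soft(W @ (x - step * g), step * lam)  # proximal step
    return x
```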
The dynamics of innovation through the expansion in the adjacent possible
NASA Astrophysics Data System (ADS)
Tria, F.
2016-03-01
The experience of something new is part of our daily life. At different scales, innovation is also a crucial feature of many biological, technological and social systems. Recently, large databases witnessing human activities have allowed the observation that novelties (such as an individual listening to a song for the first time) and innovation processes (such as the fixation of new genes in a population of bacteria) share striking statistical regularities. Here we indicate the expansion into the adjacent possible as a very general and powerful mechanism able to explain such regularities. Further, we identify statistical signatures of the presence of the expansion into the adjacent possible in the analyzed datasets, and we show that our modeling scheme is able to predict these observations remarkably well.
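A minimal simulation sketch in the spirit of urn models with triggering, where drawing a never-seen element expands the adjacent possible (the parameters and the exact update rule are illustrative assumptions):

```python
# Sketch: urn model with triggering; the count of distinct drawn elements
# grows sublinearly with time (a Heaps-like law).
import random

def urn_with_triggering(steps=10000, rho=2, nu=1, n0=10):
    urn = list(range(n0))
    next_id = n0
    seen = set()
    distinct = []                      # number of distinct elements over time
    for _ in range(steps):
        ball = random.choice(urn)
        urn.extend([ball] * rho)       # reinforcement of the drawn element
        if ball not in seen:           # novelty: expand the adjacent possible
            seen.add(ball)
            urn.extend(range(next_id, next_id + nu + 1))
            next_id += nu + 1
        distinct.append(len(seen))
    return distinct
```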
NASA Astrophysics Data System (ADS)
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL reconstruction, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the latter.
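An illustrative sketch of the flux-blending idea, switching between a dissipative Rusanov flux and a low-dissipation central flux via a simple jump sensor (this conveys the principle only; it is not the authors' three-flux weighting):

```python
# Sketch: blend two numerical fluxes based on a local shock sensor.
import numpy as np

def blended_flux(uL, uR, f, dfdu):
    a = np.maximum(np.abs(dfdu(uL)), np.abs(dfdu(uR)))   # local wave speed
    F_rus = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)  # robust, dissipative
    F_cen = 0.5 * (f(uL) + f(uR))                        # low dissipation
    # simple sensor: large relative jumps indicate a nearby shock
    theta = np.abs(uR - uL) / (np.abs(uL) + np.abs(uR) + 1e-12)
    w = np.clip(theta / 0.1, 0.0, 1.0)                   # w -> 1 near shocks
    return w * F_rus + (1 - w) * F_cen
```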
NASA Astrophysics Data System (ADS)
Yang, Tao; Chen, Xue; Shi, Sheping; Sun, Erkun; Shi, Chen
2018-03-01
We propose a low-complexity and modulation-format-independent carrier phase estimation (CPE) scheme based on two-stage modified blind phase search (MBPS) with linear approximation to compensate the phase noise of arbitrary m-ary quadrature amplitude modulation (m-QAM) signals in elastic optical networks (EONs). Comprehensive numerical simulations are carried out for the case in which the highest possible modulation format in EONs is 256-QAM. The simulation results not only verify its advantages of higher estimation accuracy and modulation-format independence, i.e., universality, but also demonstrate that the implementation complexity is reduced by at least one-fourth in comparison with the traditional BPS scheme. In addition, the proposed scheme shows laser linewidth tolerance similar to that of the traditional BPS scheme. The slightly better OSNR performance of the scheme is also experimentally validated for PM-QPSK and PM-16QAM systems, respectively.
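A minimal sketch of the baseline (single-stage, unmodified) blind phase search that the proposed MBPS scheme improves upon; B, the window length, and the π/2 test range are illustrative:

```python
# Sketch: classic blind phase search (BPS) for square m-QAM symbols.
import numpy as np

def bps(rx, constellation, B=32, window=64):
    """Rotate by B test phases, score by windowed distance to the nearest
    constellation point, and pick the minimum-cost phase per symbol."""
    test = (np.arange(B) / B - 0.5) * (np.pi / 2)   # pi/2 ambiguity of QAM
    dist = np.empty((B, len(rx)))
    for i, ph in enumerate(test):
        r = rx * np.exp(1j * ph)
        dist[i] = np.min(np.abs(r[:, None] - constellation[None, :]), axis=1)**2
    kernel = np.ones(window) / window               # sliding-window averaging
    cost = np.array([np.convolve(d, kernel, mode="same") for d in dist])
    best = np.argmin(cost, axis=0)                  # best test phase per symbol
    return rx * np.exp(1j * test[best])             # phase-corrected symbols
```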
Do, Nhu Tri; Bao, Vo Nguyen Quoc; An, Beongku
2016-01-01
In this paper, we study relay selection in decode-and-forward wireless energy harvesting cooperative networks. In contrast to conventional cooperative networks, the relays harvest energy from the source's radio-frequency radiation and then use that energy to forward the source information. Considering the power splitting receiver architecture used at the relays to harvest energy, we are concerned with the performance of two popular relay selection schemes, namely, the partial relay selection (PRS) scheme and the optimal relay selection (ORS) scheme. In particular, we analyze the system performance in terms of outage probability (OP) over independent and non-identically distributed (i.n.i.d.) Rayleigh fading channels. We derive closed-form approximations for the system outage probabilities of both schemes and validate the analysis by Monte-Carlo simulation. The numerical results provide a comprehensive performance comparison between the PRS and ORS schemes and reveal the effect of wireless energy harvesting on the outage performance of both schemes. Additionally, we also show the advantages and drawbacks of wireless energy harvesting cooperative networks compared to conventional cooperative networks. PMID:26927119
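A Monte-Carlo sketch of the PRS/ORS outage comparison under power splitting (the channel means, η, ρ, and the simplified SNR expressions are assumptions, not the paper's exact model):

```python
# Sketch: outage probability of PRS vs ORS in a DF energy-harvesting setup.
import numpy as np

rng = np.random.default_rng(0)

def outage(scheme, K=4, trials=200_000, P=1.0, N0=1e-2, rho=0.5, eta=0.7, gth=1.0):
    # i.n.i.d. Rayleigh fading: exponential power gains with distinct means
    g_sr = rng.exponential(np.linspace(1.0, 2.0, K), size=(trials, K))
    g_rd = rng.exponential(np.linspace(0.8, 1.6, K), size=(trials, K))
    snr1 = (1 - rho) * P * g_sr / N0           # source -> relay decoding SNR
    snr2 = eta * rho * P * g_sr * g_rd / N0    # relay -> destination SNR
    e2e = np.minimum(snr1, snr2)               # DF end-to-end SNR per relay
    if scheme == "PRS":                        # pick best source->relay link only
        pick = np.argmax(g_sr, axis=1)
        sel = e2e[np.arange(trials), pick]
    else:                                      # ORS: best end-to-end SNR
        sel = e2e.max(axis=1)
    return np.mean(sel < gth)

print(outage("PRS"), outage("ORS"))
```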
Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2016-06-08
WSNs (wireless sensor networks) are nowadays viewed as a vital portion of the IoT (Internet of Things). Security is a significant issue in WSNs, especially in resource-constrained environments. AKA (authentication and key agreement) enhances the security of WSNs against adversaries attempting to get sensitive sensor data. Various AKA schemes have been developed for verifying the legitimate users of a WSN. Firstly, we scrutinize Amin and Biswas's recent scheme and demonstrate the major security loopholes in their work. Next, we propose a lightweight AKA scheme, using symmetric key cryptography based on smart cards, which is resilient against all well-known security attacks. Furthermore, we prove that the scheme accomplishes mutual handshake and session key agreement securely between the participants involved under BAN (Burrows, Abadi and Needham) logic. Moreover, formal security analysis and simulations are also conducted using AVISPA (Automated Validation of Internet Security Protocols and Applications) to show that our scheme is secure against active and passive attacks. Additionally, performance analysis shows that our proposed scheme is secure and efficient to apply in resource-constrained WSNs.
Piro, Benoit; Shi, Shihui; Reisberg, Steeve; Noël, Vincent; Anquetin, Guillaume
2016-01-01
We review here the most frequently reported targets among electrochemical immunosensors and aptasensors: antibiotics, bisphenol A, cocaine, ochratoxin A and estradiol. In each case, the immobilization procedures are described as well as the transduction schemes and the limits of detection. It is shown that limits of detection are generally two to three orders of magnitude lower for immunosensors than for aptasensors, due to the higher affinities of antibodies. No significant progress has been made in improving these affinities, but transduction schemes have been improved instead, which has led to a steady improvement of the limits of detection of ca. five orders of magnitude over the last 10 years. These improvements depend on the target, however. PMID:26938570
The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs
NASA Astrophysics Data System (ADS)
Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.
2017-12-01
The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.
Multiswitching compound antisynchronization of four chaotic systems
NASA Astrophysics Data System (ADS)
Khan, Ayub; Khattar, Dinesh; Prajapati, Nitish
2017-12-01
Based on a three-drive, one-response system model, in this article the authors investigate a novel synchronization scheme for a class of chaotic systems. The new scheme, multiswitching compound antisynchronization (MSCoAS), is a notable extension of earlier multiswitching schemes concerning only the one-drive, one-response system model. The concept of multiswitching synchronization is extended to the compound synchronization scheme such that the state variables of three drive systems antisynchronize with different state variables of the response system simultaneously. The study involving multiswitching of three drive systems and one response system is the first of its kind. Various switched modified function projective antisynchronization schemes are obtained as special cases of MSCoAS for a suitable choice of scaling factors. Using suitable controllers and Lyapunov stability theory, a sufficient condition is obtained to achieve MSCoAS between four chaotic systems, and the corresponding theoretical proof is given. Numerical simulations are performed using the Lorenz system in MATLAB to demonstrate the validity of the presented method.
An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption
NASA Astrophysics Data System (ADS)
Sun, Yanhua; Hao, Zhe; Zhang, Yanhua
2018-01-01
With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance user experience. In this paper, we investigate a joint consideration of delay and energy consumption offloading scheme (JCDE) for MEC systems in 5G heterogeneous networks. An optimization problem is formulated to minimize the delay as well as the energy consumption of the offloading system, in which the delay and energy consumption of transmitting and computing tasks are taken into account. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of our proposed scheme. The effect of parameter variations on the system is analysed as well. Numerical results demonstrate the delay and energy efficiency gains of our proposed scheme compared with a scheme from earlier work.
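A toy sketch of a greedy per-task offloading decision under a weighted delay-plus-energy cost (the cost model, parameters, and greedy rule are illustrative assumptions, not the JCDE algorithm itself):

```python
# Sketch: offload a task when its weighted remote cost beats local execution.
def greedy_offload(tasks, w_delay=0.5, w_energy=0.5,
                   f_local=1e9, f_mec=8e9, rate=50e6, p_tx=0.5, k=1e-27):
    """tasks: list of (cpu_cycles, data_bits). Returns decisions and cost."""
    decisions, total = [], 0.0
    for cycles, bits in tasks:
        # local execution: delay = C/f, energy = k*C*f^2 (simple CMOS model)
        d_loc, e_loc = cycles / f_local, k * cycles * f_local**2
        # offloading: uplink transmission plus remote execution; device only
        # spends transmit energy
        d_off = bits / rate + cycles / f_mec
        e_off = p_tx * bits / rate
        c_loc = w_delay * d_loc + w_energy * e_loc
        c_off = w_delay * d_off + w_energy * e_off
        decisions.append(c_off < c_loc)     # greedy per-task choice
        total += min(c_loc, c_off)
    return decisions, total
```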
Generation of steady entanglement via unilateral qubit driving in bad cavities.
Jin, Zhao; Su, Shi-Lei; Zhu, Ai-Dong; Wang, Hong-Fu; Shen, Li-Tuo; Zhang, Shou
2017-12-15
We propose a scheme for generating an entangled state for two atoms trapped in two separate cavities coupled to each other. The scheme is based on the competition between the unitary dynamics induced by the classical fields and the collective decays induced by the dissipation of two non-local bosonic modes. In this scheme, only one qubit is driven by external classical fields, whereas the other need not be manipulated via classical driving. This is meaningful for experimental implementation between separate nodes of a quantum network. The steady entanglement can be obtained regardless of the initial state, and the robustness of the scheme against parameter fluctuations is numerically demonstrated. We also give an analytical derivation of the stationary fidelity to enable a discussion of the validity of this regime. Furthermore, based on the dissipative entanglement preparation scheme, we construct a quantum state transfer setup with multiple nodes as a practical application.
NASA Technical Reports Server (NTRS)
Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.
1983-01-01
A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. Principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integration of stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta may lead to strong frame-dependence. As a remedy, modified integration schemes are proposed, and the potential of the new schemes for suppressing frame dependence of numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.
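A small sketch contrasting first-order Euler with classical fourth-order Runge-Kutta on a toy rate equation, illustrating the accuracy gap noted above (the frame-indifference issue itself is not modeled here):

```python
# Sketch: Euler vs RK4 accuracy on dy/dt = -2y, y(0) = 1.
import numpy as np

def euler(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

def rk4(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y, t = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return y

f = lambda t, y: -2.0 * y            # toy stress-relaxation rate
exact = np.exp(-2.0)
print(abs(euler(f, 1.0, 0, 1, 20) - exact), abs(rk4(f, 1.0, 0, 1, 20) - exact))
```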
Direct adaptive control of a PUMA 560 industrial robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1989-01-01
The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
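A simplified caricature of the per-joint control law described, an auxiliary integral term plus PD feedback with online gain adaptation (the adaptation laws, rates, and weighting below are illustrative assumptions, not the exact algorithm):

```python
# Sketch: one adaptive joint controller; gains adapt from tracking errors.
def make_adaptive_joint_controller(kp0=10.0, kv0=2.0, a_p=0.5, a_v=0.1, dt=0.001):
    state = {"kp": kp0, "kv": kv0, "ie": 0.0}
    def control(e, de):
        """e, de: position and velocity tracking errors for one joint."""
        r = e + 0.5 * de                  # weighted error signal (assumed)
        state["kp"] += a_p * r * e * dt   # illustrative gain adaptation laws
        state["kv"] += a_v * r * de * dt
        state["ie"] += e * dt             # auxiliary integral term
        return state["kp"] * e + state["kv"] * de + 1.0 * state["ie"]
    return control
```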
Selecting registration schemes in case of interstitial lung disease follow-up in CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros
Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the range of 1.985–2.156 mm and 1.966–2.234 mm, for NLP and ILD affected regions, respectively, excluding schemes with statistically significant lower performance (Wilcoxon signed-ranks test, p < 0.05), resulting in 13 finally selected registration schemes. Conclusions: Selected registration schemes in case of ILD CT follow-up analysis indicate the significance of the adaptive stochastic gradient descent optimizer, as well as the importance of combined rigid and nonrigid schemes providing high accuracy and time efficiency. The selected optimal deformable registration schemes are equivalent in terms of their accuracy and thus compatible in terms of their clinical outcome.
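For concreteness, the 4 × 4 × 4 × 2 = 128-scheme grid can be enumerated as the Cartesian product of the four design axes named above. The identifiers below are illustrative labels (not quoted from the study), and the Jacobian screening hook mirrors the first selection stage.

```python
from itertools import product

# Illustrative enumeration of the evaluated scheme grid; component names
# follow the abstract, identifiers are assumed for this sketch.
transforms = ["rigid", "similarity", "affine", "bspline3"]
costs      = ["ssd", "ncc", "mi", "nmi"]
optimizers = ["gd_standard", "gd_regular_step",
              "gd_adaptive_stochastic", "gd_finite_difference"]
pyramids   = ["recursive", "gaussian_smoothing"]

schemes = list(product(transforms, costs, optimizers, pyramids))
assert len(schemes) == 128

# First-stage screening: discard schemes whose deformation field has
# singularities (Jacobian determinant <= 0 anywhere in the volume).
def has_singularity(jacobian_dets):
    return min(jacobian_dets) <= 0.0
```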
Multichannel feedforward control schemes with coupling compensation for active sound profiling
NASA Astrophysics Data System (ADS)
Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.
2017-05-01
Active sound profiling includes a number of control techniques that enable the equalization, rather than the mere reduction, of acoustic noise. Challenges may arise when trying to achieve distinct targeted sound profiles simultaneously at multiple locations, e.g., within a vehicle cabin. This paper introduces distributed multichannel control schemes for independently tailoring structure-borne sound reaching a number of locations within a cavity. The proposed techniques address the cross interactions amongst feedforward active sound profiling units, which compensate for interferences of the primary sound at each location of interest by exchanging run-time data amongst the control units, while attaining the desired control targets. Computational complexity, convergence, and stability of the proposed multichannel schemes are examined in light of the physical system on which they are implemented. The tuning performance of the proposed algorithms is benchmarked against the centralized and pure-decentralized control schemes through computer simulations on a simplified numerical model, which has also been subjected to plant magnitude variations. Provided that the representation of the plant is accurate enough, the proposed multichannel control schemes are shown to be the only ones that properly deliver targeted active sound profiling tasks at each error sensor location. Experimental results in a 1:3-scaled vehicle mock-up further demonstrate that the proposed schemes are able to attain reductions of more than 60 dB upon periodic disturbances at a number of positions, while resolving cross-channel interferences. Moreover, when the sensor/actuator placement is found to be defective at a given frequency, the inclusion of a regularization parameter in the cost function does not hinder the proper operation of the proposed compensation schemes, while assuring their stability, at the expense of some control performance.
NASA Technical Reports Server (NTRS)
Li, Xiao-Wen; Tao, Wei-Kuo; Khain, Alexander P.; Simpson, Joanne; Johnson, Daniel E.
2004-01-01
A cloud-resolving model is used to study the sensitivities of two different microphysical schemes, one a bulk type and the other an explicit bin scheme, in simulating a mid-latitude squall line case (PRE-STORM, June 10-11, 1985). Simulations using the different microphysical schemes are compared with each other and with the observations. Both the bulk and bin models reproduce the general features of the developing and mature stages of the system. The leading convective zone, the trailing stratiform region, the horizontal wind flow patterns, the pressure perturbation associated with the storm dynamics, and the cool pool in front of the system all agree well with the observations. Both the observations and the bulk scheme simulation serve as validations for the newly incorporated bin scheme. However, it is also shown that the bulk and bin simulations have distinct differences, most notably in the stratiform region. Weak convective cells exist in the stratiform region in the bulk simulation, but not in the bin simulation. These weak convective cells are remnants of the previous stronger convection at the leading edge of the system. The bin simulation, on the other hand, has a horizontally homogeneous stratiform cloud structure, which agrees better with the observations. Preliminary examinations of the downdraft core strength, the potential temperature perturbation, and the evaporative cooling rate show that the differences between the bulk and bin models are due mainly to the stronger low-level evaporative cooling in the convective zone simulated in the bulk model. Further quantitative analysis and sensitivity tests for this case using both the bulk and bin models will be presented in a companion paper.
Electrooptical adaptive switching network for the hypercube computer
NASA Technical Reports Server (NTRS)
Chow, E.; Peterson, J.
1988-01-01
An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.
Technical Basis and Implementation Guidelines for a Technique for Human Event Analysis (ATHEANA)
2000-05-01
…distinctly different in that it provides structured search schemes for finding such EFCs, by using and integrating knowledge and experience in… Lessons Learned from Serious Accidents: the record of significant incidents in nuclear power plant (NPP) operations shows a substantially different picture of…
VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.
2015-12-01
A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. It uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
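As one concrete piece of the RE machinery, the observed order of accuracy and the Grid Convergence Index can be computed from solutions on three systematically refined grids. The snippet below is a generic sketch of these standard formulas (safety factor 1.25 assumed for a three-grid study), not code from the VAVUQ package itself.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three grids with constant
    refinement ratio r (standard Richardson extrapolation)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def gci_fine(f_medium, f_fine, r, p, fs=1.25):
    """Grid Convergence Index on the fine grid; fs = 1.25 is the safety
    factor commonly recommended for three-grid studies."""
    rel_err = abs((f_medium - f_fine) / f_fine)
    return fs * rel_err / (r**p - 1.0)

# Example with made-up grid values (coarse, medium, fine) and r = 2:
p = observed_order(0.9713, 0.9704, 0.9702, r=2.0)
print(p, gci_fine(0.9704, 0.9702, 2.0, p))
```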
Development and Validation of a Musical Behavior Measure for Preschool Children
ERIC Educational Resources Information Center
Yi, Gina Jisun
2013-01-01
The purpose of this study was to develop a measure for use in assessing musical behaviors of preschool children in the context of regular music instruction and to determine the validity and the reliability of the measure. The Early Childhood Musical Behavior Measure (ECMBM) was constructed for use with preschool-aged children to measure their…
Proposed new classification scheme for chemical injury to the human eye.
Bagley, Daniel M; Casterton, Phillip L; Dressler, William E; Edelhauser, Henry F; Kruszewski, Francis H; McCulley, James P; Nussenblatt, Robert B; Osborne, Rosemarie; Rothenstein, Arthur; Stitzel, Katherine A; Thomas, Karluss; Ward, Sherry L
2006-07-01
Various ocular alkali burn classification schemes have been published and used to grade human chemical eye injuries for the purpose of identifying treatments and forecasting outcomes. The ILSI chemical eye injury classification scheme was developed for the additional purpose of collecting detailed human eye injury data to provide information on the mechanisms associated with chemical eye injuries. This information will have clinical application, as well as use in the development and validation of new methods to assess ocular toxicity. A panel of ophthalmic researchers proposed the new classification scheme based upon current knowledge of the mechanisms of eye injury, and their collective clinical and research experience. Additional ophthalmologists and researchers were surveyed to critique the scheme. The draft scheme was revised, and the proposed scheme represents the best consensus from at least 23 physicians and scientists. The new scheme classifies chemical eye injury into five categories based on clinical signs, symptoms, and expected outcomes. Diagnostic classification is based primarily on two clinical endpoints: (1) the extent (area) of injury at the limbus, and (2) the degree of injury (area and depth) to the cornea. The new classification scheme provides a uniform system for scoring eye injury across chemical classes, and provides enough detail for the clinician to collect data that will be relevant to identifying the mechanisms of ocular injury.
ERIC Educational Resources Information Center
Kaya, Osman Nafiz; Kilic, Ziya
2004-01-01
Student-centered approach of scoring the concept maps consisted of three elements namely symbol system, individual portfolio and scoring scheme. We scored student-constructed concept maps based on 5 concept map criteria: validity of concepts, adequacy of propositions, significance of cross-links, relevancy of examples, and interconnectedness. With…
ERIC Educational Resources Information Center
Samanci, Osman; Ocakci, Ebru; Seçer, Ismail
2018-01-01
The purpose of this research is to conduct validity and reliability studies of the Scale for the Determining Social Participation for Children, developed to measure social participation skills of children aged 7-10 years. During the development of the scale, pilot schemes, validity analyses, and reliability analyses were conducted. In this…
ERIC Educational Resources Information Center
Vachliotis, Theodoros; Salta, Katerina; Tzougraki, Chryssa
2014-01-01
The purpose of this study was dual: First, to develop and validate assessment schemes for assessing 11th grade students' meaningful understanding of organic chemistry concepts, as well as their systems thinking skills in the domain. Second, to explore the relationship between the two constructs of interest based on students' performance…
NASA Astrophysics Data System (ADS)
Matsushita, Yu-ichiro; Nishi, Hirofumi; Iwata, Jun-ichi; Kosugi, Taichi; Oshiyama, Atsushi
2018-01-01
We propose an unfolding scheme to analyze energy spectra of complex large-scale systems which are inherently of double periodicity on the basis of the density-functional theory. Applying our method to a twisted bilayer graphene (tBLG) and a stack of monolayer MoS2 on graphene (MoS2/graphene) as examples, we first show that the conventional unfolding scheme in the past using a single primitive-cell representation causes serious problems in analyses of the energy spectra. We then introduce our multispace representation scheme in the unfolding method and clarify its validity. Velocity renormalization of Dirac electrons in tBLG and mini gaps of Dirac cones in MoS2/graphene are elucidated in the present unfolding scheme.
Nendza, Monika; Kühne, Ralph; Lombardo, Anna; Strempel, Sebastian; Schüürmann, Gerrit
2018-03-01
Aquatic bioconcentration factors (BCFs) are critical in PBT (persistent, bioaccumulative, toxic) and risk assessment of chemicals. High costs and use of more than 100 fish per standard BCF study (OECD 305) call for alternative methods to replace as much in vivo testing as possible. The BCF waiving scheme is a screening tool combining QSAR classifications based on physicochemical properties related to the distribution (hydrophobicity, ionisation), persistence (biodegradability, hydrolysis), solubility and volatility (Henry's law constant) of substances in water bodies and aquatic biota to predict substances with low aquatic bioaccumulation (nonB, BCF<2000). The BCF waiving scheme was developed with a dataset of reliable BCFs for 998 compounds and externally validated with another 181 substances. It performs with 100% sensitivity (no false negatives), >50% efficacy (waiving potential), and complies with the OECD principles for valid QSARs. The chemical applicability domain of the BCF waiving scheme is given by the structures of the training set, with some compound classes explicitly excluded like organometallics, poly- and perfluorinated compounds, aromatic triphenylphosphates, surfactants. The prediction confidence of the BCF waiving scheme is based on applicability domain compliance, consensus modelling, and the structural similarity with known nonB and B/vB substances. Compounds classified as nonB by the BCF waiving scheme are candidates for waiving of BCF in vivo testing on fish due to low concern with regard to the B criterion. The BCF waiving scheme supports the 3Rs with a possible reduction of >50% of BCF in vivo testing on fish. If the target chemical is outside the applicability domain of the BCF waiving scheme or not classified as nonB, further assessments with in silico, in vitro or in vivo methods are necessary to either confirm or reject bioaccumulative behaviour. Copyright © 2017 Elsevier B.V. All rights reserved.
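The screening logic described above can be condensed into pseudo-code form. The function below is an illustrative summary of the abstract's three confidence components (applicability-domain compliance, consensus of the QSAR classifications, and structural similarity to known nonB substances); the names and structure are placeholders, not the published rules.

```python
# Illustrative decision logic only -- the actual BCF waiving scheme combines
# validated QSAR classifications; names below are assumptions for this sketch.
def waive_bcf_testing(in_domain, qsar_nonB_votes, similar_to_known_nonB):
    """Candidate for waiving in vivo BCF testing (nonB, BCF < 2000) only if
    the compound is inside the applicability domain, the consensus of the
    QSAR classifiers is nonB, and structural analogues are known nonB."""
    if not in_domain:
        return False  # outside domain -> further in silico/in vitro/in vivo work
    consensus_nonB = all(qsar_nonB_votes)   # no false negatives tolerated
    return consensus_nonB and similar_to_known_nonB
```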
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C⁰ continuity of the grid across the block interfaces. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of its performance, as well as demonstration of flexibility.
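As a minimal illustration of the multistage time-stepping idea, one explicit Jameson-type step u^(k) = u^n + α_k Δt R(u^(k−1)) can be coded as below. The four coefficients shown are a common textbook set, not the optimized upwind coefficients developed in the study, and the residual is a toy first-order upwind advection operator assumed for demonstration.

```python
import numpy as np

def multistage_step(u, residual, dt, alphas=(0.25, 1/3, 0.5, 1.0)):
    """One explicit multistage time step:
    u^(k) = u^n + alpha_k * dt * R(u^(k-1))."""
    u0 = u.copy()
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

def R(u, c=1.0, dx=0.01):
    # toy residual: first-order upwind discretization of -c * du/dx
    return -c * (u - np.roll(u, 1)) / dx

u = np.exp(-((np.linspace(0.0, 1.0, 100) - 0.5) / 0.1) ** 2)
u = multistage_step(u, R, dt=0.004)   # CFL = c*dt/dx = 0.4
```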
Al-Hanawi, Mohammed Khaled; Vaidya, Kirit; Alsharqi, Omar; Onwujekwe, Obinna
2018-04-01
The Saudi Healthcare System is universal, financed entirely from government revenue principally derived from oil, and is 'free at the point of delivery' (non-contributory). However, this system is unlikely to be sustainable in the medium to long term. This study investigates the feasibility and acceptability of healthcare financing reform by examining households' willingness to pay (WTP) for a contributory national health insurance scheme. Using the contingent valuation method, a pre-tested interviewer-administered questionnaire was used to collect data from 1187 heads of household in Jeddah province over a 5-month period. Multi-stage sampling was employed to select the study sample. Using a double-bounded dichotomous choice with the follow-up elicitation method, respondents were asked to state their WTP for a hypothetical contributory national health insurance scheme. Tobit regression analysis was used to examine the factors associated with WTP and assess the construct validity of elicited WTP. Over two-thirds (69.6%) indicated that they were willing to participate in and pay for a contributory national health insurance scheme. The mean WTP was 50 Saudi Riyal (US$13.33) per household member per month. Tobit regression analysis showed that household size, satisfaction with the quality of public healthcare services, perceptions about financing healthcare, education and income were the main determinants of WTP. This study demonstrates a theoretically valid WTP for a contributory national health insurance scheme by Saudi people. The research shows that willingness to participate in and pay for a contributory national health insurance scheme depends on participant characteristics. Identifying and understanding the main influencing factors associated with WTP are important to help facilitate establishing and implementing the national health insurance scheme. The results could assist policy-makers to develop and set insurance premiums, thus providing an additional source of healthcare financing.
Dynamic positioning configuration and its first-order optimization
NASA Astrophysics Data System (ADS)
Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu
2014-02-01
Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ the functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of the D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) (Semi) circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and D-optimal helical configuration which is close to the GPS constellation in 3-dimensional space. The initial design of GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and helical curve configuration are still D-optimal. It shows that the given total observation time determines the optimal frequency (repeatability) of moving known points and vice versa, and one way to improve the repeatability is to increase the rotational speed. Under the Newton's law of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations, one of which is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configuration, the other is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. It shows that an effective way to achieve high coverage is to employ the configuration composed of a certain number of moving known points instead of the simplex configuration (such as D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexile coverage and flexile repeatability. Alternately, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed. 
The proposed configuration optimization framework has taken the well-known regular polygons (such as the equilateral triangle and square) in two-dimensional space and regular polyhedrons (regular tetrahedron, cube, regular octahedron, regular icosahedron, or regular dodecahedron) into account. It shows that the conclusions drawn by the proposed technique are more general and no longer limited by different sampling schemes. By the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and some examples are performed with the GPS constellation to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
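As a discrete illustration of the D-criterion used throughout, the snippet below evaluates det(AᵀA) for a finitely sampled helical configuration, with rows of A formed from unit line-of-sight vectors augmented by a clock-offset column as in GDOP analysis. This is a simplified stand-in for the paper's continuous Hilbert-space formulation; the radius, pitch, and sampling are arbitrary assumptions.

```python
import numpy as np

def d_criterion(points):
    """D-optimality measure det(A^T A) for epoch-wise point positioning:
    rows of A are unit line-of-sight vectors plus a clock-offset column
    (a discretely sampled surrogate for the continuous criterion)."""
    u = points / np.linalg.norm(points, axis=1, keepdims=True)
    A = np.hstack([u, np.ones((len(points), 1))])   # clock-offset column
    return np.linalg.det(A.T @ A)

# Sample a single helical arc (assumed geometry, for illustration only)
t = np.linspace(0.0, 4 * np.pi, 48)
helix = np.column_stack([np.cos(t), np.sin(t), 0.05 * t + 1.0])
print(d_criterion(helix))
```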
Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra
2009-03-01
There has been growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end product release testing, despite its ability to generate in situ results and improve efficiency. One potential reason may be the lack of clear validation guidelines that can be applied to assess the suitability of fiber optics. This article describes a comprehensive validation scheme and the development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied to characterize the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, further enhancing the validity of the method. The results demonstrate that fiber optics technology can be successfully validated for end product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association
Real-time validation of receiver state information in optical space-time block code systems.
Alamia, John; Kurzweg, Timothy
2014-06-15
Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
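A minimal sketch of such a run-time check, under assumed window and threshold values: a moving average of a maximum-likelihood decoder statistic is compared against its initial baseline, and drift beyond the threshold flags the stored CSI as stale and in need of re-estimation.

```python
import numpy as np

def csi_valid(decoder_metric, window=64, threshold=0.1):
    """Sketch of the run-time CSI check described above: track a moving
    average of an ML-decoder statistic; a drift beyond `threshold` marks
    the channel state information as stale. Window length and threshold
    are illustrative assumptions, not values from the paper."""
    m = np.convolve(decoder_metric, np.ones(window) / window, mode="valid")
    baseline = m[0]
    return np.abs(m - baseline) < threshold   # False entries -> re-estimate CSI
```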
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Walker, Kevin P.
1991-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
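For illustration, a generic implicit, iterative integrator of the kind described (backward Euler with Newton iteration on the step residual) can be sketched as follows. The published scheme's internals differ, so this is only a minimal stand-in showing why such methods tolerate large time increments on stiff systems.

```python
import numpy as np

def implicit_step(f, jac, y, dt, tol=1e-10, maxit=50):
    """Backward-Euler step y_{n+1} = y_n + dt*f(y_{n+1}), solved by Newton
    iteration -- a generic implicit, iterative integrator, not the HYPELA
    algorithm itself."""
    x = y.copy()
    for _ in range(maxit):
        r = x - y - dt * f(x)                    # step residual
        if np.linalg.norm(r) < tol:
            break
        J = np.eye(len(y)) - dt * jac(x)         # Newton matrix
        x = x - np.linalg.solve(J, r)
    return x

# toy stiff relaxation y' = -1000*(y - 1), stable even with dt >> 1/1000
f   = lambda y: -1000.0 * (y - 1.0)
jac = lambda y: np.array([[-1000.0]])
y1  = implicit_step(f, jac, np.array([0.0]), dt=0.1)
```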
The synchronisation of fractional-order hyperchaos compound system
NASA Astrophysics Data System (ADS)
Noghredani, Naeimadeen; Riahi, Aminreza; Pariz, Naser; Karimpour, Ali
2018-02-01
This paper presents a new compound synchronisation scheme among four hyperchaotic memristor systems with incommensurate fractional-order derivatives. First, a new controller is designed based on an adaptive technique to minimise the errors and guarantee compound synchronisation of the four fractional-order memristor chaotic systems. Given the suitability of compound synchronisation as a reliable solution for secure communication, we then examine the application of the proposed adaptive compound synchronisation scheme in the presence of noise for secure communication. In addition, the unpredictability and complexity of the drive systems enhance the security of the communication. The corresponding theoretical analysis and simulation results validate the effectiveness of the proposed synchronisation scheme using MATLAB.
Multi-zonal Navier-Stokes code with the LU-SGS scheme
NASA Technical Reports Server (NTRS)
Klopfer, G. H.; Yoon, S.
1993-01-01
The LU-SGS (lower upper symmetric Gauss Seidel) algorithm has been implemented into the Compressible Navier-Stokes, Finite Volume (CNSFV) code and validated with a multizonal Navier-Stokes simulation of a transonic turbulent flow around an Onera M6 transport wing. The convergence rate and robustness of the code have been improved and the computational cost has been reduced by at least a factor of 2 over the diagonal Beam-Warming scheme.
Brady, Timothy F; Konkle, Talia; Alvarez, George A
2009-11-01
The information that individuals can hold in working memory is quite limited, but researchers have typically studied this capacity using simple objects or letter strings with no associations between them. However, in the real world there are strong associations and regularities in the input. In an information theoretic sense, regularities introduce redundancies that make the input more compressible. The current study shows that observers can take advantage of these redundancies, enabling them to remember more items in working memory. In 2 experiments, covariance was introduced between colors in a display so that over trials some color pairs were more likely to appear than other color pairs. Observers remembered more items from these displays than from displays where the colors were paired randomly. The improved memory performance cannot be explained by simply guessing the high-probability color pair, suggesting that observers formed more efficient representations to remember more items. Further, as observers learned the regularities, their working memory performance improved in a way that is quantitatively predicted by a Bayesian learning model and optimal encoding scheme. These results suggest that the underlying capacity of the individuals' working memory is unchanged, but the information they have to remember can be encoded in a more compressed fashion. Copyright 2009 APA
Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua
2016-10-01
Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio and strong streak artifacts. To address this particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. Thus it provides more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
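A toy version of the PWLS idea is sketched below, with a first-difference quadratic roughness penalty standing in for the structure tensor total variation term (the paper's actual regularizer penalizes higher-order derivatives and is minimized by an alternating algorithm, not plain gradient descent).

```python
import numpy as np

def pwls_gd(A, y, w, beta, n_iter=200, step=1e-3):
    """Minimal PWLS sketch: minimize (y - Ax)^T W (y - Ax) + beta * |Dx|^2
    by gradient descent, where D is a first-difference operator used here
    as a simplified surrogate for the STV regularizer."""
    x = np.zeros(A.shape[1])
    W = np.diag(w)                               # statistical weights
    D = np.diff(np.eye(A.shape[1]), axis=0)      # first-difference operator
    for _ in range(n_iter):
        grad = -2 * A.T @ W @ (y - A @ x) + 2 * beta * D.T @ (D @ x)
        x -= step * grad
    return x
```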
Assembling mesoscopic particles by various optical schemes
NASA Astrophysics Data System (ADS)
Fournier, Jean-Marc; Rohner, Johann; Jacquot, Pierre; Johann, Robert; Mias, Solon; Salathé, René-P.
2005-08-01
Shaping optical fields is the key issue in the control of the optical forces that pilot the manipulation of mesoscopic polarizable dielectric particles. The latter can be positioned in endless configurations. The scope of this paper is to review and discuss several unusual designs which produce what we think are among the most interesting arrangements. The simplest schemes result from interference between two or several coherent light beams, leading to periodic as well as pseudo-periodic arrays of optical traps. Complex assemblages of traps can be created with holographic-type set-ups; this approach is widely used by the trapping community. Clusters of traps can also be configured through interferometric-type set-ups or by generating external standing waves with diffractive elements. The particularly remarkable possibilities of the Talbot effect for generating three-dimensional optical lattices, and several schemes of self-organization, represent further very interesting means for trapping; they will also be described and discussed in this paper. The mechanisms involved in these trapping schemes do not require high numerical aperture optics; by avoiding the need for bulky microscope objectives, they allow for more physical space around the trapping area to perform experiments. Moreover, very large regular arrays of traps can be manufactured, opening numerous possibilities for new applications.
Memristive device based learning for navigation in robots.
Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A
2017-11-08
Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities in robots are difficult to achieve, especially if done in real-time with ultra-low energy consumption. Here, we present a novel memristive device based learning architecture for robots. Two terminal memristive devices with resistive switching of oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities in a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.
Experimental validation of the Achromatic Telescopic Squeezing (ATS) scheme at the LHC
NASA Astrophysics Data System (ADS)
Fartoukh, S.; Bruce, R.; Carlier, F.; Coello De Portugal, J.; Garcia-Tabares, A.; Maclean, E.; Malina, L.; Mereghetti, A.; Mirarchi, D.; Persson, T.; Pojer, M.; Ponce, L.; Redaelli, S.; Salvachua, B.; Skowronski, P.; Solfaroli, M.; Tomas, R.; Valuch, D.; Wegscheider, A.; Wenninger, J.
2017-07-01
The Achromatic Telescopic Squeezing (ATS) scheme offers new techniques to deliver an unprecedentedly small beam spot size at the interaction points of the ATLAS and CMS experiments of the LHC, while perfectly controlling the chromatic properties of the corresponding optics (linear and non-linear chromaticities, off-momentum beta-beating, spurious dispersion induced by the crossing bumps). The first series of beam tests with ATS optics was achieved during LHC Run I (2011/2012) for a first validation of the basics of the scheme at low intensity. In 2016, a new generation of higher-performance ATS optics was developed and tested more extensively in the machine, still with probe beams for optics measurement and correction at β* = 10 cm, but also with a few nominal bunches to establish first collisions at the nominal β* (40 cm) and beyond (33 cm), and to analyse the robustness of these optics in terms of collimation and machine protection. The paper highlights the most relevant and conclusive results obtained during this second series of ATS tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ho; Xing Lei; Lee, Rena
2012-05-15
Purpose: X-ray scatter incident on detectors degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image-guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information by using the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With scatter-corrected projections where this subtraction is completed, the FDK algorithm based on a cosine weighting function is performed to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by a minimization using a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
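The 1-D scatter estimation step can be sketched with a cubic B-spline, as below. Strip positions, values, and the detector grid are illustrative assumptions; the real pipeline operates on the scatter signal measured behind the lead strips of each projection row.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def estimate_scatter(strip_pos, strip_scatter, detector_u):
    """Cubic B-spline interpolation/extrapolation of the scatter signal
    sampled behind the blocker strips across the full detector row
    (variable names are illustrative)."""
    spline = make_interp_spline(strip_pos, strip_scatter, k=3)
    return spline(detector_u)   # extrapolates outside the strip region

# usage: scatter sampled at 8 strip centres, interpolated to 512 pixels
u_strips = np.linspace(20, 235, 8)
s_strips = 100 + 10 * np.sin(u_strips / 60.0)   # synthetic strip values
scatter_row = estimate_scatter(u_strips, s_strips, np.arange(512))
```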
Impact and Penetration of Thin Aluminum 2024 Flat Panels at Oblique Angles of Incidence
NASA Technical Reports Server (NTRS)
Ruggeri, Charles R.; Revilock, Duane M.; Pereira, J. Michael; Emmerling, William; Queitzsch, Gilbert K., Jr.
2015-01-01
The U.S. Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) are actively involved in improving the predictive capabilities of transient finite element computational methods for application to safety issues involving unintended impacts on aircraft and aircraft engine structures. One aspect of this work involves the development of an improved deformation and failure model for metallic materials, known as the Tabulated Johnson-Cook model, or MAT224, which has been implemented in the LS-DYNA commercial transient finite element analysis code (LSTC Corp., Livermore, CA) (Ref. 1). In this model the yield stress is a function of strain, strain rate and temperature and the plastic failure strain is a function of the state of stress, temperature and strain rate. The failure criterion is based on the accumulation of plastic strain in an element. The model also incorporates a regularization scheme to account for the dependency of plastic failure strain on mesh size. For a given material the model requires a significant amount of testing to determine the yield stress and failure strain as a function of the three-dimensional state of stress, strain rate and temperature. In addition, experiments are required to validate the model. Currently the model has been developed for Aluminum 2024 and validated against a series of ballistic impact tests on flat plates of various thicknesses (Refs. 1 to 3). Full development of the model for Titanium 6Al-4V is being completed, and mechanical testing for Inconel 718 has begun. The validation testing for the models involves ballistic impact tests using cylindrical projectiles impacting flat plates at a normal incidence (Ref. 2). By varying the thickness of the plates, different stress states and resulting failure modes are induced, providing a range of conditions over which the model can be validated. The objective of the study reported here was to provide experimental data to evaluate the model under more extreme conditions, using a projectile with a more complex shape and sharp contacts, impacting flat panels at oblique angles of incidence.
Well-balanced compressible cut-cell simulation of atmospheric flow.
Klein, R; Bates, K R; Nikiforakis, N
2009-11-28
Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.
Optimizing phonon space in the phonon-coupling model
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2017-08-01
We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, looking also at the dependence of the results when varying the Skyrme parametrization.
RUASN: a robust user authentication framework for wireless sensor networks.
Kumar, Pardeep; Choudhury, Amlan Jyoti; Sain, Mangal; Lee, Sang-Gon; Lee, Hoon-Jae
2011-01-01
In recent years, wireless sensor networks (WSNs) have been considered a potential solution for real-time monitoring applications, and they are expected to have a practical impact on next-generation technology as well. However, a WSN can become a liability if suitable security is not considered before deployment: any loopholes in its security might open the door for an attacker and endanger the application. User authentication is one of the most important security services for protecting WSN data access from unauthorized users; it should provide both mutual authentication and session key establishment services. This paper proposes a robust user authentication framework for wireless sensor networks, based on a two-factor (password and smart card) concept. The scheme provides users with services such as user anonymity, mutual authentication, and secure session key establishment, and it allows users to choose and regularly update their password whenever needed. Furthermore, we provide formal verification using Rubin logic and compare RUASN with many existing schemes. As a result, we find that the proposed scheme is resistant to many popular attacks and achieves better efficiency at low computation cost.
Urban green valuation integrating biophysical and qualitative aspects.
Lang, Stefan
2018-01-01
Urban green mapping has become an operational task in city planning, urban land management, and quality of life assessments. As a multi-dimensional, integrative concept, urban green comprises several ecological, socio-economic, and policy-related aspects. In this paper, the author advances the representation of urban green by deriving scale-adapted, policy-relevant units. These so-called geons represent areas of uniform green valuation under certain size and homogeneity constraints in a spatially explicit representation. The study accompanies a regular monitoring scheme carried out by the urban municipality of the city of Salzburg, Austria, using optical satellite data. It was conducted in two stages, namely SBG_QB (10.2 km², QuickBird data from 2005) and SBG_WV (140 km², WorldView-2 data from 2010), within the functional urban area of Salzburg. The geon delineation was validated by several quantitative measures and spatial analysis techniques, as well as ground documentation, including panorama photographs and visual interpretation. The spatial association pattern was assessed by calculating Global Moran's I with incremental search distances. The final geonscape, consisting of 1083 units with an average size of 13.5 ha, was analyzed by spatial metrics. Finally, categories were derived for different types of functional geons. Future research paths and improvements to the described strategy are outlined.
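For reference, Global Moran's I with incremental search distances can be computed as below. This is the standard statistic in a generic sketch (not the study's GIS implementation), with binary distance-band weights rebuilt at each search radius.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x with spatial weight matrix w
    (w[i, j] > 0 when units i and j are neighbours within the current
    search distance; zero diagonal)."""
    z = x - x.mean()
    n, s0 = len(x), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

def weights_within(d, radius):
    """Binary distance-band weights from a pairwise distance matrix d,
    rebuilt for each incremental search radius."""
    w = (d <= radius).astype(float)
    np.fill_diagonal(w, 0.0)
    return w
```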
Mid-space-independent deformable image registration.
Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce
2017-05-15
Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric - that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. Copyright © 2017 Elsevier Inc. All rights reserved.
SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring
NASA Astrophysics Data System (ADS)
Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.
2015-12-01
We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative new features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
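A minimal sketch of the core inversion step may be useful: the multiplicative Richardson-Lucy update for Poisson data, with a simple explicit diffusion pass between iterations standing in for the pseudo-diffusion regularization described above. The system matrix A and all parameters are illustrative assumptions.

```python
import numpy as np

def richardson_lucy(A, y, n_iter=50, eps=1e-12, diffuse=0.1):
    """Richardson-Lucy iteration for Poisson data y ~ Poisson(Ax), with a
    diffusion smoothing step between iterations as a simplified stand-in
    for the pseudo-diffusion regularization."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])             # A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + eps))) / (norm + eps)   # RL update
        x[1:-1] += diffuse * (x[2:] - 2 * x[1:-1] + x[:-2])  # smoothing pass
    return x
```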
An algorithm for deriving core magnetic field models from the Swarm data set
NASA Astrophysics Data System (ADS)
Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko
2013-11-01
In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the usage of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smooths the magnetic field model in time. The algorithm to estimate the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation for describing the rotations. It has been built to handle possible weak time variations of these angles. The modeling approach and software were initially validated on a simple, noise-free, synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.
Goyens, C; Jamet, C; Ruddick, K G
2013-09-09
The present study provides an extensive overview of red and near infra-red (NIR) spectral relationships found in the literature and used to constrain red or NIR-modeling schemes in current atmospheric correction (AC) algorithms, with the aim of improving water-leaving reflectance retrievals, ρw(λ), in turbid waters. However, most of these spectral relationships have been developed with restricted datasets and, subsequently, may not be globally valid, explaining the need for an accurate validation exercise. Spectral relationships are validated here with turbid in situ data for ρw(λ). Functions estimating ρw(λ) in the red were only valid for moderately turbid waters (ρw(λNIR) < 3 × 10⁻³). In contrast, bounding equations used to limit ρw(667) retrievals according to the water signal at 555 nm appeared to be valid for all turbidity ranges present in the in situ dataset. In the NIR region of the spectrum, the constant NIR reflectance ratio suggested by Ruddick et al. (2006) (Limnol. Oceanogr. 51, 1167-1179) was valid for moderately to very turbid waters (ρw(λNIR) < 10⁻²), while the polynomial function, initially developed by Wang et al. (2012) (Opt. Express 20, 741-753) with remote sensing reflectances over the Western Pacific, was also valid for extremely turbid waters (ρw(λNIR) > 10⁻²). The results of this study suggest using the red bounding equations and the polynomial NIR function to constrain red or NIR-modeling schemes in AC processes, with the aim of improving ρw(λ) retrievals where current AC algorithms fail.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, our simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
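A sketch of the two ingredients such a scheme needs, under our own naming (not the authors' implementation): a non-uniform quantizer that maps continuous belief-propagation messages (log-likelihood ratios) onto a small set of levels, and the mutual-information objective used to optimize the thresholds and levels offline.

```python
import numpy as np

def quantize(llr, thresholds, levels):
    """Map continuous LLR messages to len(levels) reconstruction values.

    `thresholds` are the sorted decision boundaries (len(levels) - 1 of
    them); both would be optimized offline to maximize mutual information
    between the source bit and the quantized message.
    """
    idx = np.searchsorted(thresholds, llr)
    return levels[idx]

def mutual_information(p_y_given_x):
    """I(X;Y) in bits for equiprobable X in {0,1}; rows are P(Y|X=x)."""
    p_y = p_y_given_x.mean(axis=0)
    mi = 0.0
    for row in p_y_given_x:
        nz = row > 0
        mi += 0.5 * np.sum(row[nz] * np.log2(row[nz] / p_y[nz]))
    return mi

# e.g. a 3-bit quantizer: 8 levels, 7 mid-point thresholds (values illustrative)
levels = np.array([-6.0, -3.5, -2.0, -0.8, 0.8, 2.0, 3.5, 6.0])
thresholds = 0.5 * (levels[:-1] + levels[1:])
```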
NASA Astrophysics Data System (ADS)
Lashkov, V. A.; Levashko, E. I.; Safin, R. G.
2006-05-01
The heat and mass transfer in the process of drying of high-humidity materials by their depressurization has been investigated. The results of experimental investigation and mathematical simulation of the indicated process are presented. They allow one to determine the regularities of this process and predict the quality of the finished product. A technological scheme and an engineering procedure for calculating the drying of the liquid base of a soap are presented.
Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction
2016-01-01
reconstruction. The array topology samples the scene on a regular grid of phase centers, using a tiling of Boundary Arrays (BAs). Following a simple correction...hardware. Fig. 1 depicts the multistatic array topology. As seen, the topology is a tiled arrangement of Boundary Arrays (BAs). The BA is a well-known...sparse array layout comprised of two linear transmit arrays, and two linear receive arrays [6]. A slightly different tiled arrangement of BAs was used
Accelerating NBODY6 with graphics processing units
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Aarseth, Sverre J.
2012-07-01
We describe the use of graphics processing units (GPUs) for speeding up the code NBODY6 which is widely used for direct N-body simulations. Over the years, the N² nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 per cent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction term is calculated using mainly single precision. We also discuss further strategies connected with coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10⁴-2 × 10⁵ for a dual-GPU system attached to a standard PC.
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2015-01-01
Image super-resolution (SR) plays a vital role in medical imaging that allows a more efficient and effective diagnosis process. Usually, diagnosing is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is better than that of other state-of-the-art schemes.
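For reference, a minimal generic ROMP sketch in the Needell-Vershynin style (not the authors' K-SVD pipeline): at each step the largest correlations are grouped into maximal sets of comparable magnitude (within a factor of 2), and the most energetic group is added to the support, which is the regularization that gives ROMP its robustness over plain OMP.

```python
import numpy as np

def romp(D, y, sparsity, tol=1e-6):
    """Regularized Orthogonal Matching Pursuit (a minimal sketch).

    D: dictionary (n_features x n_atoms), y: signal, sparsity: target K.
    """
    support, residual = [], y.copy()
    coef = np.zeros(0)
    while len(support) < sparsity and np.linalg.norm(residual) > tol:
        corr = D.T @ residual
        order = np.argsort(-np.abs(corr))[:sparsity]       # K best candidates
        best_set, best_energy = order[:1], -1.0
        i = 0
        while i < len(order):                              # greedy grouping
            j = i
            while (j + 1 < len(order) and
                   2 * abs(corr[order[j + 1]]) >= abs(corr[order[i]])):
                j += 1                                     # comparable within 2x
            group = order[i:j + 1]
            energy = float(np.sum(corr[group] ** 2))
            if energy > best_energy:
                best_set, best_energy = group, energy
            i = j + 1
        support = sorted(set(support) | set(best_set.tolist()))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef                # orthogonal projection
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```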
On the regularity of the covariance matrix of a discretized scalar field on the sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilbao-Ahedo, J.D.; Barreiro, R.B.; Herranz, D.
2017-02-01
We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In particular, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors on the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
Regular Deployment of Wireless Sensors to Achieve Connectivity and Information Coverage
Cheng, Wei; Li, Yong; Jiang, Yi; Yin, Xipeng
2016-01-01
Coverage and connectivity are two of the most critical research subjects in WSNs, while regular deterministic deployment is an important deployment strategy that results in pattern-based lattice WSNs. Some studies of optimal regular deployment for generic values of rc/rs have appeared recently. However, most of these deployments assume a disk sensing model and cannot take advantage of data fusion. Meanwhile, some other studies apply detection techniques and data fusion to sensing coverage to enhance the deployment scheme. In this paper, we provide results on optimal regular deployment patterns to achieve information coverage and connectivity for a variety of rc/rs values, all based on data fusion by sensor collaboration, and propose a novel data fusion strategy for deployment patterns. First, the relation between rc/rs and the density of sensors needed to achieve information coverage and connectivity is derived in closed form for regular pattern-based lattice WSNs. Then a dual triangular pattern deployment based on our novel data fusion strategy is proposed, which can utilize collaborative data fusion more efficiently. The strip-based deployment is also extended to a new pattern to achieve information coverage and connectivity, and its characteristics are deduced in closed form. Discussions and simulations are given to show the efficiency of all deployment patterns, including previous patterns and the proposed patterns, to help developers make more informed WSN deployment decisions. PMID:27529246
Haghdoost, AA; Momtazmanesh, N; Shoghi, F; Mohagheghi, M; Mehrolhassani, MH
2013-01-01
Background: In order to improve the quality of education in universities of medical sciences (UMS), and because of the key role of education development centers (EDCs), an accreditation scheme was developed to evaluate their performance. Method: A group of experts in the medical education field was selected based on pre-defined criteria by the EDC of the Ministry of Health and Medical Education. The team worked intensively for 6 months to develop a list of essential standards to assess the performance of EDCs. Having checked the content validity of the standards, clear and measurable indicators were created via consensus. Then, the required information was collected from UMS EDCs; the first round of accreditation was carried out to check the acceptability of this scheme and to prompt universities to prepare themselves for the next, factual round of accreditation. Results: Five standards domains were developed as the conceptual framework for defining the main categories of indicators: governing and leadership, educational planning, faculty development, assessment and examination, and research in education. Nearly all UMS filled in all required data forms precisely with minimum confusion, which shows the practicality of this accreditation scheme. Conclusion: It seems that the UMS have enough interest to provide the required information for this accreditation scheme. However, in order to obtain promising results, most universities will have to work intensively to reach the minimum levels in all required standards. In the long term, implementation of a valid accreditation scheme should play an important role in improving the quality of medical education around the country. PMID:23865031
Qualification of APOLLO2 BWR calculation scheme on the BASALA mock-up
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaglio-Gaudard, C.; Santamarina, A.; Sargeni, A.
2006-07-01
A new neutronic APOLLO2/MOC/SHEM/CEA2005 calculation scheme for BWR applications has been developed by the French 'Commissariat a l'Energie Atomique'. This scheme is based on the latest calculation methodology (accurate mutual and self-shielding formalism, MOC treatment of the transport equation) and the recent JEFF3.1 nuclear data library. This paper presents the experimental validation of this new calculation scheme on the BASALA BWR mock-up. The BASALA programme is devoted to the measurements of the physical parameters of high-moderation 100% MOX BWR cores, in hot and cold conditions. The experimental validation of the calculation scheme deals with core reactivity, fission rate maps, reactivity worth of void and absorbers (cruciform control blades and Gd pins), as well as the temperature coefficient. Results of the analysis using APOLLO2/MOC/SHEM/CEA2005 show an overestimation of the core reactivity by 600 pcm for BASALA-Hot and 750 pcm for BASALA-Cold. Reactivity worths of gadolinium poison pins and hafnium or B4C control blades are predicted by the APOLLO2 calculation within 2% accuracy. Furthermore, the radial power map is well predicted for every core configuration, including the Void configuration and the Hf/B4C configurations: fission rates in the central assembly are calculated within the ±2% experimental uncertainty for the reference cores. The C/E bias on the isothermal Moderator Temperature Coefficient, using the CEA2005 library based on the JEFF3.1 file, amounts to -1.7 ± 0.3 pcm/°C over the range 10-80 °C. (authors)
NASA Astrophysics Data System (ADS)
Zhu, Y.; Ren, L.; Lü, H.
2017-12-01
On the Huaibei Plain of Anhui Province, China, winter wheat (WW) is the most prominent crop. The study area has a transitional climate and a shallow water table. The regional climate is intrinsically variable, and global warming makes it even more complex. The winter wheat growing period, from October to June, falls largely within the dry season, so WW growth regularly depends on irrigation. Under such complex climate change, rainfall varies between growing seasons and water table elevations vary as well; the water table thus supplies a variable moisture exchange between soil water and groundwater, which affects the irrigation and discharge scheme for plant growth and yield. On the Huaibei Plain, environmental pollution is serious because of the agricultural use of chemical fertilizers, pesticides and herbicides. In order to protect river water and groundwater from pollution, the irrigation and discharge scheme should be estimated accurately. Determining the irrigation and discharge scheme for winter wheat under climate change is therefore important for plant growth management decision-making. Based on field observations and local weather data of 2004-2005 and 2005-2006, the numerical model HYDRUS-1D was calibrated and validated by comparing simulated and measured root-zone soil water contents. The validated model was used to estimate the irrigation and discharge scheme for 2010-2090 under the scenarios described by HadCM3 (1970-2000 climate states are taken as baselines), with winter wheat growth in an optimum state as indicated by growth height and LAI.
Chalmers, Rachel M; Pérez-Cordón, Gregorio; Cacció, Simone M; Klotz, Christian; Robertson, Lucy J
2018-06-13
Due to the occurrence of genetic recombination, a reliable and discriminatory method to genotype Cryptosporidium isolates at the intra-species level requires the analysis of multiple loci, but a standardised scheme is not currently available. A workshop was held at the Robert Koch Institute, Berlin in 2016 that gathered 23 scientists with appropriate expertise (in either Cryptosporidium genotyping and/or surveillance, epidemiology or outbreaks) to discuss the processes for the development of a robust, standardised, multi-locus genotyping (MLG) scheme and propose an approach. The background evidence and main conclusions were outlined in a previously published report; the objectives of this further report are to describe 1) the current use of Cryptosporidium genotyping, 2) the elicitation and synthesis of the participants' opinions, and 3) the agreed processes and criteria for the development, evaluation and validation of a standardised MLG scheme for Cryptosporidium surveillance and outbreak investigations. Cryptosporidium was characterised to the species level in 7/12 (58%) participating European countries, mostly for human outbreak investigations. Further genotyping was mostly by sequencing the gp60 gene. A ranking exercise of performance and convenience criteria found that portability, biological robustness, typeability, and discriminatory power were considered by participants as the most important attributes in developing a multilocus scheme. The major barrier to implementation was lack of funding. A structured process for marker identification, evaluation, validation, implementation, and maintenance was proposed and outlined for application to Cryptosporidium, with prioritisation of Cryptosporidium parvum to support investigation of transmission in Europe.
Campbell, Princess Christina; Korie, Patrick Chukwuemeka; Nnaji, Feziechukwu Collins
2014-01-01
Background: The National Health Insurance Scheme (NHIS), operated in Nigeria mainly by health maintenance organisations (HMOs), took off formally in June 2005. In view of the inherent risks in the operation of any social health insurance, it is necessary to manage these risks efficiently for the sustainability of the scheme. Consequently, the risk-management strategies deployed by HMOs need regular assessment. This study assessed risk management in the Nigerian social health insurance scheme among HMOs. Materials and Methods: Cross-sectional survey of 33 HMOs participating in the NHIS. Results: Standard risk-management strategies were utilised by 11 HMOs (52.6%); the strategies not utilised in the NHIS by the remaining 10 (47.4%) were risk equalisation and reinsurance. As many as 11 (52.4%) of the participating HMOs had a weak enrollee base (less than 30,000) and poor monthly premiums, and these impacted negatively on the HMOs such that a large percentage, 12 (54.1%), were unable to meet their financial obligations. Most of the HMOs, 15 (71.4%), participated in the Millennium Development Goal (MDG) maternal and child health insurance programme. Conclusions: A weak enrollee base and poor monthly premiums predisposed the HMOs to financial risk, which impacted negatively on overall performance in service delivery in the NHIS, further worsened by the non-utilisation of risk equalisation and reinsurance as risk-management strategies. There is a need to make the scheme compulsory and to introduce risk equalisation and reinsurance. PMID:25298605
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimation of sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first approach is a finite difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics, as does the truncation of the Chebyshev series, and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
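For orientation, the classic Lax-Friedrichs update that inspired the first scheme replaces the centre value by the average of its neighbours, which is exactly what damps the high harmonics. A minimal sketch on a generic conservation law u_t + f(u)_x = 0 (illustrative, not the authors' wide-stencil scheme):

```python
import numpy as np

def lax_friedrichs(u0, flux, dx, dt, steps):
    """Classic Lax-Friedrichs time marching for u_t + f(u)_x = 0.

    The neighbour average (never the centre point itself) introduces a
    numerical diffusion that damps high-frequency components; dt must
    satisfy the CFL condition. Periodic boundaries for simplicity.
    """
    u = u0.copy()
    for _ in range(steps):
        up = np.roll(u, -1)   # u_{i+1}
        um = np.roll(u, 1)    # u_{i-1}
        u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))
    return u

# usage: linear advection at unit speed on a periodic grid
x = np.linspace(0, 1, 200, endpoint=False)
u = lax_friedrichs(np.sin(2 * np.pi * x), lambda v: v, dx=1/200, dt=0.004, steps=100)
```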
ERIC Educational Resources Information Center
Roy, Amélie; Guay, Frédéric; Valois, Pierre
2013-01-01
In the province of Quebec, Canada, a trend towards full inclusion has impelled teachers to adapt their instruction to meet the needs of both advanced and weaker learners in regular school settings. The main purpose of the present investigation was to develop and validate the Differentiated Instruction Scale (DIS), which assesses the use of…
Reliability and Validity of Goal Orientation in Exercise Measure (GOEM)--Turkish Version
ERIC Educational Resources Information Center
Ersöz, Gözde; Müftüler, Mine; Lapa, Tennur Yerlisu; Tümer, Adile
2017-01-01
The aim of this study was to examine validity and reliability of the Turkish version of the Goal Orientation in Exercise Measure (GOEM). There were 408 participants who were regularly exercising and their age ranged from 17 to 61 years old. The psychometric characteristics of the scale were investigated using exploratory factor analysis (EFA),…
Symmetry-preserving contact interaction model for heavy-light mesons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serna, F. E.; Brito, M. A.; Krein, G.
2016-01-22
We use a symmetry-preserving regularization method for ultraviolet divergences in a vector-vector contact interaction model for low-energy QCD. The contact interaction is a representation of the nonperturbative kernels used in Dyson-Schwinger and Bethe-Salpeter equations. The regularization method is based on a subtraction scheme that avoids standard steps in the evaluation of divergent integrals that invariably lead to symmetry violation. Aiming at the study of heavy-light mesons, we have applied the method to the pseudoscalar π and K mesons. We have solved the Dyson-Schwinger equation for the u, d and s quark propagators, and obtained the bound-state Bethe-Salpeter amplitudes in a way that the Ward-Green-Takahashi identities reflecting global symmetries of the model are satisfied for arbitrary routing of the momenta running in loop integrals.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron.
Gaul, Konstantin; Berger, Robert
2017-07-07
A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
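A compact sketch of the two generic ingredients (ours, under stated assumptions, not the authors' exact algorithms): Gaussian mollification of the noisy data, followed by a direct discretization of the inverse Abel transform f(r) = -(1/π) ∫_r^R g'(y)/√(y² - r²) dy on a uniform grid.

```python
import numpy as np

def mollify(g, dy, delta):
    """Gaussian mollification of noisy data g sampled with spacing dy.

    Convolving with a Gaussian of radius `delta` restores continuity with
    respect to the data before the (unstable) differentiation step.
    """
    half = int(3 * delta / dy)
    t = np.arange(-half, half + 1) * dy
    kernel = np.exp(-(t / delta) ** 2)
    kernel /= kernel.sum()
    return np.convolve(g, kernel, mode="same")

def abel_invert(g, y, eps=1e-12):
    """Discretized inverse Abel transform on a uniform grid y (rectangle rule)."""
    dg = np.gradient(g, y)                 # numerical derivative of the data
    dy = y[1] - y[0]
    f = np.empty_like(g)
    for i, r in enumerate(y):
        mask = y > r                       # integrate from r outward
        w = dg[mask] / np.sqrt(y[mask] ** 2 - r ** 2 + eps)
        f[i] = -np.sum(w) * dy / np.pi
    return f
```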
Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.
2012-01-01
Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
Green's function enriched Poisson solver for electrostatics in many-particle systems
NASA Astrophysics Data System (ADS)
Sutmann, Godehard
2016-06-01
A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme but the computational efficiency for higher order methods is found to be superior due to a faster convergence to the exact result as a function of the charge support.
Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA
NASA Astrophysics Data System (ADS)
He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong
2018-04-01
This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.
Geometric integration in Born-Oppenheimer molecular dynamics.
Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N
2011-12-14
Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time-reversible Verlet, (2) second order optimal symplectic, and (3) third order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite arithmetics in a perfectly reversible dynamics.
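The general shape of such a propagation step is a time-reversible Verlet update of the auxiliary electronic degrees of freedom, harmonically coupled to the SCF density, plus a weak history-weighted damping term. The sketch below is schematic; the coupling and dissipation constants are placeholders, not the optimized values of the published schemes.

```python
import numpy as np

def aux_density_step(p, p_old, d_scf, kappa=1.8, alpha=0.05, hist=None, c=None):
    """One modified-Verlet step for the auxiliary electronic degrees of
    freedom in extended-Lagrangian BOMD.

    p, p_old : auxiliary density at steps n and n-1
    d_scf    : SCF density at step n (possibly only approximately converged)
    kappa    : harmonic coupling of p to d_scf
    alpha, c : strength and coefficients of the weak dissipation summed over
               the history `hist` = [p_n, p_{n-1}, ...] (placeholder values)
    """
    p_new = 2.0 * p - p_old + kappa * (d_scf - p)   # time-reversible Verlet core
    if hist is not None and c is not None:
        p_new += alpha * sum(ck * pk for ck, pk in zip(c, hist))  # damping
    return p_new
```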
Generalized teleportation by quantum walks
NASA Astrophysics Data System (ADS)
Wang, Yu; Shang, Yun; Xue, Peng
2017-09-01
We develop a generalized teleportation scheme based on quantum walks with two coins. For an unknown qubit state, we use two-step quantum walks on the line and quantum walks on the cycle with four vertices for teleportation. For any d-dimensional states, quantum walks on complete graphs and quantum walks on d-regular graphs can be used for implementing teleportation. Compared with existing d-dimensional state teleportation, no prior entangled state is required, and the necessary maximal entanglement resource is generated by the first step of the quantum walk. Moreover, two projective measurements with d elements are needed by quantum walks on the complete graph, rather than one joint measurement with d^2 basis states. Quantum walks have many applications in quantum computation and quantum simulations. This is the first scheme to realize a communication protocol with quantum walks, thus opening wider applications.
Rohan, A J
2014-07-01
To examine the association of pain assessment scores achieved through regular reassessment practice, as required by the Joint Commission (JC), with painful events and the use of analgesics in premature, ventilated infants. A cross-sectional study was performed in two tertiary level neonatal intensive care units. Pain was assessed at regular intervals at each center using validated multidimensional instruments in accordance with the JC standards. Sample comprised 196 ventilated premature infant patient-days. Overall, 2% of scores suggested the presence of pain, and 0.1% of pain scores were associated with analgesia. Ventilated infants who were exposed to multiple pain-associated procedures in a day never demonstrated pain score elevations despite infrequent preemptive or continuous analgesic administration. Pain assessment scores achieved using regular reassessment processes were poorly correlated with exposure to pain-associated procedures or conditions. Low pain scores achieved through regular reassessment may not correlate to low pain exposure. Resources that are expended on regular reassessment processes may need to be reconsidered in light of the low yield for clinical alterations in care in this setting.
Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding
NASA Astrophysics Data System (ADS)
Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool
2017-12-01
In this paper, an encryption scheme for phase-images based on the 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure than intensity images because of non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. The scheme has been validated for grayscale images, and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key-space is large enough to resist brute-force attack, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and analysis based on the correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results indicate that the proposed encryption scheme possesses a high level of security.
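The key-generation stage of such a scheme integrates the Lorenz system and maps the trajectory to pseudo-random mask values. A minimal sketch of that stage only (the DRPE optics are separate; parameter and folding choices here are illustrative, not the paper's):

```python
import numpy as np

def lorenz_keystream(n, state=(0.1, 0.0, 0.0), sigma=10.0, rho=28.0,
                     beta=8.0 / 3.0, dt=0.005, discard=1000):
    """Generate a chaotic keystream from the 3D Lorenz system via RK4.

    An initial transient of `discard` steps is dropped, then each state is
    folded into [0, 1) to build pseudo-random amplitude/phase mask values.
    The initial state and parameters act as the secret key.
    """
    def deriv(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    s = np.array(state, dtype=float)
    out = np.empty((n, 3))
    for i in range(discard + n):
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i >= discard:
            out[i - discard] = s
    return np.abs(out * 1e4) % 1.0   # fold the trajectory into [0, 1)
```

The sensitivity of the Lorenz trajectory to its initial conditions is exactly what makes the key-space effectively continuous and brute force impractical.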
Numerical solution of special ultra-relativistic Euler equations using central upwind scheme
NASA Astrophysics Data System (ADS)
Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul
2018-06-01
This article is concerned with the numerical approximation of the one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations. These equations describe perfect fluid flow in terms of the particle density, the four-velocity and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the scheme makes use of information about the local propagation speeds. By using a Runge-Kutta time stepping method and MUSCL-type initial reconstruction, we obtain second-order accuracy for the proposed scheme. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. For all the numerical test cases, our proposed scheme demonstrates very good agreement with the results obtained by well-established algorithms, even in the case of highly relativistic 2D test problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The robustness and efficiency of the central upwind scheme are demonstrated by the numerical results.
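The central-upwind flux is the ingredient that uses the one-sided local speeds. A first-order sketch, illustrated on scalar Burgers' equation rather than the relativistic Euler system (so f'(u) = u and the local speeds are immediate):

```python
import numpy as np

def central_upwind_burgers(u, dx, dt, steps):
    """First-order central-upwind scheme for u_t + (u^2/2)_x = 0.

    ap/am are the maximal/minimal one-sided local speeds at each cell face;
    they steer the numerical diffusion so it acts only where waves actually
    propagate. Periodic boundaries; dt must satisfy the CFL condition.
    """
    f = lambda v: 0.5 * v * v
    for _ in range(steps):
        ul, ur = u, np.roll(u, -1)                 # states at face j+1/2
        zero = np.zeros_like(u)
        ap = np.maximum.reduce([ul, ur, zero])     # max local speed, f'(u) = u
        am = np.minimum.reduce([ul, ur, zero])     # min local speed
        spread = ap - am
        denom = np.where(spread > 1e-12, spread, 1.0)
        H = (ap * f(ul) - am * f(ur)) / denom + ap * am / denom * (ur - ul)
        H = np.where(spread > 1e-12, H, 0.5 * (f(ul) + f(ur)))  # degenerate faces
        u = u - dt / dx * (H - np.roll(H, 1))      # forward-Euler update
    return u
```

The paper's second-order accuracy comes from replacing the piecewise-constant face states with MUSCL reconstruction and the forward-Euler update with Runge-Kutta stepping.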
Kaneko, Mei; Sato, Iori; Soejima, Takafumi; Kamibeppu, Kiyoko
2014-09-01
The purpose of the study is to develop a Japanese version of the Pediatric Quality of Life Inventory (PedsQL) Generic Core Scales Young Adult Version (PedsQL-YA-J) and determine the feasibility, reliability, and validity of the scales. Translation equivalence and content validity were verified using back-translation and cognitive debriefing tests. A total of 428 young adults recruited from one university, two vocational schools, or five companies completed questionnaires. We determined questionnaire feasibility, internal consistency, and test-retest reliability; checked concurrent validity against the Center for Epidemiologic Studies Depression Scale (CES-D); determined convergent and discriminant validity with the Medical Outcome Study 36-item Short Form Health Survey (SF-36); described known-groups validity with regard to subjective symptoms, illness or injury requiring regular medical visits, and depression; and verified factorial validity. All scales were internally consistent (Cronbach's coefficient alpha = 0.77-0.86); test-retest reliability was acceptable (intraclass correlation coefficient = 0.57-0.69); and all scales were concurrently valid with depression (Pearson's correlation coefficient = 0.43-0.57). The scales' convergent and discriminant validity with the SF-36 and CES-D were acceptable. Evaluation of known-groups validity confirmed that the Physical Functioning scale was sensitive for subjective symptoms, the Emotional Functioning scale for depression, and the Work/School Functioning scale for illness or injury requiring regular medical visits. Exploratory factor analysis found a six-factor structure consistent with the assumed structure (cumulative proportion = 57.0%). The PedsQL-YA-J is suitable for assessing health-related quality of life in young adults in education, employment, or training, and for clinical trials and epidemiological research.
Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing
NASA Astrophysics Data System (ADS)
Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline
2017-11-01
Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
Qiu, Shuming; Xu, Guoai; Ahmad, Haseeb; Guo, Yanhui
2018-01-01
The Session Initiation Protocol (SIP) is an extensive and esteemed communication protocol employed to regulate signaling as well as for controlling multimedia communication sessions. Recently, Kumari et al. proposed an improved smart card based authentication scheme for SIP based on Farash's scheme. Farash claimed that his protocol is resistant against various known attacks. However, we observe some notable flaws in Farash's protocol. We point out that Farash's protocol is prone to key-compromise impersonation attack and is unable to provide pre-verification in the smart card, efficient password change and perfect forward secrecy. To overcome these limitations, in this paper we present an enhanced authentication mechanism based on Kumari et al.'s scheme. We prove that the proposed protocol not only overcomes the issues in Farash's scheme, but can also resist all known attacks. We also provide a security analysis of the proposed scheme with the help of the widespread AVISPA (Automated Validation of Internet Security Protocols and Applications) software. Finally, comparing with the earlier proposals in terms of security and efficiency, we conclude that the proposed protocol is efficient and more secure.
Integrated optical 3D digital imaging based on DSP scheme
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.
2008-03-01
We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. This scheme is based on a parallel hardware structure with the aid of a DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated scheme of 3-D imaging, phase measurement profilometry is adopted. To realize pipeline processing of the fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). Since the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme can reach a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
Advection of Microphysical Scalars in Terminal Area Simulation System (TASS)
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2011-01-01
The Terminal Area Simulation System (TASS) is a large eddy scale atmospheric flow model with extensive turbulence and microphysics packages. It has been applied successfully in the past to a diverse set of problems ranging from prediction of severe convective events (Proctor et al. 2002), tracking storms and for simulating weapons effects such as the dispersion and fallout of fission debris (Bacon and Sarma 1991), etc. More recently, TASS has been used for predicting the transport and decay of wake vortices behind aircraft (Proctor 2009). An essential part of the TASS model is its comprehensive microphysics package, which relies on the accurate computation of microphysical scalar transport. This paper describes an evaluation of the Leonard scheme implemented in the TASS model for transporting microphysical scalars. The scheme is validated against benchmark cases with exact solutions and compared with two other schemes - a Monotone Upstream-centered Scheme for Conservation Laws (MUSCL)-type scheme after van Leer and LeVeque's high-resolution wave propagation method. Finally, a comparison between the schemes is made against an incident of severe tornadic super-cell convection near Del City, Oklahoma.
Wave drift damping acting on multiple circular cylinders (model tests)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinoshita, Takeshi; Sunahara, Shunji; Bao, W.
1995-12-31
The wave drift damping for the slow drift motion of a four-column platform is experimentally investigated. The estimation of the damping force of the slow drift motion of moored floating structures in ocean waves is one of the most important topics. Bao et al. calculated the interaction of multiple circular cylinders based on potential flow theory, and showed that the wave drift damping is significantly influenced by the interaction between cylinders. This calculation method assumes that the slow drift motion can be approximately replaced by a steady current, that is, structures in slow drift motion are supposed to be equivalent to ones in both regular waves and slow current. To validate the semi-analytical solutions of Bao et al., experiments were carried out. First, the added resistance due to waves acting on a structure composed of multiple (four) vertical circular cylinders fixed to a slowly moving carriage was measured in regular waves. Next, the added resistance of the structure moored by a linear spring to the slowly moving carriage was measured in regular waves. Furthermore, to validate the assumption that the slow drift motion can be replaced by a steady current, free decay tests in still water and in regular waves were compared with simulations of the slow drift motion using the wave drift damping coefficient obtained from the added resistance tests.
Neural Network Autopilot System for a Mathematical Model of the Boeing 747
1998-08-04
Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics.
Mulaveesala, Ravibabu; Venkata Ghali, Subbarao
2011-05-01
This paper proposes a Barker coded excitation for defect detection using infrared non-destructive testing. The capability of the proposed excitation scheme is demonstrated with a recently introduced correlation-based post-processing approach and compared with the existing phase-based analysis by taking the signal-to-noise ratio into consideration. The applicability of the proposed scheme has been experimentally validated on a carbon fiber reinforced plastic specimen containing flat bottom holes located at different depths.
Simulation study on combination of GRACE monthly gravity field solutions
NASA Astrophysics Data System (ADS)
Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian
2016-04-01
The GRACE monthly gravity fields from different processing centers are combined in the frame of the project EGSIEM. This combination is done at the solution level first, to define weights which will then be used for a combination at the normal equation level. The applied weights are based on the deviation of the individual gravity fields from the arithmetic mean of all involved gravity fields. This weighting scheme relies on the assumption that the true gravity field is close to the arithmetic mean of the involved individual gravity fields. However, the arithmetic mean can be affected by systematic errors in individual gravity fields, which consequently results in inappropriate weights. For the future operational scientific combination service of GRACE monthly gravity fields, it is necessary to examine the validity of the weighting scheme also in possible extreme cases. To investigate this, we conduct a simulation study on the combination of gravity fields. First, we show how a deviating gravity field can affect the combined solution in terms of signal and noise in the spatial domain. We also show the impact of systematic errors in individual gravity fields on the resulting combined solution. Then, we investigate whether the weighting scheme still works in the presence of outliers. The results of this simulation study will be useful for understanding and validating the weighting scheme applied to the combination of the monthly gravity fields.
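One plausible realization of such a deviation-based weighting (a sketch under our own assumptions; the EGSIEM weighting details are not specified in the abstract) is an inverse-variance weight per processing center:

```python
import numpy as np

def combination_weights(solutions):
    """Deviation-based weights for combining monthly gravity field solutions.

    solutions : array (n_centers, n_coefficients) of spherical-harmonic
                coefficient sets from the individual processing centers.
    Each center is weighted by the inverse variance of its deviation from
    the arithmetic mean of all solutions; a single systematically biased
    member shifts that mean and hence distorts everyone's weight, which is
    the failure mode the simulation study probes.
    """
    mean = solutions.mean(axis=0)
    var = ((solutions - mean) ** 2).mean(axis=1)   # one scalar per center
    w = 1.0 / var
    return w / w.sum()                             # normalized weights
```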
Indirect measurement of three-photon correlation in nonclassical light sources
NASA Astrophysics Data System (ADS)
Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon
2016-06-01
We observe the three-photon correlation in nonclassical light sources by using an indirect measurement scheme based on the dead-time effect of photon-counting detectors. We first develop a general theory which enables us to extract the three-photon correlation from the two-photon correlation of an arbitrary light source measured with detectors with finite dead times. We then confirm the validity of our measurement scheme in experiments done with a cavity-QED microlaser operating with a large intracavity mean photon number exhibiting both sub- and super-Poissonian photon statistics. The experimental results are in good agreement with the theoretical expectation. Our measurement scheme provides an alternative approach for N -photon correlation measurement employing (N -1 ) detectors and thus a reduced measurement time for a given signal-to-noise ratio, compared to the usual scheme requiring N detectors.
Waltman, Ludo; Yan, Erjia; van Eck, Nees Jan
2011-10-01
Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighting (as in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighting are combined.
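The non-recursive building block, the mean normalized citation score, is easy to state in code (our sketch; the recursive variant would additionally weight each citation by the score of the citing paper, iterating to a fixed point):

```python
import numpy as np

def mncs(citations, fields):
    """Mean normalized citation score over a set of papers.

    citations : citation count per paper
    fields    : field label per paper, from the classification scheme
    Each paper's count is divided by the mean count of its field; the
    indicator is the mean of these normalized scores, so the choice of
    classification scheme directly shapes the result.
    """
    citations = np.asarray(citations, dtype=float)
    fields = np.asarray(fields)
    norm = np.empty_like(citations)
    for f in np.unique(fields):
        mask = fields == f
        norm[mask] = citations[mask] / max(citations[mask].mean(), 1e-12)
    return norm.mean()
```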
NASA Astrophysics Data System (ADS)
Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui
2018-04-01
A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key elements of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristic of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs helps to achieve linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of effective nonlinearity suppression and provides a better SNR performance even over a large frequency range. The proposed scheme is shown to be effective and easily implemented for real-time photonic applications.
Performance evaluation methodology for historical document image binarization.
Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis
2013-02-01
Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
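A compact sketch of the pixel-based core of such an evaluation (ours, not the paper's exact weighting: the paper's scheme derives the weights from the ground truth, e.g. near character contours, to diminish evaluation bias; a uniform weighting is the illustrative default here):

```python
import numpy as np

def pixel_recall_precision(result, gt, weights_fn=None):
    """Weighted pixel-based recall and precision for a binarized image.

    result, gt : boolean arrays (True = text pixel) for the binarization
                 output and the ground truth.
    weights_fn : optional callable mapping gt to a per-pixel weight map;
                 uniform weights are used when it is omitted.
    """
    w = np.ones(gt.shape, dtype=float) if weights_fn is None else weights_fn(gt)
    tp = np.sum(w * (result & gt))                      # weighted true positives
    recall = tp / max(np.sum(w * gt), 1e-12)            # fraction of text found
    precision = tp / max(np.sum(w * result), 1e-12)     # fraction of output correct
    return recall, precision
```

The paper's additional metrics (broken/missed text, false alarms, background noise, character enlargement and merging) are diagnostics built on top of the same pixel-level comparison.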
Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.
Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu
2017-03-15
The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively promote the efficiency of the system and reduce the potential security risk of switching between the transmission and calibration modes. Here, we propose a single-photon-level continuously working PBTS using only sifted key bits revealed during the error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system in a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% and a standard deviation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method is proposed for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. Particularly, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and then the accuracy of different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicate that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
NASA Astrophysics Data System (ADS)
Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun
2015-07-01
An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between the low-frequency image and the defocused image. In general, the NSCT decomposition places the detail information of an image, at different scales and along different directions, in the bandpass subband coefficients. In order to correctly pick out the pre-fused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows of different sizes, but also correctly recognizes the focused pixels in the source images; we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by the inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.
Das, Ashok Kumar
2015-03-01
An integrated EPR (Electronic Patient Record) information system provides medical institutions and academia with detailed patient information, enabling correct clinical decisions for maintaining and analyzing patients' health. In such a system, illegal access must be restricted and theft of information during transmission over the insecure Internet must be prevented. Lee et al. proposed an efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Their scheme is very efficient because it uses only one-way hash functions and bitwise exclusive-or (XOR) operations. However, in this paper we show that, efficient as it is, their scheme has three security weaknesses: (1) it has design flaws in the password change phase, (2) it fails to protect against the privileged insider attack, and (3) it lacks formal security verification. We also find that another recently proposed scheme by Wen has the same security drawbacks as Lee et al.'s scheme. To remedy these security weaknesses, we propose a secure and efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Our scheme is as efficient as Lee et al.'s and Wen's schemes, since it also uses only one-way hash functions and bitwise XOR operations. Through security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that it is secure against passive and active attacks.
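To make the "hash and XOR only" flavor of such schemes concrete, here is a minimal generic sketch of a masked password verifier built solely from a one-way hash and XOR. It is not Lee et al.'s, Wen's, or the proposed protocol; all names are illustrative, and SHA-256 stands in for the abstract hash function.

    import hashlib
    import secrets

    def h(*parts: bytes) -> bytes:
        """One-way hash used throughout the sketch (SHA-256 as a stand-in)."""
        d = hashlib.sha256()
        for p in parts:
            d.update(p)
        return d.digest()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Registration (illustrative): the server masks its secret-keyed value with
    # the password hash, so the smart card never stores h(ID, x) in the clear.
    server_secret = secrets.token_bytes(32)
    user_id, password = b"patient42", b"correct horse"
    card_value = xor(h(user_id, server_secret), h(password))  # stored on the card

    # Login (illustrative): the card unmasks h(ID, x) from the entered password
    # and proves knowledge of it with a fresh nonce.
    nonce = secrets.token_bytes(16)
    proof = h(xor(card_value, h(password)), nonce)
    # The server recomputes h(h(ID, x), nonce) and compares.
    assert proof == h(h(user_id, server_secret), nonce)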
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and to enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion remains imperfectly understood in the groundwater field, and there is concern that this unfamiliarity can lead to underuse and misuse of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at intermediate- to advanced-level modelers, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and the techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented following a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and with processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and of resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
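The Tikhonov option can be illustrated by the regularized least-squares step that underlies this class of methods. The sketch below is a toy stand-in under generic assumptions, not PEST's implementation; the matrix and parameter names are invented.

    import numpy as np

    def tikhonov_step(J, d, R, lam):
        """Solve min ||J p - d||^2 + lam^2 ||R p||^2 by stacking the
        regularization rows under the Jacobian and calling lstsq."""
        A = np.vstack([J, lam * R])
        b = np.concatenate([d, np.zeros(R.shape[0])])
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return p

    # Toy ill-posed problem: many more parameters than observations.
    rng = np.random.default_rng(0)
    J = rng.normal(size=(10, 50))            # sensitivity (Jacobian) matrix
    p_true = np.zeros(50); p_true[:5] = 1.0
    d = J @ p_true + 0.01 * rng.normal(size=10)
    R = np.eye(50)                           # prefer small departures from zero
    print(tikhonov_step(J, d, R, lam=1.0)[:5])

Without the regularization rows the 10-by-50 system is underdetermined; the lam-weighted rows stabilize the estimate at the cost of bias, which is exactly the trade-off the guide discusses.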
Li, Dong; Pan, Zhisong; Hu, Guyu; Zhu, Zexuan; He, Shan
2017-03-14
Active modules are connected regions in a biological network that show significant changes in expression under particular conditions. The identification of such modules is important because it may reveal the regulatory and signaling mechanisms associated with a given cellular response. In this paper, we propose a novel active module identification algorithm based on a memetic algorithm. We propose a novel encoding/decoding scheme to ensure the connectedness of the identified active modules, and, based on this scheme, we design and incorporate a local search operator into the memetic algorithm to improve its performance. The effectiveness of the proposed algorithm is validated on both small and large protein interaction networks.
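The abstract does not spell out the encoding/decoding scheme itself. One common way to guarantee connectedness is a repair-style decoder that keeps only the largest connected component induced by the selected genes; the sketch below illustrates that idea under this assumption and is not necessarily the authors' scheme.

    from collections import deque

    def decode_connected(adj, selected):
        """Repair a binary gene-selection set into a connected module by
        keeping the largest connected component of the induced subgraph."""
        selected = set(selected)
        seen, best = set(), set()
        for start in selected:
            if start in seen:
                continue
            comp, queue = {start}, deque([start])
            while queue:                      # BFS within the selected genes
                u = queue.popleft()
                for v in adj.get(u, ()):
                    if v in selected and v not in comp:
                        comp.add(v)
                        queue.append(v)
            seen |= comp
            if len(comp) > len(best):
                best = comp
        return best

    # Toy network: two candidate clumps; the decoder keeps the larger one.
    adj = {1: [2], 2: [1, 3], 3: [2], 7: [8], 8: [7]}
    print(decode_connected(adj, {1, 2, 3, 7, 8}))   # -> {1, 2, 3}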
An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.
Xiao, Bing; Yin, Shen
2018-02-01
This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent observer-based approach with a fast reconstruction property is developed, capable of precisely reconstructing the actual actuator fault. Lyapunov stability analysis shows that the reconstruction error converges to zero in finite time, so the scheme provides both precise and fast reconstruction of actuator faults. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the type of the faults, or their time profile. The reconstruction performance and capability of the proposed approach are further validated by simulation and experimental results.
NASA Astrophysics Data System (ADS)
Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua
2018-03-01
In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, the computer-generated hologram, which carries the information of the three primitive images, is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted given the fractional orders and the MLCA rules. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
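The GS step can be sketched as standard phase retrieval. In the code below the ordinary FFT stands in for the fractional Fourier transform of the actual scheme, so it illustrates only the iteration structure, not the multiple-image encryption itself.

    import numpy as np

    def gerchberg_saxton(target_amp, n_iter=50, seed=0):
        """Phase retrieval: find a pure-phase hologram whose (here: ordinary)
        Fourier transform has the target amplitude."""
        rng = np.random.default_rng(seed)
        field = target_amp * np.exp(1j * 2 * np.pi * rng.random(target_amp.shape))
        for _ in range(n_iter):
            holo = np.fft.ifft2(field)
            holo = np.exp(1j * np.angle(holo))                  # keep phase only
            field = np.fft.fft2(holo)
            field = target_amp * np.exp(1j * np.angle(field))   # impose amplitude
        return np.angle(holo)

    # Toy target: a bright square on a dark background.
    target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0
    hologram_phase = gerchberg_saxton(target)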
Longitudinal phase-space coating of beam in a storage ring
NASA Astrophysics Data System (ADS)
Bhat, C. M.
2014-06-01
In this Letter, I report on a novel scheme for beam stacking without any beam emittance dilution using a barrier rf system in synchrotrons. The general principle of the scheme called longitudinal phase-space coating, validation of the concept via multi-particle beam dynamics simulations applied to the Fermilab Recycler, and its experimental demonstration are presented. In addition, it has been shown and illustrated that the rf gymnastics involved in this scheme can be used in measuring the incoherent synchrotron tune spectrum of the beam in barrier buckets and in producing a clean hollow beam in longitudinal phase space. The method of beam stacking in synchrotrons presented here is the first of its kind.
ERIC Educational Resources Information Center
Deng, Meng; Wang, Sisi; Guan, Wenjun; Wang, Yan
2017-01-01
The aim of this study was to develop and validate an instrument of inclusive teachers' competencies for teaching students with special educational needs in China. Data were obtained from a preliminary and large-scale investigation in Beijing. The primary analyses included exploratory factor analysis and confirmatory factor analysis. The findings…
The hydrogen atom in D = 3 - 2ɛ dimensions
NASA Astrophysics Data System (ADS)
Adkins, Gregory S.
2018-06-01
The nonrelativistic hydrogen atom in D = 3 - 2ɛ dimensions is the reference system for perturbative schemes used in dimensionally regularized nonrelativistic effective field theories to describe hydrogen-like atoms. Solutions to the D-dimensional Schrödinger-Coulomb equation are given in the form of a double power series. Energies and normalization integrals are obtained numerically and also perturbatively in terms of ɛ. The utility of the series expansion is demonstrated by the calculation of the divergent expectation value ⟨(V′)²⟩.
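In one common convention (the paper's exact normalization may differ), the D-dimensional Schrödinger-Coulomb equation uses the coordinate-space potential obtained by Fourier transforming the momentum-space Coulomb kernel 4πα/q² in D dimensions:

    \[
    -\frac{\nabla_D^{2}}{2m}\,\psi(\mathbf{r}) + V(r)\,\psi(\mathbf{r})
        = E\,\psi(\mathbf{r}),
    \qquad
    V(r) = -\,\frac{\alpha\,\Gamma\!\left(\tfrac12-\epsilon\right)}
                  {\pi^{1/2-\epsilon}\; r^{\,1-2\epsilon}},
    \qquad D = 3 - 2\epsilon,
    \]
    % which reduces to the familiar Coulomb potential -alpha/r as epsilon -> 0,
    % since Gamma(1/2) = sqrt(pi).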
Implementing forward recovery using checkpointing in distributed systems
NASA Technical Reports Server (NTRS)
Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.
1991-01-01
The paper describes the implementation of a forward recovery scheme using checkpoints and replicated tasks. The implementation is based on the concept of lookahead execution and rollback validation. In the experiment, two tasks are selected for normal execution and one for rollback validation. It is shown that the recovery strategy achieves nearly error-free execution time with an average redundancy lower than that of TMR (triple modular redundancy).
NASA Astrophysics Data System (ADS)
Heh, Peter
The current study examined the validation and alignment of the PASA-Science by determining whether the alternate science assessment anchors linked to the regular education science anchors, whether the PASA-Science assessment items are science, whether the PASA-Science assessment items linked to the alternate science eligible content, and what PASA-Science assessment content was considered important by parents and teachers. Special education and science education university faculty determined that all but one alternate science assessment anchor linked to the regular science assessment anchors. Special education and science education teachers determined that the PASA-Science assessment items are indeed science and are linked to the alternate science eligible content. Finally, parents and teachers indicated that the most important science content assessed in the PASA-Science involved safety and independence.
NASA Astrophysics Data System (ADS)
Ori, Amos
2016-01-01
Almheiri, Marolf, Polchinski, and Sully pointed out that for a sufficiently old black hole (BH), the set of assumptions known as the complementarity postulates appears to be inconsistent with the assumption of local regularity at the horizon. They concluded that the horizon of an old BH is likely to be the locus of local irregularity, a "firewall". Here I point out that if one adopts a different assumption, namely that semiclassical physics holds throughout its anticipated domain of validity, then the inconsistency is avoided and the horizon retains its regularity. In this alternative viewpoint, the vast portion of the original BH information remains trapped inside the BH throughout the semiclassical domain of evaporation, and possibly leaks out later on. This appears to be an inevitable outcome of semiclassical gravity (if assumed to apply throughout its anticipated domain of validity).
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Li, Xiong
2015-11-01
E-health care systems employ IT infrastructure to maximize the utilization of health care resources and to provide flexible opportunities to remote patients. Transmission of medical data over public networks is therefore necessary in a health care system, and patient authentication together with secure data transmission is a critical issue. Although several user authentication schemes for accessing remote services are available, security analyses show that none of them is free from relevant security attacks. We reviewed Das et al.'s scheme and demonstrated that it lacks proper protection against several security attacks: it fails to provide user anonymity and is vulnerable to the off-line password guessing attack, the smart card theft attack, the user impersonation attack, the server impersonation attack, and the session key disclosure attack. To overcome these security pitfalls, this paper proposes an anonymity-preserving remote patient authentication scheme usable in E-health care systems. We validated the security of the proposed scheme using BAN logic, which ensures secure mutual authentication and session key agreement. We also present experimental results obtained with the AVISPA software, which confirm that our scheme is secure under the OFMC and CL-AtSe models. Moreover, resilience against relevant security attacks has been proved through both formal and informal security analysis. A performance analysis and comparison with other schemes show that the proposed scheme overcomes the security drawbacks of Das et al.'s scheme and additionally achieves extra security requirements.
Experimental study on direct adaptive control of a PUMA 560 industrial robot
NASA Technical Reports Server (NTRS)
Seraji, H.; Lee, T.; Delpech, M.
1990-01-01
The implementation and experimental validation of a direct adaptive control scheme on a PUMA 560 industrial robot is discussed. The design theory for direct adaptive control of manipulators is outlined, and the test facility and software are described. Results are presented from experiments on the simultaneous control of all six joint angles and on the control of the end-effector position and orientation of the robot. Possible applications of the direct adaptive control scheme are also considered.
André, Nuno Sequeira; Habel, Kai; Louchet, Hadrien; Richter, André
2013-11-04
We report experimental validations of an adaptive second-order Volterra equalization scheme for cost-effective IMDD OFDM systems. The equalization scheme was applied to both uplink and downlink transmission. The downlink settings were optimized for maximum bit rate, achieving 34 Gb/s over 10 km of SSMF using an EML with 10 GHz bandwidth. For the uplink, maximum reach was optimized, achieving 14 Gb/s using a low-cost DML with 2.5 GHz bandwidth.
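As a rough baseband illustration of a second-order Volterra equalizer with adaptive (LMS) training, consider the sketch below. It is a generic construction with invented parameters and a memoryless toy channel, not the authors' transceiver DSP.

    import numpy as np

    def volterra2_lms(x, d, memory=3, mu=0.01):
        """Adaptive 2nd-order Volterra equalizer trained with LMS.
        x: received samples, d: desired (training) symbols."""
        M = memory
        # Kernel support: M linear taps plus all unordered quadratic pairs.
        pairs = [(i, j) for i in range(M) for j in range(i, M)]
        w = np.zeros(M + len(pairs))
        y = np.zeros(len(x))
        for n in range(M - 1, len(x)):
            u = x[n - M + 1 : n + 1][::-1]                  # delay line
            phi = np.concatenate([u, [u[i] * u[j] for i, j in pairs]])
            y[n] = w @ phi
            w += mu * (d[n] - y[n]) * phi                   # LMS update
        return y, w

    # Toy channel with a quadratic (even-order) distortion.
    rng = np.random.default_rng(1)
    s = rng.choice([-1.0, 1.0], size=5000)
    r = s + 0.2 * s**2 + 0.05 * rng.normal(size=s.size)
    y, w = volterra2_lms(r, s)
    print("post-equalization MSE:", np.mean((y[100:] - s[100:]) ** 2))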
A multidirectional cloak for visible light
NASA Astrophysics Data System (ADS)
Chen, Zhen Sheng; Lei Mei, Zhong; Jiang, Wei Xiang; Cui, Tie Jun
2018-04-01
A new macroscopic multidirectional cloaking scheme for extraordinary rays, based on controlling the optical axes of uniaxial crystals, is proposed. It eliminates the complicated material constraints and, after simplification, can also be used to design a cloaking device for ordinary rays or isotropic cloaks. Numerical ray tracing and full-wave simulation results validate our design. Moreover, if the uniaxial crystals are replaced by other materials whose optical axes can be modulated, such as liquid crystals, this scheme could be used to fabricate direction-tunable cloaks.
Lo, Brian K C; Minaker, Leia; Chan, Alicia N T; Hrgetic, Jessica; Mah, Catherine L
2016-03-01
To adapt and validate a survey instrument to assess the nutrition environment of grab-and-go establishments at a university campus. A version of the Nutrition Environment Measures Survey for grab-and-go establishments (NEMS-GG) was adapted from existing NEMS instruments and tested for reliability and validity through a cross-sectional assessment of the grab-and-go establishments at the University of Toronto. Product availability, price, and presence of nutrition information were evaluated. Cohen's kappa coefficient and intra-class correlation coefficients (ICC) were computed to assess inter-rater reliability, and construct validity was assessed using the known-groups comparison method (via store scores). Fifteen grab-and-go establishments were assessed. Inter-rater reliability was high, with almost perfect agreement for availability (mean κ = 0.995) and store scores (ICC = 0.999). The tool demonstrated good face and construct validity. About half of the venues carried fruit and vegetables (46.7% and 53.3%, respectively). Regular and healthier entrée items were generally the same price, and healthier grains were cheaper than regular options. Six establishments displayed nutrition information. Establishments operated by the university's Food Services consistently scored the highest across all food premise types for nutrition signage, availability, and cost of healthier options. Health promotion strategies are needed to address the availability and variety of healthier grab-and-go options in university settings.
Study of thermosiphon cooling scheme for the production solenoid of the Mu2e experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhanaraj, N.; Kashikhin, V.; Peterson, T.
2014-01-29
A thermosiphon cooling scheme is envisioned for the Production Solenoid of the Mu2e experiment at Fermi National Accelerator Laboratory. The thermosiphon cooling is achieved by indirect cooling with helium at 4.7 K, with the siphon tubes welded to the solenoid outer structure. The anticipated heat loads in the solenoid are presented along with the cooling scheme design. A thermal model built in ANSYS to simulate the temperature gradient is presented; the thermal analysis also makes provisions for including the heat load generated in the coils and structures by the secondary radiation simulated using the MARS15 code. The impact of the heat loads from the supports on the solenoid cooling is studied. The thermosiphon cooling scheme is also validated using pertinent correlations to study flow reversals and the cooling regime.
Yoshida, Hiroaki; Kobayashi, Takayuki; Hayashi, Hidemitsu; Kinjo, Tomoyuki; Washizu, Hitoshi; Fukuzawa, Kenji
2014-07-01
A boundary scheme in the lattice Boltzmann method (LBM) for the convection-diffusion equation, which correctly realizes the internal boundary condition at the interface between two phases with different transport properties, is presented. The difficulty in satisfying the continuity of flux at the interface in a transient analysis, which is inherent in the conventional LBM, is overcome by modifying the collision operator and the streaming process of the LBM. An asymptotic analysis of the scheme is carried out in order to clarify the role played by the adjustable parameters involved in the scheme. As a result, the internal boundary condition is shown to be satisfied with second-order accuracy with respect to the lattice interval, if we assign appropriate values to the adjustable parameters. In addition, two specific problems are numerically analyzed, and comparison with the analytical solutions of the problems numerically validates the proposed scheme.
Mafole, Prosper; Aritsugi, Masayoshi
2016-01-01
The backoff-free fragment retransmission (BFFR) scheme enhances the performance of legacy MAC-layer fragmentation by eliminating contention overhead, namely the backoff executed before a retransmission attempt when a fragment transmission fails within a fragment burst. This paper provides a mathematical analysis of BFFR energy efficiency and further assesses, by means of simulations, the energy efficiency, throughput, and delay obtained when BFFR is used. The validity of the new scheme is evaluated in different scenarios, namely constant-bit-rate traffic, realistic bursty Internet traffic, node mobility, and rigid and elastic flows, as well as their combinations at different traffic loads. We also evaluate and discuss the impact of BFFR on MAC fairness when the number of nodes is varied from 4 to 10. It is shown that BFFR has advantages over the legacy MAC fragmentation scheme in all scenarios.
NASA Astrophysics Data System (ADS)
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional-order total variation (FOTV) based model, with three different weights in the fractional-order derivative definition, for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly nonlinear partial differential equation (PDE), is obtained by minimizing the energy functional for image restoration. Two numerical schemes are used: an iterative scheme based on the dual theory, and a majorization-minimization algorithm (MMA). To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model, tuned by trial and error. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model, both in visual improvement and in the increase of the peak signal-to-noise ratio, compared with the corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology performs slightly better than the iterative scheme.
Quantum Watermarking Scheme Based on INEQR
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou
2018-04-01
Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the size requirement of the carrier image. Secondly, swap and XOR operations are applied to the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of simple encryption. Thirdly, both the watermark embedding and extraction operations are described, using the key image, the swap operation, and the LSB algorithm. When embedding is performed, the binary key image is changed, which indicates that the watermark has been embedded; before extraction, the key's state must be detected, and the extraction operation is carried out only when the key's state is |1>. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
A Novel Quantum Image Steganography Scheme Based on LSB
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Luo, Jia; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen
2018-06-01
Based on the NEQR representation of quantum images and the least significant bit (LSB) scheme, a novel quantum image steganography scheme is proposed. The sizes of the cover image and the original information image are assumed to be 4n × 4n and n × n, respectively. Firstly, the bit-plane scrambling method is used to scramble the original information image. Then the scrambled information image is expanded to the same size as the cover image using a key known only to the operator, and the expanded image is scrambled into a meaningless image with the Arnold scrambling. The embedding and extracting procedures are carried out under keys K1 and K2, which are under the control of the operator. For validation of the presented scheme, the peak signal-to-noise ratio (PSNR), the capacity, the security of the images, and the circuit complexity are analyzed.
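The classical analogue of the LSB embedding step (the quantum circuit itself is beyond a short sketch) can be written as follows; array sizes and names are illustrative, and the scrambling and key steps of the actual scheme are omitted.

    import numpy as np

    def lsb_embed(cover, secret_bits):
        """Hide one bit per pixel in the least significant bit of the cover."""
        flat = cover.flatten()
        assert secret_bits.size <= flat.size
        stego = flat.copy()
        stego[: secret_bits.size] &= 0xFE          # clear the LSB
        stego[: secret_bits.size] |= secret_bits   # write the secret bit
        return stego.reshape(cover.shape)

    def lsb_extract(stego, n_bits):
        return stego.flatten()[:n_bits] & 1

    # Toy usage: a 4n x 4n cover hiding an n x n binary image (n = 4).
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
    secret = rng.integers(0, 2, size=16, dtype=np.uint8)   # n*n = 16 bits
    stego = lsb_embed(cover, secret)
    assert np.array_equal(lsb_extract(stego, 16), secret)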
Robust Stabilization of T-S Fuzzy Stochastic Descriptor Systems via Integral Sliding Modes.
Li, Jinghao; Zhang, Qingling; Yan, Xing-Gang; Spurgeon, Sarah K
2017-09-19
This paper addresses the robust stabilization problem for T-S fuzzy stochastic descriptor systems using an integral sliding mode control paradigm. A classical integral sliding mode control scheme and a nonparallel distributed compensation (Non-PDC) integral sliding mode control scheme are presented. It is shown that two restrictive assumptions previously adopted when developing sliding mode controllers for Takagi-Sugeno (T-S) fuzzy stochastic systems are not required within the proposed framework. A unified framework for sliding mode control of T-S fuzzy systems is formulated, and the proposed Non-PDC integral sliding mode control scheme encompasses existing schemes when the previously imposed assumptions hold. The stability of the sliding motion is analyzed, and the sliding mode controller is parameterized in terms of the solutions of a set of linear matrix inequalities, which facilitates the design. The methodology is applied to an inverted pendulum model to validate the effectiveness of the presented results.