Quantum-classical correspondence in the vicinity of periodic orbits
NASA Astrophysics Data System (ADS)
Kumari, Meenu; Ghose, Shohini
2018-05-01
Quantum-classical correspondence in chaotic systems is a long-standing problem. We describe a method to quantify Bohr's correspondence principle and calculate the size of quantum numbers for which we can expect to observe quantum-classical correspondence near periodic orbits of Floquet systems. Our method shows how the stability of classical periodic orbits affects quantum dynamics. We demonstrate our method by analyzing quantum-classical correspondence in the quantum kicked top (QKT), which exhibits both regular and chaotic behavior. We use our correspondence conditions to identify signatures of classical bifurcations even in a deep quantum regime. Our method can be used to explain the breakdown of quantum-classical correspondence in chaotic systems.
NASA Astrophysics Data System (ADS)
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
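To make the three variational formulations concrete, here is a minimal NumPy sketch on a toy linear inverse problem. It is not the authors' analysis: the operator A, the noise level, and the strategy of realizing the Ivanov constraint by walking along a Tikhonov path are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, b, alpha):
    # Tikhonov: min ||A x - b||^2 + alpha ||x||^2  (closed form via normal equations)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def tikhonov_discrepancy(A, b, delta, alphas, tau=1.1):
    # Discrepancy principle: pick the largest alpha with residual <= tau * delta
    for alpha in sorted(alphas, reverse=True):
        x = tikhonov(A, b, alpha)
        if np.linalg.norm(A @ x - b) <= tau * delta:
            return x, alpha
    return x, alpha  # fall back to the smallest alpha tried

def ivanov(A, b, rho, alphas):
    # Ivanov ("quasi-solutions"): min ||A x - b||  s.t.  ||x|| <= rho.
    # Toy realization: increase alpha along the Tikhonov path until ||x|| <= rho.
    for alpha in sorted(alphas):
        x = tikhonov(A, b, alpha)
        if np.linalg.norm(x) <= rho:
            return x, alpha
    return x, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 80))            # underdetermined toy operator
    x_true = np.zeros(80); x_true[:5] = 1.0
    delta = 1e-2
    b = A @ x_true + delta * rng.standard_normal(50)
    x_tik, a_tik = tikhonov_discrepancy(A, b, delta * np.sqrt(50), np.logspace(-6, 2, 50))
    x_iva, a_iva = ivanov(A, b, rho=np.linalg.norm(x_true), alphas=np.logspace(-6, 2, 50))
    print(a_tik, a_iva)
```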
Transient chaos - a resolution of breakdown of quantum-classical correspondence in optomechanics.
Wang, Guanglei; Lai, Ying-Cheng; Grebogi, Celso
2016-10-17
Recently, the phenomenon of quantum-classical correspondence breakdown was uncovered in optomechanics, where in the classical regime the system exhibits chaos but in the corresponding quantum regime the motion is regular - there appears to be no signature of classical chaos whatsoever in the corresponding quantum system, generating a paradox. We find that transient chaos, besides being a physically meaningful phenomenon by itself, provides a resolution. Using the method of quantum state diffusion to simulate the system dynamics subject to continuous homodyne detection, we uncover transient chaos associated with quantum trajectories. The transient behavior is consistent with chaos in the classical limit, while the long term evolution of the quantum system is regular. Transient chaos thus serves as a bridge for the quantum-classical transition (QCT). Strikingly, as the system transitions from the quantum to the classical regime, the average chaotic transient lifetime increases dramatically (faster than the Ehrenfest time characterizing the QCT for isolated quantum systems). We develop a physical theory to explain the scaling law.
Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction
Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.
2016-01-01
X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity-seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real high-resolution dynamic X-ray microtomography (XMT) data. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding.
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
2015-03-31
… the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means for … regularity due to the boundary conditions. … In the present work, we develop a high-order numerical method for solving linear elliptic PDEs with well-behaved variable coefficients on …
Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model
Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz
2014-01-01
Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels by the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from the estimates of intermediate variables, and do not yield an optimum estimate of oxygen tension values, due to their nonlinear dependence on the ratio of intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits available knowledge in studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance both quantitatively and qualitatively is demonstrated.
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference between the proved theorem and its classical analogue of the same name is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
Zhou, Hua; Li, Lexin
2014-01-01
Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples.
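The following is a minimal sketch of nuclear-norm (spectral) regularized matrix regression solved by proximal gradient with singular-value soft-thresholding. It is not the authors' scalable algorithm or degrees-of-freedom formula; the toy data-generating model, step size, and regularization weight are assumptions.

```python
import numpy as np

def svt(B, tau):
    # Singular-value soft-thresholding: proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def matrix_regression(X, y, lam, n_iter=500):
    """Proximal gradient for  min_B 0.5 * sum_i (y_i - <X_i, B>)^2 + lam * ||B||_*.
    X: (n, p, q) array of matrix covariates, y: (n,) responses."""
    n, p, q = X.shape
    Xmat = X.reshape(n, p * q)
    L = np.linalg.norm(Xmat, 2) ** 2            # Lipschitz constant of the smooth part
    t = 1.0 / L
    B = np.zeros((p, q))
    for _ in range(n_iter):
        r = Xmat @ B.ravel() - y                # residuals
        grad = (Xmat.T @ r).reshape(p, q)       # gradient of the quadratic loss
        B = svt(B - t * grad, t * lam)          # proximal (spectral shrinkage) step
    return B

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p, q = 200, 10, 10
    B_true = np.outer(rng.standard_normal(p), rng.standard_normal(q))  # rank-1 signal
    X = rng.standard_normal((n, p, q))
    y = X.reshape(n, -1) @ B_true.ravel() + 0.1 * rng.standard_normal(n)
    B_hat = matrix_regression(X, y, lam=1.0)
    print(np.linalg.matrix_rank(np.round(B_hat, 2)))   # recovers a low-rank estimate
```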
High-Speed Imaging Analysis of Register Transitions in Classically and Jazz-Trained Male Voices.
Dippold, Sebastian; Voigt, Daniel; Richter, Bernhard; Echternach, Matthias
2015-01-01
Little data are available concerning register functions in different styles of singing such as classically or jazz-trained voices. Differences between registers seem to be much more audible in jazz singing than classical singing, and so we hypothesized that classically trained singers exhibit a smoother register transition, stemming from more regular vocal fold oscillation patterns. High-speed digital imaging (HSDI) was used for 19 male singers (10 jazz-trained singers, 9 classically trained) who performed a glissando from modal to falsetto register across the register transition. Vocal fold oscillation patterns were analyzed in terms of different parameters of regularity such as relative average perturbation (RAP), correlation dimension (D2) and shimmer. HSDI observations showed more regular vocal fold oscillation patterns during the register transition for the classically trained singers. Additionally, the RAP and D2 values were generally lower and more consistent for the classically trained singers compared to the jazz singers. However, intergroup comparisons showed no statistically significant differences. Some of our results may support the hypothesis that classically trained singers exhibit a smoother register transition from modal to falsetto register.
A New Challenge for Compression Algorithms: Genetic Sequences.
ERIC Educational Resources Information Center
Grumbach, Stephane; Tahi, Fariza
1994-01-01
Analyzes the properties of genetic sequences that cause the failure of classical algorithms used for data compression. A lossless algorithm, which compresses the information contained in DNA and RNA sequences by detecting regularities such as palindromes, is presented. This algorithm combines substitutional and statistical methods and appears to…
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
Quantum localization for a kicked rotor with accelerator mode islands.
Iomin, A; Fishman, S; Zaslavsky, G M
2002-03-01
Dynamical localization of classical superdiffusion for the quantum kicked rotor is studied in the semiclassical limit. Both classical and quantum dynamics of the system become more complicated under the conditions of mixed phase space with accelerator mode islands. Recently, long time quantum flights due to the accelerator mode islands have been found. By exploration of their dynamics, it is shown here that the classical-quantum duality of the flights leads to their localization. The classical mechanism of superdiffusion is due to accelerator mode dynamics, while quantum tunneling suppresses the superdiffusion and leads to localization of the wave function. Coupling of the regular dynamics inside the accelerator mode island structures to the dynamics in the chaotic sea is shown to increase the localization length. A numerical procedure and an analytical method are developed to obtain an estimate of the localization length which, as is shown, has an exponentially large scaling with the dimensionless Planck constant h̃ < 1 in the semiclassical limit. Conditions for the validity of the developed method are specified.
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems.
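As a generic illustration of the edge-preserving effect of TV regularization (not the ADAL solver of the paper), here is a small smoothed-TV denoising sketch solved by plain gradient descent; the smoothing parameter, step size, and phantom are assumptions.

```python
import numpy as np

def tv_denoise(f, lam=0.15, eps=0.1, n_iter=400, tau=0.1):
    """Smoothed-TV denoising: min_u 0.5||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2),
    solved by gradient descent with periodic boundaries."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)                 # gradient step on the energy
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0    # piecewise-constant phantom
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = tv_denoise(noisy)
    print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())  # True
```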
Geodesic active fields--a geometric framework for image registration.
Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe
2011-05-01
In this paper, we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is coupled with the regularization term (the length functional) via multiplication as well. As a matter of fact, our proposed geometric model is actually the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions, the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework involves important contributions. First, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e., the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.
Jiang, Jingfeng; Hall, Timothy J
2011-04-01
A hybrid approach that inherits both the robustness of the regularized motion tracking approach and the efficiency of the predictive search approach is reported. The basic idea is to use regularized speckle tracking to obtain high-quality seeds in an explorative search that can be used in the subsequent intelligent predictive search. The performance of the hybrid speckle-tracking algorithm was compared with three published speckle-tracking methods using in vivo breast lesion data. We found that the hybrid algorithm provided higher displacement quality metric values, lower root mean squared errors compared with a locally smoothed displacement field, and higher improvement ratios compared with the classic block-matching algorithm. On the basis of these comparisons, we concluded that the hybrid method can further enhance the accuracy of speckle tracking compared with its real-time counterparts, at the expense of slightly higher computational demands.
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
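To illustrate how a smoothness constraint enters such an inversion as a regularization matrix R, here is a minimal sketch that stacks a discrete Laplacian as pseudo-observations in a least-squares solve. It shows the classic Laplacian smoothness constraint (LSC) only; the adaptive variant (ASC) and the Helmert variance-component estimation of the paper are not reproduced, and the toy Green's matrix and slip profile are assumptions.

```python
import numpy as np

def laplacian_1d(n):
    # Second-difference (discrete Laplacian) regularization matrix R
    R = np.zeros((n - 2, n))
    for i in range(n - 2):
        R[i, i], R[i, i + 1], R[i, i + 2] = 1.0, -2.0, 1.0
    return R

def smoothness_regularized_lsq(G, d, alpha):
    """Solve min_m ||G m - d||^2 + alpha^2 ||R m||^2 by augmenting the system
    with the smoothness constraint as extra observations."""
    n = G.shape[1]
    R = laplacian_1d(n)
    A = np.vstack([G, alpha * R])
    b = np.concatenate([d, np.zeros(R.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 40
    m_true = np.sin(np.linspace(0, np.pi, n)) ** 2      # smooth "slip" profile
    G = rng.standard_normal((25, n))                    # toy, underdetermined Green's matrix
    d = G @ m_true + 0.05 * rng.standard_normal(25)
    m_hat = smoothness_regularized_lsq(G, d, alpha=2.0)
    print(np.linalg.norm(m_hat - m_true))
```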
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online.
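A minimal sketch of the two-stage idea with an L1 penalty in both stages, using scikit-learn's LassoCV; the data-generating model is an assumption, and the paper's concave penalties, theory, and tuning strategy are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def two_stage_regularized_iv(Z, X, y):
    """Sparse two-stage least squares:
    stage 1 - predict each endogenous covariate from the instruments with a Lasso;
    stage 2 - regress the outcome on the fitted covariates with a Lasso."""
    X_hat = np.column_stack([
        LassoCV(cv=5).fit(Z, X[:, j]).predict(Z) for j in range(X.shape[1])
    ])
    stage2 = LassoCV(cv=5).fit(X_hat, y)
    return stage2.coef_

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n, p, q = 300, 20, 50                       # samples, covariates, instruments
    Z = rng.standard_normal((n, q))
    gamma = np.zeros((q, p)); gamma[:5, :5] = rng.standard_normal((5, 5))
    X = Z @ gamma + rng.standard_normal((n, p))       # covariates driven by a few instruments
    beta = np.zeros(p); beta[:3] = [1.0, -2.0, 0.5]   # sparse true effects
    y = X @ beta + rng.standard_normal(n)
    print(np.round(two_stage_regularized_iv(Z, X, y)[:5], 2))
```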
Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.
Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C
2015-03-01
The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and the smooth part of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons with a bilateral filter. In contrast, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to the classical Gaussian filtering. By proper adjustment of two tunable parameters one can obtain more realistic deformations in the case of discontinuities. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified in particular for abdominal data sets where discontinuities often appear due to sliding organ motion.
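To show how intensity-guided bilateral smoothing of a displacement field preserves discontinuities, here is a naive (unoptimized) sketch for one displacement component; the window size and the two sigma parameters are illustrative assumptions, and the full accelerated Demons pipeline is not reproduced.

```python
import numpy as np

def bilateral_filter_field(u, img, sigma_s=2.0, sigma_r=0.1, half=3):
    """Smooth one displacement component u with weights combining spatial distance
    and intensity difference of the guiding image img, so smoothing stops at
    intensity edges (e.g. sliding-organ interfaces)."""
    H, W = u.shape
    out = np.zeros_like(u)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_u = np.pad(u, half, mode='edge')
    pad_i = np.pad(img, half, mode='edge')
    for i in range(H):
        for j in range(W):
            patch_u = pad_u[i:i + 2 * half + 1, j:j + 2 * half + 1]
            patch_i = pad_i[i:i + 2 * half + 1, j:j + 2 * half + 1]
            w_range = np.exp(-((patch_i - img[i, j])**2) / (2 * sigma_r**2))
            w = w_spatial * w_range
            out[i, j] = np.sum(w * patch_u) / np.sum(w)
    return out

if __name__ == "__main__":
    img = np.zeros((32, 32)); img[:, 16:] = 1.0        # two "organs" with a sharp interface
    rng = np.random.default_rng(5)
    u = np.where(img > 0.5, 1.0, -1.0) + 0.1 * rng.standard_normal((32, 32))
    u_smooth = bilateral_filter_field(u, img)
    print(np.abs(u_smooth[:, 15] - u_smooth[:, 16]).mean())  # jump at the edge is preserved
```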
PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging
NASA Astrophysics Data System (ADS)
Naghibzadeh, Shahrzad; van der Veen, Alle-Jan
2018-06-01
Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces in these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced for improving the penalty function (||x||_2^2) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and the feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
ERIC Educational Resources Information Center
Sarfo, Frederick Kwaku; Eshun, Grace; Elen, Jan; Adentwi, Kobina Impraim
2014-01-01
Introduction: In this study, the effectiveness of two different interventions was investigated. The effects of a concrete abstract intervention and a regular method of teaching intervention were compared. Both interventions were designed in line with the specifications of classical principles of instructional design for learning mathematics in the…
Image super-resolution via adaptive filtering and regularization
NASA Astrophysics Data System (ADS)
Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming
2014-11-01
Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes the image by total variation (TV) and/or a wavelet or other transform, which introduces some artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter first removes the spatial correlation among pixels, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered in the regularization term. Concretely, by transforming the original model, the SR problem can be solved through two alternating iterative sub-problems. Before each iteration, the adaptive filter should be updated to estimate the initial HF. A high-quality HF part and HR image can be obtained by solving the first and second sub-problem, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.
The Classical Heritage in America: A Curriculum Resource. Tentative Edition.
ERIC Educational Resources Information Center
Philadelphia School District, PA. Office of Curriculum and Instruction.
This curriculum resource is intended to help make students of Latin, Greek and other subjects more aware of America's classical heritage. It is designed to be used selectively by teachers to enrich the regular curriculum in classical languages in elementary and secondary schools. In providing background information for the teacher and suggestions…
Retrieving cloudy atmosphere parameters from RPG-HATPRO radiometer data
NASA Astrophysics Data System (ADS)
Kostsov, V. S.
2015-03-01
An algorithm for simultaneously determining both tropospheric temperature and humidity profiles and cloud liquid water content from ground-based measurements of microwave radiation is presented. A special feature of this algorithm is that it combines different types of measurements and different a priori information on the sought parameters. The features of its use in processing RPG-HATPRO radiometer data obtained in the course of atmospheric remote sensing experiments carried out by specialists from the Faculty of Physics of St. Petersburg State University are discussed. The results of a comparison of both temperature and humidity profiles obtained using a ground-based microwave remote sensing method with those obtained from radiosonde data are analyzed. It is shown that this combined algorithm is comparable (in accuracy) to the classical method of statistical regularization in determining temperature profiles; however, this algorithm demonstrates better accuracy (when compared to the method of statistical regularization) in determining humidity profiles.
Construction of a new regular LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng
2013-05-01
A novel construction method of the check matrix for the regular low density parity check (LDPC) code is proposed. The novel regular systematically constructed Gallager (SCG)-LDPC(3969,3720) code with the code rate of 93.7% and the redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can respectively be improved by about 1.93 dB and 0.98 dB at the bit error rate (BER) of 10^-8, compared with those of the classic RS(255,239) code in ITU-T G.975 recommendation and the LDPC(32640,30592) code in ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed novel regular SCG-LDPC(3969,3720) code has excellent performance, and is more suitable for high-speed long-haul optical transmission systems.
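For readers unfamiliar with regular LDPC parity-check matrices, here is a generic Gallager-style construction sketch (fixed column weight and row weight built by stacking column-permuted blocks). The tiny dimensions are illustrative assumptions and this is not the SCG-LDPC(3969,3720) construction of the paper.

```python
import numpy as np

def gallager_ldpc(n, wc, wr, seed=0):
    """Gallager-style regular LDPC parity-check matrix H with column weight wc
    and row weight wr (requires n divisible by wr)."""
    assert n % wr == 0
    rng = np.random.default_rng(seed)
    rows = n // wr
    H1 = np.zeros((rows, n), dtype=np.uint8)
    for i in range(rows):
        H1[i, i * wr:(i + 1) * wr] = 1          # first block: consecutive groups of wr ones
    blocks = [H1] + [H1[:, rng.permutation(n)] for _ in range(wc - 1)]
    return np.vstack(blocks)

if __name__ == "__main__":
    H = gallager_ldpc(n=24, wc=3, wr=6)
    print(H.shape, H.sum(axis=0).min(), H.sum(axis=1).min())   # (12, 24) 3 6
```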
ERIC Educational Resources Information Center
Notes Plus, 1984
1984-01-01
Three installments of "Classic of the Month," a regular feature of the National Council of Teachers of English publication, "Notes Plus," are presented in this compilation. Each installment of this feature is intended to provide teaching ideas related to a "classic" novel. The first article offers a variety of…
Regular black holes from semi-classical down to Planckian size
NASA Astrophysics Data System (ADS)
Spallucci, Euro; Smailagic, Anais
In this paper, we review various models of curvature-singularity-free black holes (BHs). In the first part of the review, we describe semi-classical solutions of the Einstein equations which, however, contain a "quantum" input through the matter source. We start by reviewing the early model by Bardeen where the metric is regularized by hand through a short-distance cutoff, which is justified in terms of nonlinear electro-dynamical effects. This toy model is useful to point out the common features shared by all regular semi-classical black holes. Then, we solve the Einstein equations with a Gaussian source encoding the quantum spread of an elementary particle. We identify the a priori arbitrary Gaussian width with the Compton wavelength of the quantum particle. This Compton-Gauss model leads to the estimate of a terminal density that a gravitationally collapsed object can achieve. We identify this density to be the Planck density, and reformulate the Gaussian model assuming this as its peak density. All these models are physically reliable as long as the BH mass is large compared with the Planck mass. In the truly Planckian regime, the semi-classical approximation breaks down. In this case, a fully quantum BH description is needed. In the last part of this paper, we propose a nongeometrical quantum model of Planckian BHs implementing the Holographic Principle and realizing the "classicalization" scenario recently introduced by Dvali and collaborators. The classical relation between the mass and radius of the BH emerges only in the classical limit, far away from the Planck scale.
Change detection of polarimetric SAR images based on the KummerU Distribution
NASA Astrophysics Data System (ADS)
Chen, Quan; Zou, Pengfei; Li, Zhen; Zhang, Ping
2014-11-01
In the field of PolSAR image segmentation, change detection and classification, the classical Wishart distribution has been used for a long time, but it is especially suited to low-resolution SAR images, because in traditional sensors only a small number of scatterers are present in each resolution cell. With the improvement of SAR systems in recent years, the classical statistical models should therefore be reconsidered for the high-resolution and polarimetric information contained in the images acquired by these advanced systems. In this study, a SAR image segmentation algorithm based on the level-set method with distance regularized level-set evolution (DRLSE) is applied to Envisat/ASAR single-polarization data and Radarsat-2 polarimetric images, respectively. The KummerU heterogeneous clutter model is used in the latter to overcome the homogeneity hypothesis at the high-resolution cell level. An enhanced distance regularized level-set evolution (DRLSE-E) is also applied in the latter, to ensure accurate computation and stable level-set evolution. Finally, change detection based on four polarimetric Radarsat-2 time-series images is carried out in the Genhe area of the Inner Mongolia Autonomous Region, northeastern China, where a heavy flood disaster occurred during the summer of 2013; the results show that the recommended segmentation method can detect the change of the watershed effectively.
Sparse regularization for force identification using dictionaries
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis space, we develop a general sparse regularization method based on minimizing l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, other three sparse dictionaries including Db6 wavelets, Sym4 wavelets and cubic B-spline functions can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
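The l1-regularized model described here can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch: given a transfer matrix and a dictionary, a sparse coefficient vector is recovered by minimizing 0.5||A x - b||^2 + lam||x||_1. The toy transfer matrix, Dirac dictionary, impact locations, and regularization weight are assumptions; the SpaRSA solver of the paper is not reproduced, only the same sparse model.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Iterative soft-thresholding for  min_x 0.5||A x - b||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    H = rng.standard_normal((300, 100))         # toy transfer (impulse-response) matrix
    D = np.eye(100)                             # Dirac dictionary: force sparse in time
    c_true = np.zeros(100); c_true[[30, 70]] = [5.0, -3.0]   # two impact events
    y = H @ (D @ c_true) + 0.01 * rng.standard_normal(300)
    c_hat = ista(H @ D, y, lam=1.0)
    print(np.flatnonzero(np.round(c_hat, 1)))   # recovers the impact instants [30 70]
```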
[Application of classic formulae in treatment of hypertension].
Xiong, Xing-Jiang; Wang, Jie
2013-06-01
Classic formulae have broad prospects in the treatment of hypertension, with such advantages as symptom relief, improvement of body constitution and addressing uncontrollable blood pressure factors. The paper systematically reviews the application of classic formulae in pre-hypertension, different stages of hypertension, special types of hypertension, secondary hypertension, and uncontrollable blood pressure factors. It is believed that classic formulae are effective provided that their objective indications, modern pathogenesis and regularity of evolution are understood in depth.
Relativistic quantum Darwinism in Dirac fermion and graphene systems
NASA Astrophysics Data System (ADS)
Ni, Xuan; Huang, Liang; Lai, Ying-Cheng; Pecora, Louis
2012-02-01
We solve the Dirac equation in two spatial dimensions in the setting of resonant tunneling, where the system consists of two symmetric cavities connected by a finite potential barrier. The shape of the cavities can be chosen to yield both regular and chaotic dynamics in the classical limit. We find that certain pointer states about classical periodic orbits can exist, which are signatures of relativistic quantum Darwinism (RQD). These localized states suppress quantum tunneling, and the effect becomes less severe as the underlying classical dynamics in the cavity is chaotic, leading to regularization of quantum tunneling. Qualitatively similar phenomena have been observed in graphene. A physical theory is developed to explain relativistic quantum Darwinism and its effects based on the spectrum of complex eigenenergies of the non-Hermitian Hamiltonian describing the open cavity system.
Scattering theory for graphs isomorphic to a regular tree at infinity
NASA Astrophysics Data System (ADS)
Colin de Verdière, Yves; Truc, Françoise
2013-06-01
We describe the spectral theory of the adjacency operator of a graph which is isomorphic to a regular tree at infinity. Using some combinatorics, we reduce the problem to a scattering problem for a finite rank perturbation of the adjacency operator on a regular tree. We develop this scattering theory using the classical recipes for Schrödinger operators in Euclidian spaces.
k-Cosymplectic Classical Field Theories: Tulczyjew and Skinner-Rusk Formulations
NASA Astrophysics Data System (ADS)
Rey, Angel M.; Román-Roy, Narciso; Salgado, Modesto; Vilariño, Silvia
2012-06-01
The k-cosymplectic Lagrangian and Hamiltonian formalisms of first-order classical field theories are reviewed and completed. In particular, they are stated for singular and almost-regular systems. Subsequently, several alternative formulations for k-cosymplectic first-order field theories are developed: First, generalizing the construction of Tulczyjew for mechanics, we give a new interpretation of the classical field equations. Second, the Lagrangian and Hamiltonian formalisms are unified by giving an extension of the Skinner-Rusk formulation on classical mechanics.
Ward identity and basis tensor gauge theory at one loop
NASA Astrophysics Data System (ADS)
Chung, Daniel J. H.
2018-06-01
Basis tensor gauge theory (BTGT) is a reformulation of ordinary gauge theory that is an analog of the vierbein formulation of gravity and is related to the Wilson line formulation. To match ordinary gauge theories coupled to matter, the BTGT formalism requires a continuous symmetry that we call the BTGT symmetry in addition to the ordinary gauge symmetry. After classically interpreting the BTGT symmetry, we construct, using the BTGT formalism, the Ward identities associated with the BTGT symmetry and the ordinary gauge symmetry. As a test of the quantum stability and of the consistency of the Ward identities with a known regularization method, we explicitly renormalize scalar QED at one loop in dimensional regularization using the BTGT formalism.
Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images
NASA Astrophysics Data System (ADS)
Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.
2017-05-01
Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of the 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g., using alpha shapes, the local level consolidates the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
Real-Time Rotational Activity Detection in Atrial Fibrillation
Ríos-Muñoz, Gonzalo R.; Arenal, Ángel; Artés-Rodríguez, Antonio
2018-01-01
Rotational activations, or spiral waves, are one of the proposed mechanisms for atrial fibrillation (AF) maintenance. We present a system for assessing the presence of rotational activity from intracardiac electrograms (EGMs). Our system is able to operate in real-time with multi-electrode catheters of different topologies in contact with the atrial wall, and it is based on new local activation time (LAT) estimation and rotational activity detection methods. The EGM LAT estimation method is based on the identification of the highest sustained negative slope of unipolar signals. The method is implemented as a linear filter whose output is interpolated on a regular grid to match any catheter topology. Its operation is illustrated on selected signals and compared to the classical Hilbert-Transform-based phase analysis. After the estimation of the LAT on the regular grid, the detection of rotational activity in the atrium is done by a novel method based on the optical flow of the wavefront dynamics, and a rotation pattern match. The methods have been validated using in silico and real AF signals.
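A minimal sketch of the underlying LAT idea: estimate the activation time of a unipolar EGM as the instant of steepest negative slope after light smoothing. The filter length, sampling rate, and synthetic EGM are assumptions; the paper's "sustained" slope criterion and grid interpolation are not reproduced.

```python
import numpy as np

def lat_steepest_negative_slope(egm, fs, smooth_len=5):
    """Estimate the local activation time (seconds) of a unipolar EGM as the
    sample where the smoothed first derivative is most negative."""
    kernel = np.ones(smooth_len) / smooth_len
    smoothed = np.convolve(egm, kernel, mode='same')     # light moving-average smoothing
    slope = np.gradient(smoothed) * fs                   # derivative in units per second
    return np.argmin(slope) / fs

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 0.2, 1 / fs)
    rng = np.random.default_rng(7)
    egm = -np.tanh((t - 0.08) * 400) + 0.05 * rng.standard_normal(t.size)  # synthetic deflection
    print(lat_steepest_negative_slope(egm, fs))          # close to 0.08 s
```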
Quantum-classical correspondence for the inverted oscillator
NASA Astrophysics Data System (ADS)
Maamache, Mustapha; Ryeol Choi, Jeong
2017-11-01
While quantum-classical correspondence for a system is a very fundamental problem in modern physics, the understanding of its mechanism is often elusive, so the methods used and the results of detailed theoretical analysis have been accompanied by active debate. In this study, the differences and similarities between quantum and classical behavior for an inverted oscillator have been analyzed based on the description of a complete generalized Airy function-type quantum wave solution. The inverted oscillator model plays an important role in several branches of cosmology and particle physics. The quantum wave packet of the system is composed of many sub-packets that are localized at different positions with regular intervals between them. It is shown from illustrations of the probability density that, although the quantum trajectory of the wave propagation is somewhat different from the corresponding classical one, the difference becomes relatively small when the classical excitation is sufficiently high. We have confirmed that a quantum wave packet moving along a positive or negative direction accelerates over time like a classical wave. From these main interpretations and others in the text, we conclude that our theory exquisitely illustrates quantum and classical correspondence for the system, which is a crucial concept in quantum mechanics. Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1A09919503)
Secret-key-assisted private classical communication capacity over quantum channels
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Luo, Zhicheng; Brun, Todd
2008-10-01
We prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak (e-print arXiv:quant-ph/0512015) on entanglement-assisted quantum communication capacity. This formula provides a family protocol, the private father protocol, under the resource inequality framework that includes private classical communication without secret-key assistance as a child protocol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lusanna, Luca
2004-08-19
The four (electro-magnetic, weak, strong and gravitational) interactions are described by singular Lagrangians and by the Dirac-Bergmann theory of Hamiltonian constraints. As a consequence, a subset of the original configuration variables are gauge variables, not determined by the equations of motion. Only at the Hamiltonian level is it possible to separate the gauge variables from the deterministic physical degrees of freedom, the Dirac observables, and to formulate a well-posed Cauchy problem for them, both in special and general relativity. Then the requirement of causality dictates the choice of retarded solutions at the classical level. However, both the problems of the classical theory of the electron, leading to the choice of (1/2)(retarded + advanced) solutions, and the regularization of quantum field theory, leading to the Feynman propagator, introduce anticipatory aspects. The determination of the relativistic Darwin potential as a semi-classical approximation to the Lienard-Wiechert solution for particles with Grassmann-valued electric charges, regularizing the Coulomb self-energies, shows that these anticipatory effects live beyond the semi-classical approximation (tree level) in the form of radiative corrections, at least for the electro-magnetic interaction. Talk and 'best contribution' at the Sixth International Conference on Computing Anticipatory Systems CASYS'03, Liege, August 11-16, 2003.
Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach
NASA Astrophysics Data System (ADS)
Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan
2017-05-01
The regular dynamical response of an oscillator with two serially connected springs with cubic nonlinear characteristics, governed by a set of differential-algebraic equations (DAEs), is studied. The classical approach of the multiple scales method (MSM) in the time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one and two degrees of freedom. The approximate analytical solutions have been verified by numerical simulations.
Investigating Mathematics with PentaBlocks.
ERIC Educational Resources Information Center
Berman, Sheldon; Plummer, Gary A.; Scheuer, Don
These classic pattern blocks were introduced in the early 1960s as part of the Elementary Science Study materials developed by the Educational Development Center (EDC). The six classic shapes share one common characteristic: all of the angle measurements are multiples of 30 degrees. Shapes include the regular triangle, square, and hexagon; the…
A regularization corrected score method for nonlinear regression models with covariate error.
Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna
2013-03-01
Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Motion-aware temporal regularization for improved 4D cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Mory, Cyril; Janssens, Guillaume; Rit, Simon
2016-09-01
Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.
Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfrum, Matthias; Omel'chenko, Oleh E.; Sieber, Jan
We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium.
Generalized Second-Order Partial Derivatives of 1/r
ERIC Educational Resources Information Center
Hnizdo, V.
2011-01-01
The generalized second-order partial derivatives of 1/r, where r is the radial distance in three dimensions (3D), are obtained using a result of the potential theory of classical analysis. Some non-spherical-regularization alternatives to the standard spherical-regularization expression for the derivatives are derived. The utility of a…
An improved current potential method for fast computation of stellarator coil shapes
NASA Astrophysics Data System (ADS)
Landreman, Matt
2017-04-01
Several fast methods for computing stellarator coil shapes are compared, including the classical NESCOIL procedure (Merkel 1987 Nucl. Fusion 27 867), its generalization using truncated singular value decomposition, and a Tikhonov regularization approach we call REGCOIL in which the squared current density is included in the objective function. Considering W7-X and NCSX geometries, and for any desired level of regularization, we find the REGCOIL approach simultaneously achieves lower surface-averaged and maximum values of both current density (on the coil winding surface) and normal magnetic field (on the desired plasma surface). This approach therefore can simultaneously improve the free-boundary reconstruction of the target plasma shape while substantially increasing the minimum distances between coils, preventing collisions between coils while improving access for ports and maintenance. The REGCOIL method also allows finer control over the level of regularization, it preserves convexity to ensure the local optimum found is the global optimum, and it eliminates two pathologies of NESCOIL: the resulting coil shapes become independent of the arbitrary choice of angles used to parameterize the coil surface, and the resulting coil shapes converge rather than diverge as Fourier resolution is increased. We therefore contend that REGCOIL should be used instead of NESCOIL for applications in which a fast and robust method for coil calculation is needed, such as when targeting coil complexity in fixed-boundary plasma optimization, or for scoping new stellarator geometries.
Action and entanglement in gravity and field theory.
Neiman, Yasha
2013-12-27
In nongravitational quantum field theory, the entanglement entropy across a surface depends on the short-distance regularization. Quantum gravity should not require such regularization, and it has been conjectured that the entanglement entropy there is always given by the black hole entropy formula evaluated on the entangling surface. We show that these statements have precise classical counterparts at the level of the action. Specifically, we point out that the action can have a nonadditive imaginary part. In gravity, the latter is fixed by the black hole entropy formula, while in nongravitating theories it is arbitrary. From these classical facts, the entanglement entropy conjecture follows by heuristically applying the relation between actions and wave functions.
Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus
2010-04-15
With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
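The computational point of the dual formulation is that everything can be written in terms of n x n Gram matrices, so the cost is governed by the number of samples rather than the number of genes or probes. A minimal sketch of such a regularized dual (kernel) CCA is given below; the exact parameterization of the regularizers in the paper may differ, and the function and argument names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def dual_cca(X, Y, rx, ry, n_pairs=2):
    """Regularized CCA in dual form: all computations use n x n Gram matrices,
    so the cost scales with the number of samples n rather than the number of
    variables. Schematic version, not the paper's exact parameterization."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Kx, Ky = Xc @ Xc.T, Yc @ Yc.T                       # n x n Gram matrices
    # generalized symmetric eigenproblem for the dual coefficients (alpha, beta)
    A = np.block([[np.zeros((n, n)), Kx @ Ky],
                  [Ky @ Kx, np.zeros((n, n))]])
    B = np.block([[Kx @ Kx + rx * Kx, np.zeros((n, n))],
                  [np.zeros((n, n)), Ky @ Ky + ry * Ky]])
    vals, vecs = eigh(A, B + 1e-10 * np.eye(2 * n))     # jitter keeps B positive definite
    order = np.argsort(vals)[::-1][:n_pairs]
    alpha, beta = vecs[:n, order], vecs[n:, order]
    # canonical variates in sample space
    return Kx @ alpha, Ky @ beta
```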
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayissi, Raoul Domingo, E-mail: raoulayissi@yahoo.fr; Noutchegueme, Norbert, E-mail: nnoutch@yahoo.fr
Global regular solutions to the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has already been considered by several authors. But in general, Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)] proved only local existence, and in the case of the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52] obtained a global existence result for the relativistic Boltzmann equation coupled with the Einstein equations using the Yosida operator, but unfortunately confusing it with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)] obtained a global solution in time, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved global existence of solutions to the Maxwell-Boltzmann system using the characteristic method. In this paper, using a method entirely different from those of the works cited above, we obtain the global-in-time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with the cosmological constant. We define and use weighted separable Sobolev spaces for the Boltzmann equation and special spaces for the Einstein equations, and then display in full the proofs leading to the global existence theorems.
NASA Astrophysics Data System (ADS)
Ayissi, Raoul Domingo; Noutchegueme, Norbert
2015-01-01
Global regular solutions to the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has already been considered by several authors. But in general, Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)] proved only local existence, and in the case of the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52] obtained a global existence result for the relativistic Boltzmann equation coupled with the Einstein equations using the Yosida operator, but unfortunately confusing it with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)] obtained a global solution in time, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved global existence of solutions to the Maxwell-Boltzmann system using the characteristic method. In this paper, using a method entirely different from those of the works cited above, we obtain the global-in-time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with the cosmological constant. We define and use weighted separable Sobolev spaces for the Boltzmann equation and special spaces for the Einstein equations, and then display in full the proofs leading to the global existence theorems.
ERIC Educational Resources Information Center
Greenlee, Craig T.
2012-01-01
Black college football teams do not play in bowl games. The only venues that come close to duplicating a bowl atmosphere are the annual classic games that are played during the regular season at a variety of locales across the U.S. Over the years, "classic" games have come on the scene and the results have been mixed. Some remain viable while…
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
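For intuition about the saddle-point reformulation, the toy example below solves plain weighted TV denoising with a primal-dual (Chambolle-Pock type) iteration, i.e. a min-max problem in the image and a dual field constrained by the weight. It assumes a fixed weight map and an L2 data term, so it only hints at the structural-TV and Poisson/PET settings treated in the paper.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary (zero last row/column)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def weighted_tv_denoise(f, w, n_iter=300):
    """Primal-dual iteration for min_u 0.5||u - f||^2 + sum_ij w_ij |grad u|_ij."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau = sigma = 1.0 / np.sqrt(8.0)                  # step sizes, tau*sigma*||grad||^2 <= 1
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.hypot(px, py) / np.maximum(w, 1e-12))
        px, py = px / scale, py / scale               # project dual field onto |p| <= w
        u_new = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u_new - u                       # over-relaxation of the primal variable
        u = u_new
    return u
```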
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
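A schematic version of such a dictionary-atom update is a regularized rank-one approximation of the residual matrix, solved by alternating (power-method-like) updates with a smoothness penalty on the temporal atom. The sketch below uses a second-difference penalty as the smoothing regularizer; the basis-expansion and sparse-basis-expansion variants described in the abstract are not reproduced here, and all names are illustrative.

```python
import numpy as np

def smooth_rank1_update(E, lam, n_iter=50):
    """One dictionary-atom update as a regularized rank-one approximation of the
    residual matrix E (time x samples): find a temporally smooth, unit-norm atom d
    and code x minimizing ||E - d x^T||_F^2 + lam * ||D2 d||^2, with D2 a
    second-difference operator (alternating, power-method-like updates)."""
    T = E.shape[0]
    D2 = np.diff(np.eye(T), n=2, axis=0)              # second-order difference operator
    d = E[:, 0] / (np.linalg.norm(E[:, 0]) + 1e-12)
    for _ in range(n_iter):
        x = E.T @ d                                    # code update (least squares)
        d = np.linalg.solve((x @ x) * np.eye(T) + lam * D2.T @ D2, E @ x)
        d /= np.linalg.norm(d) + 1e-12                 # keep the atom on the unit sphere
    return d, x
```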
Accurate orbit propagation in the presence of planetary close encounters
NASA Astrophysics Data System (ADS)
Amato, Davide; Baù, Giulio; Bombardelli, Claudio
2017-09-01
We present an efficient strategy for the numerical propagation of small Solar system objects undergoing close encounters with massive bodies. The trajectory is split into several phases, each of them being the solution of a perturbed two-body problem. Formulations regularized with respect to different primaries are employed in two subsequent phases. In particular, we consider the Kustaanheimo-Stiefel regularization and a novel set of non-singular orbital elements pertaining to the Dromo family. In order to test the proposed strategy, we perform ensemble propagations in the Earth-Sun Circular Restricted 3-Body Problem (CR3BP) using a variable step size and order multistep integrator and an improved version of Everhart's RADAU solver of 15th order. By combining the trajectory splitting with regularized equations of motion in short-term propagations (1 year), we gain up to six orders of magnitude in accuracy with respect to the classical Cowell's method for the same computational cost. Moreover, in the propagation of asteroid (99942) Apophis through its 2029 Earth encounter, the position error stays within 100 metres after 100 years. In general, to improve the performance of regularized formulations, the trajectory must be split between 1.2 and 3 Hill radii from the Earth. We also devise a robust iterative algorithm to stop the integration of regularized equations of motion at a prescribed physical time. The results rigorously hold in the CR3BP, and similar considerations may apply when considering more complex models. The methods and algorithms are implemented in the NAPLES Fortran 2003 code, which is available online as a GitHub repository.
NASA Astrophysics Data System (ADS)
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweeps (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
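Since a PHM output is linear in the branch impulse responses once the static nonlinearities are fixed (e.g. monomials of the input), the LS route reduces to building a block regression matrix and solving a regularized normal equation. The sketch below uses monomial branches and a plain ridge penalty as a stand-in for the kernel-based regularization mentioned in the abstract; the function names and defaults are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def phm_regressor(u, orders, L):
    """Regression matrix for a Parallel Hammerstein Model y = sum_k h_k * u^k:
    one block of lagged monomial inputs per branch, impulse responses of length L."""
    cols = []
    for k in orders:
        v = u ** k
        cols.append(toeplitz(v, np.r_[v[0], np.zeros(L - 1)]))  # convolution regressor
    return np.hstack(cols)

def phm_ridge_fit(u, y, orders=(1, 2, 3), L=64, lam=1e-2):
    """Regularized least-squares estimate of the branch impulse responses
    (plain ridge penalty; only a crude proxy for regularized impulse-response kernels)."""
    Phi = phm_regressor(u, orders, L)
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return theta.reshape(len(orders), L)    # one impulse response per branch
```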
Numerical simulation of a shear-thinning fluid through packed spheres
NASA Astrophysics Data System (ADS)
Liu, Hai Long; Moon, Jong Sin; Hwang, Wook Ryol
2012-12-01
Flow behaviors of a non-Newtonian fluid in spherical microstructures have been studied by direct numerical simulation. A shear-thinning (power-law) fluid through both regularly and randomly packed spheres has been numerically investigated in a representative unit cell with tri-periodic boundary conditions, employing a rigorous three-dimensional finite-element scheme combined with fictitious-domain mortar-element methods. The present scheme has been validated against results from the literature for classical sphere packing problems. The flow mobility of regular packing structures, including simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC), as well as randomly packed spheres, has been investigated quantitatively by considering the amount of shear thinning, the pressure gradient and the porosity as parameters. Furthermore, the mechanism leading to the main flow path in a highly shear-thinning fluid through randomly packed spheres has been discussed.
Solving the hypersingular boundary integral equation for the Burton and Miller formulation.
Langrenne, Christophe; Garcia, Alexandre; Bonnet, Marc
2015-11-01
This paper presents an easy numerical implementation of the Burton and Miller (BM) formulation, in which the hypersingular Helmholtz integral is regularized by identities from the associated Laplace equation, thus requiring only the evaluation of weakly singular integrals. The Helmholtz equation and its normal derivative are combined directly, with combinations at edge or corner collocation nodes not being used when the surface is not smooth. The hypersingular operators arising in this process are regularized and then evaluated by an indirect procedure based on discretized versions of the Calderón identities linking the integral operators for associated Laplace problems. The method is valid for acoustic radiation and scattering problems involving arbitrarily shaped three-dimensional bodies. Unlike other approaches using direct evaluation of hypersingular integrals, collocation points still coincide with mesh nodes, as is usual when using conforming elements. Using higher-order shape functions (with the boundary element method model size kept fixed) reduces the overall numerical integration effort while increasing the solution accuracy. To reduce the condition number of the resulting BM formulation at low frequencies, a regularized version α = ik/(k² + λ) of the classical BM coupling factor α = i/k is proposed. Comparisons with the combined Helmholtz integral equation formulation method of Schenck are made for four example configurations, two of them featuring non-smooth surfaces.
Entanglement as a signature of quantum chaos.
Wang, Xiaoguang; Ghose, Shohini; Sanders, Barry C; Hu, Bambi
2004-01-01
We explore the dynamics of entanglement in classically chaotic systems by considering a multiqubit system that behaves collectively as a spin system obeying the dynamics of the quantum kicked top. In the classical limit, the kicked top exhibits both regular and chaotic dynamics depending on the strength of the chaoticity parameter kappa in the Hamiltonian. We show that the entanglement of the multiqubit system, considered for both the bipartite and the pairwise entanglement, yields a signature of quantum chaos. Whereas bipartite entanglement is enhanced in the chaotic region, pairwise entanglement is suppressed. Furthermore, we define a time-averaged entangling power and show that this entangling power changes markedly as kappa moves the system from being predominantly regular to being predominantly chaotic, thus sharply identifying the edge of chaos. When this entangling power is averaged over all states, it yields a signature of global chaos. The qualitative behavior of this global entangling power is similar to that of the classical Lyapunov exponent.
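The classical limit referred to here can be explored with a few lines of code: one standard form of the classical kicked-top map is a fixed rotation followed by a z-dependent torsion of strength kappa on the unit sphere. The sketch below implements such a map (axis and angle conventions vary between papers, so treat it as illustrative); iterating it for small versus large kappa shows the regular-to-chaotic transition that the entanglement measures track on the quantum side.

```python
import numpy as np

def classical_kicked_top(xyz, kappa, p=np.pi / 2, n_kicks=500):
    """Iterate a classical kicked-top-style map on the unit sphere: a rotation by angle p
    about the y axis followed by a torsion about z whose angle is kappa times the z
    component. Small kappa gives regular motion, large kappa predominantly chaotic motion."""
    traj = [np.asarray(xyz, dtype=float)]
    cp, sp = np.cos(p), np.sin(p)
    for _ in range(n_kicks):
        x, y, z = traj[-1]
        # rotation about the y axis
        x, z = cp * x + sp * z, -sp * x + cp * z
        # torsion about the z axis by angle kappa * z
        a = kappa * z
        x, y = np.cos(a) * x - np.sin(a) * y, np.sin(a) * x + np.cos(a) * y
        traj.append(np.array([x, y, z]))
    return np.array(traj)
```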
Benameur, S.; Mignotte, M.; Meunier, J.; Soucy, J. -P.
2009-01-01
Image restoration is usually viewed as an ill-posed problem in image processing, since there is no unique solution associated with it. The quality of the restored image depends closely on the constraints imposed on the characteristics of the solution. In this paper, we propose an original extension of the NAS-RIF restoration technique that uses information fusion as prior information, with application in SPECT medical imaging. That extension allows the restoration process to be constrained by efficiently incorporating, within the NAS-RIF method, a regularization term which stabilizes the inverse solution. Our restoration method is constrained by anatomical information extracted from a high-resolution anatomical procedure such as magnetic resonance imaging (MRI). This structural anatomy-based regularization term uses the result of an unsupervised Markovian segmentation obtained after a preliminary registration step between the MRI and SPECT data volumes from each patient. This method was successfully tested on 30 pairs of brain MRI and SPECT acquisitions from different subjects and on Hoffman and Jaszczak SPECT phantoms. The experiments demonstrated that the method performs better, in terms of signal-to-noise ratio, than a classical supervised restoration approach using a Metz filter. PMID:19812704
Entanglement entropy of electromagnetic edge modes.
Donnelly, William; Wall, Aron C
2015-03-20
The vacuum entanglement entropy of Maxwell theory, when evaluated by standard methods, contains an unexpected term with no known statistical interpretation. We resolve this two-decades old puzzle by showing that this term is the entanglement entropy of edge modes: classical solutions determined by the electric field normal to the entangling surface. We explain how the heat kernel regularization applied to this term leads to the negative divergent expression found by Kabat. This calculation also resolves a recent puzzle concerning the logarithmic divergences of gauge fields in 3+1 dimensions.
A novel construction method of QC-LDPC codes based on CRT for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10⁻⁷, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance, and can be more suitable for optical transmission systems.
The Thermal Equilibrium Solution of a Generic Bipolar Quantum Hydrodynamic Model
NASA Astrophysics Data System (ADS)
Unterreiter, Andreas
The thermal equilibrium state of a bipolar, isothermic quantum fluid confined to a bounded domain, d = 1, 2 or d = 3, is entirely described by the particle densities n, p, minimizing the energy
Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra
2014-01-01
Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268
Global Regularity of 2D Density Patches for Inhomogeneous Navier-Stokes
NASA Astrophysics Data System (ADS)
Gancedo, Francisco; García-Juárez, Eduardo
2018-07-01
This paper is about Lions' open problem on density patches (Lions in Mathematical topics in fluid mechanics. Vol. 1, volume 3 of Oxford Lecture series in mathematics and its applications, Clarendon Press, Oxford University Press, New York, 1996): whether or not inhomogeneous incompressible Navier-Stokes equations preserve the initial regularity of the free boundary given by density patches. Using classical Sobolev spaces for the velocity, we first establish the propagation of C^{1+γ} regularity with 0 < γ < 1 in the case of positive density. Furthermore, we go beyond this to show the persistence of a geometrical quantity such as the curvature. In addition, we obtain a proof of C^{2+γ} regularity.
Instantaneous Frequency Attribute Comparison
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Ben Horin, Y.
2013-12-01
The instantaneous seismic attribute provides a different means of interpretation for all types of seismic data. It first came to the fore in exploration seismology in the classic paper of Taner et al. (1979), entitled "Complex seismic trace analysis". Subsequently, a vast literature has accumulated on the subject, which has been given an excellent review by Barnes (1992). In this research we will compare two different methods of computing the instantaneous frequency. The first method is based on the original idea of Taner et al. (1979) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method is based on the computation of the power centroid of the time-frequency spectrum, obtained using either the Gabor transform as computed by Margrave et al. (2011) or the Stockwell transform as described by Stockwell et al. (1996). We will apply both methods to exploration seismic data and to the DPRK events recorded in 2006 and 2013. In applying the classical analytic-signal technique, which is known to be unstable due to the division by the square of the envelope, we will incorporate the stabilization and smoothing method proposed in the two papers by Fomel (2007). This method employs linear inverse theory regularization coupled with the application of an appropriate data smoother. The centroid method application is straightforward and is based on the very complete theoretical analysis provided in elegant fashion by Cohen (1995). While the results of the two methods are very similar, noticeable differences are seen at the data edges. This is most likely due to the edge effects of the smoothing operator in the Fomel method, which is more computationally intensive when an optimal search for the regularization parameter is performed. An advantage of the centroid method is the intrinsic smoothing of the data, which is inherent in the sliding-window application used in all short-time Fourier transform methods. The Fomel technique has a larger CPU run-time, resulting from the necessary matrix inversion. Barnes, Arthur E. "The calculation of instantaneous frequency and instantaneous bandwidth." Geophysics, 57.11 (1992): 1520-1524. Fomel, Sergey. "Local seismic attributes." Geophysics, 72.3 (2007): A29-A33. Fomel, Sergey. "Shaping regularization in geophysical-estimation problems." Geophysics, 72.2 (2007): R29-R36. Stockwell, Robert Glenn, Lalu Mansinha, and R. P. Lowe. "Localization of the complex spectrum: the S transform." Signal Processing, IEEE Transactions on, 44.4 (1996): 998-1001. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics, 44.6 (1979): 1041-1063. Cohen, Leon. "Time frequency analysis theory and applications." USA: Prentice Hall, (1995). Margrave, Gary F., Michael P. Lamoureux, and David C. Henley. "Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data." Geophysics, 76.3 (2011): W15-W30.
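The two estimators compared in the abstract can be prototyped in a few lines: a phase-derivative estimate from the analytic signal with a stabilized denominator, and a power-centroid estimate from a time-frequency transform. The sketch below uses scipy's Hilbert transform and a plain STFT in place of the Gabor/Stockwell transforms, and a crude constant stabilizer in place of Fomel's shaping regularization, so it is only a schematic comparison.

```python
import numpy as np
from scipy.signal import hilbert, stft

def inst_freq_analytic(x, fs, eps=1e-3):
    """Instantaneous frequency from the derivative of the analytic-signal phase,
    with a small stabilizing term added to the squared envelope in the denominator."""
    z = hilbert(x)
    num = np.real(z) * np.gradient(np.imag(z)) - np.imag(z) * np.gradient(np.real(z))
    den = np.abs(z) ** 2 + eps * np.mean(np.abs(z) ** 2)
    return fs * num / (2 * np.pi * den)          # Hz per sample -> Hz

def inst_freq_centroid(x, fs, nperseg=128):
    """Instantaneous frequency as the power centroid of a short-time spectrum."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    P = np.abs(Z) ** 2
    return t, (f[:, None] * P).sum(axis=0) / (P.sum(axis=0) + 1e-12)
```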
Practical considerations on frenectomy
NASA Astrophysics Data System (ADS)
Moldoveanu, Lucia; Badea, Florin Ciprian; Odor, Alin A.
2014-01-01
Besides the classical surgical frenectomy, modern dentistry currently allows this procedure to be performed with a dental laser. Materials and Method: We carried out clinical observation of the results obtained by frenectomy, with or without frenuloplasty, performed with an Er,Cr:YSGG 2780 nm laser. Results: The patients reported no pain, bleeding, swelling or major discomfort at the postoperative control on the following day. In terms of psycho-emotional reactions, both patients behaved well, their calm owing to the absence of pain, bleeding, sutures or edema. Discussion: The accuracy of this method, as well as the use of additional means of healing, allows satisfactory results for both patient and physician. Working parameters depend on the type of laser used, in our case a Biolase Waterlase MD Turbo, regularly used in the Toldimed Clinic in Constanta. Conclusions: Our study reveals that the possibilities for surgical modeling of the lower lip frenulum are greater with the laser than with the classical surgical approach. Moreover, a major role in the prevention of relapse due to inappropriate healing is played by performing the frenectomy together with frenuloplasty.
Gene selection heuristic algorithm for nutrigenomics studies.
Valour, D; Hue, I; Grimard, B; Valour, B
2013-07-15
Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm described here extends the classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines good conditioning and fast computation in R. Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. Then, a homogeneous Markov process applied to this stochastic matrix yields converged probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process meets the needs of biologists, who often require small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.
Limit Theorems for Dispersing Billiards with Cusps
NASA Astrophysics Data System (ADS)
Bálint, P.; Chernov, N.; Dolgopyat, D.
2011-12-01
Dispersing billiards with cusps are deterministic dynamical systems with a mild degree of chaos, exhibiting "intermittent" behavior that alternates between regular and chaotic patterns. Their statistical properties are therefore weak and delicate. They are characterized by a slow (power-law) decay of correlations, and as a result the classical central limit theorem fails. We prove that a non-classical central limit theorem holds, with a scaling factor of √(n log n) replacing the standard √n. We also derive the respective Weak Invariance Principle, and we identify the class of observables for which the classical CLT still holds.
Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems
NASA Astrophysics Data System (ADS)
Cianchi, Andrea; Maz'ya, Vladimir G.
2018-05-01
Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L²-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.
Bell Inequalities and Group Symmetry
NASA Astrophysics Data System (ADS)
Bolonek-Lasoń, Katarzyna
2017-12-01
Recently the method based on irreducible representations of finite groups has been proposed as a tool for investigating the more sophisticated versions of Bell inequalities (V. Uğur Güney, M. Hillery, Phys. Rev. A 90, 062121 (2014) and Phys. Rev. A 91, 052110 (2015)). In the present paper an example based on the symmetry group S₄ is considered. The Bell inequality violation due to the symmetry properties of the regular tetrahedron is described. A nonlocal game based on the derived inequalities is described, and it is shown that the violation of the Bell inequality implies that the quantum strategies outperform their classical counterparts.
Schrödinger-Poisson-Vlasov-Poisson correspondence
NASA Astrophysics Data System (ADS)
Mocz, Philip; Lancaster, Lachlan; Fialkov, Anastasia; Becerra, Fernando; Chavanis, Pierre-Henri
2018-04-01
The Schrödinger-Poisson equations describe the behavior of a superfluid Bose-Einstein condensate under self-gravity with a 3D wave function. As ℏ/m → 0, m being the boson mass, the equations have been postulated to approximate the collisionless Vlasov-Poisson equations, also known as the collisionless Boltzmann-Poisson equations. The latter describe collisionless matter with a 6D classical distribution function. We investigate the nature of this correspondence with a suite of numerical test problems in 1D, 2D, and 3D along with analytic treatments when possible. We demonstrate that, while the density field of the superfluid always shows order unity oscillations as ℏ/m → 0 due to interference and the uncertainty principle, the potential field converges to the classical answer as (ℏ/m)². Thus, any dynamics coupled to the superfluid potential is expected to recover the classical collisionless limit as ℏ/m → 0. The quantum superfluid is able to capture rich phenomena such as multiple phase sheets, shell crossings, and warm distributions. Additionally, the quantum pressure tensor acts as a regularizer of caustics and singularities in classical solutions. This suggests the exciting prospect of using the Schrödinger-Poisson equations as a low-memory method for approximating the high-dimensional evolution of the Vlasov-Poisson equations. As a particular example we consider dark matter composed of ultralight axions, which in the classical limit (ℏ/m → 0) is expected to manifest itself as collisionless cold dark matter.
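A minimal solver for the 1D Schrödinger-Poisson system, of the kind used for such convergence tests, can be written as a spectral kick-drift-kick integrator with the Poisson equation solved in Fourier space. The sketch below assumes periodic boundaries and units in which only the ratio ℏ/m enters; the grid, the gravitational constant, and the single potential evaluation per step are simplifying assumptions, not the paper's code.

```python
import numpy as np

def schrodinger_poisson_1d(psi, L, hbar_m, G=1.0, dt=1e-3, steps=1000):
    """Spectral kick-drift-kick integrator for the 1D Schrodinger-Poisson system.
    psi: complex wave function on a periodic grid of length L; hbar_m = hbar/m."""
    N = psi.size
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    k2 = k ** 2
    k2[0] = 1.0                                   # avoid division by zero; k=0 mode zeroed below
    for _ in range(steps):
        rho = np.abs(psi) ** 2
        # Poisson equation V'' = 4 pi G (rho - mean), solved in Fourier space
        V_hat = -4 * np.pi * G * np.fft.fft(rho - rho.mean()) / k2
        V_hat[0] = 0.0
        V = np.real(np.fft.ifft(V_hat))
        # symmetric splitting (the potential is recomputed only once per step here)
        psi = np.exp(-0.5j * dt * V / hbar_m) * psi                                 # half kick
        psi = np.fft.ifft(np.exp(-0.5j * dt * hbar_m * k ** 2) * np.fft.fft(psi))   # drift
        psi = np.exp(-0.5j * dt * V / hbar_m) * psi                                 # half kick
    return x, psi
```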
Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu
2017-11-01
Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintain spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which have outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for the myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management for MI patients.
Weak Galerkin method for the Biot’s consolidation model
Hu, Xiaozhe; Mu, Lin; Ye, Xiu
2017-08-23
In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.
Weak Galerkin method for the Biot’s consolidation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiaozhe; Mu, Lin; Ye, Xiu
In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.
Tunnel determinants from spectral zeta functions. Instanton effects in quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izquierdo, A. Alonso; Guilarte, J. Mateos
2014-07-23
In this paper we develop a spectral zeta function regularization procedure for the determinants of instanton fluctuation operators that describe the semi-classical order of tunneling effects between degenerate vacua.
Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model
NASA Astrophysics Data System (ADS)
Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.
2018-04-01
The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
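For any finite Hamiltonian matrix, the survival probability discussed here follows directly from the eigen-decomposition, P(t) = |Σ_k |c_k|² e^{-iE_k t}|², with c_k the components of the initial state in the eigenbasis. The helper below computes exactly that (with ℏ = 1); building the Dicke Hamiltonian and the coherent states themselves is not shown and is left as an assumption of the caller.

```python
import numpy as np

def survival_probability(H, psi0, times):
    """Survival probability P(t) = |<psi0| exp(-iHt) |psi0>|^2 from the eigen-decomposition
    of a finite Hamiltonian matrix H (hbar = 1). The weights |c_k|^2 over the eigenbasis are
    the quantities whose structure distinguishes regular from chaotic initial states."""
    E, U = np.linalg.eigh(H)
    c = U.conj().T @ psi0                      # components of the initial state
    weights = np.abs(c) ** 2
    phases = np.exp(-1j * np.outer(times, E))  # one row per requested time
    return np.abs(phases @ weights) ** 2
```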
An Ensemble Multilabel Classification for Disease Risk Prediction
Liu, Wei; Zhao, Hongling; Zhang, Chaoyang
2017-01-01
It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset is from real physical examination records. We contrast the performance of the ELPPJD method with the two different decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
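The first step of the scheme, turning the multilabel problem into a multiclass one, is the standard label power-set transformation: each distinct combination of labels becomes a class of its own. A minimal version is sketched below; the pruning of rare combinations and the SB/LS joint decomposition described in the abstract would be applied on top of this, and the function name is illustrative.

```python
import numpy as np

def label_powerset(Y):
    """Transform a multilabel target matrix (n_samples x n_labels, entries 0/1) into a
    single multiclass target: each distinct label combination becomes one class."""
    keys = ["".join(map(str, row)) for row in Y.astype(int)]
    classes = {k: i for i, k in enumerate(sorted(set(keys)))}
    return np.array([classes[k] for k in keys]), classes

# example: y, mapping = label_powerset(np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]]))
```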
Visual attention based bag-of-words model for image classification
NASA Astrophysics Data System (ADS)
Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che
2014-04-01
Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual word frequencies. On the other hand, the VABOW model combines shape, color and texture cues and uses an L1-regularized logistic regression method to select the most relevant and most efficient features. We compare our approach with the traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
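The counting step described above amounts to weighting each visual-word vote by the saliency value at the corresponding keypoint, instead of adding 1 per descriptor. A schematic version is given below (brute-force nearest-word assignment, hypothetical argument names); the saliency model, feature fusion, and L1-regularized selection are outside this snippet.

```python
import numpy as np

def weighted_bow_histogram(descriptors, keypoints_xy, saliency, codebook):
    """Saliency-weighted bag-of-words: each local descriptor votes for its nearest
    visual word with a weight given by the saliency map at its keypoint location.

    descriptors:  (N, D) local feature descriptors
    keypoints_xy: (N, 2) pixel coordinates (x, y) of the keypoints
    saliency:     (H, W) saliency map in [0, 1]
    codebook:     (K, D) visual-word centroids
    """
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                          # nearest visual word per descriptor
    hist = np.zeros(len(codebook))
    for w, (x, y) in zip(words, keypoints_xy):
        hist[w] += saliency[int(y), int(x)]            # vote weighted by visual attention
    return hist / (hist.sum() + 1e-12)
```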
Navarrete-Benlloch, Carlos; Roldán, Eugenio; Chang, Yue; Shi, Tao
2014-10-06
Nonlinear optical cavities are crucial both in classical and quantum optics; in particular, nowadays optical parametric oscillators are one of the most versatile and tunable sources of coherent light, as well as the sources of the highest quality quantum-correlated light in the continuous variable regime. Being nonlinear systems, they can be driven through critical points in which a solution ceases to exist in favour of a new one, and it is close to these points where quantum correlations are the strongest. The simplest description of such systems consists in writing the quantum fields as the classical part plus some quantum fluctuations, and then linearizing the dynamical equations with respect to the latter; however, such an approach breaks down close to critical points, where it provides unphysical predictions such as infinite photon numbers. On the other hand, techniques going beyond the simple linear description become too complicated, especially regarding the evaluation of two-time correlators, which are of major importance for computing observables outside the cavity. In this article we provide a regularized linear description of nonlinear cavities, that is, a linearization procedure yielding physical results, taking the degenerate optical parametric oscillator as the guiding example. The method, which we call self-consistent linearization, is shown to be equivalent to a general Gaussian ansatz for the state of the system, and we compare its predictions with those obtained with available exact (or quasi-exact) methods. Apart from its operational value, we believe that our work is valuable also from a fundamental point of view, especially in connection with the question of how far linearized or Gaussian theories can be pushed to describe nonlinear dissipative systems which have access to non-Gaussian states.
A hybrid perturbation Galerkin technique with applications to slender body theory
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1989-01-01
A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.
A hybrid perturbation Galerkin technique with applications to slender body theory
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1987-01-01
A two step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.
NASA Astrophysics Data System (ADS)
Zaslavsky, M.
1996-06-01
The phenomena of dynamical localization, both classical and quantum, are studied in the Fermi accelerator model. The model consists of two vertical oscillating walls and a ball bouncing between them. The classical localization boundary is calculated in the case of "sinusoidal velocity transfer" [A. J. Lichtenberg and M. A. Lieberman, Regular and Stochastic Motion (Springer-Verlag, Berlin, 1983)] on the basis of the analysis of resonances. In the case of the "sawtooth" wall velocity we show that the quantum localization is determined by the analytical properties of the canonical transformations to the action and angle coordinates of the unperturbed Hamiltonian, while the existence of the classical localization is determined by the number of continuous derivatives of the distance between the walls with respect to time.
Illumination invariant feature point matching for high-resolution planetary remote sensing images
NASA Astrophysics Data System (ADS)
Wu, Bo; Zeng, Hai; Hu, Han
2018-03-01
Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.
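The key ingredient is the suppression function applied to the dominant-orientation histogram around the sub-solar azimuth (and, because shadows produce dual peaks, around the opposite direction as well). The sketch below applies an inverted-Gaussian suppression with fixed width and strength; the paper's function is tuned adaptively to the histogram itself, so the parameters and names here are placeholders.

```python
import numpy as np

def suppress_orientation_histogram(hist, subsolar_azimuth_deg, sigma_deg=30.0, strength=0.8):
    """Level a SIFT dominant-orientation histogram by attenuating it around the
    sub-solar azimuth and the opposite direction, so shadow-induced dual peaks no
    longer dominate the descriptor orientation assignment (schematic form)."""
    n = len(hist)
    bins = np.arange(n) * 360.0 / n
    out = np.asarray(hist, dtype=float).copy()
    for peak in (subsolar_azimuth_deg % 360.0, (subsolar_azimuth_deg + 180.0) % 360.0):
        d = np.minimum(np.abs(bins - peak), 360.0 - np.abs(bins - peak))  # circular distance
        out *= 1.0 - strength * np.exp(-0.5 * (d / sigma_deg) ** 2)       # inverted Gaussian
    return out
```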
Next Generation Extended Lagrangian Quantum-based Molecular Dynamics
NASA Astrophysics Data System (ADS)
Negre, Christian
2017-06-01
A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach in simulations of +100,000 atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
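The estimator class described here combines a low-rank factor part with a sparsified residual covariance. A simplified sketch: estimate the factors by principal components, threshold the residual covariance entries while keeping the diagonal, and add the low-rank part back. The constant threshold tau below replaces the adaptive, entry-dependent thresholds of Cai and Liu, so this is only an illustration of the structure of the estimator.

```python
import numpy as np

def factor_threshold_cov(X, n_factors, tau=0.1):
    """Covariance estimation with a factor structure: remove the leading principal
    components (common factors), soft-threshold the residual covariance to impose
    sparsity, and add the low-rank factor part back (simplified sketch)."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:n_factors]
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T   # factor (common) part
    R = S - low_rank                                         # residual (idiosyncratic) covariance
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)    # soft thresholding
    np.fill_diagonal(R_thr, np.diag(R))                      # never threshold the variances
    return low_rank + R_thr
```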
Lantto, Ulla; Koivunen, Petri; Tapiainen, Terhi; Renko, Marjo
2016-12-01
To compare the effectiveness of tonsillectomy and the long-term outcome of periodic fever, aphthous stomatitis, pharyngitis, and adenitis (PFAPA) syndrome in patients fulfilling the classic diagnostic criteria and in those with regularly recurring fever as the only symptom, with onset of symptoms after age 5 years, or both. We reviewed the medical records of 3852 children who underwent tonsillectomy between 1990 and 2007 and identified 108 children who did so because of regularly recurring fevers. The patients were invited to an outpatient visit and were classified into 2 groups: those who met (N = 58) and those who did not meet (N = 50) the Thomas diagnostic criteria. We then compared the clinical profile and outcome of PFAPA symptoms after tonsillectomy between the 2 groups. In the group that met the Thomas criteria, 97% (56/58) had complete resolution of fever episodes after tonsillectomy; in the group that did not meet the Thomas criteria, 100% (50/50) had complete resolution of fever episodes after tonsillectomy (P = .25). The clinical profile of the periodic fevers and the occurrence of other illnesses during follow-up were similar in both groups. The Thomas criteria identified 56 of the 106 patients responding to tonsillectomy. Tonsillectomy was an effective treatment for patients with regularly recurring fever episodes who failed to meet the classic Thomas criteria. We suggest that PFAPA syndrome should be suspected and tonsillectomy considered in children with a late onset of symptoms (>5 years of age) or when fever is the only symptom during the episodes. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osovski, Shmuel; Moiseyev, Nimrod
The recent pioneering experiments of the [Nature 412, 52 (2001)] and [Science 293, 274 (2001)] groups have demonstrated the dynamical tunneling of cold atoms interacting with standing electromagnetic waves. It has been shown [Phys. Rev. Lett. 89, 253201 (2002)] that the tunneling oscillations observed in these experiments stem, respectively, from a two- and a three-Floquet quantum state mechanism and can be controlled by varying the experimental parameters. The question of where the fingerprints of the classical chaotic dynamics are in a quantum dynamical process controlled by 2 or 3 quantum states remains open. Our calculations show that although the effective ℏ associated with the two experiments is large, and the quantum system is far from its semiclassical limit, the quantum Floquet-Bloch quasienergy states can still be classified as regular and chaotic states. In both experiments the quantum and the classical phase-space entropies are quite similar, although the classical phase space is a mixed regular-chaotic space. It is also shown that as the wave packet, which is initially localized at one of the two inner regular islands, oscillates between them through the chaotic sea, it accumulates a random phase which causes the decay of the amplitude of the oscillating mean momentum ⟨p⟩, as measured in both experiments. The extremely high sensitivity of the rate of decay of the oscillations of ⟨p⟩ to very small changes in the population of different Floquet-Bloch states is another type of fingerprint of chaos in quantum dynamics, presumably measured in the NIST and AUSTIN experiments for the first time.
Torus as phase space: Weyl quantization, dequantization, and Wigner formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ligabò, Marilena, E-mail: marilena.ligabo@uniba.it
2016-08-15
The Weyl quantization of classical observables on the torus (as phase space) without regularity assumptions is explicitly computed. The equivalence class of symbols yielding the same Weyl operator is characterized. The Heisenberg equation for the dynamics of general quantum observables is written through the Moyal brackets on the torus and the support of the Wigner transform is characterized. Finally, a dequantization procedure is introduced that applies, for instance, to the Pauli matrices. As a result we obtain the corresponding classical symbols.
Effect of non-classical current paths in networks of 1-dimensional wires
NASA Astrophysics Data System (ADS)
Echternach, P. M.; Mikhalchuk, A. G.; Bozler, H. M.; Gershenson, M. E.; Bogdanov, A. L.; Nilsson, B.
1996-04-01
At low temperatures, the quantum corrections to the resistance due to weak localization and electron-electron interaction are affected by the shape and topology of samples. We observed these effects in the resistance of 2D percolation networks made from 1D wires and in a series of long 1D wires with regularly spaced side branches. Branches outside the classical current path strongly reduce the quantum corrections to the resistance and these reductions become a measure of the quantum lengths.
2013-09-30
…specifying the wave-maker driving signal. The short intense envelope solitons possess vertical asymmetry similar to regular Stokes waves with the same… presented in [P1], [P2]. 2. Physical model of sea wave period from altimeter data. We use the asymptotic theory of wind wave growth proposed in [R6]… relationship can be used for processing altimeter data assuming the wave field to be stationary and spatially inhomogeneous. It is consistent with…
Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation
NASA Astrophysics Data System (ADS)
Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.
2018-05-01
Typical Tsallis' statistical mechanics quantifiers like the partition function and the mean energy exhibit poles. We are speaking of the partition function Z and the mean energy 〈U〉. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a countable set of rational numbers on the q-line. These poles are dealt with using dimensional regularization techniques. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong "staircase" artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p_BH) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_BH are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
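As a rough illustration of the Haar-domain hypothesis-testing idea (plain decimated Haar rather than the Bi-Haar filter bank of the paper), the sketch below tests each pairwise-difference coefficient of Poisson counts with a binomial test, using the fact that, under equal intensities, one count conditioned on the pair sum is Binomial(n, 1/2). The function name, threshold alpha, depth and test intensity are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import binomtest

def haar_ht_denoise(counts, alpha=1e-3, levels=3):
    """Denoise 1-D Poisson counts by hypothesis testing on unnormalised
    Haar detail coefficients (pairwise differences of counts)."""
    approx = np.asarray(counts, dtype=float)
    details = []
    for _ in range(levels):
        a, b = approx[0::2], approx[1::2]
        s, d = a + b, a - b                      # approximation / detail
        keep = np.zeros_like(d)
        for i, (di, ni) in enumerate(zip(d, s)):
            if ni == 0:
                continue
            # Under H0 (equal intensities), a | a+b ~ Binomial(n, 1/2).
            p = binomtest(int((di + ni) // 2), int(ni), 0.5).pvalue
            if p < alpha:                        # significant -> real structure
                keep[i] = di
        details.append(keep)
        approx = s
    # Invert the pairwise transform level by level.
    for d in reversed(details):
        rec = np.empty(2 * approx.size)
        rec[0::2] = (approx + d) / 2.0
        rec[1::2] = (approx - d) / 2.0
        approx = rec
    return approx

rng = np.random.default_rng(0)
intensity = np.repeat([2.0, 10.0, 4.0, 4.0], 64)   # piecewise-constant truth
noisy = rng.poisson(intensity)
estimate = haar_ht_denoise(noisy, alpha=1e-3, levels=4)
```

Coefficients whose p-value is not significant are set to zero, so the false positive rate is controlled by alpha while significant structure is kept.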
NASA Astrophysics Data System (ADS)
Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud
2017-11-01
Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and on the total variation and a Besov seminorm for the regularization term. To handle the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. It allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of non-uniform Fourier transforms. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.
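The abstract's reconstruction combines several ingredients (curvelets, a Besov seminorm, a non-trivial direct model). As a much smaller reference point, the sketch below applies the Chambolle-Pock primal-dual algorithm to plain anisotropic TV denoising; the operator names, step sizes and test image are assumptions, not the authors' EPRI implementation.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad (so <grad u, p> = -<u, div p>)."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_chambolle_pock(f, lam=0.1, n_iter=200):
    """min_u 0.5*||u - f||^2 + lam*||grad u||_1 (anisotropic TV)."""
    L2 = 8.0                                   # upper bound on ||grad||^2
    tau = sigma = 1.0 / np.sqrt(L2)
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px = np.clip(px + sigma * gx, -lam, lam)   # prox of the dual of lam*|.|_1
        py = np.clip(py + sigma * gy, -lam, lam)
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u - u_old
    return u

rng = np.random.default_rng(9)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = tv_denoise_chambolle_pock(noisy, lam=0.2, n_iter=300)
```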
On the theory of Carriers' Electrostatic Interaction near an Interface
NASA Astrophysics Data System (ADS)
Waters, Michael; Hashemi, Hossein; Kieffer, John
2015-03-01
Heterojunction interfaces are common in non-traditional photovoltaic device designs, such as those based on small molecules, polymers, and perovskites. We have examined a number of the effects of the heterojunction interface region on carrier/exciton energetics using a mixture of both semi-classical and quantum electrostatic methods, ab initio methods, and statistical mechanics. Our theoretical analysis has yielded several useful relationships and numerical recipes that should be considered in device design regardless of the particular materials system. As a demonstration, we highlight these formalisms as applied to carriers and polaron pairs near a C60/subphthalocyanine interface. On the regularly ordered areas of the heterojunction, the effect of the interface is a significant set of corrections to the carrier energies, which in turn directly affects device performance.
Applications of quantum entropy to statistics
NASA Astrophysics Data System (ADS)
Silver, R. N.; Martz, H. F.
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
Dynamic coupling of subsurface and seepage flows solved within a regularized partition formulation
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Erhel, J.
2017-11-01
Hillslope response to precipitation is characterized by sharp transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Locally, the transition between these two regimes is triggered by soil saturation. Here we develop an integrative approach to simultaneously solve the subsurface flow, locate the potential fully saturated areas and deduce the generated saturation-excess overland flow. This approach combines the different dynamics and transitions in a single partition formulation using discontinuous functions. We propose to regularize the system of partial differential equations and to use classic spatial and temporal discretization schemes. We illustrate our methodology on the 1D hillslope-storage Boussinesq equations (Troch et al., 2003). We first validate the numerical scheme on previous numerical experiments without saturation-excess overland flow. Then we apply our model to a test case with dynamic transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Our results show that the discretization respects mass balance both locally and globally, and converges when the mesh or the time step is refined. Moreover, the regularization parameter can be taken small enough to ensure accuracy without suffering from numerical artefacts. Applied to several hundred realistic hillslope cases taken from the western side of France (Brittany), the developed method appears to be robust and efficient.
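A minimal sketch of the regularization idea on a single-store toy model (not the 1D hillslope-storage Boussinesq equations of the paper): the discontinuous saturation switch is replaced by a smooth ramp of width EPS, so recharge is partitioned continuously between subsurface storage and saturation-excess overland flow. All parameter values and names are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

S_MAX, K, R, EPS = 100.0, 0.01, 1.5, 0.5   # capacity, recession, recharge, ramp width

def w(S):
    """Regularized partition: 0 well below capacity, 1 above, smooth ramp of width EPS."""
    return np.clip((S - (S_MAX - EPS)) / EPS, 0.0, 1.0)

def rhs(t, y):
    S = y[0]
    seepage = w(S) * R          # recharge rejected as saturation-excess overland flow
    return [(R - seepage) - K * S]

sol = solve_ivp(rhs, (0.0, 400.0), [10.0], max_step=0.5, dense_output=True)
t = np.linspace(0.0, 400.0, 801)
S = sol.sol(t)[0]
overland = w(S) * R             # overland-flow rate through time
```

As EPS is made smaller the ramp approaches the discontinuous switch while the ODE solver still sees a Lipschitz right-hand side, which is the role the regularization parameter plays in the paper's scheme.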
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
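A minimal sketch of the expectation-maximization (Richardson-Lucy type) iteration for Poisson data, with a positivity-preserving multiplicative update and a crude Cash-statistic stopping check; the operator, data and stopping threshold are illustrative stand-ins, not the RHESSI-specific pipeline or the authors' optimal stopping rule.

```python
import numpy as np

def em_poisson(A, y, n_iter=200, c_target=1.0):
    """EM iteration for y ~ Poisson(A x), stopped when the per-bin Cash
    statistic drops to roughly c_target (a crude surrogate stopping rule)."""
    m, n = A.shape
    x = np.full(n, y.sum() / A.sum())          # positive initial guess
    norm = A.sum(axis=0)                       # A^T 1
    for k in range(n_iter):
        model = A @ x
        x = x * (A.T @ (y / np.maximum(model, 1e-12))) / np.maximum(norm, 1e-12)
        model = A @ x
        term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / model), 0.0)
        cstat = 2.0 * np.sum(model - y + term) / m   # Cash statistic per data point
        if cstat <= c_target:
            break
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(300, 50))      # stand-in for the modulation operator
x_true = np.zeros(50); x_true[[10, 30]] = 40.0
y = rng.poisson(A @ x_true)
x_hat = em_poisson(A, y)
```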
Model-based Clustering of High-Dimensional Data in Astrophysics
NASA Astrophysics Data System (ADS)
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after the common factors are taken out, which enables us to combine the merits of both approaches. We estimate the sparse covariance using the adaptive thresholding technique of Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied. PMID:22661790
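A simplified sketch of the factor-structure covariance estimate: principal-component factors plus a thresholded residual (idiosyncratic) covariance. A universal threshold is used here as a stand-in for the entry-adaptive thresholds of Cai and Liu (2011); the dimensions and the constant c are illustrative assumptions.

```python
import numpy as np

def factor_threshold_cov(X, n_factors=3, c=0.5):
    """Covariance estimate from an approximate factor model: PCA-based common
    component plus a thresholded residual covariance."""
    T, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / T
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:n_factors]
    L = vecs[:, idx] * np.sqrt(vals[idx])          # loadings (p x K)
    common = L @ L.T
    resid = S - common                             # estimated idiosyncratic covariance
    thr = c * np.sqrt(np.log(p) / T)               # universal threshold (stand-in)
    off = resid - np.diag(np.diag(resid))
    off[np.abs(off) < thr] = 0.0
    return common + np.diag(np.diag(resid)) + off

rng = np.random.default_rng(2)
T, p, K = 500, 100, 3
F = rng.normal(size=(T, K)); B = rng.normal(size=(p, K))
X = F @ B.T + rng.normal(size=(T, p))
Sigma_hat = factor_threshold_cov(X, n_factors=K)
```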
Makarov, D V; Kon'kov, L E; Uleysky, M Yu; Petrov, P S
2013-01-01
The problem of sound propagation in a randomly inhomogeneous oceanic waveguide is considered. An underwater sound channel in the Sea of Japan is taken as an example. Our attention is concentrated on the domains of finite-range ray stability in phase space and their influence on wave dynamics. These domains can be found by means of the one-step Poincare map. To study manifestations of finite-range ray stability, we introduce the finite-range evolution operator (FREO) describing transformation of a wave field in the course of propagation along a finite segment of a waveguide. Carrying out statistical analysis of the FREO spectrum, we estimate the contribution of regular domains and explore their evanescence with increasing length of the segment. We utilize several methods of spectral analysis: analysis of eigenfunctions by expanding them over modes of the unperturbed waveguide, approximation of level-spacing statistics by means of the Berry-Robnik distribution, and the procedure used by A. Relano and coworkers [Relano et al., Phys. Rev. Lett. 89, 244102 (2002); Relano, Phys. Rev. Lett. 100, 224101 (2008)]. Comparing the results obtained with different methods, we find that the method based on the statistical analysis of FREO eigenfunctions is the most favorable for estimating the contribution of regular domains. It allows one to find directly the waveguide modes whose refraction is regular despite the random inhomogeneity. For example, it is found that near-axial sound propagation in the Sea of Japan preserves stability even over distances of hundreds of kilometers due to the presence of a shearless torus in the classical phase space. Increasing the acoustic wavelength degrades scattering, resulting in recovery of eigenfunction localization near periodic orbits of the one-step Poincaré map.
Green operators for low regularity spacetimes
NASA Astrophysics Data System (ADS)
Sanchez Sanchez, Yafet; Vickers, James
2018-02-01
In this paper we define and construct advanced and retarded Green operators for the wave operator on spacetimes with low regularity. In order to do so we require that the spacetime satisfies the condition of generalised hyperbolicity which is equivalent to well-posedness of the classical inhomogeneous problem with zero initial data where weak solutions are properly supported. Moreover, we provide an explicit formula for the kernel of the Green operators in terms of an arbitrary eigenbasis of H 1 and a suitable Green matrix that solves a system of second order ODEs.
Classical and quantum dynamics of a kicked relativistic particle in a box
NASA Astrophysics Data System (ADS)
Yusupov, J. R.; Otajanov, D. M.; Eshniyazov, V. E.; Matrasulov, D. U.
2018-03-01
We study the classical and quantum dynamics of a kicked relativistic particle confined in a one-dimensional box. It is found that in the classical case, for chaotic motion the average kinetic energy grows in time, while for the mixed regime the growth is suppressed; for regular motion the energy fluctuates around a certain value. The quantum dynamics is treated by solving the time-dependent Dirac equation with a delta-kicking potential, whose exact solution is obtained for a single kicking period. In the quantum case, depending on the values of the kicking parameters, the average kinetic energy can be quasi-periodic or fluctuate around some value. Particle transport is studied by considering the spatio-temporal evolution of a Gaussian wave packet and by analyzing the trembling motion.
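A toy classical simulation in the spirit of the abstract (a relativistic particle, c = m = 1, bouncing in a box and kicked periodically); the kick profile and all parameter values are assumptions chosen only to illustrate ensemble-averaged energy growth for sufficiently strong kicks, not the paper's exact model.

```python
import numpy as np

def kicked_particle_in_box(k=2.0, L=1.0, T=1.0, n_kicks=500, n_ens=2000, seed=0):
    """Ensemble of classical relativistic particles in a box [0, L] receiving
    position-dependent momentum kicks every period T (units with c = m = 1)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, L, n_ens)
    p = rng.uniform(-0.1, 0.1, n_ens)
    mean_KE = np.empty(n_kicks)
    for n in range(n_kicks):
        p = p + k * np.cos(2.0 * np.pi * x / L)      # instantaneous kick
        v = p / np.sqrt(1.0 + p**2)                  # relativistic velocity
        x = x + v * T                                # free flight between kicks
        x = np.mod(x, 2.0 * L)                       # fold trajectory back into the box
        hit = x > L
        x[hit] = 2.0 * L - x[hit]
        p[hit] = -p[hit]                             # odd number of wall reflections
        mean_KE[n] = np.mean(np.sqrt(1.0 + p**2) - 1.0)
    return mean_KE

energy_history = kicked_particle_in_box()   # strong kicks: the mean energy tends to grow
```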
Refahi, Yassin; Brunoud, Géraldine; Farcot, Etienne; Jean-Marie, Alain; Pulkkinen, Minna; Vernoux, Teva; Godin, Christophe
2016-01-01
Exploration of developmental mechanisms classically relies on analysis of pattern regularities. Whether disorders induced by biological noise may carry information on the building principles of developmental systems is an important, debated question. Here, we addressed this question theoretically using phyllotaxis, the geometric arrangement of plant aerial organs, as a model system. Phyllotaxis arises from reiterative organogenesis driven by lateral inhibitions at the shoot apex. Motivated by recurrent observations of disorders in phyllotaxis patterns, we revisited in depth the classical deterministic view of phyllotaxis. We developed a stochastic model of primordia initiation at the shoot apex, integrating locality and stochasticity in the patterning system. This stochastic model recapitulates phyllotactic patterns, both regular and irregular, and makes quantitative predictions on the nature of disorders arising from noise. We further show that disorders in phyllotaxis instruct us on the parameters governing phyllotaxis dynamics, and thus that disorders can reveal biological watermarks of developmental systems. DOI: http://dx.doi.org/10.7554/eLife.14093.001 PMID:27380805
A Revision on Classical Solutions to the Cauchy Boltzmann Problem for Soft Potentials
NASA Astrophysics Data System (ADS)
Alonso, Ricardo J.; Gamba, Irene M.
2011-05-01
This short note complements the recent paper of the authors (Alonso, Gamba in J. Stat. Phys. 137(5-6):1147-1165, 2009). We revisit the results on propagation of regularity and stability using L p estimates for the gain and loss collision operators which had the exponent range misstated for the loss operator. We show here the correct range of exponents. We require a Lebesgue's exponent α>1 in the angular part of the collision kernel in order to obtain finiteness in some constants involved in the regularity and stability estimates. As a consequence the L p regularity associated to the Cauchy problem of the space inhomogeneous Boltzmann equation holds for a finite range of p≥1 explicitly determined.
Bacteriophages of Yersinia pestis.
Zhao, Xiangna; Skurnik, Mikael
2016-01-01
Bacteriophage play many varied roles in microbial ecology and evolution. This chapter collates a vast body of knowledge and expertise on Yersinia pestis phages, including the history of their isolation and classical methods for their isolation and identification. The genomic diversity of Y. pestis phage and bacteriophage islands in the Y. pestis genome are also discussed because all phage research represents a branch of genetics. In addition, our knowledge of the receptors that are recognized by Y. pestis phage, advances in phage therapy for Y. pestis infections, the application of phage in the detection of Y. pestis, and clustered regularly interspaced short palindromic repeats (CRISPRs) sequences of Y. pestis from prophage DNA are all reviewed here.
On the n-body problem on surfaces of revolution
NASA Astrophysics Data System (ADS)
Stoica, Cristina
2018-05-01
We explore the n-body problem, n ≥ 3, on a surface of revolution with a general interaction depending on the pairwise geodesic distance. Using the geometric methods of classical mechanics we determine a large set of properties. In particular, we show that Saari's conjecture fails on surfaces of revolution admitting a geodesic circle. We define homographic motions and, using the discrete symmetries, prove that when the masses are equal, they form an invariant manifold. On this manifold the dynamics are reducible to a one-degree of freedom system. We also find that for attractive interactions, regular n-gon shaped relative equilibria with trajectories located on geodesic circles typically experience a pitchfork bifurcation. Some applications are included.
Cuesta, D; Varela, M; Miró, P; Galdós, P; Abásolo, D; Hornero, R; Aboy, M
2007-07-01
Body temperature is a classical diagnostic tool for a number of diseases. However, it is usually employed as a plain binary classification function (febrile or not febrile), and therefore its diagnostic power has not been fully developed. In this paper, we describe how body temperature regularity can be used for diagnosis. Our proposed methodology is based on obtaining accurate long-term temperature recordings at high sampling frequencies and analyzing the temperature signal using a regularity metric (approximate entropy). In this study, we assessed our methodology using temperature registers acquired from patients with multiple organ failure admitted to an intensive care unit. Our results indicate there is a correlation between the patient's condition and the regularity of the body temperature. This finding enabled us to design a classifier for two outcomes (survival or death) and test it on a dataset including 36 subjects. The classifier achieved an accuracy of 72%.
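A minimal implementation of approximate entropy in Pincus' formulation, presumably close to the regularity metric used in the study; the embedding dimension m, the tolerance r and the test signals are conventional choices, not the authors' settings.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    N = x.size
    if r is None:
        r = 0.2 * np.std(x)          # a common default tolerance

    def phi(m):
        # All length-m templates, compared with the Chebyshev (max) distance.
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        count = np.zeros(len(emb))
        for i, template in enumerate(emb):
            d = np.max(np.abs(emb - template), axis=1)
            count[i] = np.mean(d <= r)           # includes the self-match, as in ApEn
        return np.mean(np.log(count))

    return phi(m) - phi(m + 1)

# A regular signal should score lower than an irregular one.
t = np.linspace(0, 20 * np.pi, 2000)
print(approximate_entropy(np.sin(t)))                                    # low ApEn
print(approximate_entropy(np.random.default_rng(3).normal(size=2000)))   # higher ApEn
```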
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
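For reference alongside the Banach-space variants surveyed above, a minimal Hilbert-space Landweber iteration with Morozov's discrepancy principle might look as follows; the blur operator, noise level and τ are illustrative assumptions, not taken from any of the contributed papers.

```python
import numpy as np

def landweber(A, y_delta, delta, tau=1.1, max_iter=5000):
    """Classical Landweber iteration for A x = y with noisy data, stopped by the
    discrepancy principle ||A x_k - y_delta|| <= tau * delta."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = A @ x - y_delta
        if np.linalg.norm(residual) <= tau * delta:
            break
        x = x - omega * (A.T @ residual)
    return x, k

# Mildly ill-posed test problem: a 1-D Gaussian blur with noise of known size delta.
rng = np.random.default_rng(4)
n = 60
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(np.linspace(0.0, np.pi, n))
noise = rng.normal(size=n)
delta = 1e-3
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)
x_hat, stop_index = landweber(A, y_delta, delta)
```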
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
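A small sketch of the rank-based route to a sparse dependence graph: Kendall's tau is mapped to a latent Gaussian correlation via the sine transform and fed to the graphical lasso. The function name, the shrinkage step and alpha are assumptions; this is not the authors' exact estimator or their stability-inference procedure.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

def nonparanormal_precision(returns, alpha=0.05):
    """Kendall's tau -> latent Gaussian correlation -> sparse precision matrix."""
    n, p = returns.shape
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(returns[:, i], returns[:, j])
            R[i, j] = R[j, i] = np.sin(0.5 * np.pi * tau)   # elliptical-copula link
    R = 0.95 * R + 0.05 * np.eye(p)        # mild shrinkage so the input is positive definite
    cov, prec = graphical_lasso(R, alpha=alpha)
    return prec

rng = np.random.default_rng(5)
X = rng.standard_t(df=5, size=(400, 12))   # heavy-tailed stand-in for returns
Theta = nonparanormal_precision(X)
# Pairs with Theta[i, j] == 0 are treated as conditionally independent.
```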
Chau, Thinh; Parsi, Kory K; Ogawa, Toru; Kiuru, Maija; Konia, Thomas; Li, Chin-Shang; Fung, Maxwell A
2017-12-01
Psoriasis is usually diagnosed clinically, so only non-classic or refractory cases tend to be biopsied. Diagnostic uncertainty persists when dermatopathologists encounter features regarded as non-classic for psoriasis. Define and document classic and non-classic histologic features in skin biopsies from patients with clinically confirmed psoriasis. Minimal clinical diagnostic criteria were informally validated and applied to a consecutive series of biopsies histologically consistent with psoriasis. Clinical confirmation required 2 of the following criteria: (1) classic morphology, (2) classic distribution, (3) nail pitting, and (4) family history, with #1 and/or #2 as 1 criterion in every case. Fifty-one biopsies from 46 patients were examined. Classic features of psoriasis included hypogranulosis (96%), club-shaped rete ridges (96%), dermal papilla capillary ectasia (90%), Munro microabscess (78%), suprapapillary plate thinning (63%), spongiform pustules (53%), and regular acanthosis (14%). Non-classic features included irregular acanthosis (84%), junctional vacuolar alteration (76%), spongiosis (76%), dermal neutrophils (69%), necrotic keratinocytes (67%), hypergranulosis (65%), neutrophilic spongiosis (61%), dermal eosinophils (49%), compact orthokeratosis (37%), papillary dermal fibrosis (35%), lichenoid infiltrate (25%), plasma cells (16%), and eosinophilic spongiosis (8%). Psoriasis exhibits a broader histopathologic spectrum. The presence of some non-classic features does not necessarily exclude the possibility of psoriasis. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Nonlinear Dynamics, Poor Data, and What to Make of Them?
NASA Astrophysics Data System (ADS)
Ghil, M.; Zaliapin, I. V.
2005-12-01
The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict variability in the geosciences. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this talk we will describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. These fall into two broad categories: (i) methods that try to ferret out regularities of the time series; and (ii) methods aimed at describing the characteristics of irregular processes. The former include singular-spectrum analysis (SSA), the multi-taper method (MTM), and the maximum-entropy method (MEM). The various steps, as well as the advantages and disadvantages of these methods, will be illustrated by their application to several important climatic time series, such as the Southern Oscillation Index (SOI), paleoclimatic time series, and instrumental temperature time series. The SOI index captures major features of interannual climate variability and is used extensively in its prediction. The other time series cover interdecadal and millennial time scales. The second category includes the calculation of fractional dimension, leading Lyapunov exponents, and Hurst exponents. More recently, multi-trend analysis (MTA), binary-decomposition analysis (BDA), and related methods have attempted to describe the structure of time series that include both regular and irregular components. Within the time available, I will try to give a feeling for how these methods work, and how well.
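A minimal sketch of one of the "regularity-seeking" methods named above, singular-spectrum analysis: embed the series in a trajectory matrix and inspect its singular spectrum. The window length and test signal are illustrative choices only.

```python
import numpy as np

def ssa_decompose(x, window):
    """Basic SSA: trajectory (Hankel) matrix, its SVD, and the variance fraction
    carried by each component."""
    x = np.asarray(x, dtype=float)
    K = x.size - window + 1
    trajectory = np.column_stack([x[i:i + window] for i in range(K)])  # window x K
    U, s, Vt = np.linalg.svd(trajectory, full_matrices=False)
    variance_fraction = s**2 / np.sum(s**2)
    return U, s, Vt, variance_fraction

# An oscillation buried in noise: the leading pair of singular values stands out.
t = np.arange(600)
x = np.sin(2 * np.pi * t / 50) + 0.8 * np.random.default_rng(6).normal(size=t.size)
U, s, Vt, frac = ssa_decompose(x, window=120)
print(frac[:6])
```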
Regular exercise during haemodialysis promotes an anti-inflammatory leucocyte profile
Dungey, Maurice; Young, Hannah M L; Churchward, Darren R; Burton, James O; Smith, Alice C
2017-01-01
Abstract Background Cardiovascular disease is the most common cause of mortality in haemodialysis (HD) patients and is highly predicted by markers of chronic inflammation. Regular exercise may have beneficial anti-inflammatory effects, but this is unclear in HD patients. This study assessed the effect of regular intradialytic exercise on soluble inflammatory factors and inflammatory leucocyte phenotypes. Methods Twenty-two HD patients from a centre where intradialytic cycling was offered thrice weekly and 16 HD patients receiving usual care volunteered. Exercising patients aimed to cycle for 30 min at rating of perceived exertion of ‘somewhat hard’. Baseline characteristics were compared with 16 healthy age-matched individuals. Physical function, soluble inflammatory markers and leucocyte phenotypes were assessed again after 6 months of regular exercise. Results Patients were less active than their healthy counterparts and had significant elevations in measures of inflammation [interleukin-6 (IL-6), C-reactive protein (CRP), tumour necrosis factor-α (TNF-α), intermediate and non-classical monocytes; all P < 0.001]. Six months of regular intradialytic exercise improved physical function (sit-to-stand 60). After 6 months, the proportion of intermediate monocytes in the exercising patients reduced compared with non-exercisers (7.58 ± 1.68% to 6.38 ± 1.81% versus 6.86 ± 1.45% to 7.88 ± 1.66%; P < 0.01). Numbers (but not proportion) of regulatory T cells decreased in the non-exercising patients only (P < 0.05). Training had no significant effect on circulating IL-6, CRP or TNF-α concentrations. Conclusions These findings suggest that regular intradialytic exercise is associated with an anti-inflammatory effect at a circulating cellular level but not in circulating cytokines. This may be protective against the increased risk of cardiovascular disease and mortality that is associated with chronic inflammation and elevated numbers of intermediate monocytes. PMID:29225811
Autism Spectrum Disorders (ASD) and Diet
... affects brain function, particularly in the areas of social interaction and communication skills. Classic symptoms include delayed talking, ... plenty of fluids and regular physical activity. Medication interactions. Some stimulant medications used with autism, such as Ritalin, ... on its usage.
ERIC Educational Resources Information Center
Robertson, Erin K.; Joanisse, Marc F.; Desroches, Amy S.; Terry, Alexandra
2013-01-01
The authors investigated past-tense morphology problems in children with dyslexia compared to those classically observed in children with oral language impairment (LI). Children were tested on a past-tense elicitation task involving regulars ("look-looked"), irregulars ("take-took"), and nonwords ("murn-murned").…
Signatures of chaos in the Brillouin zone.
Barr, Aaron; Barr, Ariel; Porter, Max D; Reichl, Linda E
2017-10-01
When the classical dynamics of a particle in a finite two-dimensional billiard undergoes a transition to chaos, the quantum dynamics of the particle also shows manifestations of chaos in the form of scarring of wave functions and changes in energy level spacing distributions. If we "tile" an infinite plane with such billiards, we find that the Bloch states on the lattice undergo avoided crossings, energy level spacing statistics change from Poisson-like to Wigner-like, and energy sheets of the Brillouin zone begin to "mix" as the classical dynamics of the billiard changes from regular to chaotic behavior.
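The change from Poisson-like to Wigner-like level-spacing statistics can be illustrated with a generic sketch (independent levels versus a GOE random matrix); this is only a qualitative stand-in for the billiard/Bloch-state computation, and a careful analysis would first unfold the spectrum.

```python
import numpy as np

def spacing_distribution(eigvals, bins=40):
    """Nearest-neighbour level spacings, normalised to unit mean spacing."""
    s = np.diff(np.sort(eigvals))
    s = s / s.mean()
    return np.histogram(s, bins=bins, range=(0.0, 4.0), density=True)

rng = np.random.default_rng(7)
n = 1000
# "Regular" analogue: independent (Poissonian) levels.
poisson_levels = np.cumsum(rng.exponential(size=n))
# "Chaotic" analogue: eigenvalues of a GOE random matrix (Wigner-like spacings).
M = rng.normal(size=(n, n)); H = (M + M.T) / 2.0
goe_levels = np.linalg.eigvalsh(H)[n // 4: 3 * n // 4]   # keep the bulk of the spectrum

p_hist, edges = spacing_distribution(poisson_levels)
w_hist, _ = spacing_distribution(goe_levels)
s_mid = 0.5 * (edges[:-1] + edges[1:])
wigner_surmise = (np.pi / 2.0) * s_mid * np.exp(-np.pi * s_mid**2 / 4.0)
# p_hist should resemble exp(-s); w_hist should roughly follow the Wigner surmise.
```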
Spectral/hp element methods: Recent developments, applications, and perspectives
NASA Astrophysics Data System (ADS)
Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.
2018-02-01
The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
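The exponential p-convergence mentioned above can be illustrated with a one-dimensional Legendre fit to a smooth function; this is only a p-refinement demo on a single element, not a spectral/hp element solver, and the test function is an arbitrary choice.

```python
import numpy as np
from numpy.polynomial import legendre

# p-refinement demo: discrete least-squares Legendre fit on [-1, 1].
# For analytic functions the maximum error drops roughly exponentially in p.
f = lambda x: np.exp(np.sin(3.0 * x))
x = np.linspace(-1.0, 1.0, 2001)

for p in range(2, 22, 4):
    coeffs = legendre.legfit(x, f(x), deg=p)
    err = np.max(np.abs(legendre.legval(x, coeffs) - f(x)))
    print(f"p = {p:2d}   max error = {err:.2e}")
```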
MIB Galerkin method for elliptic interface problems.
Xia, Kelin; Zhan, Meng; Wei, Guo-Wei
2014-12-15
Material interfaces are omnipresent in real-world structures and devices. Mathematical modeling of material interfaces often leads to elliptic partial differential equations (PDEs) with discontinuous coefficients and singular sources, which are commonly called elliptic interface problems. The development of high-order numerical schemes for elliptic interface problems has become a well-defined field in applied and computational mathematics and attracted much attention in the past decades. Despite significant advances, challenges remain in the construction of high-order schemes for nonsmooth interfaces, i.e., interfaces with geometric singularities, such as tips, cusps and sharp edges. The challenge of geometric singularities is amplified when they are associated with low solution regularities, e.g., tip-geometry effects in many fields. The present work introduces a matched interface and boundary (MIB) Galerkin method for solving two-dimensional (2D) elliptic PDEs with complex interfaces, geometric singularities and low solution regularities. Cartesian grid-based triangular elements are employed to avoid the time-consuming mesh generation procedure. Consequently, the interface cuts through elements. To ensure the continuity of classic basis functions across the interface, two sets of overlapping elements, called MIB elements, are defined near the interface. As a result, differentiation can be computed near the interface as if there is no interface. Interpolation functions are constructed on MIB element spaces to smoothly extend function values across the interface. A set of lowest order interface jump conditions is enforced on the interface, which, in turn, determines the interpolation functions. The performance of the proposed MIB Galerkin finite element method is validated by numerical experiments with a wide range of interface geometries, geometric singularities, low regularity solutions and grid resolutions. Extensive numerical studies confirm the designed second-order convergence of the MIB Galerkin method in the L∞ and L2 errors. Some of the best results are obtained in the present work when the interface is C1 or Lipschitz continuous and the solution is C2 continuous.
NASA Astrophysics Data System (ADS)
Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng
2014-06-01
The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
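A tiny closed-form instance of the exact-penalty path: projecting a point onto the nonnegative orthant. Negative coordinates move linearly with the penalty constant and lock onto the constraint at a finite value, which is the behaviour the article traces in general by numerically solving an ODE; the general path-following machinery is not reproduced here, and the test point is arbitrary.

```python
import numpy as np

def exact_penalty_path(y, rhos):
    """Path following for projection onto the nonnegative orthant via the exact
    penalty  min 0.5*||x - y||^2 + rho * sum(max(0, -x_i)).
    Each penalised problem has a closed-form, coordinate-wise solution."""
    y = np.asarray(y, dtype=float)
    path = []
    for rho in rhos:
        x = np.where(y >= 0.0, y, np.minimum(y + rho, 0.0))
        path.append(x)
    return np.array(path)

y = np.array([0.8, -0.3, -1.2, 0.1])
rhos = np.linspace(0.0, 1.5, 7)
path = exact_penalty_path(y, rhos)
# Negative coordinates slide linearly toward 0 and then stay there; once
# rho >= max(-y_i) the penalised solution equals the exact projection max(y, 0).
```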
Path Following in the Exact Penalty Method of Convex Programming
Zhou, Hua; Lange, Kenneth
2015-01-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
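A sketch of the randomized-decomposition idea using a standard randomized SVD (not the randomized GSVD, IRLS and TV machinery of the paper) inside a Tikhonov-filtered solve; the operator, rank and alpha are illustrative assumptions.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

def tikhonov_randomized(G, d, rank=50, alpha=1e-2):
    """Tikhonov-regularised least squares using a randomized truncated SVD of the
    forward operator as a cheap stand-in for a full decomposition."""
    U, s, Vt = randomized_svd(G, n_components=rank, random_state=0)
    filt = s / (s**2 + alpha)                     # filtered inverse of the singular values
    return Vt.T @ (filt * (U.T @ d))

rng = np.random.default_rng(8)
m, n = 2000, 1500
G = rng.normal(size=(m, n)) * (1.0 / (1.0 + np.arange(n)))   # rapidly decaying spectrum
x_true = np.zeros(n); x_true[100:120] = 1.0
d = G @ x_true + 1e-3 * rng.normal(size=m)
x_hat = tikhonov_randomized(G, d, rank=100, alpha=1e-3)
```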
Sutherland, Chris; Royle, Andy
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
Estimating abundance: Chapter 27
Royle, J. Andrew
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
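The simplest closed-population estimator covered by such chapters is the two-occasion Lincoln-Petersen estimate; a sketch with Chapman's bias correction follows, using made-up capture counts. Richer models (covariates, spatial capture-recapture) need the likelihood machinery of software such as MARK.

```python
import numpy as np

def chapman_estimate(n1, n2, m2):
    """Chapman's (bias-corrected Lincoln-Petersen) abundance estimator for a
    two-occasion closed-population capture-recapture study."""
    N_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return N_hat, np.sqrt(var)

# e.g. 45 animals marked on occasion 1, 52 caught on occasion 2, 17 of them marked.
N_hat, se = chapman_estimate(45, 52, 17)
print(f"Estimated abundance: {N_hat:.1f} (SE {se:.1f})")
```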
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
Computing quantum discord is NP-complete
NASA Astrophysics Data System (ADS)
Huang, Yichen
2014-03-01
We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.
Music Preferences and Civic Activism of Young People
ERIC Educational Resources Information Center
Leung, Ambrose; Kier, Cheryl
2008-01-01
This study examines the relationship between music preferences and civic activism among 182 participants aged 14-24 years. Our analyses show that participants who regularly listened to certain music genres such as classical, opera, musicals, new age, easy listening, house, world music, heavy metal, punk, and ska were significantly more likely to…
Observing campaign on 5 variables in Cygnus
NASA Astrophysics Data System (ADS)
Waagen, Elizabeth O.
2015-10-01
Dr. George Wallerstein (University of Washington) has requested AAVSO assistance in monitoring 5 variable stars in Cygnus now through December 2015. He is working to complete the radial velocity curves for these stars, and needs optical light curves for correlation with the spectra he will be obtaining. Wallerstein writes: "I need to know the time of max or min so I can assign a phase to each spectrum. Most classical Cepheids are quite regular so once a time of max or min can be established I can derive the phase of each observation even if my obs are several cycles away from the established max or min. MZ Cyg is a type II Cepheid and they are less regular than their type I cousins." SZ Cyg, X Cyg, VX Cyg, and TX Cyg are all classical Cepheids. V and visual observations are requested. These are long-period Cepheids, so nightly observations are sufficient. Finder charts with sequence may be created using the AAVSO Variable Star Plotter (https://www.aavso.org/vsp). Observations should be submitted to the AAVSO International Database. See full Alert Notice for more details.
Convex foundations for generalized MaxEnt models
NASA Astrophysics Data System (ADS)
Frongillo, Rafael; Reid, Mark D.
2014-12-01
We present an approach to maximum entropy models that highlights the convex geometry and duality of generalized exponential families (GEFs) and their connection to Bregman divergences. Using our framework, we are able to resolve a puzzling aspect of the bijection of Banerjee and coauthors between classical exponential families and what they call regular Bregman divergences. Their regularity condition rules out all but Bregman divergences generated from log-convex generators. We recover their bijection and show that a much broader class of divergences correspond to GEFs via two key observations: 1) Like classical exponential families, GEFs have a "cumulant" C whose subdifferential contains the mean: E_{o~p_θ}[φ(o)] ∈ ∂C(θ); 2) Generalized relative entropy is a C-Bregman divergence between parameters: D_F(p_θ, p_θ') = D_C(θ, θ'), where D_F becomes the KL divergence for F = -H. We also show that every incomplete market with cost function C can be expressed as a complete market, where the prices are constrained to be a GEF with cumulant C. This provides an entirely new interpretation of prediction markets, relating their design back to the principle of maximum entropy.
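A small numerical check of the Bregman-divergence connection: with the negative Shannon entropy as generator, the Bregman divergence between probability vectors reduces to the KL divergence. The generator and test vectors here are illustrative, not taken from the paper.

```python
import numpy as np

def bregman_divergence(F, grad_F, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# Negative Shannon entropy as generator: the divergence reduces to KL for
# probability vectors (same total mass).
F = lambda x: np.sum(x * np.log(x))
grad_F = lambda x: np.log(x) + 1.0

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
kl = np.sum(p * np.log(p / q))
print(bregman_divergence(F, grad_F, p, q), kl)   # the two numbers agree
```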
Shearlet-based regularization in sparse dynamic tomography
NASA Astrophysics Data System (ADS)
Bubba, T. A.; März, M.; Purisha, Z.; Lassas, M.; Siltanen, S.
2017-08-01
Classical tomographic imaging is soundly understood and widely employed in medicine, nondestructive testing and security applications. However, it still offers many challenges when it comes to dynamic tomography. Indeed, in classical tomography, the target is usually assumed to be stationary during the data acquisition, but this is not a realistic model. Moreover, to ensure a lower X-ray radiation dose, only a sparse collection of measurements per time step is assumed to be available. With such a setup, we deal with a sparse-data, dynamic tomography problem, which clearly calls for regularization due to the loss of information in the data and the ongoing motion. In this paper, we propose a 3D variational formulation based on 3D shearlets, where the third dimension accounts for the motion in time, to reconstruct a moving 2D object. Results are presented for real measured data and compared against a 2D static model, in the case of fan-beam geometry. Results are preliminary but show that better reconstructions can be achieved when motion is taken into account.
Classical Measurement Methods and Laser Scanning Usage in Shaft Hoist Assembly Inventory
NASA Astrophysics Data System (ADS)
Jaśkowski, Wojciech; Lipecki, Tomasz; Matwij, Wojciech; Jabłoński, Mateusz
2018-03-01
The shaft hoist assembly is the basis of an underground mining plant. Its efficiency and correct operation are subject to restrictive legal regulations and are checked daily by visual assessment carried out by the shaft crew and electromechanical staff. In addition, at regular intervals the shaft hoist assembly is subject to a thorough inventory, which includes determining the geometrical relationships between the hoisting machine, the headframe and the shaft with its housing. Inventory measurements of the shaft and headframe have for years been carried out with conventional geodetic methods, including mechanical or laser plumbing and tachymetric surveys. Precision levelling is additionally used for measuring the shafts of hoisting machines and rope pulleys. Continuous modernization of measuring technology makes it possible to apply further methods to the above purposes. The article compares the accuracy and economics of the measurements, drawing on many years of experience with comprehensive inventories of the shaft hoist assembly carried out using various surveying techniques.
NASA Astrophysics Data System (ADS)
Benfenati, A.; La Camera, A.; Carbillet, M.
2016-02-01
Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise seeks the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' positions are exactly known, this scheme provides very satisfactory results. In case of inexact knowledge of the sources' positions, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.
2017-12-01
Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
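As an illustration of the two-step structure described above, the following Python sketch alternates a continuous regularized least-squares update with a mapping onto a discrete facies model. It is a simplified stand-in: the forward model is taken to be linear, the feasibility mapping is plain thresholding rather than the paper's learned pattern-based projection, and all parameter values are assumptions.

```python
import numpy as np

def calibrate(d_obs, G, m0, facies_values=(0.0, 1.0), lam=1.0,
              n_outer=20, n_inner=50, step=1e-3):
    """Two-step alternating calibration sketch:
    1) continuous gradient steps on ||G m - d_obs||^2 + lam * ||m - m_feas||^2
    2) map m onto a 'feasible' discrete facies model (simple thresholding here,
       standing in for the paper's learned pattern-based mapping)."""
    m = m0.copy()
    m_feas = m0.copy()
    lo, hi = min(facies_values), max(facies_values)
    for _ in range(n_outer):
        # Step 1: continuous model calibration (linear forward model G assumed)
        for _ in range(n_inner):
            grad = G.T @ (G @ m - d_obs) + lam * (m - m_feas)
            m = m - step * grad
        # Step 2: feasibility mapping -- snap each cell to the nearest facies value
        m_feas = np.where(m > 0.5 * (lo + hi), hi, lo)
    return m_feas

# toy usage: 100-cell facies model, 30 noisy linear measurements
rng = np.random.default_rng(0)
G = rng.normal(size=(30, 100))
m_true = (rng.random(100) > 0.5).astype(float)
d_obs = G @ m_true + 0.01 * rng.normal(size=30)
m_est = calibrate(d_obs, G, m0=np.full(100, 0.5))
```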
A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation
NASA Astrophysics Data System (ADS)
Smith, David J.
2018-04-01
The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm, which predominantly uses basic linear algebra operations, is provided, with a full implementation available on GitHub. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
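For orientation, a commonly used regularized stokeslet kernel (a Cortez-type blob) and a basic Nyström-style solve are sketched below in Python; the paper's own code is Matlab/GNU Octave, so this is an independent illustrative sketch, and the blob parameter, point count, and drag check are assumptions.

```python
import numpy as np

def reg_stokeslet_matrix(x_eval, x_force, eps, mu=1.0):
    """Assemble the regularized-stokeslet matrix (Cortez-type blob) mapping point
    forces at x_force (M,3) to velocities at x_eval (N,3); returns a (3N, 3M) matrix.
    eps is the regularization (blob) parameter, mu the dynamic viscosity."""
    N, M = len(x_eval), len(x_force)
    A = np.zeros((3 * N, 3 * M))
    for n, xe in enumerate(x_eval):
        for m, xf in enumerate(x_force):
            dx = xe - xf
            r2 = dx @ dx
            denom = (r2 + eps**2) ** 1.5
            block = ((r2 + 2 * eps**2) * np.eye(3) + np.outer(dx, dx)) / denom
            A[3*n:3*n+3, 3*m:3*m+3] = block / (8 * np.pi * mu)
    return A

# toy usage: points on a unit sphere dragged with unit velocity along x
pts = np.random.default_rng(1).normal(size=(60, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
A = reg_stokeslet_matrix(pts, pts, eps=0.1)
u = np.tile([1.0, 0.0, 0.0], len(pts))
f = np.linalg.solve(A, u)                    # Nyström-style collocation for the point forces
drag = f.reshape(-1, 3).sum(axis=0)          # roughly comparable with Stokes drag 6*pi*mu*R*U
```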
Modern Pathologic Diagnosis of Renal Oncocytoma.
Wobker, Sara E; Williamson, Sean R
2017-01-01
Oncocytoma is a well-defined benign renal tumor, with classic gross and histologic features, including a tan or mahogany-colored mass with central scar, microscopic nested architecture, bland cytology, and round, regular nuclei with prominent central nucleoli. As a result of variations in this classic appearance, difficulty in standardizing diagnostic criteria, and entities that mimic oncocytoma, such as eosinophilic variant chromophobe renal cell carcinoma and succinate dehydrogenase-deficient renal cell carcinoma, pathologic diagnosis remains a challenge. This review addresses the current state of pathologic diagnosis of oncocytoma, with emphasis on modern diagnostic markers, areas of controversy, and emerging techniques for less invasive diagnosis, including renal mass biopsy and advanced imaging.
Multisymplectic Lagrangian and Hamiltonian Formalisms of Classical Field Theories
NASA Astrophysics Data System (ADS)
Román-Roy, Narciso
2009-11-01
This review paper is devoted to presenting the standard multisymplectic formulation for describing geometrically classical field theories, covering both the regular and singular cases. First, the main features of the Lagrangian formalism are revisited and, second, the Hamiltonian formalism is constructed using Hamiltonian sections. In both cases, the variational principles leading to the Euler-Lagrange and the Hamilton-De Donder-Weyl equations, respectively, are stated, and these field equations are given in different but equivalent geometrical ways in each formalism. Finally, both are unified in a new formulation (developed in recent years), following the original ideas of Rusk and Skinner for mechanical systems.
A Modified Sparse Representation Method for Facial Expression Recognition.
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we carry out research on a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on the Japanese JAFFE and CMU CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. The simulation results show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result.
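For reference, the sketch below implements plain orthogonal matching pursuit in Python, the baseline that the stagewise (stOMP) and dynamically regularized variants described above accelerate; the dictionary, sparsity level, and test signal are illustrative assumptions.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Plain orthogonal matching pursuit: greedily select dictionary atoms
    (columns of D) and re-fit the coefficients by least squares at each step."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# toy usage: recover a 3-sparse code from a random, column-normalized dictionary
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256); x_true[[5, 50, 200]] = [1.0, -2.0, 0.5]
x_hat = omp(D, D @ x_true, n_nonzero=3)
```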
An RBF-FD closest point method for solving PDEs on surfaces
NASA Astrophysics Data System (ADS)
Petras, A.; Ling, L.; Ruuth, S. J.
2018-10-01
Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.
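A minimal sketch of the explicit closest point method with standard finite differences (the baseline that the RBF-FD variant improves on) is given below for the heat equation on the unit circle; the grid spacing, time step, and use of linear interpolation for the closest point extension are assumptions, not the paper's choices.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Explicit closest point method for u_t = Laplace-Beltrami(u) on the unit circle,
# embedded in a 2D Cartesian grid and advanced with a standard 5-point Laplacian.
x = np.linspace(-2.0, 2.0, 81)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

R = np.sqrt(X**2 + Y**2)
R[R == 0] = 1.0                      # avoid division by zero at the origin
CPX, CPY = X / R, Y / R              # closest point on the unit circle
theta = np.arctan2(CPY, CPX)
U = np.cos(theta)                    # initial data, constant along normals

dt = 0.1 * h**2
t_final = 0.1
for _ in range(int(round(t_final / dt))):
    # standard 5-point Laplacian on interior grid points
    lap = np.zeros_like(U)
    lap[1:-1, 1:-1] = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
                       - 4.0 * U[1:-1, 1:-1]) / h**2
    U = U + dt * lap
    # closest point extension: re-sample the solution at each point's closest point
    interp = RegularGridInterpolator((x, x), U)
    U = interp(np.column_stack([CPX.ravel(), CPY.ravel()])).reshape(U.shape)

# exact on-circle solution is exp(-t) * cos(theta); check the error near the circle
band = np.abs(np.sqrt(X**2 + Y**2) - 1.0) < 0.2
err = np.max(np.abs(U[band] - np.exp(-t_final) * np.cos(theta[band])))
```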
A Modified Sparse Representation Method for Facial Expression Recognition
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we carry out research on a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on the Japanese JAFFE and CMU CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. The simulation results show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result. PMID:26880878
Geodesics in nonexpanding impulsive gravitational waves with Λ. II
NASA Astrophysics Data System (ADS)
Sämann, Clemens; Steinbauer, Roland
2017-11-01
We investigate all geodesics in the entire class of nonexpanding impulsive gravitational waves propagating in an (anti-)de Sitter universe using the distributional metric. We extend the regularization approach of part I [Sämann, C. et al., Classical Quantum Gravity 33(11), 115002 (2016)] to a full nonlinear distributional analysis within the geometric theory of generalized functions. We prove global existence and uniqueness of geodesics that cross the impulsive wave and hence geodesic completeness in full generality for this class of low regularity spacetimes. This, in particular, prepares the ground for a mathematically rigorous account on the "physical equivalence" of the continuous form with the distributional "form" of the metric.
The gravitational potential of axially symmetric bodies from a regularized green kernel
NASA Astrophysics Data System (ADS)
Trova, A.; Huré, J.-M.; Hersant, F.
2011-12-01
The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
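A hedged sketch of the covariance-eigendecomposition idea follows: build a covariance matrix from an assumed exponential correlation model over irregular cell centres and factor it into a regularization operator W with W^T W = C^{-1}; the correlation model, range, and domain size are placeholders, not the paper's settings.

```python
import numpy as np

def geostat_operator(centers, corr_range, var=1.0):
    """Build a regularization operator W from an (assumed) exponential covariance
    model over irregular cell centers, via eigendecomposition of the covariance:
    C = V diag(lam) V^T  =>  W = diag(lam^-1/2) V^T, so that ||W m||^2 = m^T C^-1 m."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    C = var * np.exp(-d / corr_range)          # exponential correlation model
    lam, V = np.linalg.eigh(C)                 # C is symmetric positive definite
    return (V / np.sqrt(lam)).T                # columns scaled by lam^{-1/2}, then transposed

# toy usage: 500 irregular cell centres in a 100 m x 50 m model domain
rng = np.random.default_rng(0)
centers = rng.random((500, 2)) * np.array([100.0, 50.0])
W = geostat_operator(centers, corr_range=20.0)
# W would enter the inverse problem through a smoothing term like ||W (m - m_ref)||^2
```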
Xu, Tiantian; Feng, Yuanjing; Wu, Ye; Zeng, Qingrun; Zhang, Jun; He, Jianzhong; Zhuge, Qichuan
2017-01-01
Diffusion-weighted magnetic resonance imaging is a non-invasive imaging method that has been increasingly used in neuroscience imaging over the last decade. Partial volume effects (PVEs) exist in the sampled signal for many physical and practical reasons, and they lead to inaccurate fiber imaging. We overcome the influence of PVEs by separating the isotropic signal from the diffusion-weighted signal, which provides a more accurate estimation of fiber orientations. In this work, we use a novel response function (RF) and the corresponding fiber orientation distribution function (fODF) to construct different signal models, in which the fODF is represented using dictionary basis functions. We then put forward a new index, Piso, which is a part of the fODF, to quantify white and gray matter. The classic Richardson-Lucy (RL) model is usually used in the field of digital image processing to solve the spherical deconvolution problem, which is highly ill-posed for least-squares algorithms. Here, we propose an innovative model integrating the RL model with spatial regularization to solve the proposed dual signal models, which improves noise resistance and imaging accuracy. Experimental results on simulated and real data show that the proposed method, which we call iRL, can robustly reconstruct a more accurate fODF, and that the quantitative index Piso performs better than fractional anisotropy and general fractional anisotropy.
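For context, the classic Richardson-Lucy multiplicative update that iRL builds on can be sketched in a few lines of Python; the response matrix, initialization, and iteration count below are illustrative assumptions, and the spatial regularization of the proposed iRL method is not included.

```python
import numpy as np

def richardson_lucy(A, y, n_iter=200, eps=1e-10):
    """Classic Richardson-Lucy update for y ~ A @ f with nonnegative f:
    f <- f * (A^T (y / (A f))) / (A^T 1)."""
    f = np.full(A.shape[1], y.mean() / A.shape[1] + eps)
    ones = A.T @ np.ones_like(y)
    for _ in range(n_iter):
        ratio = y / (A @ f + eps)
        f *= (A.T @ ratio) / (ones + eps)
    return f

# toy usage: deconvolve a sparse fODF-like vector from a smooth response matrix
rng = np.random.default_rng(0)
angles = np.linspace(0, np.pi, 64)
A = np.exp(-((angles[:, None] - angles[None, :]) ** 2) / 0.05)   # smooth stand-in "response"
f_true = np.zeros(64); f_true[[10, 40]] = [1.0, 0.7]
y = A @ f_true
f_hat = richardson_lucy(A, y)
```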
Feng, Yuanjing; Wu, Ye; Zeng, Qingrun; Zhang, Jun; He, Jianzhong; Zhuge, Qichuan
2017-01-01
Diffusion-weighted magnetic resonance imaging is a non-invasive imaging method that has been increasingly used in neuroscience imaging over the last decade. Partial volume effects (PVEs) exist in the sampled signal for many physical and practical reasons, and they lead to inaccurate fiber imaging. We overcome the influence of PVEs by separating the isotropic signal from the diffusion-weighted signal, which provides a more accurate estimation of fiber orientations. In this work, we use a novel response function (RF) and the corresponding fiber orientation distribution function (fODF) to construct different signal models, in which the fODF is represented using dictionary basis functions. We then put forward a new index, Piso, which is a part of the fODF, to quantify white and gray matter. The classic Richardson-Lucy (RL) model is usually used in the field of digital image processing to solve the spherical deconvolution problem, which is highly ill-posed for least-squares algorithms. Here, we propose an innovative model integrating the RL model with spatial regularization to solve the proposed dual signal models, which improves noise resistance and imaging accuracy. Experimental results on simulated and real data show that the proposed method, which we call iRL, can robustly reconstruct a more accurate fODF, and that the quantitative index Piso performs better than fractional anisotropy and general fractional anisotropy. PMID:28081561
Car and Motorcycle Show Brings “Gearheads” and Fans Together | Poster
By Carolynne Keenan, Contributing Writer On Sept. 24, the Building 549 parking lot was full of cars; however, unlike any regular work day, the spaces were filled with a variety of classic cars, street rods, motorcycles, and unique modern cars for display in the first car and motorcycle show hosted at NCI at Frederick.
Measurement of "g" Using a Flashing LED
ERIC Educational Resources Information Center
Terzella, T.; Sundermier, J.; Sinacore, J.; Owen, C.; Takai, H.
2008-01-01
In one of the classic free-fall experiments, a small mass is attached to a strip of paper tape and both are allowed to fall through a spark timer, where sparks are generated at regular time intervals. Students analyze marks (dots) left on the tape by the timer, thereby generating distance-versus-time data, which they analyze to extract the…
Sensor network based solar forecasting using a local vector autoregressive ridge framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Yoo, S.; Heiser, J.
2016-04-04
The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud-induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy, but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models on a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions, and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
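A minimal Python sketch of the core idea, fitting a ridge-regularized vector autoregressive model on a local training window of sensor data and issuing a one-step-ahead forecast, is given below; the lag order, window length, ridge strength, and synthetic data are assumptions, not the authors' configuration.

```python
import numpy as np

def fit_var_ridge(X, p=2, lam=1.0):
    """Fit a VAR(p) model x_t ~ sum_k A_k x_{t-k} with ridge-regularized least
    squares on a local training window X of shape (T, n_sensors)."""
    T, n = X.shape
    Z = np.hstack([X[p - k - 1: T - k - 1] for k in range(p)])   # lagged regressors
    Y = X[p:]
    A = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)
    return A                                                     # shape (n*p, n)

def forecast(A, history, p=2):
    z = np.hstack([history[-k - 1] for k in range(p)])           # most recent p observations
    return z @ A

# toy usage: 25 sensors, 60-step local training window, one-step-ahead forecast
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(60, 25)), axis=0)   # stand-in for normalized irradiance series
A = fit_var_ridge(X, p=2, lam=10.0)
x_next = forecast(A, X, p=2)
```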
A Path Algorithm for Constrained Estimation
Zhou, Hua; Lange, Kenneth
2013-01-01
Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
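The contrast between the quadratic and exact penalties can be seen on a small one-dimensional example (not from the article): minimize (x + 2)² subject to x ≥ 0, whose constrained solution is x = 0. The sketch below evaluates both penalized objectives on a grid and shows that the exact penalty already returns x = 0 at a finite penalty constant (ρ = 4 here), whereas the quadratic penalty only approaches it as ρ → ∞.

```python
import numpy as np

xs = np.linspace(-3.0, 1.0, 400001)

def quad_pen(x, rho):   # classical quadratic penalty for the constraint x >= 0
    return (x + 2.0) ** 2 + rho * np.maximum(0.0, -x) ** 2

def exact_pen(x, rho):  # exact (absolute-value) penalty for the same constraint
    return (x + 2.0) ** 2 + rho * np.maximum(0.0, -x)

for rho in [1.0, 4.0, 16.0, 64.0]:
    x_quad = xs[np.argmin(quad_pen(xs, rho))]
    x_exact = xs[np.argmin(exact_pen(xs, rho))]
    print(rho, x_quad, x_exact)
# The quadratic-penalty minimizer -2/(1 + rho) only tends to the constrained
# solution x = 0 as rho grows, while the exact penalty hits x = 0 already at rho = 4.
```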
Genome Engineering with TALE and CRISPR Systems in Neuroscience
Lee, Han B.; Sundberg, Brynn N.; Sigafoos, Ashley N.; Clark, Karl J.
2016-01-01
Recent advancement in genome engineering technology is changing the landscape of biological research and providing neuroscientists with an opportunity to develop new methodologies to ask critical research questions. This advancement is highlighted by the increased use of programmable DNA-binding agents (PDBAs) such as transcription activator-like effector (TALE) and RNA-guided clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR associated (Cas) systems. These PDBAs fused or co-expressed with various effector domains allow precise modification of genomic sequences and gene expression levels. These technologies mirror and extend beyond classic gene targeting methods contributing to the development of novel tools for basic and clinical neuroscience. In this Review, we discuss the recent development in genome engineering and potential applications of this technology in the field of neuroscience. PMID:27092173
Genome Engineering with TALE and CRISPR Systems in Neuroscience.
Lee, Han B; Sundberg, Brynn N; Sigafoos, Ashley N; Clark, Karl J
2016-01-01
Recent advancement in genome engineering technology is changing the landscape of biological research and providing neuroscientists with an opportunity to develop new methodologies to ask critical research questions. This advancement is highlighted by the increased use of programmable DNA-binding agents (PDBAs) such as transcription activator-like effector (TALE) and RNA-guided clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR associated (Cas) systems. These PDBAs fused or co-expressed with various effector domains allow precise modification of genomic sequences and gene expression levels. These technologies mirror and extend beyond classic gene targeting methods contributing to the development of novel tools for basic and clinical neuroscience. In this Review, we discuss the recent development in genome engineering and potential applications of this technology in the field of neuroscience.
NASA Astrophysics Data System (ADS)
Liu, Cheng-Wei
Phase transitions and their associated critical phenomena are of fundamental importance and play a crucial role in the development of statistical physics for both classical and quantum systems. Phase transitions embody diverse aspects of physics and also have numerous applications outside physics, e.g., in chemistry, biology, and combinatorial optimization problems in computer science. Many problems can be reduced to a system consisting of a large number of interacting agents, which under some circumstances (e.g., changes of external parameters) exhibit collective behavior; this type of scenario also underlies phase transitions. The theoretical understanding of equilibrium phase transitions was put on a solid footing with the establishment of the renormalization group. In contrast, non-equilibrium phase transitions are relatively less understood and currently a very active research topic. One important milestone here is the Kibble-Zurek (KZ) mechanism, which provides a useful framework for describing a system with a transition point approached through a non-equilibrium quench process. I developed two efficient Monte Carlo techniques for studying phase transitions, one for classical phase transitions and the other for quantum phase transitions, both within the framework of KZ scaling. For classical phase transitions, I develop a non-equilibrium quench (NEQ) simulation that can completely avoid the critical slowing down problem. For quantum phase transitions, I develop a new algorithm, named the quasi-adiabatic quantum Monte Carlo (QAQMC) algorithm, for studying quantum quenches. I demonstrate the utility of QAQMC on the quantum Ising model and obtain high-precision results at the transition point, in particular showing generalized dynamic scaling in the quantum system. To further extend the methods, I study more complex systems such as spin glasses and random graphs. The techniques allow us to investigate these problems efficiently. From the classical perspective, using the NEQ approach I verify the universality class of 3D Ising spin glasses. I also investigate random 3-regular graphs in terms of both classical and quantum phase transitions. I demonstrate that under this simulation scheme, one can extract information associated with the classical and quantum spin-glass transitions without any knowledge prior to the simulation.
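As a toy illustration of a non-equilibrium quench (not the dissertation's NEQ or QAQMC algorithms), the Python sketch below ramps the temperature of a small 2D Ising model linearly through its critical point while running Metropolis sweeps; the lattice size, ramp endpoints, and sweep counts are arbitrary assumptions.

```python
import numpy as np

def metropolis_quench(L=24, T_start=3.5, T_end=1.5, n_sweeps=1000, seed=0):
    """Linear temperature ramp through the 2D Ising critical point (Tc ~ 2.269),
    with one Metropolis sweep per temperature step (a crude KZ-style quench)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    temps = np.linspace(T_start, T_end, n_sweeps)
    mags = []
    for T in temps:
        for _ in range(L * L):                        # one sweep = L^2 single-spin updates
            i, j = rng.integers(L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2.0 * s[i, j] * nb                   # energy change of flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
        mags.append(abs(s.mean()))
    return temps, np.array(mags)

temps, mags = metropolis_quench()
# Slower ramps (larger n_sweeps) let the system order more fully below Tc; the residual
# disorder as a function of ramp rate is the kind of KZ scaling probed in the text.
```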
NASA Astrophysics Data System (ADS)
Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.
2015-10-01
In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
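For comparison purposes, the traditional generalized cross-validation (GCV) selection that the proposed MRM criterion is benchmarked against can be sketched as follows for a generic linear Tikhonov problem; the test kernel, noise level, and lambda grid are illustrative assumptions, and the MRM criterion itself is not reproduced here.

```python
import numpy as np

def gcv_tikhonov(A, y, lambdas):
    """Generalized cross-validation for Tikhonov regularization of A x ~ y:
    choose the lambda minimizing  m * ||(I - H) y||^2 / trace(I - H)^2,
    where H = A (A^T A + lambda I)^{-1} A^T is the influence matrix."""
    m = A.shape[0]
    scores = []
    for lam in lambdas:
        H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
        r = (np.eye(m) - H) @ y
        scores.append(m * (r @ r) / np.trace(np.eye(m) - H) ** 2)
    return lambdas[int(np.argmin(scores))], np.array(scores)

# toy ill-posed problem: smooth kernel, smooth truth, small additive noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 80)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)
x_true = np.sin(2 * np.pi * t)
y = A @ x_true + 0.01 * rng.normal(size=80)
lam_best, _ = gcv_tikhonov(A, y, np.logspace(-8, 0, 30))
x_rec = np.linalg.solve(A.T @ A + lam_best * np.eye(80), A.T @ y)
```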
Minimal residual method provides optimal regularization parameter for diffuse optical tomography.
Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
Finsler Geometry of Nonlinear Elastic Solids with Internal Structure
2017-01-01
should enable regularized numerical solutions with discretization-size independence for representation of materials demonstrating softening, e.g. ... additional possibility of a discrete larger void/cavity forming at the core of the sphere. In the second case, comparison with the classical ... core of the domain. This hollow sphere physically represents a discrete cavity, while the constant field ξH physically represents a continuous
USDA-ARS?s Scientific Manuscript database
Two sets of nonwoven fabrics of nominal 80 g/m2 density were produced on commercial equipment, using two distinctly different forms of greige cotton lint. One was a regular cotton taken from a randomly picked classical bale and the other was a uniquely pre-cleaned UltraCleanTM cotton produced by a w...
On the dynamical and geometrical symmetries of Keplerian motion
NASA Astrophysics Data System (ADS)
Wulfman, Carl E.
2009-05-01
The dynamical symmetries of classical, relativistic and quantum-mechanical Kepler systems are considered to arise from geometric symmetries in PQET phase space. To establish their interconnection, the symmetries are related with the aid of a Lie-algebraic extension of Dirac's correspondence principle, a canonical transformation containing a Cunningham-Bateman inversion, and a classical limit involving a preliminary canonical transformation in ET space. The Lie-algebraic extension establishes the conditions under which the uncertainty principle allows the local dynamical symmetry of a quantum-mechanical system to be the same as the geometrical phase-space symmetry of its classical counterpart. The canonical transformation converts Poincaré-invariant free-particle systems into ISO(3,1) invariant relativistic systems whose classical limit produces Keplerian systems. Locally Cartesian relativistic PQET coordinates are converted into a set of eight conjugate position and momentum coordinates whose classical limit contains Fock projective momentum coordinates and the components of Runge-Lenz vectors. The coordinate systems developed via the transformations are those in which the evolution and degeneracy groups of the classical system are generated by Poisson-bracket operators that produce ordinary rotation, translation and hyperbolic motions in phase space. The way in which these define classical Keplerian symmetries and symmetry coordinates is detailed. It is shown that for each value of the energy of a Keplerian system, the Poisson-bracket operators determine two invariant functions of positions and momenta, which together with its regularized Hamiltonian, define the manifold in six-dimensional phase space upon which motions evolve.
NASA Astrophysics Data System (ADS)
Zhou, Hang
Quantum walks are the quantum mechanical analogue of classical random walks. Discrete-time quantum walks have been introduced and studied mostly on the line Z or higher-dimensional spaces Z^d, but rarely defined on graphs with fractal dimensions, because the coin operator depends on the position and the Fourier transform on fractals is not defined. Inspired by the nature of classical walks, different quantum walks will be defined by choosing different shift and coin operators. When the coin operator is uniform, the results of classical walks will be obtained upon measurement at each step. Moreover, with measurement at each step, our results reveal more information about the classical random walks. In this dissertation, two graphs with fractal dimensions will be considered. The first one is the Sierpinski gasket, a degree-4 regular graph with Hausdorff dimension d_f = ln 3 / ln 2. The second is the Cantor graph, derived like the Cantor set, with Hausdorff dimension d_f = ln 2 / ln 3. The definitions and amplitude functions of the quantum walks will be introduced. The main part of this dissertation is to derive a recursive formula to compute the amplitude Green function. The exiting probability will be computed and compared with the classical results. When the generation of the graphs goes to infinity, the recursion of the walks will be investigated and the convergence rates will be obtained and compared with the classical counterparts.
Estimating Tree Height-Diameter Models with the Bayesian Method
Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
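A minimal random-walk Metropolis sketch of the Bayesian approach is given below for one (assumed) Weibull-type height-diameter form, H = 1.3 + a(1 - exp(-b·D^c)) with Gaussian errors; the priors, proposal scales, and simulated data are illustrative assumptions rather than the study's specification.

```python
import numpy as np

def log_post(theta, D, H):
    """Log-posterior for an (assumed) Weibull-type height-diameter model
    H = 1.3 + a*(1 - exp(-b*D**c)) + eps, eps ~ N(0, sigma^2),
    with weakly informative normal priors on (a, b, c) and a flat prior on log(sigma)."""
    a, b, c, log_sigma = theta
    if a <= 0 or b <= 0 or c <= 0:
        return -np.inf
    sigma = np.exp(log_sigma)
    mu = 1.3 + a * (1.0 - np.exp(-b * D ** c))
    loglik = -0.5 * np.sum(((H - mu) / sigma) ** 2) - len(H) * np.log(sigma)
    logprior = -0.5 * ((a - 25) / 20) ** 2 - 0.5 * (b / 1.0) ** 2 - 0.5 * ((c - 1) / 1.0) ** 2
    return loglik + logprior

def metropolis(D, H, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([20.0, 0.05, 1.0, 0.0])
    lp = log_post(theta, D, H)
    samples = []
    scales = np.array([0.5, 0.005, 0.05, 0.05])      # per-parameter proposal scales
    for _ in range(n_iter):
        prop = theta + scales * rng.normal(size=4)
        lp_prop = log_post(prop, D, H)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples[n_iter // 2:])           # discard burn-in

# toy usage with simulated diameters (cm) and heights (m)
rng = np.random.default_rng(1)
D = rng.uniform(5, 40, 200)
H = 1.3 + 22 * (1 - np.exp(-0.07 * D)) + rng.normal(0, 1.0, 200)
post = metropolis(D, H)
```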
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
Classical methods and modern analysis for studying fungal diversity
John Paul Schmit
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Classical Methods and Modern Analysis for Studying Fungal Diversity
J. P. Schmit; D. J. Lodge
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
A spatially adaptive total variation regularization method for electrical resistance tomography
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2015-12-01
The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
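A hedged sketch of the difference-curvature indicator and an adaptive edge/flat weight is given below; the mapping from the indicator to a blending weight between a TV-like term and a first-order Tikhonov-like term is an assumption for illustration, not the paper's exact SATV scheme.

```python
import numpy as np

def difference_curvature(u, eps=1e-8):
    """Difference-curvature indicator D = | |u_nn| - |u_tt| |, which is large at
    edges and small in flat or noisy regions (derivatives via np.gradient)."""
    uy, ux = np.gradient(u)          # gradients along rows (y) and columns (x)
    uyy, uxy = np.gradient(uy)
    _, uxx = np.gradient(ux)
    g2 = ux**2 + uy**2 + eps
    u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2   # along the gradient
    u_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2   # along the level line
    return np.abs(np.abs(u_nn) - np.abs(u_tt))

def adaptive_weight(u, k=0.5):
    """Map the indicator to a weight w in (0, 1): w -> 1 near edges (TV-like term
    dominates), w -> 0 in flat regions (first-order Tikhonov term dominates).
    The blended regularizer  w*TV(u) + (1 - w)*||grad u||^2  is an assumed form."""
    D = difference_curvature(u)
    return D / (D + k * D.mean() + 1e-12)

# toy usage on a piecewise-constant image with additive noise
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.05 * np.random.default_rng(0).normal(size=img.shape)
w = adaptive_weight(img)
```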
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability; while the second part performs a quantum search in such restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by $O(\sqrt{2^{n-M'}})$ and $O(2^{n-M'})$, respectively, where $n$ is the number of variables and $M'$ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by $O(2^{n/4})$ for 3-regular undirected graphs, where $n$ is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by $O(2^{31n/96})$. Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
New fundamental evidence of non-classical structure in the combination of natural concepts.
Aerts, D; Sozzo, S; Veloz, T
2016-01-13
We recently performed cognitive experiments on conjunctions and negations of two concepts with the aim of investigating the combination problem of concepts. Our experiments confirmed the deviations (conceptual vagueness, underextension, overextension etc.) from the rules of classical (fuzzy) logic and probability theory observed by several scholars in concept theory, while our data were successfully modelled in a quantum-theoretic framework developed by ourselves. In this paper, we isolate a new, very stable and systematic pattern of violation of classicality that occurs in concept combinations. In addition, the strength and regularity of this non-classical effect leads us to believe that it occurs at a more fundamental level than the deviations observed up to now. It is our opinion that we have identified a deep non-classical mechanism determining not only how concepts are combined but, rather, how they are formed. We show that this effect can be faithfully modelled in a two-sector Fock space structure, and that it can be exactly explained by assuming that human thought is the superposition of two processes, a 'logical reasoning', guided by 'logic', and a 'conceptual reasoning', guided by 'emergence', and that the latter generally prevails over the former. All these findings provide new fundamental support to our quantum-theoretic approach to human cognition. © 2015 The Author(s).
Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation
NASA Technical Reports Server (NTRS)
Liandrat, J.; Tchamitchian, PH.
1990-01-01
The Burgers equation with a small viscosity term, initial and periodic boundary conditions is resolved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms. Before the numerical algorithm is described these notions are first recalled. The method uses extensively the localization properties of the wavelets in the physical and Fourier spaces. Moreover, the authors take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered as a time marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows one to reduce in a significant way the number of degrees of freedom required for a good computation of the solution. Numerical results and description of the different elements of the algorithm are provided in combination with different mathematical comments on the method and some comparison with more classical numerical algorithms.
Quantum Enhanced Inference in Markov Logic Networks
NASA Astrophysics Data System (ADS)
Wittek, Peter; Gogolin, Christian
2017-04-01
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
Belhaj, Khaoula; Chaparro-Garcia, Angela; Kamoun, Sophien; Nekrasov, Vladimir
2013-10-11
Targeted genome engineering (also known as genome editing) has emerged as an alternative to classical plant breeding and transgenic (GMO) methods to improve crop plants. Until recently, available tools for introducing site-specific double strand DNA breaks were restricted to zinc finger nucleases (ZFNs) and TAL effector nucleases (TALENs). However, these technologies have not been widely adopted by the plant research community due to complicated design and laborious assembly of specific DNA binding proteins for each target gene. Recently, an easier method has emerged based on the bacterial type II CRISPR (clustered regularly interspaced short palindromic repeats)/Cas (CRISPR-associated) immune system. The CRISPR/Cas system allows targeted cleavage of genomic DNA guided by a customizable small noncoding RNA, resulting in gene modifications by both non-homologous end joining (NHEJ) and homology-directed repair (HDR) mechanisms. In this review we summarize and discuss recent applications of the CRISPR/Cas technology in plants.
Explicit resolutions for the complex of several Fueter operators
NASA Astrophysics Data System (ADS)
Bureš, Jarolim; Damiano, Alberto; Sabadini, Irene
2007-02-01
An analogue of the Dolbeault complex is introduced for regular functions of several quaternionic variables and studied by means of two different methods. The first one comes from algebraic analysis (for a thorough treatment see the book [F. Colombo, I. Sabadini, F. Sommen, D.C. Struppa, Analysis of Dirac systems and computational algebra, Progress in Mathematical Physics, Vol. 39, Birkhäuser, Boston, 2004]), while the other one relies on the symmetry of the equations and the methods of representation theory (see [F. Colombo, V. Souček, D.C. Struppa, Invariant resolutions for several Fueter operators, J. Geom. Phys. 56 (2006) 1175-1191; R.J. Baston, Quaternionic Complexes, J. Geom. Phys. 8 (1992) 29-52]). The comparison of the two results allows one to describe the operators appearing in the complex in an explicit form. This description leads to a duality theorem which is the generalization of the classical Martineau-Harvey theorem and which is related to hyperfunctions of several quaternionic variables.
Quantum Enhanced Inference in Markov Logic Networks.
Wittek, Peter; Gogolin, Christian
2017-04-19
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
Quantum Enhanced Inference in Markov Logic Networks
Wittek, Peter; Gogolin, Christian
2017-01-01
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning. PMID:28422093
An Excel‐based implementation of the spectral method of action potential alternans analysis
Pearman, Charles M.
2014-01-01
Abstract Action potential (AP) alternans has been well established as a mechanism of arrhythmogenesis and sudden cardiac death. Proper interpretation of AP alternans requires a robust method of alternans quantification. Traditional methods of alternans analysis neglect higher order periodicities that may have greater pro‐arrhythmic potential than classical 2:1 alternans. The spectral method of alternans analysis, already widely used in the related study of microvolt T‐wave alternans, has also been used to study AP alternans. Software to meet the specific needs of AP alternans analysis is not currently available in the public domain. An AP analysis tool is implemented here, written in Visual Basic for Applications and using Microsoft Excel as a shell. This performs a sophisticated analysis of alternans behavior allowing reliable distinction of alternans from random fluctuations, quantification of alternans magnitude, and identification of which phases of the AP are most affected. In addition, the spectral method has been adapted to allow detection and quantification of higher order regular oscillations. Analysis of action potential morphology is also performed. A simple user interface enables easy import, analysis, and export of collated results. PMID:25501439
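A simplified Python counterpart of the spectral calculation (the article's tool itself is written in Excel/VBA) is sketched below: it takes a beat-to-beat series, computes its power spectrum, reads off the alternans power at 0.5 cycles/beat, and forms a k-score against a noise band; the noise-band limits and significance threshold are assumptions.

```python
import numpy as np

def alternans_kscore(beat_series, noise_band=(0.33, 0.48)):
    """Spectral alternans analysis of a beat-to-beat series (e.g., APD per beat):
    power spectrum of the mean-subtracted series, alternans power at 0.5 cycles/beat,
    and k-score = (alternans power - mean noise power) / std of noise power."""
    x = np.asarray(beat_series, dtype=float)
    x = x - x.mean()
    n = len(x)
    power = np.abs(np.fft.rfft(x)) ** 2 / n
    freq = np.fft.rfftfreq(n, d=1.0)                 # frequency in cycles per beat
    alt_power = power[np.argmin(np.abs(freq - 0.5))]
    noise = power[(freq >= noise_band[0]) & (freq < noise_band[1])]
    k = (alt_power - noise.mean()) / (noise.std() + 1e-12)
    return k, alt_power

# toy usage: 128 beats of APD with a small 2:1 alternating component plus noise
rng = np.random.default_rng(0)
beats = np.arange(128)
apd = 200 + 2.0 * (-1.0) ** beats + rng.normal(0, 1.0, 128)
k, alt = alternans_kscore(apd)   # a k-score well above ~3 would flag significant alternans
```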
Tonelli, Paul; Mouret, Jean-Baptiste
2013-01-01
A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, networks that are the more regular are statistically those that have the best learning abilities. PMID:24236099
Distillation of secret-key from a class of compound memoryless quantum sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boche, H., E-mail: boche@tum.de; Janßen, G., E-mail: gisbert.janssen@tum.de
We consider secret-key distillation from tripartite compound classical-quantum-quantum (cqq) sources with free forward public communication under a strong security criterion. We design protocols which are universally reliable and secure in this scenario. These are shown to achieve asymptotically optimal rates as long as a certain regularity condition is fulfilled by the set of its generating density matrices. We derive a multi-letter formula which describes the optimal forward secret-key capacity for all compound cqq sources being regular in this sense. We also determine the forward secret-key distillation capacity for situations where the legitimate sending party has perfect knowledge of his/her marginal state deriving from the source statistics. In this case regularity conditions can be dropped. Our results show that the capacities with and without the mentioned kind of state knowledge are equal as long as the source is generated by a regular set of density matrices. We demonstrate that regularity of cqq sources is not only a technical but also an operational issue. For this reason, we give an example of a source which has zero secret-key distillation capacity without sender knowledge, while achieving positive rates is possible if sender marginal knowledge is provided.
The Impact of Deployment Separation on Army Families
1984-08-01
Brief family separations resulting from military training exercises are a common phenomenon in Army communities... there have been numerous studies of military family separation beginning with Hill's (1949) classic study of military-induced separation during... were increased reports of headaches, weight change, sleep disturbances, and changes in menstrual regularity, specifically amenorrhea (cessation of
Tensor calculus in polar coordinates using Jacobi polynomials
NASA Astrophysics Data System (ADS)
Vasil, Geoffrey M.; Burns, Keaton J.; Lecoanet, Daniel; Olver, Sheehan; Brown, Benjamin P.; Oishi, Jeffrey S.
2016-11-01
Spectral methods are an efficient way to solve partial differential equations on domains possessing certain symmetries. The utility of a method depends strongly on the choice of spectral basis. In this paper we describe a set of bases built out of Jacobi polynomials, and associated operators for solving scalar, vector, and tensor partial differential equations in polar coordinates on a unit disk. By construction, the bases satisfy regularity conditions at r = 0 for any tensorial field. The coordinate singularity in a disk is a prototypical case for many coordinate singularities. The work presented here extends to other geometries. The operators represent covariant derivatives, multiplication by azimuthally symmetric functions, and the tensorial relationship between fields. These arise naturally from relations between classical orthogonal polynomials, and form a Heisenberg algebra. Other past work uses more specific polynomial bases for solving equations in polar coordinates. The main innovation in this paper is to use a larger set of possible bases to achieve maximum bandedness of linear operations. We provide a series of applications of the methods, illustrating their ease-of-use and accuracy.
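As a point of orientation, one familiar special case of such Jacobi-based disk bases is the Zernike radial polynomial, which can be written in terms of a Jacobi polynomial of argument 1 - 2r². The sketch below uses that classical identity (in one common sign convention); it is not the more general tensorial basis constructed in the paper.

```python
import numpy as np
from scipy.special import eval_jacobi

def zernike_radial(n, m, r):
    """Zernike radial polynomial R_n^m(r) via Jacobi polynomials.

    Uses the classical identity
        R_n^m(r) = (-1)^((n-m)/2) * r^m * P_{(n-m)/2}^{(m,0)}(1 - 2 r^2),
    valid for n >= m >= 0 with n - m even.  This is one standard disk
    basis; the paper's bases generalize the Jacobi parameters.
    """
    k = (n - m) // 2
    return (-1) ** k * r ** m * eval_jacobi(k, m, 0, 1.0 - 2.0 * r ** 2)

r = np.linspace(0.0, 1.0, 5)
print(zernike_radial(2, 0, r))   # should equal 2 r^2 - 1
print(zernike_radial(3, 1, r))   # should equal 3 r^3 - 2 r
```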
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic regularization-parameter selection scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results on different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
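The GCV idea the method builds on can be stated compactly for a plain Tikhonov (ridge) problem, where the influence matrix is explicit. The sketch below is that simplified setting, with hypothetical variable names, rather than the TV restoration algorithm itself.

```python
import numpy as np

def gcv_score(X, y, lam):
    """Generalized cross-validation score for Tikhonov (ridge) regression.

    GCV(lam) = n * ||(I - A_lam) y||^2 / trace(I - A_lam)^2,
    with A_lam = X (X^T X + lam I)^{-1} X^T the influence matrix.
    """
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return n * float(resid @ resid) / (n - np.trace(A)) ** 2

# Pick the lambda minimizing GCV on a toy problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=100)
lams = np.logspace(-4, 2, 50)
best = min(lams, key=lambda lam: gcv_score(X, y, lam))
print("GCV-selected lambda:", best)
```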
Symmetries and "simple" solutions of the classical n-body problem
NASA Astrophysics Data System (ADS)
Chenciner, Alain
2006-03-01
The Lagrangian of the classical n-body problem has well known symmetries: isometries of the ambient Euclidean space (translations, rotations, reflexions) and changes of scale coming from the homogeneity of the potential. To these symmetries are associated "simple" solutions of the problem, the so-called homographic motions, which play a basic role in the global understanding of the dynamics. The classical subproblems (planar, isosceles) are also consequences of the existence of symmetries: invariance under reflexion through a plane in the first case, invariance under exchange of two equal masses in the second. In these two cases, the symmetry acts at the level of the "shape space" (the oriented one in the first case) whose existence is the main difference between the 2-body problem and the (n ≥ 3)-body problem. These symmetries of the Lagrangian imply symmetries of the action functional, which is defined on the space of regular enough loops of a given period in the configuration space of the problem. Minimization of the action under well-chosen symmetry constraints leads to remarkable solutions of the n-body problem which may also be called simple and could play after the homographic ones the role of organizing centers in the global dynamics. In [13] and [16], I have given a survey of the new classes of solutions which had been obtained in this way, mainly choreographies of n equal masses in a plane or in space and generalized Hip-Hops of at least 4 arbitrary masses in space. I give here an updated overview of the results and a quick glance at the methods of proofs.
[Doping with illegal and legal substances in old age].
Münzer, Thomas
2018-02-01
The number of old persons who participate in sports and can even achieve peak performances is increasing steadily. Normal aging, however, is associated with decreased muscle strength and a decline in cardiovascular endurance even in those persons who regularly participate in sports. Thus, it seems obvious to impact on muscle mass and muscle strength by using anabolic substances. The number of older persons who illegally use doping substances is currently unknown. Besides classical anabolic drugs, other proteins and amino acids are used to impact on muscle mass or strength. This article provides some insights into clinical trials of classical anabolic drugs in older persons and gives an overview on more recent studies examining the potential effects of taurine, creatine and whey protein in older persons.
Plasmodial vein networks of the slime mold Physarum polycephalum form regular graphs
NASA Astrophysics Data System (ADS)
Baumgarten, Werner; Ueda, Tetsuo; Hauser, Marcus J. B.
2010-10-01
The morphology of a typical developing biological transportation network, the vein network of the plasmodium of the myxomycete Physarum polycephalum is analyzed during its free extension. The network forms a classical, regular graph, and has exclusively nodes of degree 3. This contrasts to most real-world transportation networks which show small-world or scale-free properties. The complexity of the vein network arises from the weighting of the lengths, widths, and areas of the vein segments. The lengths and areas follow exponential distributions, while the widths are distributed log-normally. These functional dependencies are robust during the entire evolution of the network, even though the exponents change with time due to the coarsening of the vein network.
Plasmodial vein networks of the slime mold Physarum polycephalum form regular graphs.
Baumgarten, Werner; Ueda, Tetsuo; Hauser, Marcus J B
2010-10-01
The morphology of a typical developing biological transportation network, the vein network of the plasmodium of the myxomycete Physarum polycephalum is analyzed during its free extension. The network forms a classical, regular graph, and has exclusively nodes of degree 3. This contrasts to most real-world transportation networks which show small-world or scale-free properties. The complexity of the vein network arises from the weighting of the lengths, widths, and areas of the vein segments. The lengths and areas follow exponential distributions, while the widths are distributed log-normally. These functional dependencies are robust during the entire evolution of the network, even though the exponents change with time due to the coarsening of the vein network.
Fault Diagnosis Strategies for SOFC-Based Power Generation Plants
Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea
2016-01-01
The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the wide range of conditions under which such plants can operate and the random size of the possible faults make it very difficult to identify damaged plant components from the physical variables measured in the plant. In this context, we assess two classical FDI strategies (model-based with a fault signature matrix and data-driven with statistical classification) and their combination. For this assessment, a quantitative model of the SOFC-based plant, able to simulate both regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced to address the discrimination of regular and faulty situations due to its practical advantages. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty in performing such measurements. PMID:27556472
NASA Astrophysics Data System (ADS)
Dong, Bo-Qing; Jia, Yan; Li, Jingna; Wu, Jiahong
2018-05-01
This paper focuses on a system of the 2D magnetohydrodynamic (MHD) equations with the kinematic dissipation given by the fractional operator (-Δ )^α and the magnetic diffusion by partial Laplacian. We are able to show that this system with any α >0 always possesses a unique global smooth solution when the initial data is sufficiently smooth. In addition, we make a detailed study on the large-time behavior of these smooth solutions and obtain optimal large-time decay rates. Since the magnetic diffusion is only partial here, some classical tools such as the maximal regularity property for the 2D heat operator can no longer be applied. A key observation on the structure of the MHD equations allows us to get around the difficulties due to the lack of full Laplacian magnetic diffusion. The results presented here are the sharpest on the global regularity problem for the 2D MHD equations with only partial magnetic diffusion.
Nonequilibrium dynamics of the O( N ) model on dS3 and AdS crunches
NASA Astrophysics Data System (ADS)
Kumar, S. Prem; Vaganov, Vladislav
2018-03-01
We study the nonperturbative quantum evolution of the interacting O( N ) vector model at large- N , formulated on a spatial two-sphere, with time dependent couplings which diverge at finite time. This model, the so-called "E-frame" theory, is related via a conformal transformation to the interacting O( N ) model in three dimensional global de Sitter spacetime with time independent couplings. We show that with a purely quartic, relevant deformation the quantum evolution of the E-frame model is regular even when the classical theory is rendered singular at the end of time by the diverging coupling. Time evolution drives the E-frame theory to the large- N Wilson-Fisher fixed point when the classical coupling diverges. We study the quantum evolution numerically for a variety of initial conditions and demonstrate the finiteness of the energy at the classical "end of time". With an additional (time dependent) mass deformation, quantum backreaction lowers the mass, with a putative smooth time evolution only possible in the limit of infinite quartic coupling. We discuss the relevance of these results for the resolution of crunch singularities in AdS geometries dual to E-frame theories with a classical gravity dual.
Seismic imaging: From classical to adjoint tomography
NASA Astrophysics Data System (ADS)
Liu, Q.; Gu, Y. J.
2012-09-01
Seismic tomography has been a vital tool in probing the Earth's internal structure and enhancing our knowledge of dynamical processes in the Earth's crust and mantle. While various tomographic techniques differ in data types utilized (e.g., body vs. surface waves), data sensitivity (ray vs. finite-frequency approximations), and choices of model parameterization and regularization, most global mantle tomographic models agree well at long wavelengths, owing to the presence and typical dimensions of cold subducted oceanic lithospheres and hot, ascending mantle plumes (e.g., in central Pacific and Africa). Structures at relatively small length scales remain controversial, though, as will be discussed in this paper, they are becoming increasingly resolvable with the fast expanding global and regional seismic networks and improved forward modeling and inversion techniques. This review paper aims to provide an overview of classical tomography methods, key debates pertaining to the resolution of mantle tomographic models, as well as to highlight recent theoretical and computational advances in forward-modeling methods that spearheaded the developments in accurate computation of sensitivity kernels and adjoint tomography. The first part of the paper is devoted to traditional traveltime and waveform tomography. While these approaches established a firm foundation for global and regional seismic tomography, data coverage and the use of approximate sensitivity kernels remained as key limiting factors in the resolution of the targeted structures. In comparison to classical tomography, adjoint tomography takes advantage of full 3D numerical simulations in forward modeling and, in many ways, revolutionizes the seismic imaging of heterogeneous structures with strong velocity contrasts. For this reason, this review provides details of the implementation, resolution and potential challenges of adjoint tomography. Further discussions of techniques that are presently popular in seismic array analysis, such as noise correlation functions, receiver functions, inverse scattering imaging, and the adaptation of adjoint tomography to these different datasets highlight the promising future of seismic tomography.
NASA Astrophysics Data System (ADS)
Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.
2004-01-01
The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. It is hoped that this new formalism will facilitate the systematic implementation of higher order multipoles in classical biomolecular force fields.
Katashima, Takuya; Urayama, Kenji; Chung, Ung-il; Sakai, Takamasa
2015-05-07
Pure shear deformation of Tetra-polyethylene glycol gels reveals the presence of an explicit cross-effect of strains in the strain energy density function, even for polymer networks with a nearly regular structure containing no appreciable amount of structural defects such as trapped entanglements. This result contrasts with the expectation of the classical Gaussian network model (neo-Hookean model), i.e., the vanishing of the cross effect in regular networks with no trapped entanglement. The results show that (1) the cross effect of strains does not depend on the network-strand length; (2) the cross effect is not affected by the presence of non-network strands; (3) the cross effect is proportional to the network polymer concentration, including both elastically effective and ineffective strands; and (4) the cross effect vanishes only in the limit of zero network concentration in real polymer networks. These features indicate that real polymer networks with regular network structures have an explicit cross-effect of strains, which originates from some interaction between network strands (other than the entanglement effect) such as nematic, topological, or excluded-volume interactions.
1992-07-09
This sharp, cloud free view of San Antonio, Texas (29.5N, 98.5W) illustrates the classic pattern of western cities. The city has a late nineteenth century Anglo grid pattern overlaid onto an earlier, less regular Hispanic settlement. A well marked central business district having streets laid out north/south and east/west is surrounded by blocks of suburban homes and small businesses set between the older colonial radial transportation routes.
Quantum implications of a scale invariant regularization
NASA Astrophysics Data System (ADS)
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at a three-loop level while keeping manifest this symmetry. Spontaneous scale symmetry breaking is transmitted at a quantum level to the visible sector (of ϕ ) by the associated Goldstone mode (dilaton σ ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ ⟩ generates the subtraction scale (μ ). While the hidden (σ ) and visible sector (ϕ ) are classically decoupled in d =4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ɛ , dictated by the scale invariance of the action in d =4 -2 ɛ . At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ2 n +4/σ2 n. These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ ⟩. For n =1 , 2, the beta functions of their coefficient are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ =constant).
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized formulation and of prior model information obtained from sonic logs and geological knowledge, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but is also robust to noise.
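For context, the core ingredients of OWL-QN are the pseudo-gradient of the ℓ1-penalized objective and the projection of each update back onto the orthant chosen at the current iterate. The sketch below shows these two steps, with a plain gradient step standing in for the L-BFGS direction, on a toy quadratic; it is not the authors' FWI implementation.

```python
import numpy as np

def pseudo_gradient(x, grad_f, c):
    """OWL-QN pseudo-gradient of f(x) + c * ||x||_1 (c > 0)."""
    pg = np.zeros_like(x)
    pos, neg = x > 0, x < 0
    pg[pos] = grad_f[pos] + c
    pg[neg] = grad_f[neg] - c
    zero = ~(pos | neg)
    right = grad_f[zero] + c          # one-sided derivative from x_i > 0
    left = grad_f[zero] - c           # one-sided derivative from x_i < 0
    pg_zero = np.zeros(zero.sum())
    pg_zero[right < 0] = right[right < 0]
    pg_zero[left > 0] = left[left > 0]
    pg[zero] = pg_zero
    return pg

def project_orthant(x_new, x_old, pg):
    """Zero out coordinates that left the orthant chosen at x_old."""
    orthant = np.where(x_old != 0, np.sign(x_old), -np.sign(pg))
    return np.where(np.sign(x_new) == orthant, x_new, 0.0)

# Toy usage: pseudo-gradient descent for f(x) = 0.5 ||A x - b||^2 + c ||x||_1.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
b = A @ np.array([1.0] + [0.0] * 9)
x = np.zeros(10)
for _ in range(500):
    g = A.T @ (A @ x - b)
    pg = pseudo_gradient(x, g, c=0.1)
    x = project_orthant(x - 0.01 * pg, x, pg)
print(np.round(x, 3))                 # approximately sparse solution
```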
NASA Astrophysics Data System (ADS)
Ostapchuk, Alexey; Saltykov, Nikolay
2017-04-01
Excessive tectonic stresses accumulated near rock discontinuities are released through slip along preexisting faults. The spectrum of slip modes includes not only creep and regular earthquakes but also transitional regimes - slow-slip events, low-frequency and very low-frequency earthquakes. However, there is still no agreement in the geophysics community on whether such fast and slow events share a common nature [Peng, Gomberg, 2010] or represent different physical phenomena [Ide et al., 2007]. Models of the nucleation and evolution of fault slip events can be developed from laboratory experiments in which the shear deformation of a gouge-filled fault is investigated. In this work we studied the deformation of an experimental fault in slider frictional experiments, aiming at a unified law of fault evolution and at identifying the parameters responsible for which deformation mode is realized. The experiments were conducted as classic slider-model experiments, in which a block under normal and shear stress moves along an interface. The volume between the two rough surfaces was filled with a thin layer of granular matter. Shear force was applied through a spring loaded at a constant rate. In such experiments elastic energy accumulates in the spring, and the pattern of its release is determined by the frictional behaviour of the experimental fault. A full spectrum of slip modes was reproduced in the laboratory experiments. Slight changes in gouge characteristics (granule shape, clay content), interstitial fluid viscosity, and normal stress level produce a gradual transformation of the slip modes from steady sliding and slow slip to regular stick-slip, with varying amplitudes of 'coseismic' displacement. Using the method of asymptotic analogies, we show that the different slip modes can be described within a single formalism and that their preparation follows a uniform evolution law. The shear stiffness of the experimental fault is shown to be the parameter that controls which slip mode is realized. Notably, the different series of transformations are characterized by functional dependences of the same general form, differing only in normalization factors. These findings support the view that slow and fast slip events have a common nature. Determining the fault stiffness and testing the fault gouge allow the intensity of seismic events to be estimated. The reported study was funded by RFBR according to the research project № 16-05-00694.
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Diffractive paths for weak localization in quantum billiards
NASA Astrophysics Data System (ADS)
Březinová, Iva; Stampfer, Christoph; Wirtz, Ludger; Rotter, Stefan; Burgdörfer, Joachim
2008-04-01
We study the weak-localization effect in quantum transport through a clean ballistic cavity with regular classical dynamics. We address the question which paths account for the suppression of conductance through a system where disorder and chaos are absent. By exploiting both quantum and semiclassical methods, we unambiguously identify paths that are diffractively backscattered into the cavity (when approaching the lead mouths from the cavity interior) to play a key role. Diffractive scattering couples transmitted and reflected paths and is thus essential to reproduce the weak-localization peak in reflection and the corresponding antipeak in transmission. A comparison of semiclassical calculations featuring these diffractive paths yields good agreement with full quantum calculations and experimental data. Our theory provides system-specific predictions for the quantum regime of few open lead modes and can be expected to be relevant also for mixed as well as chaotic systems.
Estimation of the left ventricular shape and motion with a limited number of slices
NASA Astrophysics Data System (ADS)
Robert, Anne; Schmitt, Francis J. M.; Mousseaux, Elie
1996-04-01
In this paper, we describe a method for the reconstruction of the surface of the left ventricle from a set of lacunary data (that is, an incomplete, unevenly sampled and unstructured data set). Global models, because they compress the properties of a surface into a small set of parameters, have a strong regularizing power and are therefore very well suited to lacunary data. Globally deformable superquadrics are particularly attractive because of their simplicity. This model can be fitted to the data using the Levenberg-Marquardt algorithm for non-linear optimization. However, the difficulties we experienced in obtaining temporally consistent solutions, as well as the intrinsically 4D character of the data, led us to generalize the classical 3D superquadric model to 4D. We present results on a 4D sequence from the Dynamic Spatial Reconstructor of the Mayo Clinic, and on a 4D MRI sequence.
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
NASA Astrophysics Data System (ADS)
Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.
2012-01-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
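The comparison baseline mentioned above, regularization by singular value decomposition, amounts to inverting only the leading singular components of the imaging operator. A minimal sketch, with an illustrative smoothing operator and truncation level, follows.

```python
import numpy as np

def tsvd_inverse(K, y, k):
    """Truncated-SVD solution of the ill-posed system K x = y.

    Keeps only the k largest singular values; this is the classical
    regularization baseline the wavelet-vaguelette method is compared to.
    """
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ y))

# Toy ill-posed problem: a Gaussian smoothing (convolution-like) operator.
n = 100
K = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
x_true = np.zeros(n)
x_true[40:60] = 1.0
y = K @ x_true + np.random.default_rng(2).normal(0, 0.01, n)
x_rec = tsvd_inverse(K, y, k=20)
print("reconstruction error:", np.linalg.norm(x_rec - x_true))
```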
Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array
NASA Astrophysics Data System (ADS)
Houben, Sebastian
2015-03-01
The growing number of driver assistance tasks has made vehicle-mounted sensors a substantial factor in automobile manufacturing cost. We present a stereo distance method that exploits the overlapping fields of view of a multi-camera fisheye surround-view system, such as those used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. We thus aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. strongly varying resolution across the image) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for that purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss the causes and avoidance of the caveats shown, and present first results on a prototype topview setup.
Predicting perceptual quality of images in realistic scenario using deep filter banks
NASA Astrophysics Data System (ADS)
Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang
2018-03-01
Classical image perceptual quality assessment models usually resort to natural scene statistic methods, which are based on an assumption that certain reliable statistical regularities hold on undistorted images and will be corrupted by introduced distortions. However, these models usually fail to accurately predict degradation severity of images in realistic scenarios since complex, multiple, and interactive authentic distortions usually appear on them. We propose a quality prediction model based on convolutional neural network. Quality-aware features extracted from filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation and finally a linear support vector regression model is trained to map image representation into images' subjective perceptual quality scores. The experimental results on benchmark databases present the effectiveness and generalizability of the proposed model.
NASA Astrophysics Data System (ADS)
Studenikin, S. A.; Byszewski, M.; Maude, D. K.; Potemski, M.; Sachrajda, A.; Wasilewski, Z. R.; Hilke, M.; Pfeiffer, L. N.; West, K. W.
2006-08-01
Microwave induced resistance oscillations (MIROs) were studied experimentally over a very wide range of frequencies ranging from ∼20 GHz up to ∼4 THz, and from the quasi-classical regime to the quantum Hall effect regime. At low frequencies regular MIROs were observed, with a periodicity determined by the ratio of the microwave to cyclotron frequencies. For frequencies below 150 GHz the magnetic field dependence of MIROs waveform is well described by a simplified version of an existing theoretical model, where the damping is controlled by the width of the Landau levels. In the THz frequency range MIROs vanish and only pronounced resistance changes are observed at the cyclotron resonance. The evolution of MIROs with frequency is presented and discussed.
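The periodicity quoted above is usually written as the condition that MIRO features recur at integer ratios of the microwave angular frequency ω to the cyclotron frequency, so that at fixed microwave frequency the oscillations are periodic in 1/B. In standard notation (with m* the effective mass of the 2D electrons):

```latex
\[
  \frac{\omega}{\omega_c} \simeq j, \qquad j = 1, 2, 3, \dots, \qquad
  \omega_c = \frac{eB}{m^{*}} .
\]
```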
Questioning the cerebellar doctrine.
Galliano, Elisa; De Zeeuw, Chris I
2014-01-01
The basic principles of cerebellar function were originally described by Flourens, Cajal, and Marr/Albus/Ito, and they constitute the pillars of what can be considered to be the classic cerebellar doctrine. In their concepts, the main cerebellar function is to control motor behavior, Purkinje cells are the only cortical neuron receiving and integrating inputs from climbing fiber and mossy-parallel fiber pathways, and plastic modification at the parallel fiber synapses onto Purkinje cells constitutes the substrate of motor learning. Yet, because of recent technical advances and new angles of investigation, all pillars of the cerebellar doctrine now face regular re-examination. In this review, after summarizing the classic concepts and recent disputes, we attempt to synthesize an integrated view and propose a revisited version of the cerebellar doctrine. © 2014 Elsevier B.V. All rights reserved.
Traumatic synovitis in a classical guitarist: a study of joint laxity.
Bird, H A; Wright, V
1981-04-01
A classical guitarist performing for at least 5 hours each day developed a traumatic synovitis at the left wrist joint that was first erroneously considered to be rheumatoid arthritis. Comparison with members of the same guitar class suggested that unusual joint laxity of the fingers and wrist, probably inherited from the patient's father, was of more importance in the aetiology of the synovitis than a wide range of movement acquired by regular practice. Hyperextension of the metacarpophalangeal joint of the left index finger, quantified by the hyperextensometer, was less marked in the guitarists than in 100 normal individuals. This may be attributed to greater muscular control of the fingers. Lateral instability in the loaded joint may be the most important factor in the aetiology of traumatic synovitis.
Traumatic synovitis in a classical guitarist: a study of joint laxity.
Bird, H A; Wright, V
1981-01-01
A classical guitarist performing for at least 5 hours each day developed a traumatic synovitis at the left wrist joint that was first erroneously considered to be rheumatoid arthritis. Comparison with members of the same guitar class suggested that unusual joint laxity of the fingers and wrist, probably inherited from the patient's father, was of more importance in the aetiology of the synovitis than a wide range of movement acquired by regular practice. Hyperextension of the metacarpophalangeal joint of the left index finger, quantified by the hyperextensometer, was less marked in the guitarists than in 100 normal individuals. This may be attributed to greater muscular control of the fingers. Lateral instability in the loaded joint may be the most important factor in the aetiology of traumatic synovitis. PMID:7224687
The unsaturated flow in porous media with dynamic capillary pressure
NASA Astrophysics Data System (ADS)
Milišić, Josipa-Pina
2018-05-01
In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship in which the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit as the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the PDE considered admits a natural generalization of the classical Kullback entropy. Finally, special care was taken in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the a priori estimates obtained for the saturation.
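The dynamic capillary pressure closure referred to above is commonly written in the Hassanizadeh-Gray form, with a saturation-dependent relaxation parameter τ(S_w). Sign and notation conventions vary, so the following is only the generic form, not the exact system analyzed in the paper:

```latex
\[
  p_n - p_w \;=\; p_c(S_w) \;-\; \tau(S_w)\,\frac{\partial S_w}{\partial t}.
\]
```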
Helicity moduli of three-dimensional dilute XY models
NASA Astrophysics Data System (ADS)
Garg, Anupam; Pandit, Rahul; Solla, Sara A.; Ebner, C.
1984-07-01
The helicity moduli of various dilute, classical XY models on three-dimensional lattices are studied with a view to understanding some aspects of the superfluidity of 4He in Vycor glass. A spin-wave calculation is used to obtain the low-temperature helicity modulus of a regularly diluted XY model. A similar calculation is performed for the randomly bond-diluted and site-diluted XY models in the limit of low dilution. A Monte Carlo simulation is used to obtain the helicity modulus of the randomly bond-diluted XY model over a wide range of temperature and dilution. It is found that the randomly diluted models agree, while the regularly diluted model does not, with certain experimentally observed features of the variation of the superfluid fraction with 4He coverage in Vycor glass.
NASA Technical Reports Server (NTRS)
1995-01-01
The crew patch of STS-73, the second flight of the United States Microgravity Laboratory (USML-2), depicts the Space Shuttle Columbia in the vastness of space. In the foreground are the classic regular polyhedrons that were investigated by Plato and later Euclid. The Pythagoreans were also fascinated by the symmetrical three-dimensional objects whose sides are the same regular polygon. The tetrahedron, the cube, the octahedron, and the icosahedron were each associated with the Natural Elements of that time: fire (on this mission represented as combustion science); Earth (crystallography), air and water (fluid physics). An additional icon shown as the infinity symbol was added to further convey the discipline of fluid mechanics. The shape of the emblem represents a fifth polyhedron, a dodecahedron, which the Pythagoreans thought corresponded to a fifth element that represented the cosmos.
1995-06-06
The crew patch of STS-73, the second flight of the United States Microgravity Laboratory (USML-2), depicts the Space Shuttle Columbia in the vastness of space. In the foreground are the classic regular polyhedrons that were investigated by Plato and later Euclid. The Pythagoreans were also fascinated by the symmetrical three-dimensional objects whose sides are the same regular polygon. The tetrahedron, the cube, the octahedron, and the icosahedron were each associated with the Natural Elements of that time: fire (on this mission represented as combustion science); Earth (crystallography), air and water (fluid physics). An additional icon shown as the infinity symbol was added to further convey the discipline of fluid mechanics. The shape of the emblem represents a fifth polyhedron, a dodecahedron, which the Pythagoreans thought corresponded to a fifth element that represented the cosmos.
Potential estimates for the p-Laplace system with data in divergence form
NASA Astrophysics Data System (ADS)
Cianchi, A.; Schwarzacher, S.
2018-07-01
A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.
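For notation, the p-Laplace system with a divergence-form datum and the classical Wolff potential of a measure μ read as follows. The paper's pointwise bound involves a Havin-Maz'ya-Wolff potential built from the datum F rather than from a measure, so this block only fixes the standard objects, not the precise estimate:

```latex
\[
  -\operatorname{div}\!\bigl(|\nabla u|^{p-2}\nabla u\bigr) \;=\; -\operatorname{div} F ,
  \qquad
  \mathbf{W}^{\mu}_{1,p}(x,R) \;=\; \int_0^{R}
  \Bigl(\frac{\mu(B_t(x))}{t^{\,n-p}}\Bigr)^{\!\frac{1}{p-1}} \frac{dt}{t}.
\]
```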
Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389
Selection of regularization parameter for l1-regularized damage detection
NASA Astrophysics Data System (ADS)
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
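A minimal sketch of the second strategy (the discrepancy principle) is given below: scan candidate values of the ℓ1 parameter and keep the one whose residual variance is closest to the assumed measurement-noise variance. The sensing matrix, sparsity pattern, and noise level are hypothetical, and a generic Lasso solver stands in for the damage-detection formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_alpha_discrepancy(X, y, noise_var, alphas):
    """Pick the l1 regularization parameter by the discrepancy principle.

    Chooses the alpha whose residual variance is closest to the
    (assumed known) measurement-noise variance.
    """
    best_alpha, best_gap = None, np.inf
    for a in alphas:
        model = Lasso(alpha=a, max_iter=10000).fit(X, y)
        resid = y - model.predict(X)
        gap = abs(resid.var() - noise_var)
        if gap < best_gap:
            best_alpha, best_gap = a, gap
    return best_alpha

# Toy sparse-damage-like example (names illustrative).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))
theta = np.zeros(50)
theta[[5, 17]] = [1.0, -0.5]               # two "damaged" elements
y = X @ theta + rng.normal(scale=0.1, size=200)
alphas = np.logspace(-4, 0, 30)
print(select_alpha_discrepancy(X, y, noise_var=0.1 ** 2, alphas=alphas))
```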
Relative entropy of entanglement and restricted measurements.
Piani, M
2009-10-16
We introduce variants of relative entropy of entanglement based on the optimal distinguishability from unentangled states by means of restricted measurements. In this way we are able to prove that the standard regularized entropy of entanglement is strictly positive for all multipartite entangled states. This implies that the asymptotic creation of a multipartite entangled state by means of local operations and classical communication always requires the consumption of a nonlocal resource at a strictly positive rate.
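For reference, the standard (unrestricted) definitions behind the quantities discussed above are the relative entropy of entanglement, the quantum relative entropy, and its regularization. The paper's variants replace the unrestricted minimization by distinguishability under restricted measurement classes.

```latex
\[
  E_R(\rho) \;=\; \min_{\sigma \in \mathrm{SEP}} S(\rho\,\|\,\sigma),
  \qquad
  S(\rho\,\|\,\sigma) \;=\; \operatorname{Tr}\!\bigl[\rho(\log\rho - \log\sigma)\bigr],
  \qquad
  E_R^{\infty}(\rho) \;=\; \lim_{n\to\infty} \frac{1}{n}\,E_R\!\bigl(\rho^{\otimes n}\bigr).
\]
```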
Integrative Analysis of Prognosis Data on Multiple Cancer Subtypes
Liu, Jin; Huang, Jian; Zhang, Yawei; Lan, Qing; Rothman, Nathaniel; Zheng, Tongzhang; Ma, Shuangge
2014-01-01
Summary In cancer research, profiling studies have been extensively conducted, searching for genes/SNPs associated with prognosis. Cancer is diverse. Examining the similarity and difference in the genetic basis of multiple subtypes of the same cancer can lead to a better understanding of their connections and distinctions. Classic meta-analysis methods analyze each subtype separately and then compare analysis results across subtypes. Integrative analysis methods, in contrast, analyze the raw data on multiple subtypes simultaneously and can outperform meta-analysis methods. In this study, prognosis data on multiple subtypes of the same cancer are analyzed. An AFT (accelerated failure time) model is adopted to describe survival. The genetic basis of multiple subtypes is described using the heterogeneity model, which allows a gene/SNP to be associated with prognosis of some subtypes but not others. A compound penalization method is developed to identify genes that contain important SNPs associated with prognosis. The proposed method has an intuitive formulation and is realized using an iterative algorithm. Asymptotic properties are rigorously established. Simulation shows that the proposed method has satisfactory performance and outperforms a penalization-based meta-analysis method and a regularized thresholding method. An NHL (non-Hodgkin lymphoma) prognosis study with SNP measurements is analyzed. Genes associated with the three major subtypes, namely DLBCL, FL, and CLL/SLL, are identified. The proposed method identifies genes that are different from alternatives and have important implications and satisfactory prediction performance. PMID:24766212
Classical Wigner method with an effective quantum force: application to reaction rates.
Poulsen, Jens Aage; Li, Huaqing; Nyman, Gunnar
2009-07-14
We construct an effective "quantum force" to be used in the classical molecular dynamics part of the classical Wigner method when determining correlation functions. The quantum force is obtained by estimating the most important short time separation of the Feynman paths that enter into the expression for the correlation function. The evaluation of the force is then as easy as classical potential energy evaluations. The ideas are tested on three reaction rate problems. The resulting transmission coefficients are in much better agreement with accurate results than transmission coefficients from the ordinary classical Wigner method.
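Schematically, the classical Wigner (linearized semiclassical) correlation function that the quantum force enters has the form below, where [·]_W denotes the Wigner transform and (q_t, p_t) are trajectories propagated with classical dynamics. The placement of the thermal (Boltzmann) operator and the definition of the effective force follow the authors' conventions and are not reproduced here.

```latex
\[
  C_{AB}(t) \;\approx\; \frac{1}{(2\pi\hbar)^{N}} \int dq_0\, dp_0\;
  \bigl[\hat{A}\bigr]_W(q_0,p_0)\,
  \bigl[\hat{B}\bigr]_W\!\bigl(q_t(q_0,p_0),\,p_t(q_0,p_0)\bigr).
\]
```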
PARSEC's Astrometry - The Risky Approach
NASA Astrophysics Data System (ADS)
Andrei, A. H.
2015-10-01
Parallaxes - and hence the fundamental establishment of stellar distances - rank among the oldest, most direct, and hardest of astronomical determinations. Arguably they are among the most essential too. The direct approach to obtaining trigonometric parallaxes, using a constrained set of equations to derive positions, proper motions, and parallaxes, has been labelled as risky. Properly so, because the axis of the apparent parallactic ellipse is smaller than one arcsec even for the nearest stars, and only a fraction of its perimeter can be followed. The classical approach is therefore to linearize the description by locking the solution to a set of precise positions of the Earth at the instants of observation, rather than to the dynamics of its orbit, and to adopt a close examination of the few observations available. In the PARSEC program, parallax determinations for 143 brown dwarfs were planned. Five years of observations of the fields were taken with the WFI camera at the ESO 2.2m telescope in Chile. The goal is to provide a statistically significant number of trigonometric parallaxes for BD sub-classes from L0 to T7. Taking advantage of the large number of regularly spaced observations, here we take the risky approach of fitting an ellipse to the observed ecliptic coordinates and deriving the parallaxes. We also combine the solutions from different centroiding methods, widely proven in prior astrometric investigations. As each of those methods assesses different properties of the PSFs, they are taken as independent measurements and combined into a weighted least-squares general solution. The results obtained compare well with the literature and with the classical approach.
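In its simplest one-coordinate form, the direct fit amounts to a linear least-squares problem in zero-point, proper motion, and parallax, given precomputed parallax factors at each epoch. The sketch below uses an illustrative sinusoidal parallax factor and synthetic data; it is not the PARSEC reduction pipeline.

```python
import numpy as np

def fit_parallax(t, position, parallax_factor):
    """Direct least-squares fit of zero-point, proper motion and parallax.

    Model (one coordinate): position(t) = x0 + mu * t + pi * P(t),
    with P(t) the precomputed parallax factor at each epoch.
    """
    A = np.column_stack([np.ones_like(t), t, parallax_factor])
    (x0, mu, pi_), *_ = np.linalg.lstsq(A, position, rcond=None)
    return x0, mu, pi_

# Toy epochs over 5 years; the parallax factor is approximated as a sinusoid.
t = np.linspace(0.0, 5.0, 60)                       # years
P = np.sin(2 * np.pi * t)                           # illustrative factor
true = 0.010 + 0.050 * t + 0.080 * P                # arcsec: x0, mu, pi
obs = true + np.random.default_rng(4).normal(0, 0.005, t.size)
print(fit_parallax(t, obs, P))
```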
NASA Astrophysics Data System (ADS)
Bowman, Dominic M.; Kurtz, Donald W.
2018-05-01
The δ Sct stars are a diverse group of intermediate-mass pulsating stars located on and near the main sequence within the classical instability strip in the Hertzsprung-Russell diagram. Many of these stars are hybrid stars pulsating simultaneously with pressure and gravity modes that probe the physics at different depths within a star's interior. Using two large ensembles of δ Sct stars observed by the Kepler Space Telescope, the instrumental biases inherent to Kepler mission data and the statistical properties of these stars are investigated. An important focus of this work is an analysis of the relationships between the pulsational and stellar parameters, and their distribution within the classical instability strip. It is found that a non-negligible fraction of main-sequence δ Sct stars exist outside theoretical predictions of the classical instability boundaries, which indicates the necessity of a mass-dependent mixing length parameter to simultaneously explain low and high radial order pressure modes in δ Sct stars within the Hertzsprung-Russell diagram. Furthermore, a search for regularities in the amplitude spectra of these stars is also presented, specifically the frequency difference between pressure modes of consecutive radial order. In this work, it is demonstrated that an ensemble-based approach using space photometry from the Kepler mission is not only plausible for δ Sct stars, but that it is a valuable method for identifying the most promising stars for mode identification and asteroseismic modelling. The full scientific potential of studying δ Sct stars is as yet unrealized. The ensembles discussed in this paper represent a high-quality data set for future studies of rotation and angular momentum transport inside A and F stars using asteroseismology.
When things go pear shaped: contour variations of contacts
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2013-04-01
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure of the CD variation such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to this control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints, in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD-SEM generated contour data. We demonstrate that contacts on photolithographic masks do not only show size variations but also exhibit pronounced nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. We thus demonstrate the limitations of classic CD measures for describing feature variations on masks. Furthermore we show how the methods of statistical shape analysis can be used to quantify the contour variations, thus paving the way to a new understanding of mask linearity and its specification.
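Statistical shape analysis of contours typically begins with a Procrustes alignment that removes translation, scale, and rotation before shape differences are quantified. The sketch below uses scipy's generic Procrustes routine on two toy contours sampled at corresponding points; it illustrates only that first step, not the analysis pipeline used in the paper.

```python
import numpy as np
from scipy.spatial import procrustes

def shape_disparity(contour_a, contour_b):
    """Procrustes disparity between two contours sampled at corresponding
    points (n_points x 2 arrays).  Translation, scale and rotation are
    removed before the shapes are compared."""
    _, _, disparity = procrustes(contour_a, contour_b)
    return disparity

# Toy example: a circle versus a slightly "pear shaped" contour.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
pear = np.column_stack([np.cos(theta),
                        np.sin(theta) * (1 + 0.1 * np.cos(theta))])
print(shape_disparity(circle, pear))
```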
An Excel-based implementation of the spectral method of action potential alternans analysis.
Pearman, Charles M
2014-12-01
Action potential (AP) alternans has been well established as a mechanism of arrhythmogenesis and sudden cardiac death. Proper interpretation of AP alternans requires a robust method of alternans quantification. Traditional methods of alternans analysis neglect higher order periodicities that may have greater pro-arrhythmic potential than classical 2:1 alternans. The spectral method of alternans analysis, already widely used in the related study of microvolt T-wave alternans, has also been used to study AP alternans. Software to meet the specific needs of AP alternans analysis is not currently available in the public domain. An AP analysis tool is implemented here, written in Visual Basic for Applications and using Microsoft Excel as a shell. This performs a sophisticated analysis of alternans behavior allowing reliable distinction of alternans from random fluctuations, quantification of alternans magnitude, and identification of which phases of the AP are most affected. In addition, the spectral method has been adapted to allow detection and quantification of higher order regular oscillations. Analysis of action potential morphology is also performed. A simple user interface enables easy import, analysis, and export of collated results. © 2014 The Author. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
Aerobic conditioning for team sport athletes.
Stone, Nicholas M; Kilding, Andrew E
2009-01-01
Team sport athletes require a high level of aerobic fitness in order to generate and maintain power output during repeated high-intensity efforts and to recover. Research to date suggests that these components can be increased by regularly performing aerobic conditioning. Traditional aerobic conditioning, with minimal changes of direction and no skill component, has been demonstrated to effectively increase aerobic function within a 4- to 10-week period in team sport players. More importantly, traditional aerobic conditioning methods have been shown to increase team sport performance substantially. Many team sports require the upkeep of both aerobic fitness and sport-specific skills during a lengthy competitive season. Classic team sport trainings have been shown to evoke marginal increases/decreases in aerobic fitness. In recent years, aerobic conditioning methods have been designed to allow adequate intensities to be achieved to induce improvements in aerobic fitness whilst incorporating movement-specific and skill-specific tasks, e.g. small-sided games and dribbling circuits. Such 'sport-specific' conditioning methods have been demonstrated to promote increases in aerobic fitness, though careful consideration of player skill levels, current fitness, player numbers, field dimensions, game rules and availability of player encouragement is required. Whilst different conditioning methods appear equivalent in their ability to improve fitness, whether sport-specific conditioning is superior to other methods at improving actual game performance statistics requires further research.
Spark formation as a moving boundary process
NASA Astrophysics Data System (ADS)
Ebert, Ute
2006-03-01
The growth process of spark channels has recently become accessible through complementary methods. First, I will review experiments with nanosecond photographic resolution and with fast and well-defined power supplies that appropriately resolve the dynamics of electric breakdown [1]. Second, I will discuss the elementary physical processes as well as present computations of spark growth and branching with adaptive grid refinement [2]. These computations resolve three well-separated scales of the process that emerge dynamically. Third, this scale separation motivates a hierarchy of models on different length scales. In particular, I will discuss a moving boundary approximation for the ionization fronts that generate the conducting channel. The resulting moving boundary problem shows strong similarities with classical viscous fingering. For viscous fingering, it is known that the simplest model forms unphysical cusps within finite time that are suppressed by a regularizing condition on the moving boundary. For ionization fronts, we derive a new condition on the moving boundary of mixed Dirichlet-Neumann type (φ = ε ∂ₙφ) that indeed regularizes all structures investigated so far. In particular, we present compact analytical solutions with regularization, both for uniformly translating shapes and for their linear perturbations [3]. These solutions are so simple that they may acquire a paradigmatic role in the future. Within linear perturbation theory, they explicitly show the convective stabilization of a curved front, while planar fronts are linearly unstable against perturbations of arbitrary wavelength. [1] T.M.P. Briels, E.M. van Veldhuizen, U. Ebert, TU Eindhoven. [2] C. Montijn, J. Wackers, W. Hundsdorfer, U. Ebert, CWI Amsterdam. [3] B. Meulenbroek, U. Ebert, L. Schäfer, Phys. Rev. Lett. 95, 195004 (2005).
A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods
2014-08-01
Approved for public release; distribution is unlimited. Report title: A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods. This experiment tests whether a virtual... (A thesis by Benjamin Peters: Pedagogical Effectiveness of Virtual Worlds and of Traditional Training Methods.)
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
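For readers unfamiliar with the α-selection problem studied here, a minimal Tikhonov-plus-GCV sketch is shown below. It uses the identity regularization operator (L = I) rather than the derivative operators compared in the paper, and a simple grid search; it is illustrative only, not the authors' code.

```python
import numpy as np

def tikhonov_gcv(K, y, alphas):
    """Solve min ||K p - y||^2 + a^2 ||p||^2 over a grid of a and pick a by
    generalized cross-validation (GCV), using the SVD filter-factor form."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    uty = U.T @ y
    best = (np.inf, None, None)
    for a in alphas:
        f = s**2 / (s**2 + a**2)                 # Tikhonov filter factors
        p = Vt.T @ (f * uty / s)                 # regularized distance distribution
        gcv = np.sum((K @ p - y) ** 2) / (len(y) - f.sum()) ** 2
        if gcv < best[0]:
            best = (gcv, a, p)
    return best[1], best[2]                      # selected alpha and its solution
```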
Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments
NASA Astrophysics Data System (ADS)
Reci, A.; Sederman, A. J.; Gladden, L. F.
2017-11-01
A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks were studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
Velopharyngeal port status during classical singing.
Tanner, Kristine; Roy, Nelson; Merrill, Ray M; Power, David
2005-12-01
This investigation was undertaken to examine the status of the velopharyngeal (VP) port during classical singing. Using aeromechanical instrumentation, nasal airflow (mL/s), oral pressure (cm H2O), and VP orifice area estimates (cm2) were studied in 10 classically trained sopranos during singing and speaking. Each participant sang and spoke 3 nonsense words-/hampa/, /himpi/, and /humpu/-at 3 loudness levels (loud vs. comfortable vs. soft) and 3 pitches (high vs. comfortable vs. low), using a within-subject experimental design including all possible combinations. In general, nasal airflow, oral pressure, and VP area estimates were significantly greater for singing as compared to speech, and nasal airflow was observed during non-nasal sounds in all participants. Anticipatory nasal airflow was observed in 9 of 10 participants for singing and speaking and was significantly greater during the first vowel in /hampa/ versus /himpi/ and /humpu/. The effect of vowel height on nasal airflow was also significantly influenced by loudness and pitch. The results from this investigation indicate that at least some trained singers experience regular VP opening during classical singing. Vowel height seems to influence this effect. Future research should consider the effects of voice type, gender, experience level, performance ability, and singing style on VP valving in singers.
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that the algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the floating-point performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We illustrate this highly desirable property of our algorithm with large-scale computing experiments.
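The Aitken idea behind the acceleration above: if the Schwarz iteration converges linearly, its limit can be extrapolated from a few iterates. The scalar Δ²-extrapolation below conveys the mechanism only; the actual Aitken-Schwarz procedure applies this to interface traces (vectors), so the sketch is a conceptual stand-in, not the paper's algorithm.

```python
def aitken_extrapolate(g, x0, sweeps=5):
    """Aitken delta-squared acceleration of a linearly convergent fixed-point
    iteration x <- g(x) (scalar version for illustration)."""
    x = x0
    for _ in range(sweeps):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:          # already converged (or not linearly contracting)
            return x2
        x = x - (x1 - x) ** 2 / denom   # extrapolated limit of the geometric error sequence
    return x

# Example: a fixed-point map with linear rate 0.9; plain iteration needs hundreds of steps,
# while the extrapolation hits the fixed point (10) essentially immediately.
print(aitken_extrapolate(lambda x: 0.9 * x + 1.0, 0.0))
```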
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group; it offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10⁻⁶, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB higher than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
Nuclease-mediated genome editing: At the front-line of functional genomics technology.
Sakuma, Tetsushi; Woltjen, Knut
2014-01-01
Genome editing with engineered endonucleases is rapidly becoming a staple method in developmental biology studies. Engineered nucleases permit random or designed genomic modification at precise loci through the stimulation of endogenous double-strand break repair. Homology-directed repair following targeted DNA damage is mediated by co-introduction of a custom repair template, allowing the derivation of knock-out and knock-in alleles in animal models previously refractory to classic gene targeting procedures. Currently there are three main types of customizable site-specific nucleases delineated by the source mechanism of DNA binding that guides nuclease activity to a genomic target: zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and clustered regularly interspaced short palindromic repeats (CRISPR). Among these genome engineering tools, characteristics such as the ease of design and construction, mechanism of inducing DNA damage, and DNA sequence specificity all differ, making their application complementary. By understanding the advantages and disadvantages of each method, one may make the best choice for their particular purpose. © 2014 The Authors Development, Growth & Differentiation © 2014 Japanese Society of Developmental Biologists.
Martins, Caroline Curry; Bagatini, Margarete Dulce; Cardoso, Andréia Machado; Zanini, Daniela; Abdalla, Fátima Husein; Baldissarelli, Jucimara; Dalenogare, Diéssica Padilha; Farinha, Juliano Boufleur; Schetinger, Maria Rosa Chitolina; Morsch, Vera Maria
2016-02-15
Alterations in the activity of ectonucleotidase enzymes have been implicated in cardiovascular diseases, whereas regular exercise training has been shown to prevent these alterations. However, nothing is known about these effects in metabolic syndrome (MetS). We investigated the effect of exercise training on platelet ectonucleotidase enzymes and on the aggregation profile of MetS patients. We studied 38 MetS patients who performed regular concurrent exercise training for 30 weeks. Anthropometric measurements, biochemical profiles, hydrolysis of adenine nucleotides in platelets and platelet aggregation were collected from patients before and after the exercise intervention as well as from individuals of the control group. An increase in the hydrolysis of adenine nucleotides (ATP, ADP and AMP) and a decrease in adenosine deamination were observed in the platelets of MetS patients before the exercise intervention (P<0.001). However, these alterations were reversed by exercise training (P<0.001). Additionally, an increase in platelet aggregation was observed in the MetS patients (P<0.001), and the exercise training prevented platelet hyperaggregation in addition to decreasing classic cardiovascular risk factors. An alteration of ectonucleotidase enzymes occurs during MetS, whereas regular exercise training has a protective effect on these enzymes and on platelet aggregation. Copyright © 2016 Elsevier B.V. All rights reserved.
From localization to anomalous diffusion in the dynamics of coupled kicked rotors
NASA Astrophysics Data System (ADS)
Notarnicola, Simone; Iemini, Fernando; Rossini, Davide; Fazio, Rosario; Silva, Alessandro; Russomanno, Angelo
2018-02-01
We study the effect of many-body quantum interference on the dynamics of coupled periodically kicked systems whose classical dynamics is chaotic and shows an unbounded energy increase. We specifically focus on an N-coupled kicked rotors model: We find that the interplay of quantumness and interactions dramatically modifies the system dynamics, inducing a transition between energy saturation and unbounded energy increase. We discuss this phenomenon both numerically and analytically through a mapping onto an N-dimensional Anderson model. The thermodynamic limit N → ∞, in particular, always shows unbounded energy growth. This dynamical delocalization is genuinely quantum and very different from the classical one: Using a mean-field approximation, we see that the system self-organizes so that the energy per site increases in time as a power law with exponent smaller than 1. This wealth of phenomena is a genuine effect of quantum interference: The classical system for N ≥ 2 always behaves ergodically with an energy per site linearly increasing in time. Our results show that quantum mechanics can deeply alter the regularity or ergodicity properties of a many-body-driven system.
Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; OʼConnor, Kathryn; Rosenthal, Eric S; Westover, M Brandon
2016-06-01
The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia based on methods used in classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the values of (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
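As a hedged illustration of the alpha-delta ratio trending referred to above (the exact epoch lengths, frequency bands, and alarm criteria of the classic studies and of this automated implementation may differ), a minimal computation could look like the following; the thresholds are placeholders, not the study's validated values.

```python
import numpy as np
from scipy.signal import welch

def alpha_delta_ratio(epoch, fs):
    """Alpha (8-13 Hz) to delta (1-4 Hz) power ratio for one EEG epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), 4 * int(fs)))
    alpha = pxx[(f >= 8) & (f <= 13)].sum()
    delta = pxx[(f >= 1) & (f <= 4)].sum()
    return alpha / delta

def flag_adr_decline(adr_trend, baseline_epochs=6, relative_drop=0.10):
    """Flag epochs whose ADR falls more than `relative_drop` below an early baseline,
    a crude stand-in for the trending rules used to detect delayed cerebral ischemia."""
    baseline = np.median(adr_trend[:baseline_epochs])
    return np.asarray(adr_trend) < (1.0 - relative_drop) * baseline
```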
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (the in situ or one-step method, the saponification method, the classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps gave high recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However, the classic method would be the method of choice for the determination of the different lipid classes.
Danchin, N; Juillière, Y; de la Chaise, A T; Selton-Suty, C
1999-04-01
The goal of the study was to evaluate, in 1,837 consecutive patients, the comparative effects of French cassoulet (CASS) and international sauerkraut (CHOU). After classical exclusion procedures, 8 patients could be evaluated and received, in a randomised, double-blind, crossover protocol, an oral dose of 22.5 g/kg of CASS or CHOU. The results show a highly significant difference between the 2 products. A regular absorption of couscous is therefore recommended.
Motion estimation under location uncertainty for turbulent fluid flows
NASA Astrophysics Data System (ADS)
Cai, Shengze; Mémin, Etienne; Dérian, Pierre; Xu, Chao
2018-01-01
In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow that allows a notion of uncertainty to be incorporated naturally into the flow measurement. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
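For context, the classical Metropolis step that the quantum algorithm generalizes is simple to state. The toy sampler below (a sketch, not the paper's algorithm) draws from a Boltzmann distribution exp(-βE) for a one-dimensional state; the proposal width and seed are arbitrary choices.

```python
import numpy as np

def metropolis(energy, x0, beta, n_steps, step=0.5, seed=0):
    """Classical Metropolis sampling of exp(-beta*E(x)) for a scalar state x."""
    rng = np.random.default_rng(seed)
    x, e = x0, energy(x0)
    chain = np.empty(n_steps)
    for k in range(n_steps):
        xp = x + step * rng.standard_normal()          # symmetric random-walk proposal
        ep = energy(xp)
        # accept with probability min(1, exp(-beta * (E' - E)))
        if ep <= e or rng.random() < np.exp(-beta * (ep - e)):
            x, e = xp, ep
        chain[k] = x
    return chain

# Example: thermal samples of a harmonic oscillator E(x) = x^2/2 at inverse temperature beta = 2.
samples = metropolis(lambda x: 0.5 * x * x, 0.0, beta=2.0, n_steps=5000)
```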
A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics
ERIC Educational Resources Information Center
Pujol, O.; Perez, J. P.
2007-01-01
The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
Prnjavorac, Besim; Irejiz, Nedzada; Kurbasic, Zahid; Krajina, Katarina; Deljkic, Amina; Sinanovic, Albina; Fejzic, Jasmin
2015-04-01
Appropriate vitamin D turnover is essential for many physiological functions. Knowledge of its functions has improved over the last two decades as scientific evidence and understanding of its overall importance have grown. In addition to the classical (skeletal) roles of vitamin D, many other (non-classical) functions, outside bone and calcium-phosphate metabolism, are well defined today. The aim was to analyze blood vitamin D levels in dialysis and pre-dialysis patients and to evaluate the efficacy of supplementation therapy with vitamin D supplements. The vitamin D3 level, in the form of 25-hydroxyvitamin D3, was measured in dialysis and pre-dialysis patients using a competitive enzyme immunoassay with final fluorescent detection (ELFA). Parathormone was measured by an ELISA method. Other parameters were measured by colorimetric methods. Statistical analysis was done by nonparametric methods because of the dispersion of the vitamin D and parathormone results. In the dialysis group, 38 patients were analyzed. Among them, 35 (92%) presented vitamin D deficiency, whether they took supplementation or not. In only 3 patients was the vitamin D deficiency not severe. Vitamin D levels were evaluated in 42 pre-dialysis patients. Of these, 19 patients (45%) had a satisfactory level, above 30 ng/ml. Sixteen patients (38%) had moderate deficiency, 5 (12%) had severe deficiency, and two patients (5%) had very severe deficiency, below 5 ng/ml. Parathormone was within the normal range (9.5-75 pg/mL) in 13 patients (34%), below the normal range in one subject (2%), and above the normal range in 24 (63%). Vitamin D3 deficiency was registered in most hemodialysis patients, regardless of whether supplemental therapy was given regularly. More appropriate vitamin D3 supplementation should be considered for dialysis as well as pre-dialysis patients. In pre-dialysis patients, moderate deficiency was found in half of the patients, but severe deficiency in only two.
Multiple spatially localized dynamical states in friction-excited oscillator chains
NASA Astrophysics Data System (ADS)
Papangelo, A.; Hoffmann, N.; Grolet, A.; Stender, M.; Ciavarella, M.
2018-03-01
Friction-induced vibrations are known to affect many engineering applications. Here, we study a chain of friction-excited oscillators with nearest neighbor elastic coupling. The excitation is provided by a moving belt which moves at a certain velocity vd while friction is modelled with an exponentially decaying friction law. It is shown that in a certain range of driving velocities, multiple stable spatially localized solutions exist whose dynamical behavior (i.e. regular or irregular) depends on the number of oscillators involved in the vibration. The classical non-repeatability of friction-induced vibration problems can be interpreted in light of those multiple stable dynamical states. These states are found within a "snaking-like" bifurcation pattern. Contrary to the classical Anderson localization phenomenon, here the underlying linear system is perfectly homogeneous and localization is solely triggered by the friction nonlinearity.
Spin waves in rings of classical magnetic dipoles
NASA Astrophysics Data System (ADS)
Schmidt, Heinz-Jürgen; Schröder, Christian; Luban, Marshall
2017-03-01
We theoretically and numerically investigate spin waves that occur in systems of classical magnetic dipoles that are arranged at the vertices of a regular polygon and interact solely via their magnetic fields. There are certain limiting cases that can be analyzed in detail. One case is that of spin waves as infinitesimal excitations from the system's ground state, where the dispersion relation can be determined analytically. The frequencies of these infinitesimal spin waves are compared with the peaks of the Fourier transform of the thermal expectation value of the autocorrelation function calculated by Monte Carlo simulations. In the special case of vanishing wave number an exact solution of the equations of motion is possible, describing synchronized oscillations with finite amplitudes. Finally, the limiting case of a dipole chain with N → ∞ is investigated and completely solved.
Numerical investigations of the potential for laser focus sensors in micrometrology
NASA Astrophysics Data System (ADS)
Bischoff, Jörg; Mastylo, Rostyslav; Manske, Eberhard
2017-06-01
Laser focus sensors (LFS) [1] attached to a scanning nano-positioning and measuring machine (NPMM) enable near-diffraction-limit resolution with very large measuring areas of up to 200 x 200 mm [1]. Further extensions are planned to address wafer sizes of 8 inch and beyond. Thus, they are particularly well suited for micro-metrology on large wafers. On the other hand, the minimum lateral features in state-of-the-art semiconductor industry are as small as a few nanometers and therefore far beyond the resolution limits of classical optics. New techniques such as OCD or ODP [3,4], a.k.a. scatterometry, have helped to overcome these constraints considerably. However, scatterometry relies on regular patterns and therefore the measurements have to be performed on special reference gratings or boxes rather than in-die. Consequently, there is a gap between the measurement and the actual structure of interest, which becomes more and more of an issue with shrinking feature sizes. On the other hand, near-field approaches would also allow the resolution limit to be extended greatly [5], but they require very challenging controls to keep the working distance small enough to stay within the near-field zone. Therefore, the feasibility and the limits of an LFS scanner system have been investigated theoretically. Based on simulations of a laser focus sensor scanning across simple topographies, it was found that there is potential to overcome the diffraction limitations to some extent by means of vicinity interference effects caused by the optical interaction of adjacent topography features. We think that it might well be possible to reconstruct the diffracting profile by means of rigorous diffraction simulation based on a thorough model of the laser focus sensor optics in combination with topography diffraction [6], in a similar way as applied in OCD. The difference lies in the kind of signal that has to be modeled: while standard OCD is based on spectra, LFS utilizes height scan signals. Simulation results are presented for different types of topographies (dense vs. sparse, regular vs. single) with lateral features near and beyond the classical resolution limit. Moreover, the influence of topography height on the detectability is investigated. To this end, several sensor principles and polarization setups are considered, such as a dual-color pinhole sensor and a Foucault knife sensor. It is shown that resolution beyond the Abbe or Rayleigh limit is possible even with "classical" optical setups when combining measurements with sophisticated profile retrieval techniques and some a priori knowledge. Finally, measurement uncertainties are derived based on perturbation simulations according to the method presented in [7].
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994
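To make the ENET penalty structure concrete, a plain coordinate-descent sketch with fixed hyperparameters is given below; this is ordinary penalized regression, not the Bayesian ENET/ELASSO learning scheme proposed in the paper, and the lead-field analogy in the comments is only illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_cd(K, y, lam1, lam2, n_sweeps=200):
    """Coordinate descent for min_w 0.5||y - K w||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2
    (K: lead-field-like design matrix, y: data, w: sparse-and-smooth coefficients)."""
    K = np.asarray(K, float)
    y = np.asarray(y, float)
    n, p = K.shape
    w = np.zeros(p)
    r = y.copy()                          # residual y - K w
    col_norm2 = (K ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r += K[:, j] * w[j]           # put coordinate j back into the residual
            rho = K[:, j] @ r
            w[j] = soft_threshold(rho, lam1) / (col_norm2[j] + lam2)
            r -= K[:, j] * w[j]
    return w
```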
Expansion shock waves in regularized shallow-water theory
NASA Astrophysics Data System (ADS)
El, Gennady A.; Hoefer, Mark A.; Shearer, Michael
2016-05-01
We identify a new type of shock wave by constructing a stationary expansion shock solution of a class of regularized shallow-water equations that include the Benjamin-Bona-Mahony and Boussinesq equations. An expansion shock exhibits divergent characteristics, thereby contravening the classical Lax entropy condition. The persistence of the expansion shock in initial value problems is analysed and justified using matched asymptotic expansions and numerical simulations. The expansion shock's existence is traced to the presence of a non-local dispersive term in the governing equation. We establish the algebraic decay of the shock as it is gradually eroded by a simple wave on either side. More generally, we observe a robustness of the expansion shock in the presence of weak dissipation and in simulations of asymmetric initial conditions where a train of solitary waves is shed from one side of the shock.
The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.
Zhao, Junhong; Zurada, Jacek M; Yang, Jie; Wu, Wei
2018-07-01
Unlike first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel SpikeProp algorithm for SNNs that introduces a smoothing L1/2 regularization term into the error function. This algorithm makes the network structure sparse, with some smaller weights that can eventually be removed. Meanwhile, the convergence of the algorithm is proved under some reasonable conditions. The proposed algorithms have been tested for convergence speed, convergence rate and generalization on the classical XOR problem, the Iris problem and Wisconsin Breast Cancer classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
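The "smoothing" in the L1/2 regularizer above refers to replacing |w|^(1/2) near zero so that its gradient stays bounded during training. One common C¹ smoothing choice is sketched below; the paper's exact smoothing polynomial and how the term enters the SpikeProp update may differ, so this is illustrative only.

```python
import numpy as np

def smoothed_l_half(w, eps=1e-2):
    """Sum of |w|^(1/2) with a quadratic patch on |w| < eps; value and slope
    match sqrt(|w|) at |w| = eps, so the penalty is continuously differentiable."""
    a = np.abs(np.asarray(w, float))
    patched = np.sqrt(eps) + (a**2 - eps**2) / (4.0 * eps**1.5)
    return np.where(a >= eps, np.sqrt(a), patched).sum()

def smoothed_l_half_grad(w, eps=1e-2):
    """Gradient of the smoothed penalty, added to the weight update to push
    small weights toward zero (network sparsification)."""
    w = np.asarray(w, float)
    a = np.abs(w)
    outer = 0.5 / np.sqrt(np.maximum(a, eps))   # d/d|w| sqrt(|w|) away from zero
    inner = a / (2.0 * eps**1.5)                # derivative of the quadratic patch
    return np.sign(w) * np.where(a >= eps, outer, inner)
```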
Pokémon Go: digital health interventions to reduce cardiovascular risk.
Krittanawong, Chayakrit; Aydar, Mehmet; Kitai, Takeshi
2017-10-01
Physical activity is associated with a lower risk of coronary heart disease/cardiovascular disease mortality, and current guidelines recommend physical activity for primary prevention in healthy individuals and secondary prevention in patients with coronary heart disease/cardiovascular disease. Over the last decade, playing classic video games has become one of the most popular leisure activities in the world, but it is associated with a sedentary lifestyle. In the new era of rapidly evolving augmented reality technology, Pokémon Go, a well-known augmented reality game, may promote physical activity and reduce cardiovascular disease risk factors - that is, diabetes, obesity, and hypertension. Pokémon Go makes players willing to be physically active regularly and for long periods of time. We report on an assessment of regular walking while playing Pokémon Go, performed by data mining on Twitter.
Application of Turchin's method of statistical regularization
NASA Astrophysics Data System (ADS)
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
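A bare-bones stand-in for the deconvolution problem described above (linear apparatus function K, Gaussian noise, smoothness prior) is given below. Turchin's method additionally treats the prior strength probabilistically; this sketch reduces that to a fixed α for brevity, and all names and defaults are assumptions rather than the article's implementation.

```python
import numpy as np

def statistical_regularization(K, y, sigma, alpha):
    """Posterior mean and covariance for y = K*phi + noise with noise std `sigma`
    and a Gaussian smoothness prior ~ exp(-alpha/2 * ||D2 phi||^2)."""
    K = np.asarray(K, float)
    y = np.asarray(y, float)
    n = K.shape[1]
    D2 = np.diff(np.eye(n), n=2, axis=0)                 # second-difference operator
    A = K.T @ K / sigma**2 + alpha * (D2.T @ D2)
    phi = np.linalg.solve(A, K.T @ y / sigma**2)         # regularized (posterior-mean) signal
    cov = np.linalg.inv(A)                               # posterior covariance: error estimate
    return phi, cov
```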
Klimkiewicz, Paulina; Klimkiewicz, Robert; Jankowska, Agnieszka; Kubsik, Anna; Widłak, Patrycja; Łukasiak, Adam; Janczewska, Katarzyna; Kociuga, Natalia; Nowakowski, Tomasz; Woldańska-Okońska, Marta
2018-01-01
Introduction: In this article, the authors focus on the symptoms of ischemic stroke and the effect of neurorehabilitation methods on the functional status of patients after ischemic stroke. The aim of the study was to evaluate and compare the functional status of patients after ischemic stroke rehabilitated with classical kinesiotherapy alone, classical kinesiotherapy combined with NDT-Bobath, or classical kinesiotherapy combined with PNF. Materials and methods: The study involved 120 patients after ischemic stroke. Patients were treated in the Department of Rehabilitation and Physical Medicine USK of the Medical University in Lodz. Patients were divided into 3 groups of 40 people. Group 1 was rehabilitated with classical kinesiotherapy. Group 2 was rehabilitated with classical kinesiotherapy and NDT-Bobath. Group 3 was rehabilitated with classical kinesiotherapy and PNF. In all patient groups, magnetostimulation was performed using the Viofor JPS System. The study was conducted twice: before treatment and immediately after the 5-week therapy. The effects of the applied neurorehabilitation methods were assessed on the basis of the Rivermead Motor Assessment (RMA). Results: In all three patient groups, functional improvement was achieved. However, a significantly greater improvement was observed in patients in the second group, treated with classical kinesiotherapy and NDT-Bobath. Conclusions: The use of classical kinesiotherapy combined with the NDT-Bobath method is noticeably more effective in improving functional status than classical kinesiotherapy alone or classical kinesiotherapy combined with PNF in patients after ischemic stroke.
NASA Astrophysics Data System (ADS)
Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.
2018-03-01
In this article, the issue of correcting engineering servicing regularity on the basis of actual dependability data for cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of earlier research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity with various mathematical models is considered, and it is proven that the exponential model is the most appropriate for that purpose. The obtained results can be used as a stand-alone method of correcting engineering servicing regularity with specific operational conditions taken into account, as well as for improving the technical-economical and economical-stochastic methods. Thus, on the basis of the research conducted, a method of correcting the engineering servicing regularity of transport-technological machines during operation was developed. The use of this method will make it possible to decrease the number of failures.
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
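The split-Bregman step mentioned above handles the total-variation subproblem of the decoupled scheme. A one-dimensional sketch of that kind of subproblem solver is shown below; it uses anisotropic TV, dense matrices, and illustrative parameters, so it conveys the iteration structure rather than the paper's 3D implementation.

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv_1d(v, lam, mu=1.0, n_iter=100):
    """Split-Bregman iterations for min_u 0.5||u - v||^2 + lam*||D u||_1,
    i.e. the TV denoising step that alternates with the Tikhonov-regularized
    tomography step in a decoupled MTV-style scheme."""
    v = np.asarray(v, float)
    n = len(v)
    D = np.diff(np.eye(n), axis=0)                       # forward differences, shape (n-1, n)
    A = np.eye(n) + mu * D.T @ D
    u, d, b = v.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, v + mu * D.T @ (d - b))   # quadratic u-update
        Du = D @ u
        d = shrink(Du + b, lam / mu)                     # shrinkage on the gradient variable
        b += Du - d                                      # Bregman variable update
    return u
```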
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose: To enable fast reconstruction of quantitative susceptibility maps with a total variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear conjugate gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared with 22 minutes using the conjugate gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
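The closed-form, FFT-only iteration referred to above can be conveyed in one dimension. In the sketch below the dipole and gradient operators are assumed circulant (so they diagonalize under the FFT); `d_hat` and `g_hat` are their assumed frequency responses, and all names, defaults, and the 1D setting are illustrative rather than the published reconstruction.

```python
import numpy as np

def l1_fft_splitting(d_hat, g_hat, y, lam, mu=1.0, n_iter=50):
    """Variable-splitting iteration for
       min_x 0.5*||F^-1(d_hat*F x) - y||^2 + lam*||F^-1(g_hat*F x)||_1 :
    every x-update is an element-wise division in Fourier space,
    every z-update a soft threshold."""
    F, iF = np.fft.fft, np.fft.ifft
    y = np.asarray(y, float)
    y_hat = F(y)
    z = np.zeros(len(y))
    u = np.zeros(len(y))
    denom = np.abs(d_hat) ** 2 + mu * np.abs(g_hat) ** 2 + 1e-12
    x_hat = np.zeros_like(y_hat)
    for _ in range(n_iter):
        x_hat = (np.conj(d_hat) * y_hat + mu * np.conj(g_hat) * F(z - u)) / denom
        gx = iF(g_hat * x_hat).real
        z = np.sign(gx + u) * np.maximum(np.abs(gx + u) - lam / mu, 0.0)   # soft threshold
        u += gx - z                                                        # dual update
    return iF(x_hat).real
```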
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited its calculated efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial speculations regarding the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatic regularization parameters. The proposed algorithm exhibited superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
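The decomposition described above (an averaged data-fidelity gradient followed by a closed-form proximal step for the nonsmooth penalty) can be written compactly for an l1 penalty. This is a generic dual-averaging sketch, not the USCT reconstruction code: the wave-equation, source-encoded gradient is abstracted into `grad_fn`, and the step constants are placeholders.

```python
import numpy as np

def rda_l1(grad_fn, x0, lam, gamma, n_iter):
    """Regularized dual averaging for min_x f(x) + lam*||x||_1:
    average the (stochastic) gradients of the data term f, then apply a
    soft-threshold prox update that never differentiates the penalty."""
    x = np.array(x0, float)
    g_bar = np.zeros_like(x)
    for k in range(1, n_iter + 1):
        g = grad_fn(x, k)                              # e.g. gradient from one source-encoded shot
        g_bar = ((k - 1) * g_bar + g) / k              # running average of all past gradients
        x = -(np.sqrt(k) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return x
```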
A hybrid-perturbation-Galerkin technique which combines multiple expansions
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1989-01-01
A two-step hybrid perturbation-Galerkin method for the solution of a variety of differential-equation-type problems is found to give better results when multiple perturbation expansions are employed. The method assumes that there is a parameter in the problem formulation and that a perturbation method can be used to construct one or more expansions in this parameter, with the final approximation expressed as perturbation coefficient functions multiplied by computed amplitudes. In step one, regular and/or singular perturbation methods are used to determine the perturbation coefficient functions. The results of step one are in the form of one or more expansions, each expressed as a sum of perturbation coefficient functions multiplied by a priori known gauge functions. In step two the classical Bubnov-Galerkin method uses the perturbation coefficient functions computed in step one to determine a set of amplitudes which replace and improve upon the gauge functions. The hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Galerkin methods as applied separately, while combining some of their better features. The proposed method is applied, with two perturbation expansions in each case, to a variety of model ordinary differential equation problems including: a family of linear two-point boundary-value problems, a nonlinear two-point boundary-value problem, a quantum mechanical eigenvalue problem and a nonlinear free oscillation problem. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.
Spatial resolution properties of motion-compensated tomographic image reconstruction methods.
Chun, Se Young; Fessler, Jeffrey A
2012-07-01
Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
Developing a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is effectively deployed here via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated approach to finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
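A minimal sketch of coupling a damped LSQR solve with a simplex search over the regularization parameter is shown below. The scalar objective used here is a crude L-curve-like proxy chosen only for illustration; it is not the criterion optimized in the paper, and the helper name and defaults are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

def lsqr_with_simplex_alpha(J, b, alpha0=1e-2):
    """Damped LSQR solve x(alpha) = argmin ||J x - b||^2 + alpha^2*||x||^2,
    with alpha chosen by a Nelder-Mead (simplex) search over log(alpha)."""
    def objective(log_alpha):
        alpha = float(np.exp(log_alpha[0]))
        x = lsqr(J, b, damp=alpha)[0]
        # crude residual-vs-norm tradeoff, standing in for the paper's criterion
        return np.log(np.linalg.norm(J @ x - b)) + np.log(np.linalg.norm(x) + 1e-12)
    res = minimize(objective, x0=[np.log(alpha0)], method="Nelder-Mead")
    alpha = float(np.exp(res.x[0]))
    return alpha, lsqr(J, b, damp=alpha)[0]
```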
Muthukrishnan, Madhanmohan; Singanallur, Nagendrakumar B; Ralla, Kumar; Villuppanoor, Srinivasan A
2008-08-01
Foot-and-mouth disease virus (FMDV) samples transported to the laboratory from far and inaccessible areas for serodiagnosis pose a major problem in a tropical country like India, where there is maximum temperature fluctuation. Inadequate storage methods lead to spoilage of FMDV samples collected from clinically positive animals in the field. Such samples are declared as non-typeable by the typing laboratories with the consequent loss of valuable epidemiological data. The present study evaluated the usefulness of FTA Classic Cards for the collection, shipment, storage and identification of the FMDV genome by RT-PCR and real-time RT-PCR. The stability of the viral RNA, the absence of infectivity and ease of processing the sample for molecular methods make the FTA cards a useful option for transport of FMDV genome for identification and serotyping. The method can be used routinely for FMDV research as it is economical and the cards can be transported easily in envelopes by regular document transport methods. Live virus cannot be isolated from samples collected in FTA cards, which is a limitation. This property can be viewed as an advantage as it limits the risk of transmission of live virus.
Gomgnimbou, Michel Kiréopori; Abadia, Edgar; Zhang, Jian; Refrégier, Guislaine; Panaiotov, Stefan; Bachiyska, Elizabeta; Sola, Christophe
2012-10-01
We developed "spoligoriftyping," a 53-plex assay based on two preexisting methods, the spoligotyping and "rifoligotyping" assays, by combining them into a single assay. Spoligoriftyping allows simultaneous spoligotyping (i.e., clustered regularly interspaced short palindromic repeat [CRISPR]-based genotyping) and characterization of the main rifampin drug resistance mutations on the rpoB hot spot region in a few hours. This test partly uses the dual-priming-oligonucleotide (DPO) principle, which allows simultaneous efficient amplifications of rpoB and the CRISPR locus in the same sample. We tested this method on a set of 114 previously phenotypically and genotypically characterized multidrug-resistant (MDR) Mycobacterium tuberculosis or drug-susceptible M. tuberculosis DNA extracted from clinical isolates obtained from patients from Bulgaria, Nigeria, and Germany. We showed that our method is 100% concordant with rpoB sequencing results and 99.95% (3,911/3,913 spoligotype data points) correlated with classical spoligotyping results. The sensitivity and specificity of our assay were 99 and 100%, respectively, compared to those of phenotypic drug susceptibility testing. Such assays pave the way to the implementation of locally and specifically adapted methods of performing in a single tube both drug resistance mutation detection and genotyping in a few hours.
29 CFR 778.209 - Method of inclusion of bonus in regular rate.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 3 2012-07-01 2012-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...
29 CFR 778.209 - Method of inclusion of bonus in regular rate.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 3 2013-07-01 2013-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...
29 CFR 778.209 - Method of inclusion of bonus in regular rate.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 3 2014-07-01 2014-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...
29 CFR 778.209 - Method of inclusion of bonus in regular rate.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...
29 CFR 778.209 - Method of inclusion of bonus in regular rate.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the âRegular Rateâ Bonuses § 778.209 Method of inclusion of...
Portfolio Analysis for Vector Calculus
ERIC Educational Resources Information Center
Kaplan, Samuel R.
2015-01-01
Classic stock portfolio analysis provides an applied context for Lagrange multipliers that undergraduate students appreciate. Although modern methods of portfolio analysis are beyond the scope of vector calculus, classic methods reinforce the utility of this material. This paper discusses how to introduce classic stock portfolio analysis in a…
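As a hedged illustration of the kind of classic analysis the abstract refers to (not necessarily the paper's own example), the minimum-variance portfolio under a full-investment constraint can be solved with a single Lagrange multiplier; the covariance matrix below is invented for the sketch.

```python
import numpy as np

# Hypothetical covariance matrix of three asset returns (illustrative values only).
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.12]])
ones = np.ones(3)

# Minimize w^T Sigma w subject to sum(w) = 1.
# Lagrangian: L = w^T Sigma w - lam * (ones @ w - 1);
# stationarity gives 2 Sigma w = lam * ones, so w is proportional to Sigma^{-1} ones.
w_unnormalized = np.linalg.solve(Sigma, ones)
w = w_unnormalized / (ones @ w_unnormalized)

print("minimum-variance weights:", w)
print("portfolio variance:", w @ Sigma @ w)
```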
NASA Astrophysics Data System (ADS)
Austin, Rickey W.
In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is via applying the classical work-energy theorem to relativistic momentum. This approach starts with a classically based work-energy theorem and applies SR's momentum to the derivation. One outcome of this derivation is relativistic kinetic energy. From this derivation, it is rather straightforward to form a kinetic-energy-based time dilation function. In the derivation of General Relativity, a common approach is to bypass classical laws as a starting point. Instead, a rigorous development of differential geometry and Riemannian space is constructed, from which classically based laws are derived. This is in contrast to SR's approach of starting with classical laws and applying the consequence of the universal speed of light for all observers. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It will be shown that this method gives first-order accuracy compared to the Schwarzschild metric. The SR kinetic energy and the newly derived NGPE term are combined to form a Riemannian metric based on these two energies. A geodesic is derived and the calculations are compared to the Schwarzschild geodesic for a test mass orbiting a central, non-rotating, non-charged massive body. The new metric results in high-accuracy calculations when compared to the prediction of Einstein's General Relativity. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects. This approach mimics SR's method of starting with classical mechanics when deriving relativistic equations. As a complement to introducing General Relativity, it provides a plausible scaffolding method from classical physics when teaching introductory General Relativity. A straightforward path from classical laws to General Relativity is derived. This derivation reproduces the Schwarzschild solution to Einstein's field equations to at least first-order accuracy.
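To make the claimed first-order agreement explicit (a sketch of the standard expansions, not the author's own derivation), note that the Schwarzschild time-dilation factor and a dilation factor built from the Newtonian potential GM/(rc^2) coincide to first order:

\[
\sqrt{1 - \frac{2GM}{rc^{2}}} \;=\; 1 - \frac{GM}{rc^{2}} + \mathcal{O}\!\left(\frac{G^{2}M^{2}}{r^{2}c^{4}}\right),
\qquad
\left(1 + \frac{GM}{rc^{2}}\right)^{-1} \;=\; 1 - \frac{GM}{rc^{2}} + \mathcal{O}\!\left(\frac{G^{2}M^{2}}{r^{2}c^{4}}\right),
\]

so any NGPE-based factor of the generic form \( (1 + |\Phi|/c^{2})^{-1} \) matches \( \sqrt{g_{tt}} \) of the Schwarzschild metric to first order in \( GM/(rc^{2}) \).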
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
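The general idea of combining several graph Laplacians with weights learned alongside the ranking scores can be sketched as below; this is a minimal illustration of multiple-graph regularized ranking, not the authors' exact MultiG-Rank objective or update rules (the exponential re-weighting is an assumption made for the sketch).

```python
import numpy as np

def multi_graph_rank(Ls, y, alpha=1.0, n_iter=20):
    """Ranking scores regularized by a weighted combination of graph Laplacians.

    Ls    : list of (n, n) graph Laplacians built from different graph models
    y     : (n,) query/relevance vector
    alpha : trade-off between fitting y and smoothness on the combined graph
    """
    n = y.shape[0]
    mu = np.full(len(Ls), 1.0 / len(Ls))       # start with uniform graph weights
    f = y.copy()
    for _ in range(n_iter):
        # Update ranking scores: (I + alpha * sum_k mu_k L_k) f = y
        L = sum(m * Lk for m, Lk in zip(mu, Ls))
        f = np.linalg.solve(np.eye(n) + alpha * L, y)
        # Update graph weights: graphs on which f is smoother receive larger weight
        # (illustrative re-weighting, not the paper's alternating minimization).
        smooth = np.array([f @ Lk @ f for Lk in Ls])
        mu = np.exp(-smooth / (smooth.mean() + 1e-12))
        mu /= mu.sum()
    return f, mu
```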
The hypergraph regularity method and its applications
Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.
2005-01-01
Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821
Multiple graph regularized protein domain ranking
2012-01-01
Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature-space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
Image deblurring based on nonlocal regularization with a non-convex sparsity constraint
NASA Astrophysics Data System (ADS)
Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi
2018-04-01
In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to traditional local regularization methods. Despite the success of this technique, in order to obtain computational efficiency a convex regularizing functional is exploited in most existing methods, which is equivalent to imposing a convex prior on the output of the nonlocal difference operator. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, in particular in the seminal work of Kheradmand et al., is extremely heavy-tailed and is poorly described by a convex prior. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.
2017-06-01
Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.
A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2016-02-01
Large-eddy simulation (LES) solves only the large scales part of turbulent flows by using a scales separation based on a filtering operation. The solution of the filtered Navier-Stokes equations requires then to model the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress model. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS), filtered DNS in the case of classic flows simulated with a pseudo-spectral solver and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.
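For reference, the (non-regularized) gradient model that such a procedure starts from is commonly written in the standard textbook form below; the regularization and dynamic coefficient of the cited work are not reproduced here.

\[
\tau_{ij} \;\equiv\; \overline{u_i u_j} - \bar{u}_i \bar{u}_j \;\approx\; \frac{\bar{\Delta}^{2}}{12}\,
\frac{\partial \bar{u}_i}{\partial x_k}\,\frac{\partial \bar{u}_j}{\partial x_k},
\]

where \( \bar{\Delta} \) is the filter width. This model correlates well with the exact subgrid-scale stress in a priori tests but is unstable a posteriori, which is what motivates the regularization procedure described in the abstract.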
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. The L1-norm data-fidelity term and the total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV and eliminate these undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
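In generic form (the exact weights and discretization of the cited work may differ), the variational model described combines an L1 data-fidelity term with second-order TGV:

\[
\min_{u}\; \lambda \,\|K u - f\|_{1} \;+\; \mathrm{TGV}_{\alpha}^{2}(u),
\qquad
\mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{w}\; \alpha_{1}\!\int_{\Omega} |\nabla u - w|\,dx \;+\; \alpha_{0}\!\int_{\Omega} |\mathcal{E}(w)|\,dx,
\]

where \(K\) is the blur operator, \(f\) the observed image, and \(\mathcal{E}(w)\) the symmetrized gradient of the auxiliary field \(w\). The L1 term accounts for impulse noise, while the second-order TGV term suppresses the staircase artifacts produced by plain TV.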
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
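Schematically, the difference between the two families of methods can be written as follows (a generic form, not the exact functional of the cited paper): instead of adding a penalty weighted by a regularization parameter, the multiplicative approach minimizes the product of the residual and a regularizing functional,

\[
\text{additive:}\quad J_{\lambda}(F) = \|X - HF\|_{2}^{2} + \lambda\, \mathcal{R}(F),
\qquad
\text{multiplicative:}\quad J(F) = \|X - HF\|_{2}^{2}\cdot \mathcal{R}(F),
\]

where \(H\) is the transfer-function matrix, \(X\) the measured vibration field and \(F\) the force vector. Because no \(\lambda\) appears in the multiplicative functional, the effective amount of regularization adjusts itself during the iterative minimization as the residual decreases.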
NASA Astrophysics Data System (ADS)
Tema, Evdokia; Camps, Pierre; Ferrara, Enzo
2014-05-01
A detailed rock-magnetic and archaeomagnetic study has been carried out on two rescue-excavation kilns discovered during works to expand a highway at the location of Osterietta, in Northern Italy. Systematic archaeomagnetic sampling was carried out, collecting 15 samples from the first kiln (OSA) and 8 samples from the second kiln (OSB), all of them oriented in situ with a magnetic compass and an inclinometer. Magnetic mineralogy measurements have been carried out in order to determine the main magnetic carrier of the samples and to check their thermal stability. Standard thermal demagnetization procedures have been used to determine the archaeomagnetic direction registered by the bricks during their last firing. Demagnetization results show a very stable characteristic remanent magnetization (ChRM). We averaged the directions for each kiln separately and calculated the statistical parameters assuming a Fisherian distribution. The archaeointensity of both kilns has also been recovered with both the classical Thellier-Thellier method and the multi-specimen procedure (MSP-DSC). During the Thellier experiments, regular partial thermoremanent magnetization checks were performed, and the effects of the anisotropy of the thermoremanent magnetization (TRM) and of the cooling rate upon TRM intensity acquisition were investigated in all samples. The multi-specimen procedure was performed with a very fast-heating oven developed at Montpellier (France). The intensity results obtained from both methods have been compared, and the full geomagnetic field vector determined for each kiln has been used for archaeomagnetic dating. The obtained results show that the kilns were almost contemporaneous and that their last use occurred in the 1750-1850 AD time interval.
Fu, Xian-Jun; Song, Xu-Xia; Wei, Lin-Bo; Wang, Zhen-Guo
2013-08-01
To provide the distribution patterns and compatibility laws of the constituent herbs in prescriptions, so that doctors can make informed decisions when choosing herbs and prescriptions for treating respiratory disease. Classical prescriptions treating respiratory disease were selected from authoritative prescription books. Data mining methods (frequent itemsets and association rules) were used to analyze the regular patterns and compatibility laws of the constituent herbs in the selected prescriptions. A total of 562 prescriptions were studied. The results showed that Radix glycyrrhizae was the most frequently used herb, appearing in 47.2% of prescriptions; other frequently used herbs were Semen armeniacae amarum, Fructus schisandrae chinensis, Herba ephedrae, and Radix ginseng. Herba ephedrae was usually coupled with Semen armeniacae amarum, with a confidence of 73.3%, and many herbs were frequently accompanied by Radix glycyrrhizae with high confidence. Moreover, Fructus schisandrae chinensis, Herba ephedrae and Rhizoma pinelliae were the herbs most commonly used to treat cough, dyspnoea and associated sputum, respectively, besides Radix glycyrrhizae and Semen armeniacae amarum. The prescriptions treating dyspnoea often used the herb pair Herba ephedrae & Radix glycyrrhizae, while prescriptions treating sputum often used the herb pairs Rhizoma pinelliae & Radix glycyrrhizae and Rhizoma pinelliae & Semen armeniacae amarum, and the triple herb groups Rhizoma pinelliae & Semen armeniacae amarum & Radix glycyrrhizae and Pericarpium citri reticulatae & Rhizoma pinelliae & Radix glycyrrhizae. The prescriptions treating respiratory disease showed common compatibility laws in herb use and specific compatibility laws for treating different respiratory symptoms. These patterns and compatibility laws could help doctors choose appropriate herbs and prescriptions for treating respiratory disease.
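The frequent-itemset/association-rule analysis described above can be reproduced in outline with standard tools; the toy prescriptions below are invented for illustration (the real data would be the 562 classical prescriptions), and the mlxtend apriori/association_rules API is used as one possible implementation.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Toy prescriptions: each row lists the herbs appearing in one prescription.
prescriptions = [
    ["Radix glycyrrhizae", "Semen armeniacae amarum", "Herba ephedrae"],
    ["Radix glycyrrhizae", "Rhizoma pinelliae"],
    ["Herba ephedrae", "Semen armeniacae amarum", "Radix glycyrrhizae"],
    ["Fructus schisandrae chinensis", "Radix ginseng", "Radix glycyrrhizae"],
]

# One-hot encode the transactions, mine frequent herb sets, then derive rules.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(prescriptions).transform(prescriptions),
                      columns=te.columns_)
frequent = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```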
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh
2015-01-01
Background: Permanent slide preparation of nematodes, especially small ones, is time-consuming and difficult, and the specimens develop scarious margins. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated on different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost-effective and fast for mounting small nematodes compared to the classic method. PMID:26811729
Simulation of wave packet tunneling of interacting identical particles
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Filinov, A. V.; Arkhipov, A. S.
2003-02-01
We demonstrate a different method for simulating nonstationary quantum processes, considering the tunneling of two interacting identical particles represented by wave packets. The quantum molecular dynamics method used here, Wigner molecular dynamics (WMD), is based on the Wigner representation of quantum mechanics. In this method, ensembles of classical trajectories are used to solve the quantum Wigner-Liouville equation. These classical trajectories obey Hamiltonian-like equations, where the effective potential consists of the usual classical term and a quantum term that depends on the Wigner function and its derivatives. The quantum term is calculated using the local distribution of trajectories in phase space; therefore, the classical trajectories are not independent, contrary to classical molecular dynamics. The developed WMD method takes into account the influence of exchange and interaction between particles. The role of direct and exchange interactions in tunneling is analyzed. The tunneling times for interacting particles are calculated.
Quantum chaos: an introduction via chains of interacting spins-1/2
NASA Astrophysics Data System (ADS)
Gubin, Aviva; Santos, Lea
2012-02-01
We discuss aspects of quantum chaos by focusing on spectral statistical properties and structures of eigenstates of quantum many-body systems. Quantum systems whose classical counterparts are chaotic have properties that differ from those of quantum systems whose classical counterparts are regular. One of the main signatures of what became known as quantum chaos is a spectrum showing repulsion of the energy levels. We show how level repulsion may develop in one-dimensional systems of interacting spins-1/2 which are devoid of random elements and involve only two-body interactions. We present a simple recipe to unfold the spectrum and emphasize the importance of taking into account the symmetries of the system. In addition to the statistics of eigenvalues, we also analyze how the structure of the eigenstates may indicate chaos. This is done by computing quantities that measure the level of delocalization of the eigenstates.
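A minimal numerical illustration of the level-repulsion signature (not the spin-chain code of the paper): compare nearest-neighbour spacing statistics of an uncorrelated (Poisson-like) spectrum with those of a Gaussian orthogonal ensemble (GOE) random matrix. The unfolding here is a crude global rescaling, chosen only for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def spacings(levels):
    """Nearest-neighbour spacings, crudely unfolded to unit mean spacing."""
    s = np.diff(np.sort(levels))
    return s / s.mean()

# "Regular" case: independent levels -> Poisson statistics, no level repulsion.
poisson_s = spacings(rng.uniform(0, n, size=n))

# "Chaotic" case: GOE eigenvalues -> Wigner-Dyson statistics, P(s) -> 0 as s -> 0.
A = rng.normal(size=(n, n))
goe = (A + A.T) / np.sqrt(2 * n)
goe_s = spacings(np.linalg.eigvalsh(goe))

# The fraction of very small spacings is a rough indicator of level repulsion.
print("P(s < 0.1):  Poisson %.3f   GOE %.3f"
      % ((poisson_s < 0.1).mean(), (goe_s < 0.1).mean()))
```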
Moment inference from tomograms
Day-Lewis, F. D.; Chen, Y.; Singha, K.
2007-01-01
Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
Moment inference from tomograms
Day-Lewis, Frederick D.; Chen, Yongping; Singha, Kamini
2007-01-01
Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error.
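As a sketch of the moment calculations that such image appraisal targets (generic definitions of spatial moments, not the authors' code), the zeroth, first and second central moments of a 2-D tomogram of solute concentration can be computed as follows.

```python
import numpy as np

def spatial_moments(c, x, y):
    """Zeroth, first and second central spatial moments of a 2-D plume image.

    c : (ny, nx) array of tomogram-estimated concentration (or change in property)
    x : (nx,) cell-centre coordinates;  y : (ny,) cell-centre coordinates
    """
    X, Y = np.meshgrid(x, y)
    m0 = c.sum()                          # zeroth moment: total mass
    xc = (c * X).sum() / m0               # first moments: centre of mass
    yc = (c * Y).sum() / m0
    sxx = (c * (X - xc) ** 2).sum() / m0  # second central moments: plume spread
    syy = (c * (Y - yc) ** 2).sum() / m0
    return m0, (xc, yc), (sxx, syy)
```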
A comparative study of different methods for calculating electronic transition rates
NASA Astrophysics Data System (ADS)
Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan
2018-03-01
We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.
Zahabiun, Farzaneh; Sadjjadi, Seyed Mahmoud; Esfandiari, Farideh
2015-01-01
Permanent slide preparation of nematodes, especially small ones, is time-consuming and difficult, and the specimens develop scarious margins. To address this problem, a modified double glass mounting method was developed and compared with the classic method. A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated on different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs of the nematodes. This method is cost-effective and fast for mounting small nematodes compared to the classic method.
“Kerrr” black hole: The lord of the string
NASA Astrophysics Data System (ADS)
Smailagic, Anais; Spallucci, Euro
2010-04-01
Kerrr in the title is not a typo. The third "r" stands for regular, in the sense of a pathology-free rotating black hole. We exhibit a long-sought-for, exact, Kerr-like solution of the Einstein equations with novel features: (i) no curvature ring singularity; (ii) no "anti-gravity" universe with causality-violating closed time-like world-lines; (iii) no "super-luminal" matter disk. The ring singularity is replaced by a classical, circular, rotating string with Planck tension representing the inner engine driving the rotation of all the surrounding matter. The resulting geometry is regular and smoothly interpolates among an inner Minkowski space, a borderline de Sitter region and an outer Kerr universe. The key ingredient curing all the unphysical features of the ordinary Kerr black hole is the choice of a "non-commutative geometry inspired" matter source as the input for the Einstein equations, in analogy with the spherically symmetric black holes described in earlier works.
Expansion shock waves in regularized shallow-water theory
El, Gennady A.; Shearer, Michael
2016-01-01
We identify a new type of shock wave by constructing a stationary expansion shock solution of a class of regularized shallow-water equations that include the Benjamin–Bona–Mahony and Boussinesq equations. An expansion shock exhibits divergent characteristics, thereby contravening the classical Lax entropy condition. The persistence of the expansion shock in initial value problems is analysed and justified using matched asymptotic expansions and numerical simulations. The expansion shock's existence is traced to the presence of a non-local dispersive term in the governing equation. We establish the algebraic decay of the shock as it is gradually eroded by a simple wave on either side. More generally, we observe a robustness of the expansion shock in the presence of weak dissipation and in simulations of asymmetric initial conditions where a train of solitary waves is shed from one side of the shock. PMID:27279780
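For context, the Benjamin-Bona-Mahony equation named in the abstract is the regularized long-wave model

\[
u_t + u_x + u\,u_x - u_{xxt} = 0,
\]

whose mixed space-time dispersive term \(u_{xxt}\) becomes a non-local operator when the equation is solved for \(u_t\); this is the non-local dispersive ingredient to which the existence of the expansion shock is traced.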
Nonparametric instrumental regression with non-convex constraints
NASA Astrophysics Data System (ADS)
Grasmair, M.; Scherzer, O.; Vanhems, A.
2013-03-01
This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.
Black hole solution in the framework of arctan-electrodynamics
NASA Astrophysics Data System (ADS)
Kruglov, S. I.
An arctan-electrodynamics coupled to the gravitational field is investigated. We obtain a regular black hole solution that at r → ∞ gives corrections to the Reissner-Nordström solution. The corrections to Coulomb's law at r → ∞ are found. We evaluate the mass of the black hole, which is a function of the dimensional parameter β introduced in the model. The magnetically charged black hole is also investigated, and we obtain the magnetic mass of the black hole and the metric function at r → ∞. A regular black hole solution with a de Sitter core is obtained at r → 0. We show that there is no singularity of the Ricci scalar for electrically and magnetically charged black holes. Restrictions on the electric and magnetic fields are found that follow from the requirements of the absence of a superluminal sound speed and of classical stability.
An Exponential Regulator for Rapidity Divergences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ye; Neill, Duff; Zhu, Hua Xing
2016-04-01
Finding an efficient and compelling regularization of soft and collinear degrees of freedom at the same invariant mass scale, but separated in rapidity is a persistent problem in high-energy factorization. In the course of a calculation, one encounters divergences unregulated by dimensional regularization, often called rapidity divergences. Once regulated, a general framework exists for their renormalization, the rapidity renormalization group (RRG), leading to fully resummed calculations of transverse momentum (to the jet axis) sensitive quantities. We examine how this regularization can be implemented via a multi-differential factorization of the soft-collinear phase-space, leading to an (in principle) alternative non-perturbative regularization of rapidity divergences. As an example, we examine the fully-differential factorization of a color singlet's momentum spectrum in a hadron-hadron collision at threshold. We show how this factorization acts as a mother theory to both traditional threshold and transverse momentum resummation, recovering the classical results for both resummations. Examining the refactorization of the transverse momentum beam functions in the threshold region, we show that one can directly calculate the rapidity renormalized function, while shedding light on the structure of joint resummation. Finally, we show how using modern bootstrap techniques, the transverse momentum spectrum is determined by an expansion about the threshold factorization, leading to a viable higher loop scheme for calculating the relevant anomalous dimensions for the transverse momentum spectrum.
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
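One building block of the method, the Jacobi-preconditioned conjugate gradient solve stopped by a relative residual tolerance, can be sketched as below; in the full scheme the initial guess x0 would be the extrapolated/interpolated coarse-grid solution, whereas here it is simply an input (the sketch is generic, not the authors' code).

```python
import numpy as np

def jacobi_pcg(A, b, x0, rtol=1e-8, max_iter=1000):
    """Jacobi-preconditioned CG for a symmetric positive definite A,
    stopped when the relative residual ||b - A x|| / ||b|| drops below rtol."""
    Minv = 1.0 / A.diagonal()            # Jacobi (diagonal) preconditioner
    x = x0.copy()
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= rtol * b_norm:   # relative residual tolerance
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```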
Changes and specificities in health behaviors among healthcare students over an 8-year period
Delay, J.; Grigioni, S.; Déchelotte, P.; Ladner, J.
2018-01-01
Background Healthcare students are future health care providers and serve as role models and coaches to enhance behaviors for healthy lifestyles. However, healthcare students face multiple stressors that could lead them to adopt risk behaviors. Objectives To assess the changes in health risk factors among healthcare students between 2007 and 2015, and to identify specific health behaviors based on the curriculum in a population of healthcare students. Methods Two cross-sectional studies were conducted in 2007 and 2015 among nursing, medical, pharmacy, and physiotherapy students (Rouen, France). During compulsory courses and examination sessions, students filled in self-administered questionnaires on socio-demographic characteristics and behaviors such as tobacco smoking, alcohol consumption, cannabis consumption, eating disorders, regular practice of sport, perceived health, stress and use of psychotropic drugs. Results 2,605 healthcare students were included (1,326 in 2007 and 1,279 in 2015), comprising 1,225 medical students (47.0%), 738 nursing students (28.3%), 362 pharmacy students (13.9%), and 280 physiotherapy students (10.8%). Between 2007 and 2015, occasional binge drinking and regular practice of sport increased significantly among healthcare students, respectively AOR = 1.48 CI95% (1.20–1.83) and AOR = 1.33 CI95% (1.11–1.60), while regular cannabis consumption decreased significantly, AOR = 0.32 CI95% (0.19–0.54). There was no change in smoking or in being overweight/obese. There was a higher risk of frequent binge drinking and a lower risk of tobacco smoking in all curricula than in nursing students. Medical students practiced sport on a more regular basis, were less often overweight/obese, and had fewer eating disorders than nursing students. Conclusion Our findings demonstrate a stable frequency of classic behaviors such as smoking but a worsening of emerging behaviors such as binge drinking among healthcare students between 2007 and 2015. Health behaviors differed according to healthcare curricula, and nursing students demonstrated higher risks. As health behaviors are positively related to favorable attitudes towards preventive counseling, healthcare students should receive training in preventive counseling and develop healthy lifestyles targeted according to their health curriculum. PMID:29566003
De la Flor-Martínez, Maria; Galindo-Moreno, Pablo; Sánchez-Fernández, Elena; Piattelli, Adriano; Cobo, Manuel Jesus; Herrera-Viedma, Enrique
2016-10-01
The study of classic papers permits analysis of the past, present, and future of a specific area of knowledge. This type of analysis is becoming more frequent and more sophisticated. Our objective was to use the H-classics method, based on the h-index, to analyze classic papers in Implant Dentistry, Periodontics, and Oral Surgery (ID, P, and OS). First, an electronic search of documents related to ID, P, and OS was conducted in journals indexed in Journal Citation Reports (JCR) 2014 within the category 'Dentistry, Oral Surgery & Medicine'. Second, Web of Knowledge databases were searched using MeSH terms related to ID, P, and OS. Finally, the H-classics method was applied to select the classic articles in these disciplines, collecting data on associated research areas, document type, country, institutions, and authors. Of 267,611 documents related to ID, P, and OS retrieved from JCR journals (2014), 248 were selected as H-classics. They were published in 35 journals between 1953 and 2009, most frequently in the Journal of Clinical Periodontology (18.95%), the Journal of Periodontology (18.54%), the International Journal of Oral and Maxillofacial Implants (9.27%), and Clinical Oral Implants Research (6.04%). These classic articles derived from the USA in 49.59% of cases and from Europe in 47.58%, while the most frequent host institution was the University of Gothenburg (17.74%) and the most frequent authors were J. Lindhe (10.48%) and S. Socransky (8.06%). The H-classics approach offers an objective method to identify core knowledge in clinical disciplines such as ID, P, and OS. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
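A sketch of the selection rule, assuming the usual H-classics definition (the papers whose citation counts place them in the h-core of the field's publication set); the citation counts below are invented for illustration.

```python
def h_index(citations):
    """h = largest h such that at least h papers have >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def h_classics(papers):
    """papers: list of (paper_id, citation_count); return the h-core papers and h."""
    h = h_index([c for _, c in papers])
    return [(pid, c) for pid, c in papers if c >= h], h

# Toy example with invented citation counts.
papers = [("A", 250), ("B", 120), ("C", 80), ("D", 12), ("E", 3)]
classics, h = h_classics(papers)
print("h =", h, "->", classics)
```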
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. The phantom inclusions were fitted with various fluorochrome inclusions (Cy5.5), for which optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
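The flavor of the reconstruction can be sketched as a Richardson-Lucy update damped toward a Gaussian-smoothed "floating default" image; the simple convex damping used below is illustrative only and does not reproduce the authors' entropic regularization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_rl(measured, forward, backward, n_iter=50, alpha=0.1, sigma=2.0):
    """Richardson-Lucy/MLEM iteration with a floating Gaussian-smoothed default.

    measured : observed projection data
    forward  : function x -> A x (system/projection operator)
    backward : function y -> A^T y (back-projection), with backward(ones) > 0
    alpha    : pull toward the floating default (0 = plain RL)
    sigma    : std. dev. of the Gaussian kernel defining the default image
    """
    x = np.ones_like(backward(measured))
    norm = backward(np.ones_like(measured))
    for _ in range(n_iter):
        ratio = measured / np.maximum(forward(x), 1e-12)
        x_rl = x * backward(ratio) / norm          # classical RL/MLEM step
        default = gaussian_filter(x, sigma)        # floating default prior image
        x = (1 - alpha) * x_rl + alpha * default   # damp toward the default
    return x
```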
NASA Astrophysics Data System (ADS)
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
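A minimal sketch of Tikhonov inversion with a second-order difference regularization matrix, the choice the abstract singles out; here the kernel matrix A and data b stand in for the light-scattering kernel and measured signals, and all values are placeholders rather than the paper's data.

```python
import numpy as np

def second_difference_matrix(n):
    """(n-2) x n second-order difference operator used as the regularization matrix L."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 with L the second-difference matrix."""
    L = second_difference_matrix(A.shape[1])
    lhs = A.T @ A + lam**2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)
```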
Regularization techniques on least squares non-uniform fast Fourier transform.
Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena
2013-05-01
Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Thermodynamic integration from classical to quantum mechanics.
Habershon, Scott; Manolopoulos, David E
2011-12-14
We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the method and its relationship to a well-established method with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable. © 2011 American Institute of Physics
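The underlying identity is standard thermodynamic integration (the paper's specific interpolation path from classical to quantum mechanics is not reproduced here):

\[
F_{\mathrm{qm}} - F_{\mathrm{cl}} \;=\; \int_{0}^{1} d\lambda\,
\left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda},
\]

where \(H(\lambda)\) interpolates between the classical (\(\lambda = 0\)) and quantum (\(\lambda = 1\)) descriptions of the system and \(\langle\,\cdot\,\rangle_{\lambda}\) denotes an equilibrium average at coupling \(\lambda\).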
Coarse-graining time series data: Recurrence plot of recurrence plots and its application for music
NASA Astrophysics Data System (ADS)
Fukino, Miwa; Hirata, Yoshito; Aihara, Kazuyuki
2016-02-01
We propose a nonlinear time series method for characterizing two layers of regularity simultaneously. The key of the method is using the recurrence plots hierarchically, which allows us to preserve the underlying regularities behind the original time series. We demonstrate the proposed method with musical data. The proposed method enables us to visualize both the local and the global musical regularities or two different features at the same time. Furthermore, the determinism scores imply that the proposed method may be useful for analyzing emotional response to the music.
Coarse-graining time series data: Recurrence plot of recurrence plots and its application for music.
Fukino, Miwa; Hirata, Yoshito; Aihara, Kazuyuki
2016-02-01
We propose a nonlinear time series method for characterizing two layers of regularity simultaneously. The key of the method is using the recurrence plots hierarchically, which allows us to preserve the underlying regularities behind the original time series. We demonstrate the proposed method with musical data. The proposed method enables us to visualize both the local and the global musical regularities or two different features at the same time. Furthermore, the determinism scores imply that the proposed method may be useful for analyzing emotional response to the music.
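A minimal sketch of the basic single-level recurrence plot that the hierarchical construction builds on; the threshold and the toy series are chosen only for illustration. The method in the abstract would then segment the series into windows, compute such a plot per window, and form a second-level recurrence plot from distances between the window-level plots.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| <= eps (scalar series)."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    return (dist <= eps).astype(int)

# Toy periodic series: the recurrence plot shows diagonal lines spaced by the period.
t = np.arange(200)
series = np.sin(2 * np.pi * t / 25)
R = recurrence_plot(series, eps=0.1)
print("recurrence rate:", R.mean())
```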
Beam energy considerations for gold nano-particle enhanced radiation treatment.
Van den Heuvel, F; Locquet, Jean-Pierre; Nuyts, S
2010-08-21
A novel approach using nano-technology enhanced radiation modalities is investigated. The proposed methodology uses antibodies labeled with organically inert metals with a high atomic number. Irradiation using photons with energies in the kilo-electron volt (keV) range shows an increase in dose due to a combination of an increase in photo-electric interactions and a pronounced generation of Auger and/or Coster-Krönig (A-CK) electrons. The dependence of the dose deposition on various factors is investigated using Monte Carlo simulation models. The factors investigated include agent concentration, spectral dependence looking at mono-energetic sources as well as classical bremsstrahlung sources. The optimization of the energy spectrum is performed in terms of physical dose enhancement as well as the dose deposited by Auger and/or Coster-Krönig electrons and their biological effectiveness. A quasi-linear dependence on concentration and an exponential decrease within the target medium is observed. The maximal dose enhancement is dependent on the position of the target in the beam. Apart from irradiation with low-photon energies (10-20 keV) there is no added benefit from the increase in generation of Auger electrons. Interestingly, a regular 110 kVp bremsstrahlung spectrum shows a comparable enhancement in comparison with the optimized mono-energetic sources. In conclusion we find that the use of enhanced nano-particles shows promise to be implemented quite easily in regular clinics on a physical level due to the advantageous properties in classical beams.
Beam energy considerations for gold nano-particle enhanced radiation treatment
NASA Astrophysics Data System (ADS)
Van den Heuvel, F.; Locquet, Jean-Pierre; Nuyts, S.
2010-08-01
A novel approach using nano-technology enhanced radiation modalities is investigated. The proposed methodology uses antibodies labeled with organically inert metals with a high atomic number. Irradiation using photons with energies in the kilo-electron volt (keV) range shows an increase in dose due to a combination of an increase in photo-electric interactions and a pronounced generation of Auger and/or Coster-Krönig (A-CK) electrons. The dependence of the dose deposition on various factors is investigated using Monte Carlo simulation models. The factors investigated include agent concentration, spectral dependence looking at mono-energetic sources as well as classical bremsstrahlung sources. The optimization of the energy spectrum is performed in terms of physical dose enhancement as well as the dose deposited by Auger and/or Coster-Krönig electrons and their biological effectiveness. A quasi-linear dependence on concentration and an exponential decrease within the target medium is observed. The maximal dose enhancement is dependent on the position of the target in the beam. Apart from irradiation with low-photon energies (10-20 keV) there is no added benefit from the increase in generation of Auger electrons. Interestingly, a regular 110 kVp bremsstrahlung spectrum shows a comparable enhancement in comparison with the optimized mono-energetic sources. In conclusion we find that the use of enhanced nano-particles shows promise to be implemented quite easily in regular clinics on a physical level due to the advantageous properties in classical beams.
Celik, Hasan; Bouhrara, Mustapha; Reiter, David A.; Fishbein, Kenneth W.; Spencer, Richard G.
2013-01-01
We propose a new approach to stabilizing the inverse Laplace transform of a multiexponential decay signal, a classically ill-posed problem, in the context of nuclear magnetic resonance relaxometry. The method is based on extension to a second, indirectly detected, dimension, that is, use of the established framework of two-dimensional relaxometry, followed by projection onto the desired axis. Numerical results for signals comprised of discrete T1 and T2 relaxation components and experiments performed on agarose gel phantoms are presented. We find markedly improved accuracy, and stability with respect to noise, as well as insensitivity to regularization in quantifying underlying relaxation components through use of the two-dimensional as compared to the one-dimensional inverse Laplace transform. This improvement is demonstrated separately for two different inversion algorithms, nonnegative least squares and non-linear least squares, to indicate the generalizability of this approach. These results may have wide applicability in approaches to the Fredholm integral equation of the first kind. PMID:24035004
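A minimal sketch of the one-dimensional building block, a regularized non-negative least squares inversion of a multiexponential decay; the two-dimensional extension projects a T1-T2 map onto the desired axis. The relaxation times, amplitudes and noise level below are invented for the example.

```python
import numpy as np
from scipy.optimize import nnls

# Simulated biexponential T2 decay (invented component amplitudes and times).
t = np.linspace(0.002, 1.0, 64)                  # echo times [s]
signal = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 0.3)
signal += 0.01 * np.random.default_rng(1).normal(size=t.size)

# Dictionary of decaying exponentials on a logarithmic T2 grid.
T2 = np.logspace(-3, 1, 100)
K = np.exp(-t[:, None] / T2[None, :])

# Tikhonov-regularized NNLS: augment the system with sqrt(lam) * identity rows.
lam = 0.01
K_aug = np.vstack([K, np.sqrt(lam) * np.eye(T2.size)])
s_aug = np.concatenate([signal, np.zeros(T2.size)])
spectrum, _ = nnls(K_aug, s_aug)

print("recovered peaks near T2 =", T2[spectrum > 0.5 * spectrum.max()])
```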
Edge Diffraction Coefficients around Critical Rays
NASA Astrophysics Data System (ADS)
Fradkin, L.; Harmer, M.; Darmon, M.
2014-04-01
The classical GTD (Geometrical Theory of Diffraction) gives a recipe, based on high-frequency asymptotics, for calculating edge diffraction coefficients in the geometrical regions where only diffracted waves propagate. The Uniform GTD extends this recipe to the transition zones between irradiated and silent regions, known as penumbra. For many industrial materials, e.g. steels, and for the frequencies utilized in industrial ultrasonic transducers, around 5 MHz, the asymptotics proposed to describe the geometrical regions supporting head waves, or the transition regions surrounding their boundaries known as critical rays, prove unsatisfactory. We present a numerical extension of GTD, which is based on a regularized, variable-step Simpson's method for evaluating the edge diffraction coefficients in the regions of interference between head waves, diffracted waves and/or reflected waves. In mathematical terms, these are the regions of coalescence of three critical points - a branch point, a stationary point and/or a pole, respectively.
Adaptive laboratory evolution -- principles and applications for biotechnology.
Dragosits, Martin; Mattanovich, Diethard
2013-07-01
Adaptive laboratory evolution is a frequent method in biological studies to gain insights into the basic mechanisms of molecular evolution and adaptive changes that accumulate in microbial populations during long term selection under specified growth conditions. Although regularly performed for more than 25 years, the advent of transcript and cheap next-generation sequencing technologies has resulted in many recent studies, which successfully applied this technique in order to engineer microbial cells for biotechnological applications. Adaptive laboratory evolution has some major benefits as compared with classical genetic engineering but also some inherent limitations. However, recent studies show how some of the limitations may be overcome in order to successfully incorporate adaptive laboratory evolution in microbial cell factory design. Over the last two decades important insights into nutrient and stress metabolism of relevant model species were acquired, whereas some other aspects such as niche-specific differences of non-conventional cell factories are not completely understood. Altogether the current status and its future perspectives highlight the importance and potential of adaptive laboratory evolution as approach in biotechnological engineering.
NASA Astrophysics Data System (ADS)
Karami, Fahd; Ziad, Lamia; Sadik, Khadija
2017-12-01
In this paper, we focus on a numerical method for a problem called the Perona-Malik inequality, which we use for image denoising. This model is obtained as the limit of the Perona-Malik model and the p-Laplacian operator with p → ∞. In Atlas et al. (Nonlinear Anal. Real World Appl 18:57-68, 2014), the authors proved the existence and uniqueness of the solution of the proposed model. However, in their work they used an explicit numerical scheme for the approximated problem, which depends strongly on the parameter p. To overcome this, we use an efficient algorithm that combines the classical additive operator splitting with a nonlinear relaxation algorithm. Finally, we present experimental results on image filtering that demonstrate the efficiency and effectiveness of our algorithm, and we compare it with the previous scheme presented in Atlas et al. (Nonlinear Anal. Real World Appl 18:57-68, 2014).
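For orientation, the classical Perona-Malik diffusion from which the studied limit problem is derived can be written as an explicit update on a 2-D image; this is the classical model only, not the authors' AOS/nonlinear-relaxation scheme for the p → ∞ limit, and the edge-stopping function and parameters below are one common choice.

```python
import numpy as np

def perona_malik_step(u, dt=0.1, kappa=0.1):
    """One explicit step of classical Perona-Malik diffusion on a 2-D image u."""
    # Neighbour differences with Neumann boundary conditions via edge padding.
    up = np.pad(u, 1, mode="edge")
    dN = up[:-2, 1:-1] - u
    dS = up[2:, 1:-1] - u
    dE = up[1:-1, 2:] - u
    dW = up[1:-1, :-2] - u
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping diffusivity
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```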
Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.
Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo
2015-12-01
In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet in actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate the human's neuromuscular and visual responses in cases where the classic method fails.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
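As a hedged illustration of the two ingredients named in the abstract, the sketch below performs standard-form Tikhonov regularization via a plain SVD (the SXRIS scheme uses the generalized SVD) and picks a regularization parameter with a crude L-curve corner estimate; the test matrix, noise level and curvature heuristic are assumptions for demonstration only.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Standard-form Tikhonov solution min ||Ax - b||^2 + lam^2 ||x||^2 via SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s ** 2 + lam ** 2)              # filtered inverse singular values
    return Vt.T @ (f * (U.T @ b))

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80)) @ np.diag(1.0 / np.arange(1, 81) ** 2)  # ill-conditioned
x_true = np.ones(80)
b = A @ x_true + 1e-3 * rng.standard_normal(100)

lams = np.logspace(-6, 1, 40)
res = np.array([np.linalg.norm(A @ tikhonov_svd(A, b, l) - b) for l in lams])
sol = np.array([np.linalg.norm(tikhonov_svd(A, b, l)) for l in lams])

# Crude L-curve corner: maximum curvature of (log residual norm, log solution norm).
lr, ls = np.log(res), np.log(sol)
dlr, dls = np.gradient(lr), np.gradient(ls)
d2lr, d2ls = np.gradient(dlr), np.gradient(dls)
curv = (dlr * d2ls - dls * d2lr) / (dlr ** 2 + dls ** 2) ** 1.5
print("picked lambda:", lams[np.argmax(np.abs(curv))])
```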
ERIC Educational Resources Information Center
Matthews, Dorothy, Ed.
1979-01-01
The eight articles in this bulletin suggest methods of introducing classical literature into the English curriculum. Article titles are: "Ideas for Teaching Classical Mythology"; "What Novels Should High School Students Read?"; "Enlivening the Classics for Live Students"; "Poetry in Performance: The Value of Song and Oral Interpretation in…
Emotional responses to Hindustani raga music: the role of musical structure
Mathur, Avantika; Vijayakumar, Suhas H.; Chakrabarti, Bhismadev; Singh, Nandini C.
2015-01-01
In Indian classical music, ragas constitute specific combinations of tonic intervals potentially capable of evoking distinct emotions. A raga composition is typically presented in two modes, namely, alaap and gat. Alaap is the note by note delineation of a raga bound by a slow tempo, but not bound by a rhythmic cycle. Gat on the other hand is rendered at a faster tempo and follows a rhythmic cycle. Our primary objective was to (1) discriminate the emotions experienced across alaap and gat of ragas, (2) investigate the association of tonic intervals, tempo and rhythmic regularity with emotional response. 122 participants rated their experienced emotion across alaap and gat of 12 ragas. Analysis of the emotional responses revealed that (1) ragas elicit distinct emotions across the two presentation modes, and (2) specific tonic intervals are robust predictors of emotional response. Specifically, our results showed that the ‘minor second’ is a direct predictor of negative valence. (3) Tonality determines the emotion experienced for a raga where as rhythmic regularity and tempo modulate levels of arousal. Our findings provide new insights into the emotional response to Indian ragas and the impact of tempo, rhythmic regularity and tonality on it. PMID:25983702
A novel approach of ensuring layout regularity correct by construction in advanced technologies
NASA Astrophysics Data System (ADS)
Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic
2017-03-01
In advanced technology nodes, layout regularity has become a mandatory prerequisite for creating robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full-custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The Regularity Index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for the 28 nm and 40 nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain in layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to a 5 nm reduction in PV band.
Wang, Hongkai; Zhou, Zongwei; Li, Yingci; Chen, Zhonghua; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan
2017-12-01
This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18 F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was a convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with the CNN, as well as with human doctors from our institute. For the classical methods, the diagnostic features resulted in 81-85% ACC and 0.87-0.92 AUC, which were significantly higher than the results obtained with texture features. The CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of the CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors. The present study shows that the performance of the CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because the CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, the CNN does not make use of the important diagnostic features, which have been proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into the CNN is a promising direction for future research.
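A hedged sketch of the kind of cross-validated comparison of classical classifiers described above, using scikit-learn on synthetic data standing in for the per-node diagnostic features; it is not the authors' pipeline, data, or hyperparameters.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-node diagnostic features (size, SUV, contrast, ...).
X, y = make_classification(n_samples=1000, n_features=5, n_informative=4,
                           n_redundant=0, weights=[0.7, 0.3], random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "neural net": make_pipeline(StandardScaler(),
                                MLPClassifier(hidden_layer_sizes=(32,),
                                              max_iter=2000, random_state=0)),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```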
Flenady, Tracy; Dwyer, Trudy; Applegarth, Judith
2017-09-01
Abnormal respiratory rates are one of the first indicators of clinical deterioration in emergency department (ED) patients. Despite the importance of respiratory rate observations, this vital sign is often inaccurately recorded on ED observation charts, compromising patient safety. Concurrently, there is a paucity of research reporting why this phenomenon occurs. The aim was to develop a substantive theory explaining ED registered nurses' reasoning when they miss or misreport respiratory rate observations. This research project employed a classic grounded theory analysis of qualitative data. The participants were seventy-nine registered nurses currently working in EDs within Australia. Data collected included detailed responses from individual interviews and open-ended responses from an online questionnaire. Classic grounded theory (CGT) research methods were utilised; therefore coding was central to the abstraction of data and its reintegration as theory. Constant comparison, synonymous with CGT methods, was employed to code data. This approach facilitated the identification of the main concern of the participants and aided in the generation of theory explaining how the participants processed this issue. The main concern identified is that ED registered nurses do not believe that collecting an accurate respiratory rate for ALL patients at EVERY round of observations is a requirement, and yet organisational requirements often dictate that a value for the respiratory rate be included each time vital signs are collected. The theory 'Rationalising Transgression' explains how participants continually resolve this problem. The study found that despite feeling professionally conflicted, nurses often erroneously record respiratory rate observations, and then rationalise this behaviour by employing strategies that adjust the significance of the organisational requirement. These strategies include: Compensating, when nurses believe they are compensating for errant behaviour by enhancing the patient's outcome; Minimalizing, when nurses believe that the patient's outcome would be no different whether they recorded an accurate respiratory rate or not; and Trivialising, a strategy that sanctions negligent behaviour and occurs when nurses 'cut corners' to get the job done. Nurses use these strategies to titrate the level of emotional discomfort associated with erroneous behaviour, thereby rationalising transgression. In conclusion, this research reveals that despite continuing education regarding gold standard guidelines for respiratory rate collection, suboptimal practice continues. Ideally, to combat this transgression, a culture shift must occur regarding nurses' understanding of acceptable practice methods. Nurses must receive education in a way that permeates their understanding of the relationship between the regular collection of accurate respiratory rate observations and optimal patient outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.
A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.
Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong
2017-10-01
Many methods have been proposed to address the recognition of complete face images. However, in real applications the images to be recognized are usually incomplete, and such recognition is more difficult. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to overcome this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via matrix completion with a truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters. With these filters, important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves a high face recognition rate for heavily corrupted images, especially for images in large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than some other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
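For context on the recovery step, here is a hedged sketch of soft-impute (singular value thresholding) matrix completion, a simpler relative of the truncated nuclear norm approach mentioned above, on synthetic low-rank data; the threshold and iteration count are illustrative assumptions, not LRRNet's settings.

```python
import numpy as np

def soft_impute(M, mask, tau=2.0, n_iter=200):
    """Soft-impute / SVT-style matrix completion sketch.
    M: observed matrix (values outside `mask` ignored); mask: boolean array."""
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        Y = np.where(mask, M, X)                 # fill observed entries with data
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold the singular values
    return X

rng = np.random.default_rng(1)
low_rank = rng.standard_normal((40, 8)) @ rng.standard_normal((8, 40))
mask = rng.random((40, 40)) < 0.5                # observe half the entries
rec = soft_impute(low_rank * mask, mask)
print("relative error:", np.linalg.norm(rec - low_rank) / np.linalg.norm(low_rank))
```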
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
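For readers unfamiliar with the baseline, here is a hedged minimal sketch of plain (local, smoothed) total variation denoising by gradient descent; it is not the paper's Bregman or nonlocal algorithm, and the smoothing parameter, step size and periodic boundaries are illustrative assumptions.

```python
import numpy as np

def tv_denoise(f, lam=0.1, n_iter=200, dt=0.2, eps=1e-6):
    """Minimize 0.5*||u - f||^2 + lam*TV_eps(u) by gradient descent,
    with TV smoothed as sum of sqrt(|grad u|^2 + eps)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u          # forward differences (periodic)
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # divergence via backward differences (adjoint of the forward gradient)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - dt * ((u - f) - lam * div)
    return u

noisy = np.random.rand(64, 64)
print(tv_denoise(noisy).std() < noisy.std())     # denoised image is smoother
```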
An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.
Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua
2015-01-01
Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and the universal quality index metric.
Comparison of ionospheric plasma drifts obtained by different techniques
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Arikan, Feza; Arikan, Orhan; Toker, Cenk; Mosna, Zbysek; Gok, Gokhan; Rejfek, Lubos; Ari, Gizem
2016-07-01
The ionospheric observatory in Pruhonice (Czech Republic, 50°N, 14.9°E) provides regular ionospheric sounding using a Digisonde DPS-4D. The paper is focused on F-region vertical drift data. The vertical component of the drift velocity vector can be estimated by several methods. The Digisonde DPS-4D allows sounding in drift mode, with the drift velocity vector as a direct output. The Digisonde located in Pruhonice provides direct drift measurements routinely once per 15 minutes. However, other techniques can also be found in the literature; for example, indirect estimation based on the temporal evolution of measured ionospheric characteristics is often used to calculate the vertical drift component. The vertical velocity is then estimated from the change of characteristics scaled from the classical quarter-hour ionograms. In the present paper the direct drift measurement is compared with a technique based on measuring the virtual height at fixed frequency from the F-layer trace on the ionogram, and with techniques based on the variation of h'F and hmF. This comparison shows the possibility of using different methods for calculating the vertical drift velocity and their relationship to the direct measurement used by the Digisonde. This study is supported by the Joint TUBITAK 114E092 and AS CR 14/001 projects.
Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation
Meyer, Arne F.; Williamson, Ross S.; Linden, Jennifer F.; Sahani, Maneesh
2017-01-01
Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far, ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider the benefits and drawbacks of various proposals, providing examples of when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods is made freely available. PMID:28127278
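As a hedged illustration of the classical linear receptive field model with a regularizing prior, the sketch below fits a ridge-regularized filter to synthetic spike counts; the filter shape, lag length and Poisson spiking model are assumptions for demonstration, not the review's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 5000, 20                               # time bins, filter length (stimulus history)
stim = rng.standard_normal(T)
true_rf = np.exp(-np.arange(L) / 4.0) * np.sin(np.arange(L) / 2.0)

# Design matrix of lagged stimulus values: column k holds stim delayed by k bins.
X = np.stack([np.roll(stim, k) for k in range(L)], axis=1)
X[:L, :] = 0.0                                # discard wrap-around history
rate = X @ true_rf
spikes = rng.poisson(np.maximum(rate, 0) + 0.1)   # simple rectified-linear Poisson neuron

def ridge_rf(X, y, lam):
    """Linear receptive field estimate with an L2 (ridge) regularizing prior."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in (0.0, 10.0, 1000.0):
    w = ridge_rf(X, spikes, lam)
    corr = np.corrcoef(w, true_rf)[0, 1]
    print(f"lambda={lam:7.1f}  correlation with true filter: {corr:.3f}")
```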
ERIC Educational Resources Information Center
Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.
2017-01-01
Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student…
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
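The basic update behind the method discussed above can be sketched as follows; this is a hedged toy example with a geometrically decaying regularization sequence and a synthetic exponential model, not the paper's heuristic parameter choice rule.

```python
import numpy as np

def irgn(F, J, y, x0, alphas, tol=None):
    """Iteratively regularized Gauss-Newton:
    x_{k+1} = x_k + (J^T J + a_k I)^{-1} (J^T (y - F(x_k)) + a_k (x0 - x_k))."""
    x = x0.copy()
    for a in alphas:
        r = y - F(x)
        if tol is not None and np.linalg.norm(r) < tol:
            break
        Jk = J(x)
        A = Jk.T @ Jk + a * np.eye(x.size)
        x = x + np.linalg.solve(A, Jk.T @ r + a * (x0 - x))
    return x

# Toy nonlinear model F(x) = exp(M x) evaluated entrywise (synthetic example).
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 3))
x_true = np.array([0.5, -0.3, 0.8])
F = lambda x: np.exp(M @ x)
J = lambda x: np.exp(M @ x)[:, None] * M          # Jacobian of exp(Mx)
y = F(x_true) + 1e-3 * rng.standard_normal(30)

alphas = [1.0 * 0.5 ** k for k in range(20)]      # geometrically decaying a_k
print(irgn(F, J, y, np.zeros(3), alphas))
```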
Hybrid classical/quantum simulation for infrared spectroscopy of water
NASA Astrophysics Data System (ADS)
Maekawa, Yuki; Sasaoka, Kenji; Ube, Takuji; Ishiguro, Takashi; Yamamoto, Takahiro
2018-05-01
We have developed a hybrid classical/quantum simulation method to calculate the infrared (IR) spectrum of water. The proposed method achieves much higher accuracy than conventional classical molecular dynamics (MD) simulations at a much lower computational cost than ab initio MD simulations. The IR spectrum of water is obtained as an ensemble average of the eigenvalues of the dynamical matrix constructed by ab initio calculations, using the positions of oxygen atoms that constitute water molecules obtained from the classical MD simulation. The calculated IR spectrum is in excellent agreement with the experimental IR spectrum.
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
Optimal control of underactuated mechanical systems: A geometric approach
NASA Astrophysics Data System (ADS)
Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela
2010-08-01
In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.
Algebraic classification of Weyl anomalies in arbitrary dimensions.
Boulanger, Nicolas
2007-06-29
Conformally invariant systems involving only dimensionless parameters are known to describe particle physics at very high energy. In the presence of an external gravitational field, the conformal symmetry may generalize to the Weyl invariance of classical massless field systems in interaction with gravity. In the quantum theory, the latter symmetry no longer survives: A Weyl anomaly appears. Anomalies are a cornerstone of quantum field theory, and, for the first time, a general, purely algebraic understanding of the universal structure of the Weyl anomalies is obtained, in arbitrary dimensions and independently of any regularization scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimsiak, Tomasz, E-mail: tomas@mat.umk.pl; Rozkosz, Andrzej, E-mail: rozkosz@mat.umk.pl
In the paper we consider the problem of valuation of American options written on dividend-paying assets whose price dynamics follow the classical multidimensional Black and Scholes model. We provide a general early exercise premium representation formula for options with payoff functions which are convex or satisfy mild regularity assumptions. Examples include index options, spread options, call-on-max options, put-on-min options, multiple strike options and power-product options. In the proof of the formula we exploit close connections between the optimal stopping problems associated with valuation of American options, obstacle problems and reflected backward stochastic differential equations.
Cellular Automata with Anticipation: Examples and Presumable Applications
NASA Astrophysics Data System (ADS)
Krushinsky, Dmitry; Makarenko, Alexander
2010-11-01
One of the most promising new methodologies for modelling is the so-called cellular automata (CA) approach. According to this paradigm, models are built from simple elements connected into regular structures with local interaction between neighbours. The patterns of connections usually have a simple geometry (lattices). As one of the classical examples of CA we mention the game 'Life' by J. Conway. This paper presents two examples of CA with the anticipation property. These examples include a modification of the game 'Life' and a cellular model of crowd movement.
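For reference, a hedged minimal sketch of the standard game 'Life' cited above (not the anticipatory modification studied in the paper); the lattice size, density and periodic boundaries are illustrative choices.

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a periodic lattice."""
    alive = grid.astype(bool)
    # count the eight neighbours of every cell
    n = sum(np.roll(np.roll(alive, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # birth on exactly 3 neighbours, survival on 2 or 3
    return ((n == 3) | (alive & ((n == 2) | (n == 3)))).astype(int)

rng = np.random.default_rng(0)
grid = (rng.random((20, 20)) < 0.3).astype(int)
for _ in range(10):
    grid = life_step(grid)
print("live cells after 10 steps:", grid.sum())
```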
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
NASA Astrophysics Data System (ADS)
Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi
2018-05-01
A novel damage identification method for concrete continuous girder bridges based on spatially distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number or weight of vehicles. Finally, a test on a real highway bridge is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, while the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
NASA Astrophysics Data System (ADS)
Ignatyev, A. V.; Ignatyev, V. A.; Onischenko, E. V.
2017-11-01
This article is a continuation of the work by the authors on the development of algorithms that implement the finite element method in the form of a classical mixed method for the analysis of geometrically nonlinear bar systems [1-3]. The paper describes an improved algorithm for forming the system of nonlinear governing equations for flexible plane frames and bars with large displacements of nodes, based on the finite element method in classical mixed form and the use of a step-by-step loading procedure. An example of the analysis is given.
Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems
NASA Astrophysics Data System (ADS)
Hidalgo-Silva, H.; Gomez-Trevino, E.
2017-12-01
Tikhonov's regularization method is the standard technique for obtaining models of the subsurface conductivity distribution from electric or electromagnetic measurements by minimizing U(m) = ‖F(m) − d‖² + λP(m). The second term is the stabilizing functional, with P(m) = ‖∇m‖² the usual choice, and λ the regularization parameter. Because of this roughness penalizer, the model produced by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers for nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, as well as on hybrid TV, second-order TV and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is the low-induction-number magnetic dipole method. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.
NASA Astrophysics Data System (ADS)
Geng, Weihua; Zhao, Shan
2017-12-01
We present a new Matched Interface and Boundary (MIB) regularization method for treating the charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both the linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute the related interface jump conditions numerically. Moreover, the new MIB algorithm is computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, for which analytical solutions are available, and on a series of proteins of various sizes.
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
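As a hedged illustration of Tikhonov regularization applied to a first-kind convolution equation, the sketch below performs deconvolution in the Fourier domain with a single regularization parameter and an identity stabilizer; it does not implement the paper's two-parameter stabilizing functions, and the blur kernel, signal and noise level are synthetic assumptions.

```python
import numpy as np

def tikhonov_deconvolve(y, h, lam):
    """Solve the periodic convolution equation h * x = y in the Fourier domain:
    X = conj(H) Y / (|H|^2 + lam)."""
    H, Y = np.fft.fft(h), np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
x = (np.abs(t - 100) < 20).astype(float)                    # boxcar signal
h = np.exp(-0.5 * ((t - n / 2) / 5.0) ** 2); h /= h.sum()   # Gaussian blur kernel
h0 = np.roll(h, -n // 2)                                    # center the kernel at index 0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h0)))
y += 0.01 * rng.standard_normal(n)                          # noisy measurement

for lam in (1e-6, 1e-3, 1e-1):
    xr = tikhonov_deconvolve(y, h0, lam)
    print(f"lam={lam:g}  reconstruction error = {np.linalg.norm(xr - x):.2f}")
```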
Pulse and Entrainment to Non-Isochronous Auditory Stimuli: The Case of North Indian Alap
Will, Udo; Clayton, Martin; Wertheim, Ira; Leante, Laura; Berg, Eric
2015-01-01
Pulse is often understood as a feature of a (quasi-) isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one’s internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects’ internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting ‘social’ effects of temporally regular music. PMID:25849357
Classical Trajectories and Quantum Spectra
NASA Technical Reports Server (NTRS)
Mielnik, Bogdan; Reyes, Marco A.
1996-01-01
A classical model of the Schrödinger wave packet is considered. The problem of finding the energy levels corresponds to a classical manipulation game. It leads to an approximate but non-perturbative method of finding the eigenvalues by exploring the bifurcations of classical trajectories. The role of squeezing turns out to be decisive in the generation of the discrete spectra.
Multipole Vortex Blobs (MVB): Symplectic Geometry and Dynamics.
Holm, Darryl D; Jacobs, Henry O
2017-01-01
Vortex blob methods are typically characterized by a regularization length scale, below which the dynamics are trivial for isolated blobs. In this article, we observe that the dynamics need not be trivial if one is willing to consider distributional derivatives of Dirac delta functionals as valid vorticity distributions. More specifically, a new singular vortex theory is presented for regularized Euler fluid equations of ideal incompressible flow in the plane. We determine the conditions under which such regularized Euler fluid equations may admit vorticity singularities which are stronger than delta functions, e.g., derivatives of delta functions. We also describe the symplectic geometry associated with these augmented vortex structures, and we characterize the dynamics as Hamiltonian. Applications to the design of numerical methods similar to vortex blob methods are also discussed. Such findings illuminate the rich dynamics which occur below the regularization length scale and enlighten our perspective on the potential for regularized fluid models to capture multiscale phenomena.
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
NASA Astrophysics Data System (ADS)
Bonhommeau, David; Truhlar, Donald G.
2008-07-01
The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the à electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU /SD+trajectory projection onto ZPE orbit (TRAPZ) and FSTU /SD+minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct the current density in the brain from electrical measurements with excellent temporal resolution (~ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as to noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization enhances sparsity and accelerates computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
Using CAS to Solve Classical Mathematics Problems
ERIC Educational Resources Information Center
Burke, Maurice J.; Burroughs, Elizabeth A.
2009-01-01
Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…
Eigensystem analysis of classical relaxation techniques with applications to multigrid analysis
NASA Technical Reports Server (NTRS)
Lomax, Harvard; Maksymiuk, Catherine
1987-01-01
Classical relaxation techniques are related to numerical methods for solution of ordinary differential equations. Eigensystems for Point-Jacobi, Gauss-Seidel, and SOR methods are presented. Solution techniques such as eigenvector annihilation, eigensystem mixing, and multigrid methods are examined with regard to the eigenstructure.
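As a hedged illustration of the eigensystem view taken above, the sketch below numerically computes the spectral radii of the Point-Jacobi, Gauss-Seidel and SOR iteration matrices for the standard 1-D Poisson model problem; the matrix size and the textbook optimal SOR factor are illustrative choices, not taken from the report.

```python
import numpy as np

def poisson_1d(n):
    """Standard tridiagonal 1-D Poisson matrix (2 on the diagonal, -1 off-diagonal)."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def spectral_radius(G):
    return np.max(np.abs(np.linalg.eigvals(G)))

n = 31
A = poisson_1d(n)
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

G_jacobi = np.linalg.solve(D, -(L + U))
G_gs = np.linalg.solve(D + L, -U)
omega = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))   # optimal SOR parameter for this model problem
G_sor = np.linalg.solve(D + omega * L, (1 - omega) * D - omega * U)

for name, G in [("Jacobi", G_jacobi), ("Gauss-Seidel", G_gs), ("SOR (optimal w)", G_sor)]:
    print(f"{name:16s} spectral radius = {spectral_radius(G):.4f}")
```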
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
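As a hedged illustration of the Ritz method applied directly to a variational form (not the article's own worked example), the sketch below computes a Rayleigh-Ritz estimate of the fundamental frequency of a string fixed at both ends using a two-parameter polynomial trial family; the exact eigenvalue is π².

```python
import sympy as sp

# Rayleigh-Ritz sketch: lowest eigenvalue of -u'' = w^2 u on (0, 1) with u(0) = u(1) = 0
# (vibrating string with unit tension and density). Minimize the Rayleigh quotient
# R[phi] = int phi'^2 dx / int phi^2 dx over a small trial family; exact value is pi^2.
x, a = sp.symbols('x a', real=True)
phi = x * (1 - x) + a * (x * (1 - x)) ** 2       # trial function satisfying the boundary conditions
num = sp.integrate(sp.diff(phi, x) ** 2, (x, 0, 1))
den = sp.integrate(phi ** 2, (x, 0, 1))
rayleigh = sp.simplify(num / den)
stationary = sp.solve(sp.diff(rayleigh, a), a)   # stationary points of R(a)
best = min(float(rayleigh.subs(a, s)) for s in stationary)
print("Ritz estimate:", best, "  exact:", float(sp.pi ** 2))
```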
Maggi, Maristella; Scotti, Claudia
2017-08-01
Single domain antibodies (sdAbs) are small antigen-binding domains derived from naturally occurring, heavy-chain-only immunoglobulins isolated from camelids and sharks. They maintain the same binding capability as full-length IgGs but with improved thermal stability and permeability, which justifies their scientific, medical and industrial interest. Several recombinant forms of sdAbs have been described, produced in different hosts and with different strategies. Here we present an optimized method for time-saving, high-yield production and extraction of a poly-histidine-tagged sdAb from Escherichia coli classical inclusion bodies. Protein expression and extraction were attempted using 4 different methods (autoinducing or IPTG-induced soluble expression, and expression in non-classical or classical inclusion bodies). The best method proved to be expression in classical inclusion bodies followed by urea-mediated protein extraction, which yielded 60-70 mg/l of bacterial culture. The method we describe here can be of general interest for enhanced and efficient heterologous expression of sdAbs for research and industrial purposes. Copyright © 2017 Elsevier Inc. All rights reserved.
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low positron production probability and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the nonnegativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions, corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new methods with standard reconstruction algorithms, NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
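As a rough illustration of the kind of objective involved (quadratic data fit plus a smooth non-quadratic penalty), here is a plain, single-level non-linear CG with a backtracking line search; the nested preconditioning that gives NNCG its speed is deliberately omitted, and the operator and penalty below are invented for the example:

```python
import numpy as np

def restore_ncg(A, b, lam=0.1, eps=1e-3, n_iter=100):
    """Un-nested non-linear conjugate gradient (Fletcher-Reeves) for a quadratic
    data fit plus the smooth non-quadratic regularizer sum_i sqrt(x_i^2 + eps)."""
    def f(x):
        r = A @ x - b
        return 0.5 * r @ r + lam * np.sum(np.sqrt(x**2 + eps))
    def grad(x):
        return A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + eps)

    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        # Backtracking (Armijo) line search along d.
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
        d = -g_new + beta * d
        if g_new @ d >= 0:                 # restart if d is not a descent direction
            d = -g_new
        g = g_new
    return x

A = np.vstack([np.eye(30), 0.5 * np.roll(np.eye(30), 1, axis=1)])
b = A @ np.sign(np.sin(np.arange(30))) + 0.05 * np.random.default_rng(2).standard_normal(60)
print(restore_ncg(A, b)[:5])
```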
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally corrupted by the errors on the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR (LSQR) and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
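The semi-convergence behavior is easy to reproduce with a basic CGLS iteration on a small ill-conditioned test problem (the problem below is synthetic, not a boundary-element system): the reconstruction error first decreases and then grows again as the iterates start fitting the measurement noise, so stopping early acts as regularization.

```python
import numpy as np

def cgls(A, b, n_iter, x_true=None):
    """Conjugate gradient applied to the normal equations (CGLS). With noisy
    data the iteration number acts as the regularization parameter."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    errs = []
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        if x_true is not None:
            errs.append(np.linalg.norm(x - x_true))
    return x, errs

# Severely ill-conditioned toy problem with measurement noise.
n = 60
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.05 * (i - j) ** 2)          # smoothing kernel, rapidly decaying singular values
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
b = A @ x_true + 1e-3 * np.random.default_rng(3).standard_normal(n)
_, errs = cgls(A, b, 40, x_true)
print("best iteration:", int(np.argmin(errs)) + 1)   # stopping here plays the role of regularization
```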
The particle problem in classical gravity: a historical note on 1941
NASA Astrophysics Data System (ADS)
Galvagno, Mariano; Giribet, Gastón
2005-11-01
This historical note is mainly based on a relatively unknown paper published by Albert Einstein in Revista de la Universidad Nacional de Tucumán in 1941. Taking the ideas of this work as a leitmotiv, we review the discussions about the particle problem in the theory of gravitation within the historical context by means of the study of seminal works on the subject. The revision shows how the digressions regarding the structure of matter and the concise problem of finding regular solutions of the pure field equations turned out to be intrinsically unified in the beginning of the programme towards a final theory of fields. The paper mentioned (Einstein 1941a Rev. Univ. Nac. Tucumán A 2 11) represents the basis of the one written by Einstein in collaboration with Wolfgang Pauli in 1943, in which, following analogous lines, the proof of the non-existence of regular particle-type solutions was generalized to the case of cylindrical geometries in Kaluza-Klein theory (Einstein and Pauli 1943 Ann. Math. 44 131). Besides, other generalizations were subsequently presented. The (non-)existence of such solutions in classical unified field theory was undoubtedly an important criterion leading Einstein's investigations. This aspect was investigated with expertness by Jeroen van Dongen in a recent work, though restricting the scope to the particular case of Kaluza-Klein theory (van Dongen 2002 Stud. Hist. Phil. Mod. Phys. 33 185). Here, we discuss the particle problem within a more general context, presenting in this way a complement to previous reviews.
Classical versus Computer Algebra Methods in Elementary Geometry
ERIC Educational Resources Information Center
Pech, Pavel
2005-01-01
Computer algebra methods based on results of commutative algebra, such as Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
Chen, Mohan; Vella, Joseph R.; Panagiotopoulos, Athanassios Z.; ...
2015-04-08
The structure and dynamics of liquid lithium are studied using two simulation methods: orbital-free (OF) first-principles molecular dynamics (MD), which employs OF density functional theory (DFT), and classical MD utilizing a second nearest-neighbor embedded-atom method potential. The properties we studied include the dynamic structure factor, the self-diffusion coefficient, the dispersion relation, the viscosity, and the bond angle distribution function. Our simulation results were compared to available experimental data when possible. Each method has distinct advantages and disadvantages. For example, OFDFT gives better agreement with experimental dynamic structure factors, yet is more computationally demanding than classical simulations. Classical simulations can access a broader temperature range and longer time scales. The combination of first-principles and classical simulations is a powerful tool for studying properties of liquid lithium.
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical boundary conditions. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successively applied in image processing in a regular-grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images, as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
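The staircase effect that motivates TGV can already be seen in one dimension. The following is a generic lagged-diffusivity TV denoising sketch on a synthetic signal, not the FEM-based EIT reconstruction of the paper; on the ramp segment the TV solution tends to come out piecewise constant:

```python
import numpy as np

def tv_denoise_1d(f, lam=1.0, beta=1e-6, n_iter=50):
    """1-D total-variation denoising by lagged-diffusivity fixed-point iteration
    on the smoothed TV term sum_i sqrt((Du)_i^2 + beta). Dense linear algebra
    is used for clarity; realistic solvers would use sparse matrices."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)              # forward-difference operator, shape (n-1, n)
    u = f.copy()
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ u) ** 2 + beta)  # lagged-diffusivity weights
        u = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), f)
    return u

# Piecewise-constant steps plus a ramp, corrupted by noise: TV recovers the
# steps well but tends to "staircase" the ramp, which TGV-type penalties avoid.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
signal = np.where(x < 0.3, 0.0, np.where(x < 0.6, 1.0, x))
f = signal + 0.1 * rng.standard_normal(200)
u = tv_denoise_1d(f, lam=0.5)
print(np.round(u[::40], 2))
```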
Guo, Xueshi; Li, Xiaoying; Liu, Nannan; Ou, Z Y
2016-07-26
One of the important functions in a communication network is the distribution of information. This is not a problem in a classical system, since classical information can be copied at will. However, challenges arise in quantum systems because extra quantum noise is often added when the information content of a quantum state is distributed to various users. Here, we experimentally demonstrate a quantum information tap by using a fiber optical parametric amplifier (FOPA) with correlated inputs, whose noise is reduced by destructive quantum interference through quantum entanglement between the signal and the idler input fields. By measuring the noise figure of the FOPA and comparing it with a regular FOPA, we observe an improvement of 0.7 ± 0.1 dB and 0.84 ± 0.09 dB from the signal and idler outputs, respectively. When the low noise FOPA functions as an information splitter, the device has a total information transfer coefficient of Ts + Ti = 1.5 ± 0.2, which is greater than the classical limit of 1. Moreover, this fiber based device works at the 1550 nm telecom band, so it is compatible with the current fiber-optical network for quantum information distribution.
NASA Astrophysics Data System (ADS)
Ivanov, Sergey V.; Buzykin, Oleg G.
2016-12-01
A classical approach is applied to calculate pressure broadening coefficients of CO2 vibration-rotational spectral lines perturbed by Ar. Three types of spectra are examined: electric dipole (infrared) absorption, and isotropic and anisotropic Raman Q branches. Simple and explicit formulae of the classical impact theory are used along with exact 3D Hamilton equations for CO2-Ar molecular motion. The calculations utilize the vibrationally independent, most accurate ab initio potential energy surface (PES) of Hutson et al., expanded in a Legendre polynomial series up to lmax = 24. A new, improved algorithm for classical rotational frequency selection is applied. The dependences of CO2 half-widths on rotational quantum number J up to J = 100 are computed for temperatures between 77 and 765 K and compared with available experimental data as well as with the results of fully quantum dynamical calculations performed on the same PES. To make the picture complete, the predictions of two independent variants of the semi-classical Robert-Bonamy formalism for dipole absorption lines are included. This method, however, demonstrated poor accuracy at almost all temperatures. On the contrary, the classical broadening coefficients are in excellent agreement both with measurements and with quantum results at all temperatures. The classical impact theory in its present variant is capable of producing, quickly and accurately, the pressure broadening coefficients of spectral lines of linear molecules for any J value (including high J) using a full-dimensional ab initio based PES in cases where other computational methods are either extremely time consuming (like the quantum close coupling method) or give erroneous results (like semi-classical methods).
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations when using a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
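The report's specific algorithm is not reproduced here, but the basic idea of exploiting a locally linear (constant-rate) model over each step can be illustrated with the quaternion kinematics q' = (1/2) Omega(w) q: for a constant angular rate the step has a closed-form matrix-exponential update that preserves the unit norm, whereas a classical explicit Euler step drifts at high rates. All values below are made up:

```python
import numpy as np

def omega_matrix(w):
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def step_euler(q, w, h):
    """Classical explicit Euler step of q' = 0.5 * Omega(w) * q."""
    return q + 0.5 * h * omega_matrix(w) @ q

def step_closed_form(q, w, h):
    """Exact update for constant angular rate over the step, i.e. the matrix
    exponential of the locally linear system (uses Omega(w)^2 = -|w|^2 I)."""
    wn = np.linalg.norm(w)
    if wn == 0.0:
        return q.copy()
    a = 0.5 * h * wn
    return (np.cos(a) * np.eye(4) + np.sin(a) / wn * omega_matrix(w)) @ q

# High spin rate about the body z-axis: Euler drifts off the unit sphere,
# the closed-form update stays on it by construction.
w = np.array([0.0, 0.0, 10.0])     # rad/s
h, steps = 0.01, 1000
q_e = np.array([1.0, 0.0, 0.0, 0.0])
q_c = q_e.copy()
for _ in range(steps):
    q_e = step_euler(q_e, w, h)
    q_c = step_closed_form(q_c, w, h)
print(np.linalg.norm(q_e), np.linalg.norm(q_c))
```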
Yan, Zai You; Hung, Kin Chew; Zheng, Hui
2003-05-01
Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of discretized operator matrix, evaluation of the double surface integrals is reduced to calculate the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.
Widén, F; Everett, H; Blome, S; Fernandez Pinero, J; Uttenthal, A; Cortey, M; von Rosen, T; Tignon, M; Liu, L
2014-10-01
Classical swine fever is one of the most important infectious diseases for the pig industry worldwide due to its economic impact. Vaccination is an effective means to control the disease; however, within the EU its regular use is banned owing to the inability to differentiate infected from vaccinated animals, the so-called DIVA principle. This inability complicates monitoring of disease and stops international trade, thereby limiting use of the vaccine in many regions. The C-strain vaccine is safe to use and gives good protection. It is licensed for emergency vaccination in the EU in the event of an outbreak. Two genetic assays that can distinguish between wild type virus and C-strain vaccines have recently been developed. Here the results from a comparison of these two real-time RT-PCR assays in an interlaboratory exercise are presented. Both assays showed similar performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
Löw, U; Palmowski, A M; Weich, C-M; Ruprecht, K W
2004-12-01
Since the description of the "multiple evanescent white dot syndrome" (MEWDS) by Jampol et al., choroiditis has been in the focus of interest. However, the classical type of MEWDS remains an exceptional case in clinical routine. A 48-year-old female presented to our hospital with a sudden unilateral decrease of visual acuity and an extension of the blind spot. Ophthalmoscopy and fluorescein angiography revealed typical multiple grey-white chorioretinal patches of the same stage, with lesion areas of about 100-200 microm, compatible with the diagnosis of MEWDS. Although visual acuity increased continuously, the patient developed a classical choroidal neovascularization within 4 weeks. She was treated with PDT, and visual acuity as well as the ophthalmoscopic findings remained stable. In spite of the visual improvement in MEWDS, regular follow-up is recommended. In addition, we propose to consider the diagnosis of MEWDS if an enlargement of the blind spot and CNV without lesions of the retinal pigment epithelium are diagnosed.
Energetics and solvation structure of a dihalogen dopant (I2) in (4)He clusters.
Pérez de Tudela, Ricardo; Barragán, Patricia; Valdés, Álvaro; Prosmiti, Rita
2014-08-21
The energetics and structure of small HeNI2 clusters are analyzed as the size of the system changes, with N up to 38. The full interaction between the I2 molecule and the He atoms is based on analytical ab initio He-I2 potentials plus the He-He interaction, obtained from first-principle calculations. The most stable structures, as a function of the number of solvent He atoms, are obtained by employing an evolutionary algorithm and compared with CCSD(T) and MP2 ab initio computations. Further, the classical description is completed by explicitly including thermal corrections and quantum features, such as zero-point-energy values and spatial delocalization. From quantum PIMC calculations, the binding energies and radial/angular probability density distributions of the thermal equilibrium state for selected-size clusters are computed at a low temperature. The sequential formation of regular shell structures is analyzed and discussed for both classical and quantum treatments.
Haider, Bilal; Krause, Matthew R.; Duque, Alvaro; Yu, Yuguo; Touryan, Jonathan; Mazer, James A.; McCormick, David A.
2011-01-01
SUMMARY During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RSC) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RSC neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses. PMID:20152117
NASA Astrophysics Data System (ADS)
Hernandez, Monica
2017-12-01
This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented for running in the GPU using Cuda. For the most memory consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method with the concept of Total Variation (TV) regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, via apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and real images of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
NASA Astrophysics Data System (ADS)
Nikolaev, A. S.
2015-03-01
We study the structure of the canonical Poincaré-Lindstedt perturbation series in the Deprit operator formalism and establish its connection to the Kato resolvent expansion. A discussion of invariant definitions for averaging and integrating perturbation operators and their canonical identities reveals a regular pattern in the series for the Deprit generator. This regularity is explained using Kato series and the relation of the perturbation operators to the Laurent coefficients for the resolvent of the Liouville operator. This purely canonical approach systematizes the series and leads to an explicit expression for the Deprit generator in any order of perturbation theory, written in terms of the partial pseudoinverse of the perturbed Liouville operator. The corresponding Kato series provides a reasonably effective computational algorithm. The canonical connection of the perturbed and unperturbed averaging operators allows describing ambiguities in the generator and transformed Hamiltonian, while Gustavson integrals turn out to be insensitive to the normalization style. We use nonperturbative examples for illustration.
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. These benchmark cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
42 CFR 61.6 - Method of application.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...
42 CFR 61.6 - Method of application.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...
42 CFR 61.6 - Method of application.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...
42 CFR 61.6 - Method of application.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...
42 CFR 61.6 - Method of application.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...
Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking
NASA Astrophysics Data System (ADS)
Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.
2017-03-01
Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques, have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression, while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method 1) builds a data-driven 4-dimensional dictionary of Lagrangian displacements using sparse learning, 2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and 3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracking method and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.
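A heavily simplified sketch of steps 1)-3) is given below: an SVD basis stands in for the learned sparse dictionary, and the trajectory data are synthetic, so this only illustrates the "regularize the outliers only" logic rather than the actual method:

```python
import numpy as np

def regularize_trajectories(T, n_atoms=5, thresh=3.0):
    """Toy version: a linear basis learned by SVD replaces the sparse dictionary;
    trajectories whose reconstruction error is an outlier are replaced by their
    reconstruction, all other trajectories are left untouched."""
    # T: (n_time, n_trajectories), each column one Lagrangian trajectory.
    U, _, _ = np.linalg.svd(T, full_matrices=False)
    D = U[:, :n_atoms]                           # "dictionary" of temporal atoms
    coeffs = D.T @ T                             # least-squares codes (orthonormal columns)
    recon = D @ coeffs
    err = np.linalg.norm(T - recon, axis=0)
    bad = err > err.mean() + thresh * err.std()  # poorly tracked trajectories
    T_out = T.copy()
    T_out[:, bad] = recon[:, bad]                # regularize only the outliers
    return T_out, bad

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)
T = np.outer(np.sin(2 * np.pi * t), np.ones(100)) + 0.02 * rng.standard_normal((50, 100))
T[:, :3] += rng.standard_normal((50, 3))         # three badly tracked trajectories
_, bad = regularize_trajectories(T)
print(np.where(bad)[0])
```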
A Review of Classical Methods of Item Analysis.
ERIC Educational Resources Information Center
French, Christine L.
Item analysis is a very important consideration in the test development process. It is a statistical procedure to analyze test items that combines methods used to evaluate the important characteristics of test items, such as difficulty, discrimination, and distractibility of the items in a test. This paper reviews some of the classical methods for…
Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging
NASA Astrophysics Data System (ADS)
Chen, Tao; Jin, Guanghu; Dong, Zhen
2018-04-01
Range envelope alignment and phase compensation are split into two isolated steps in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the two reference points used for envelope alignment and for Phase Difference (PD) estimation are probably not the same point, making it difficult to decouple the coupling term when conducting the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach, which chooses a certain scattering point as the sole reference point, is proposed using the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a suitable scattering point can be chosen. Envelope alignment and phase compensation are subsequently conducted using the selected scattering point as the common reference point. The keystone transform is then applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.
Galvão-Lima, Leonardo J; Espíndola, Milena S; Soares, Luana S; Zambuzi, Fabiana A; Cacemiro, Maira; Fontanari, Caroline; Bollela, Valdes R; Frantz, Fabiani G
Three decades after HIV recognition and its association with AIDS development, many advances have emerged - especially related to prevention and treatment. Undoubtedly, the development of Highly Active Antiretroviral Therapy (HAART) dramatically changed the future of the syndrome that we know today. In the present study, we evaluate the impact of Highly Active Antiretroviral Therapy on macrophage function and its relevance to HIV pathogenesis. PBMCs were isolated from blood samples and monocytes (CD14+ cells) were purified. Monocyte-Derived Macrophages (MDMs) were activated on classical (M(GM-CSF+IFN-γ)) or alternative (M(IL-4+IL-13)) patterns using human recombinant cytokines for six days. After this period, Monocyte-Derived Macrophages were stimulated with TLR2/Dectin-1 or TLR4 agonists and we evaluated the influence of HIV-1 infection and Highly Active Antiretroviral Therapy on the release of cytokines/chemokines by macrophages. The data were obtained using Monocyte-Derived Macrophages derived from HIV-naïve individuals or from patients on regular Highly Active Antiretroviral Therapy. Classically activated Monocyte-Derived Macrophages obtained from HIV-1 infected patients on Highly Active Antiretroviral Therapy released higher levels of IL-6 and IL-12, even without PAMP stimuli, when compared to the control group. On the other hand, alternatively activated Monocyte-Derived Macrophages derived from HIV-1 infected patients on Highly Active Antiretroviral Therapy released lower levels of IL-6, IL-10, TNF-α, IP-10 and RANTES after LPS stimuli when compared to the control group. Furthermore, healthy individuals have a complex network of cytokines/chemokines released by Monocyte-Derived Macrophages after PAMP stimuli, which was deeply affected in MDMs obtained from naïve HIV-1 infected patients and only partially restored in MDMs derived from HIV-1 infected patients even on regular Highly Active Antiretroviral Therapy. Our therapy protocols were not effective in restoring the functional alterations induced by HIV, especially those found in macrophages. These findings indicate that we still need to develop new approaches and improve the current therapy protocols, focusing on the reestablishment of cellular functions and prevention/treatment of opportunistic infections. Copyright © 2016 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.
NASA Astrophysics Data System (ADS)
Vattré, A.
2017-08-01
A parametric energy-based framework is developed to describe the elastic strain relaxation of interface dislocations. By means of the Stroh sextic formalism with a Fourier series technique, the proposed approach couples the classical anisotropic elasticity theory with surface/interface stress and elasticity properties in heterogeneous interface-dominated materials. For any semicoherent interface of interest, the strain energy landscape is computed using the persistent elastic fields produced by infinitely periodic hexagonal-shaped dislocation configurations with planar three-fold nodes. A finite element based procedure, combined with the conjugate gradient and nudged elastic band methods, is applied to determine the minimum-energy paths for which the pre-computed energy landscapes lead to elastically favorable dislocation reactions. Several applications to Au/Cu heterosystems are given. The simple and limiting case of a single set of infinitely periodic dislocations is introduced to determine exact closed-form expressions for stresses. The second limiting case of the pure (010) Au/Cu heterophase interfaces containing two crossing sets of straight dislocations investigates the effects due to the non-classical boundary conditions on the stress distributions, including separate and appropriate constitutive relations at semicoherent interfaces and free surfaces. Using the quantized Frank-Bilby equation, it is shown that the elastic strain landscape exhibits intrinsic dislocation configurations for which the junction formation is energetically unfavorable. On the other hand, the mismatched (111) Au/Cu system gives rise to the existence of a minimum-energy path where the fully strain-relaxed equilibrium and non-regular intrinsic hexagonal-shaped dislocation rearrangement is accompanied by a significant removal of the short-range elastic energy.
Stable and unstable accretion in the classical T Tauri stars IM Lup and RU Lup as observed by MOST
NASA Astrophysics Data System (ADS)
Siwak, Michal; Ogloza, Waldemar; Rucinski, Slavek M.; Moffat, Anthony F. J.; Matthews, Jaymie M.; Cameron, Chris; Guenther, David B.; Kuschnig, Rainer; Rowe, Jason F.; Sasselov, Dimitar; Weiss, Werner W.
2016-03-01
Results of the time variability monitoring of the two classical T Tauri stars, RU Lup and IM Lup, are presented. Three photometric data sets were utilized: (1) simultaneous (same field) MOST satellite observations over four weeks in each of the years 2012 and 2013, (2) multicolour observations at the South African Astronomical Observatory in April-May of 2013, (3) archival V-filter All Sky Automated Survey (ASAS) data for nine seasons, 2001-2009. They were augmented by an analysis of high-resolution, public-domain VLT-UT2 Ultraviolet Visual Echelle Spectrograph spectra from the years 2000 to 2012. From the MOST observations, we infer that irregular light variations of RU Lup are caused by stochastic variability of hotspots induced by unstable accretion. In contrast, the MOST light curves of IM Lup are fairly regular and modulated with a period of about 7.19-7.58 d, which is in accord with ASAS observations showing a well-defined 7.247 ± 0.026 d periodicity. We propose that this is the rotational period of IM Lup and is due to the changing visibility of two antipodal hotspots created near the stellar magnetic poles during the stable process of accretion. Re-analysis of RU Lup high-resolution spectra with the broadening function approach reveals signs of a large polar coldspot, which is fairly stable over 13 years. As the star rotates, the spot-induced depression of intensity in the broadening function profiles changes cyclically with period 3.710 58 d, which was previously found by the spectral cross-correlation method.
Toward a qualitative understanding of binge-watching behaviors: A focus group approach.
Flayelle, Maèva; Maurage, Pierre; Billieux, Joël
2017-12-01
Background and aims: Binge-watching (i.e., seeing multiple episodes of the same TV series in a row) now constitutes a widespread phenomenon. However, little is known about the psychological factors underlying this behavior, as reflected by the paucity of available studies, most merely focusing on its potential harmfulness by applying the classic criteria used for other addictive disorders without exploring the uniqueness of binge-watching. This study thus aimed to take the opposite approach as a first step toward a genuine understanding of binge-watching behaviors through a qualitative analysis of the phenomenological characteristics of TV series watching. Methods: A focus group of regular TV series viewers (N = 7) was established to explore a wide range of aspects related to TV series watching (e.g., motives, viewing practices, and related behaviors). Results: A content analysis identified binge-watching features across three dimensions: TV series watching motivations, TV series watching engagement, and structural characteristics of TV shows. Most participants acknowledged that TV series watching can become addictive, but they all agreed that they had trouble recognizing themselves as truly being an "addict." Although obvious connections could be established with substance addiction criteria and symptoms, such parallelism appeared to be insufficient, as several distinctive facets emerged (e.g., positive view, transient overinvolvement, context dependency, and low everyday life impact). Discussion and conclusion: The research should go beyond the classic biomedical and psychological models of addictive behaviors to account for binge-watching, in order to explore its specificities and generate the first steps toward an adequate theoretical rationale for these emerging problematic behaviors.
Kobayashi, J; Yanagisawa, R; Ono, T; Tatsuzawa, Y; Tokutake, Y; Kubota, N; Hidaka, E; Sakashita, K; Kojima, S; Shimodaira, S; Nakamura, T
2018-02-01
Adverse reactions to platelet transfusions are a problem. Children with primary haematological and malignant diseases may experience allergic transfusion reactions (ATRs) to platelet concentrates (PCs), which can be prevented by giving washed PCs. A new platelet additive solution, using bicarbonated Ringer's solution and acid-citrate-dextrose formula A (BRS-A), may be better for platelet washing and storage, but clinical data are scarce. A retrospective cohort study of consecutive cases was performed between 2013 and 2017. For 24 months, we transfused washed PCs containing BRS-A to children with primary haematological and malignant diseases and previous adverse reactions. Patients transfused with conventional PCs (containing residual plasma) were assigned as controls, and results were compared in terms of frequency of ATRs, corrected count increment (CCI) and occurrence of bleeding. We also studied children transfused with PCs washed by a different system as historical controls. Thirty-two patients received 377 conventional PC transfusions; ATRs occurred in 12 (37·5%) patients and with 18 (4·8%) of the transfused bags. Thirteen patients who had experienced reactions to regular PCs in plasma then received 119 transfusion bags of washed PCs containing BRS-A, and none had ATRs to washed PCs containing BRS-A. Before the study period, six patients had been transfused with 137 classical washed PCs containing a different platelet additive solution under the same indication; ATRs occurred in one (16·7%) patient and with one (0·7%) bag. CCIs (24 h) were lower with classical washed PCs (1·26 ± 0·54) compared to regular PCs in plasma (2·07 ± 0·76) (P < 0·001), but there was no difference between washed PCs containing BRS-A (2·14 ± 0·77) and regular PCs (2·21 ± 0·79) (P = 0·769), and we saw no post-transfusion bleeding. Washed PCs containing BRS-A appear to prevent ATRs without loss of transfusion efficacy in children with primary haematological and malignant diseases. Their efficacy should be further evaluated through larger prospective clinical trials. © 2017 International Society of Blood Transfusion.
[Today's meaning of classical authors of political thinking].
Weinacht, Paul-Ludwig
2005-01-01
How can classical authors of political thought be made relevant today? The question arises in a discipline founded on old traditions: political science. One of its great subjects is the history of political ideas. Classic authors are treated in many books, but they are viewed from different perspectives, and colleagues do not agree on which are the shining and which the bad examples. To actualise the classics we have to proceed in a methodically reflected way: historically but not historicistically, with sensibility for classical and Christian norms, without dogmatism or scepticism. Searching for the permanent problems, we try to translate the original concepts of the classic authors carefully into our time. To demonstrate our method of actualisation, we choose the French classical author Montesquieu. His famous concept of the division of powers is often misunderstood as a "liberal" mechanism which works by itself in favour of freedom (much as Kant made a "natural mechanism" work in favour of legality even in a people of devils); in reality Montesquieu acknowledges that constitutional and organisational arrangements cannot stabilise themselves but must be grounded in social character and in human virtues.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
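The flavor of a second-order in time, damped gradient-like system can be sketched on a generic quadratic objective. The discretization and the fixed regularization parameter below are illustrative choices, not the damped symplectic scheme or the dynamically selected parameter of the paper:

```python
import numpy as np

def second_order_flow(A, b, alpha=1e-3, eta=2.0, h=0.05, n_steps=2000):
    """Damped second-order ("heavy ball") flow  u'' + eta*u' = -grad J(u)
    for J(u) = 0.5*||Au - b||^2 + 0.5*alpha*||u||^2, discretized with a
    semi-implicit symplectic-Euler-type step."""
    u = np.zeros(A.shape[1])
    v = np.zeros_like(u)
    for _ in range(n_steps):
        g = A.T @ (A @ u - b) + alpha * u   # gradient of the regularized objective
        v = v + h * (-g - eta * v)          # velocity update with damping
        u = u + h * v                       # position update uses the new velocity
    return u

rng = np.random.default_rng(6)
A = rng.random((30, 30)) / 30 + np.eye(30)
b = A @ np.ones(30)
u = second_order_flow(A, b)
print(np.linalg.norm(A @ u - b))
```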
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
General phase regularized reconstruction using phase cycling.
Ong, Frank; Cheng, Joseph Y; Lustig, Michael
2018-07-01
To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstruction of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state of the art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved similar performances as state of the art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Quantum calculus of classical vortex images, integrable models and quantum states
NASA Astrophysics Data System (ADS)
Pashaev, Oktay K.
2016-10-01
From the two-circle theorem described in terms of q-periodic functions, in the limit q→1 we derive the strip theorem and the stream function for the N-vortex problem. For the regular N-vortex polygon we find a compact expression for the velocity of uniform rotation and show that it represents a nonlinear oscillator. We describe q-dispersive extensions of the linear and nonlinear Schrödinger equations, as well as the q-semiclassical expansions in terms of Bernoulli and Euler polynomials. Different kinds of q-analytic functions are introduced, including the pq-analytic and the golden analytic functions.
Toscano; de Aguiar MA; Ozorio De Almeida AM
2001-01-01
We propose a picture of Wigner function scars as a sequence of concentric rings along a two-dimensional surface inside a periodic orbit. This is verified for a two-dimensional plane that contains a classical hyperbolic orbit of a Hamiltonian system with 2 degrees of freedom. The stationary wave functions are the familiar mixture of scarred and random waves, but the spectral average of the Wigner functions in part of the plane is nearly that of a harmonic oscillator and individual states are also remarkably regular. These results are interpreted in terms of the semiclassical picture of chords and centers.
Role of Orbital Dynamics in Spin Relaxation and Weak Antilocalization in Quantum Dots
NASA Astrophysics Data System (ADS)
Zaitsev, Oleg; Frustaglia, Diego; Richter, Klaus
2005-01-01
We develop a semiclassical theory for spin-dependent quantum transport to describe weak (anti)localization in quantum dots with spin-orbit coupling. This allows us to distinguish different types of spin relaxation in systems with chaotic, regular, and diffusive orbital classical dynamics. We find, in particular, that for typical Rashba spin-orbit coupling strengths, integrable ballistic systems can exhibit weak localization, while corresponding chaotic systems show weak antilocalization. We further calculate the magnetoconductance and analyze how the weak antilocalization is suppressed with decreasing quantum dot size and increasing additional in-plane magnetic field.
NASA Astrophysics Data System (ADS)
Graham, D. L.
1995-02-01
Bright and dark markings have been regularly recorded by visual observers of Mercury since the nineteenth century. Following the Mariner 10 mission, topographic maps of the hemisphere imaged by the spacecraft were produced. Part One of this paper reviews the classical telescopic observations of albedo markings on Mercury and the definitive albedo map is reproduced to assist visual observers of the planet. In Part Two, an investigation into the relationship between albedo and physiography is conducted and the significance of the historical observations is discussed.
An evaluation of collision models in the Method of Moments for rarefied gas problems
NASA Astrophysics Data System (ADS)
Emerson, David; Gu, Xiao-Jun
2014-11-01
The Method of Moments offers an attractive approach for solving gaseous transport problems that are beyond the limit of validity of the Navier-Stokes-Fourier equations. Recent work has demonstrated the capability of the regularized 13 and 26 moment equations for solving problems when the Knudsen number, Kn (the ratio of the mean free path of a gas to a typical length scale of interest), is in the range 0.1 to 1.0, the so-called transition regime. In comparison to numerical solutions of the Boltzmann equation, the Method of Moments has captured, both qualitatively and quantitatively, the results of classical test problems in kinetic theory, e.g. velocity slip in Kramers' problem, temperature jump in Knudsen layers, the Knudsen minimum, etc. However, most of these results have been obtained for Maxwell molecules, where molecules repel each other according to an inverse fifth-power rule. Recent work has incorporated more traditional collision models such as BGK, S-model, and ES-BGK, the latter being important for thermal problems where the Prandtl number can vary. We are currently investigating the impact of these collision models on fundamental low-speed problems of particular interest to micro-scale flows, which will be discussed and evaluated in the presentation. This work was supported by the Engineering and Physical Sciences Research Council under Grant EP/I011927/1 and by CCP12.
NASA Astrophysics Data System (ADS)
Ogawa, Kazuhisa; Kobayashi, Hirokazu; Tomita, Akihisa
2018-02-01
The quantum interference of entangled photons forms a key phenomenon underlying various quantum-optical technologies. It is known that the quantum interference patterns of entangled photon pairs can be reconstructed classically by the time-reversal method; however, the time-reversal method has been applied only to time-frequency-entangled two-photon systems in previous experiments. Here, we apply the time-reversal method to the position-wave-vector-entangled two-photon systems: the two-photon Young interferometer and the two-photon beam focusing system. We experimentally demonstrate that the time-reversed systems classically reconstruct the same interference patterns as the position-wave-vector-entangled two-photon systems.
NASA Astrophysics Data System (ADS)
de Sousa, J. Ricardo; de Albuquerque, Douglas F.
1997-02-01
By using two approaches of the renormalization group (RG), mean field RG (MFRG) and effective field RG (EFRG), we study the critical properties of the simple cubic lattice classical XY and classical Heisenberg models. The methods are illustrated by employing their simplest approximation version, in which small clusters with one (N' = 1) and two (N = 2) spins are used. The thermal and magnetic critical exponents, Yt and Yh, and the critical parameter Kc are numerically obtained and compared with those from more accurate methods (Monte Carlo, series expansion and ε-expansion). The results presented in this work are in excellent agreement with these sophisticated methods. We also show that the exponent Yh does not depend on the symmetry n of the Hamiltonian, hence the criterion of universality for this exponent is only a function of the dimension d.
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
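Where the regularization enters such a deconvolution can be seen in a minimal frequency-domain example. The filter below is a generic Tikhonov-type inverse filter on synthetic data, not the authors' method or their uncertainty evaluation; it only shows how strongly the chosen parameter shapes the estimate, which is why its uncertainty contribution matters:

```python
import numpy as np

def deconvolve_regularized(y, h, lam=1e-2):
    """Frequency-domain deconvolution with a Tikhonov-type inverse filter
    X = conj(H) * Y / (|H|^2 + lam)."""
    n = len(y)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.fft.irfft(X, n)

# Toy setup: a low-pass impulse response blurs a tone burst (circular convolution).
rng = np.random.default_rng(7)
n = 512
t = np.arange(n)
x_true = np.exp(-0.5 * ((t - 200) / 10.0) ** 2) * np.sin(0.4 * t)
h = np.exp(-t / 20.0)
h /= h.sum()
y = np.fft.irfft(np.fft.rfft(x_true) * np.fft.rfft(h, n), n) + 0.01 * rng.standard_normal(n)

for lam in (1e-4, 1e-2, 1.0):
    x_hat = deconvolve_regularized(y, h, lam)
    print(lam, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Too small a parameter amplifies the measurement noise, too large a parameter over-smooths the estimate; the relative errors printed above trace out exactly this trade-off.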
Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms
NASA Astrophysics Data System (ADS)
Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart
2008-03-01
Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since such modeling may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
Nonideal Rayleigh–Taylor mixing
Lim, Hyunkyung; Iwerks, Justin; Glimm, James; Sharp, David H.
2010-01-01
Rayleigh–Taylor mixing is a classical hydrodynamic instability that occurs when a light fluid pushes against a heavy fluid. The two main sources of nonideal behavior in Rayleigh–Taylor (RT) mixing are regularizations (physical and numerical), which produce deviations from a pure Euler equation, scale invariant formulation, and nonideal (i.e., experimental) initial conditions. The Kolmogorov theory of turbulence predicts stirring at all length scales for the Euler fluid equations without regularization. We interpret mathematical theories of existence and nonuniqueness in this context, and we provide numerical evidence for dependence of the RT mixing rate on nonideal regularizations; in other words, indeterminacy when modeled by Euler equations. Operationally, indeterminacy shows up as nonunique solutions for RT mixing, parametrized by Schmidt and Prandtl numbers, in the large Reynolds number (Euler equation) limit. Verification and validation evidence is presented for the large eddy simulation algorithm used here. Mesh convergence depends on breaking the nonuniqueness with explicit use of the laminar Schmidt and Prandtl numbers and their turbulent counterparts, defined in terms of subgrid scale models. The dependence of the mixing rate on the Schmidt and Prandtl numbers and other physical parameters will be illustrated. We demonstrate numerically the influence of initial conditions on the mixing rate. Both the dominant short wavelength initial conditions and long wavelength perturbations are observed to play a role. By examination of two classes of experiments, we observe the absence of a single universal explanation, with long and short wavelength initial conditions, and the various physical and numerical regularizations contributing in different proportions in these two different contexts. PMID:20615983
Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Xue, Junpeng; Gao, Bo
2017-06-14
There are wide applications for zonal reconstruction methods in slope-based metrology due to their good capability of reconstructing the local details of a surface profile. It was noticed in the literature that large reconstruction errors occur when zonal reconstruction methods designed for rectangular geometry are used to process slopes in a quadrilateral geometry, which is the more general geometry in phase measuring deflectometry. In this paper, we present a new idea for zonal methods in quadrilateral geometry. Instead of employing the intermediate slopes to set up height-slope equations, we consider the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea. In implementation, the modification of the classical zonal methods is addressed. The new methods preserve many good aspects of the classical ones, such as the ability to handle a large incomplete slope dataset in an arbitrary aperture and a computational complexity comparable to the classical zonal method, while their accuracy is much higher when integrating slopes in quadrilateral geometry.
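For orientation, the sketch below sets up and solves height-slope relations by least squares on a plain rectangular grid; it is a simplified stand-in for the quadrilateral-geometry, height-increment formulation of the paper, and the trapezoidal averaging of neighbouring slopes is an assumption.

```python
import numpy as np

def zonal_reconstruct(sx, sy, dx, dy):
    """Least-squares zonal integration of slope maps on a regular rectangular grid.

    sx, sy : measured x- and y-slopes (2-D arrays of identical shape)
    dx, dy : grid spacings
    Returns the height map (defined up to an additive constant).
    """
    ny, nx = sx.shape
    idx = np.arange(ny * nx).reshape(ny, nx)
    rows, cols, vals, rhs = [], [], [], []

    def add_eq(i, j, s):  # height difference between neighbours ~ averaged slope * spacing
        r = len(rhs)
        rows += [r, r]
        cols += [i, j]
        vals += [-1.0, 1.0]
        rhs.append(s)

    for y in range(ny):
        for x in range(nx - 1):
            add_eq(idx[y, x], idx[y, x + 1], 0.5 * (sx[y, x] + sx[y, x + 1]) * dx)
    for y in range(ny - 1):
        for x in range(nx):
            add_eq(idx[y, x], idx[y + 1, x], 0.5 * (sy[y, x] + sy[y + 1, x]) * dy)

    A = np.zeros((len(rhs), ny * nx))   # dense for brevity; a sparse matrix scales better
    A[rows, cols] = vals
    h, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return (h - h.mean()).reshape(ny, nx)
```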
Automatic Aircraft Collision Avoidance System and Method
NASA Technical Reports Server (NTRS)
Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)
2014-01-01
The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
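A generic sketch of the recursive idea described above follows: fit a flat (planar) surface to a tile of elevation data, keep it if the error is within tolerance, otherwise split the tile into quadrants and recurse. The quadrant split, the plane parametrization, and the stopping limits are illustrative assumptions rather than the patented method's details.

```python
import numpy as np

def compress_tile(z, tol, depth=0, max_depth=8):
    """Recursively approximate an elevation tile by flat geometric surfaces.

    z   : 2-D array of terrain elevations for the current tile
    tol : allowed maximum absolute deviation from the fitted plane
    Returns ("leaf", plane_coefficients) or ("split", [four sub-tiles]).
    """
    ny, nx = z.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(z.size)])
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)   # fit z ~ a*x + b*y + c
    err = np.abs(A @ coef - z.ravel()).max()
    if err <= tol or depth >= max_depth or min(ny, nx) < 4:
        return ("leaf", coef)                              # within tolerance: store the plane
    my, mx = ny // 2, nx // 2                              # otherwise divide and recurse
    quads = (z[:my, :mx], z[:my, mx:], z[my:, :mx], z[my:, mx:])
    return ("split", [compress_tile(q, tol, depth + 1, max_depth) for q in quads])
```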
Radiation-reaction force on a small charged body to second order
NASA Astrophysics Data System (ADS)
Moxon, Jordan; Flanagan, Éanna
2018-05-01
In classical electrodynamics, an accelerating charged body emits radiation and experiences a corresponding radiation-reaction force, or self-force. We extend to higher order in the total charge a previous rigorous derivation of the electromagnetic self-force in flat spacetime by Gralla, Harte, and Wald. The method introduced by Gralla, Harte, and Wald computes the self-force from the Maxwell field equations and conservation of stress-energy in a limit where the charge, size, and mass of the body go to zero, and it does not require regularization of a singular self-field. For our higher-order computation, an adjustment of the definition of the mass of the body is necessary to avoid including self-energy from the electromagnetic field sourced by the body in the distant past. We derive the evolution equations for the mass, spin, and center-of-mass position of the body through second order. We derive, for the first time, the second-order acceleration dependence of the evolution of the spin (self-torque), as well as a mixing between the extended body effects and the acceleration-dependent effects on the overall body motion.
NASA Astrophysics Data System (ADS)
Xiong, W.; Li, J.; Zhu, Y.; Luo, X.
2018-07-01
The transition between regular reflection (RR) and Mach reflection (MR) of a Type V shock-shock interaction on a double-wedge geometry with non-equilibrium high-temperature gas effects is investigated theoretically and numerically. A modified shock polar method that involves thermochemical non-equilibrium processes is applied to calculate the theoretical critical angles of transition based on the detachment criterion and the von Neumann criterion. Two-dimensional inviscid numerical simulations are performed correspondingly to reveal the interactive wave patterns, the transition processes, and the critical transition angles. The theoretical and numerical results of the critical transition angles are compared, which shows evident disagreement, indicating that the transition mechanism between RR and MR of a Type V shock interaction is beyond the admissible scope of the classical theory. Numerical results show that the collisions of triple points of the Type V interaction cause the transition instead. Compared with the frozen counterpart, it is found that the high-temperature gas effects lead to a larger critical transition angle and a larger hysteresis interval.
Biological control via "ecological" damping: An approach that attenuates non-target effects.
Parshad, Rana D; Quansah, Emmanuel; Black, Kelly; Beauregard, Matthew
2016-03-01
In this work we develop and analyze a mathematical model of biological control to prevent or attenuate the explosive increase of an invasive species population, that functions as a top predator, in a three-species food chain. We allow for finite time blow-up in the model as a mathematical construct to mimic the explosive increase in population, enabling the species to reach "disastrous", and uncontrollable population levels, in a finite time. We next improve the mathematical model and incorporate controls that are shown to drive down the invasive population growth and, in certain cases, eliminate blow-up. Hence, the population does not reach an uncontrollable level. The controls avoid chemical treatments and/or natural enemy introduction, thus eliminating various non-target effects associated with such classical methods. We refer to these new controls as "ecological damping", as their inclusion dampens the invasive species population growth. Further, we improve prior results on the regularity and Turing instability of the three-species model that were derived in Parshad et al. (2014). Lastly, we confirm the existence of spatiotemporal chaos. Copyright © 2016 Elsevier Inc. All rights reserved.
Element enrichment factor calculation using grain-size distribution and functional data regression.
Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R
2015-01-01
In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Simple proteomics data analysis in the object-oriented PowerShell.
Mohammed, Yassene; Palmblad, Magnus
2013-01-01
Scripting languages such as Perl and Python are appreciated for solving simple, everyday tasks in bioinformatics. A more recent, object-oriented command shell and scripting language, Windows PowerShell, has many attractive features: an object-oriented interactive command line, fluent navigation and manipulation of XML files, ability to consume Web services from the command line, consistent syntax and grammar, rich regular expressions, and advanced output formatting. The key difference between classical command shells and scripting languages, such as bash, and object-oriented ones, such as PowerShell, is that in the latter the result of a command is a structured object with inherited properties and methods rather than a simple stream of characters. Conveniently, PowerShell is included in all new releases of Microsoft Windows and therefore already installed on most computers in classrooms and teaching labs. In this chapter we demonstrate how PowerShell in particular allows easy interaction with mass spectrometry data in XML formats, connection to Web services for tools such as BLAST, and presentation of results as formatted text or graphics. These features make PowerShell much more than "yet another scripting language."
Adaptive laboratory evolution – principles and applications for biotechnology
2013-01-01
Adaptive laboratory evolution is a frequent method in biological studies to gain insights into the basic mechanisms of molecular evolution and adaptive changes that accumulate in microbial populations during long term selection under specified growth conditions. Although the technique has been performed regularly for more than 25 years, the advent of transcript and cheap next-generation sequencing technologies has resulted in many recent studies that successfully applied it in order to engineer microbial cells for biotechnological applications. Adaptive laboratory evolution has some major benefits as compared with classical genetic engineering but also some inherent limitations. However, recent studies show how some of the limitations may be overcome in order to successfully incorporate adaptive laboratory evolution in microbial cell factory design. Over the last two decades important insights into nutrient and stress metabolism of relevant model species were acquired, whereas some other aspects such as niche-specific differences of non-conventional cell factories are not completely understood. Altogether, the current status and its future perspectives highlight the importance and potential of adaptive laboratory evolution as an approach in biotechnological engineering. PMID:23815749
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009 ) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
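As a rough illustration of the subspace idea, the sketch below projects a large matrix onto a small orthonormal basis and applies singular-value soft-thresholding to the resulting small matrix. In the paper's algorithm the basis (the active subspace) is updated iteratively by an augmented Lagrange alternating direction scheme; taking it from a one-shot SVD, as done here, is purely for brevity.

```python
import numpy as np

def factored_svt(M, k, lam):
    """One illustrative 'active subspace' step: restrict a large matrix to a small
    orthonormal subspace, then soft-threshold singular values there.

    M   : large observed matrix (m x n)
    k   : assumed dimension of the active subspace (k << m)
    lam : nuclear-norm regularization weight
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    Q = U[:, :k]                      # m x k orthonormal basis (updated iteratively in the paper)
    S = Q.T @ M                       # small k x n problem carrying the nuclear norm
    Us, ss, Vts = np.linalg.svd(S, full_matrices=False)
    ss = np.maximum(ss - lam, 0.0)    # soft-threshold singular values of the small matrix
    return Q @ (Us * ss) @ Vts        # low-rank estimate of the large matrix
```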
Application of singular value decomposition to structural dynamics systems with constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Pinson, L. D.
1985-01-01
Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and convenient in eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
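A minimal numerical sketch of the SVD-based elimination of dependent coordinates follows, assuming homogeneous constraints C q = 0 on an undamped linear system; the matrices and tolerance are placeholders.

```python
import numpy as np

def constrained_modes(M, K, C, tol=1e-12):
    """Eliminate linear homogeneous constraints C q = 0 via an SVD-based
    coordinate transformation and solve the reduced eigenvalue problem.

    M, K : mass and stiffness matrices (n x n)
    C    : constraint matrix (m x n)
    """
    U, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > tol * s.max()))
    T = Vt[rank:].T                               # null-space basis of the constraints
    Mr, Kr = T.T @ M @ T, T.T @ K @ T             # reduced matrices in independent coordinates
    w2, phi = np.linalg.eig(np.linalg.solve(Mr, Kr))
    order = np.argsort(w2.real)
    freqs = np.sqrt(np.abs(w2.real[order]))       # natural frequencies (rad/s)
    return freqs, (T @ phi[:, order]).real        # mode shapes mapped back to full coordinates
```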
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
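As a minimal, hedged illustration of minimizing expected prediction error by cross-validation (here with ridge regression on a continuous outcome as a stand-in for the criterion-keyed scale problem), one can pick the penalty that minimizes out-of-fold error; the data, penalty grid, and fold count below are placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def choose_penalty(X, y, alphas, n_splits=5, seed=0):
    """Select the ridge penalty minimizing estimated out-of-sample (expected
    prediction) error rather than the within-sample likelihood."""
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    cv_error = []
    for a in alphas:
        fold_err = []
        for train, test in cv.split(X):
            model = Ridge(alpha=a).fit(X[train], y[train])
            fold_err.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
        cv_error.append(np.mean(fold_err))
    return alphas[int(np.argmin(cv_error))], cv_error
```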
A fractional-order accumulative regularization filter for force reconstruction
NASA Astrophysics Data System (ADS)
Wensong, Jiang; Zhongyu, Wang; Jing, Lv
2018-02-01
The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper, the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental result shows that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, outperforming other settings of the FARF method as well as the traditional regularization methods for dynamic force reconstruction.
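For context, a standard SVD-based evaluation of the Generalized Cross-Validation score for plain Tikhonov regularization is sketched below; it shows only the classical parameter-choice step mentioned in the abstract, not the fractional-order accumulation or Landweber filtering of the FARF method.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Choose a Tikhonov parameter by Generalized Cross-Validation, using the SVD
    of A so the GCV score is cheap to evaluate for each candidate lambda."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    b_perp = b - U @ beta                       # component of b outside the range of A
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)              # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + np.sum(b_perp**2)
        scores.append(resid / (len(b) - np.sum(f)) ** 2)
    lam = lambdas[int(np.argmin(scores))]
    x = Vt.T @ ((s / (s**2 + lam**2)) * beta)   # regularized solution at the GCV optimum
    return lam, x
```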
Pose invariant face recognition: 3D model from single photo
NASA Astrophysics Data System (ADS)
Napoléon, Thibault; Alfalou, Ayman
2017-02-01
Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. This technique is based on an optimized 3D modeling that allows faces to be reconstructed in different poses from a limited number of references (i.e. one image per class/person). In particular, we propose to use an active shape model to detect a set of keypoints on the face, needed to deform our synthetic model with our optimized finite element method. In order to improve the deformation, we propose a regularization by distances on a graph. To perform the identification we use the VanderLugt correlator, well known to address this task effectively. We also add a difference-of-Gaussian filtering step to highlight the edges and a description step based on local binary patterns. The experiments are performed on the PHPID database enhanced with our 3D reconstructed faces of each person, with azimuth and elevation ranging from -30° to +30°. The obtained results prove the robustness of our new method, with 88.76% correct identification, whereas the classic 2D approach (based on the VLC) obtains just 44.97%.
Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva
2004-01-01
The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs), pruned by a regularization method and with architectures designed by evolutionary computation, for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as a detection system for chromatographic analysis. Straightforward network topologies (one- and two-output models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1, with an average standard error of prediction for the generalization test of 2.7 and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after using heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to attain quantitative analytical information regarding the effect of both analytes on the characteristics of chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society
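A small sketch of fitting a four-parameter Weibull-type band profile with the Levenberg-Marquardt algorithm (via scipy's curve_fit) is given below; the particular parametrization, synthetic data, and starting values are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_band(t, amp, t0, beta, eta):
    """One plausible four-parameter Weibull-type chromatographic band profile."""
    s = np.clip((t - t0) / eta, 1e-12, None)
    return amp * (beta / eta) * s ** (beta - 1.0) * np.exp(-(s ** beta))

# Hypothetical noisy band, fitted with Levenberg-Marquardt (method="lm")
t = np.linspace(0.0, 10.0, 300)
clean = weibull_band(t, 5.0, 1.0, 2.0, 3.0)
noisy = clean + 0.05 * np.random.default_rng(0).normal(size=t.size)
popt, pcov = curve_fit(weibull_band, t, noisy, p0=[4.0, 0.8, 1.8, 2.5], method="lm")
```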
NASA Astrophysics Data System (ADS)
Bogolubov, Nikolai N.; Soldatov, Andrey V.
2017-12-01
Exact and approximate master equations were derived by the projection operator method for the reduced statistical operator of a multi-level quantum system with finite number N of quantum eigenstates interacting with arbitrary external classical fields and dissipative environment simultaneously. It was shown that the structure of these equations can be simplified significantly if the free Hamiltonian driven dynamics of an arbitrary quantum multi-level system under the influence of the external driving fields as well as its Markovian and non-Markovian evolution, stipulated by the interaction with the environment, are described in terms of the SU(N) algebra representation. As a consequence, efficient numerical methods can be developed and employed to analyze these master equations for real problems in various fields of theoretical and applied physics. It was also shown that literally the same master equations hold not only for the reduced density operator but also for arbitrary nonequilibrium multi-time correlation functions as well under the only assumption that the system and the environment are uncorrelated at some initial moment of time. A calculational scheme was proposed to account for these lost correlations in a regular perturbative way, thus providing additional computable terms to the correspondent master equations for the correlation functions.
Morabia, Alfredo
2015-03-18
Before World War II, epidemiology was a small discipline, practiced by a handful of people working mostly in the United Kingdom and in the United States. Today it is practiced by tens of thousands of people on all continents. Between 1945 and 1965, during what is known as its "classical" phase, epidemiology became recognized as a major academic discipline in medicine and public health. On the basis of a review of the historical evidence, this article examines to which extent classical epidemiology has been a golden age of an action-driven, problem-solving science, in which epidemiologists were less concerned with the sophistication of their methods than with the societal consequences of their work. It also discusses whether the paucity of methods stymied or boosted classical epidemiology's ability to convince political and financial agencies about the need to intervene in order to improve the health of the people.
Renormalized stress-energy tensor for stationary black holes
NASA Astrophysics Data System (ADS)
Levi, Adam
2017-01-01
We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t-splitting variant of the method, which was first presented for ⟨φ²⟩_ren, to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example for regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.
Linear and Non-linear Information Flows In Rainfall Field
NASA Astrophysics Data System (ADS)
Molini, A.; La Barbera, P.; Lanza, L. G.
The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time as well as the strong dependence of these properties on the scale of observations. The understanding and quantification of how the non-linearity of the generating process comes to influence the single rain events constitute relevant research issues in the field of hydro-meteorology, especially in those applications where a timely and effective forecasting of heavy rain events is able to reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of such a survey are the search for regular structures of the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between the different locations in space, the different instants in time and, unless the hypothesis of scale invariance is verified "a priori", the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods, then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory, and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.
Contact stresses in gear teeth: A new method of analysis
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.
1991-01-01
A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and over existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
On the regularized fermionic projector of the vacuum
NASA Astrophysics Data System (ADS)
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
Generalized Bregman distances and convergence rates for non-convex regularization methods
NASA Astrophysics Data System (ADS)
Grasmair, Markus
2010-11-01
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds if the regularization term has a slightly faster growth at zero than |t|^p.
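For readers unfamiliar with the convex case that the paper generalizes, the standard definitions are restated below in assumed notation (R the regularization functional, x† the exact solution, ξ a subgradient, δ the noise level); this is background, not the paper's generalized construction.

```latex
\[
  D_\xi(x, x^\dagger) \;=\; R(x) - R(x^\dagger) - \langle \xi,\; x - x^\dagger \rangle,
  \qquad \xi \in \partial R(x^\dagger),
\]
\[
  \text{sparse setting, } R(x) = \sum_k \lvert x_k \rvert^{p}:
  \qquad \text{error} \;=\; \mathcal{O}\bigl(\delta^{1/p}\bigr)
  \quad \text{as } \delta \to 0 .
\]
```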
Line mixing effects in isotropic Raman spectra of pure N2: A classical trajectory study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Sergey V., E-mail: serg.vict.ivanov@gmail.com; Boulet, Christian; Buzykin, Oleg G.
2014-11-14
Line mixing effects in the Q branch of pure N2 isotropic Raman scattering are studied at room temperature using a classical trajectory method. It is the first study using an extended modified version of Gordon's classical theory of impact broadening and shift of rovibrational lines. The whole relaxation matrix is calculated using an exact 3D classical trajectory method for binary collisions of rigid N2 molecules employing the most up-to-date intermolecular potential energy surface (PES). A simple symmetrizing procedure is employed to improve the off-diagonal cross-sections so that they obey exactly the principle of detailed balance. The adequacy of the results is confirmed by the sum rule. A comparison is made with available experimental data as well as with benchmark fully quantum close coupling [F. Thibault, C. Boulet, and Q. Ma, J. Chem. Phys. 140, 044303 (2014)] and refined semi-classical Robert-Bonamy [C. Boulet, Q. Ma, and F. Thibault, J. Chem. Phys. 140, 084310 (2014)] results. All calculations (classical, quantum, and semi-classical) were made using the same PES. The agreement between classical and quantum relaxation matrices is excellent, opening the way to the analysis of more complex molecular systems.
NASA Astrophysics Data System (ADS)
Jiang, Peng; Peng, Lihui; Xiao, Deyun
2007-06-01
This paper presents a regularization method that uses different window functions as the regularizer for electrical capacitance tomography (ECT) image reconstruction. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. Window functions such as the Hanning window, the cosine window and so on are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours are clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
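The spectral filtering idea can be sketched as a windowed truncated-SVD solution; the specific Hanning-type taper below, and treating the window as filter factors on the singular components, are illustrative assumptions rather than the paper's exact modified windows.

```python
import numpy as np

def window_regularized_solution(S, v, n_keep):
    """Solve the ill-posed linear system S g = v by filtering its SVD components
    with a Hanning-type taper rather than Tikhonov filter factors.

    S      : sensitivity matrix (m x n)
    v      : normalized capacitance measurements (length m)
    n_keep : number of singular components retained inside the window
    """
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    w = np.zeros_like(s)
    k = min(n_keep, s.size)
    # Taper from unit weight for the largest singular values down towards zero,
    # so the noise-dominated small-singular-value components are suppressed.
    w[:k] = 0.5 * (1.0 + np.cos(np.pi * np.arange(k) / k))
    inv_s = np.divide(w, s, out=np.zeros_like(s), where=s > 0)
    return Vt.T @ (inv_s * (U.T @ v))
```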
An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines
NASA Technical Reports Server (NTRS)
Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng
2014-01-01
We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations that were carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room-temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure broadening coefficients of isotropic Raman lines, is also used for IR lines. By using this improved model, which takes into account effects from line coupling, the calculated semi-classical widths are significantly reduced and closer to the measured ones.
Velopharyngeal Port Status during Classical Singing
ERIC Educational Resources Information Center
Tanner, Kristine; Roy, Nelson; Merrill, Ray M.; Power, David
2005-01-01
Purpose: This investigation was undertaken to examine the status of the velopharyngeal (VP) port during classical singing. Method: Using aeromechanical instrumentation, nasal airflow (mL/s), oral pressure (cm H2O), and VP orifice area estimates (cm²) were studied in 10 classically trained sopranos during singing and speaking.…
NASA Astrophysics Data System (ADS)
Chen, Shuhong; Tan, Zhong
2007-11-01
In this paper, we consider nonlinear elliptic systems under a controllable growth condition. We use a new method introduced by Duzaar and Grotowski for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.
A new procedure for calculating contact stresses in gear teeth
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.
1991-01-01
A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. The procedure is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure. The method is believed to have distinct advantages over the classical Hertz method, the finite-element method, and over existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projections suffer from serious data deficiency or various noises, obtaining reconstructed images that meet the quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm with the learning strategy performs better than dual-domain algorithms without the learned regularization model.
Schmiedt, Hanno; Schlemmer, Stephan; Yurchenko, Sergey N.; Yachmenev, Andrey
2017-01-01
We report a new semi-classical method to compute highly excited rotational energy levels of an asymmetric-top molecule. The method forgoes the idea of a full quantum mechanical treatment of the ro-vibrational motion of the molecule. Instead, it employs a semi-classical Green's function approach to describe the rotational motion, while retaining a quantum mechanical description of the vibrations. Similar approaches have existed for some time, but the method proposed here has two novel features. First, inspired by the path integral method, periodic orbits in the phase space and tunneling paths are naturally obtained by means of molecular symmetry analysis. Second, the rigorous variational method is employed for the first time to describe the molecular vibrations. In addition, we present a new robust approach to generating rotational energy surfaces for vibrationally excited states; this is done in a fully quantum-mechanical, variational manner. The semi-classical approach of the present work is applied to calculating the energies of very highly excited rotational states and it dramatically reduces the computing time as well as the storage and memory requirements when compared to the fully quantum-mechanical variational approach. Test calculations for excited states of SO2 yield semi-classical energies in very good agreement with the available experimental data and the results of fully quantum-mechanical calculations. PMID:28000807
Controlling lightwave in Riemann space by merging geometrical optics with transformation optics.
Liu, Yichao; Sun, Fei; He, Sailing
2018-01-11
In geometrical optical design, we only need to choose a suitable combination of lenses, prisms, and mirrors to design an optical path. It is a simple and classic method for engineers. However, people cannot design fantastical optical devices such as invisibility cloaks, optical wormholes, etc. by geometrical optics. Transformation optics has paved the way for these complicated designs. However, controlling the propagation of light by transformation optics is not a direct design process like geometrical optics. In this study, a novel mixed method for optical design is proposed which has both the simplicity of classic geometrical optics and the flexibility of transformation optics. This mixed method overcomes the limitations of classic optical design; at the same time, it gives intuitive guidance for optical design by transformation optics. Three novel optical devices with fantastic functions have been designed using this mixed method, including asymmetrical transmissions, bidirectional focusing, and bidirectional cloaking. These optical devices cannot be implemented by classic optics alone and are also too complicated to be designed by pure transformation optics. Numerical simulations based on both the ray tracing method and full-wave simulation method are carried out to verify the performance of these three optical devices.
Modal identification of structures by a novel approach based on FDD-wavelet method
NASA Astrophysics Data System (ADS)
Tarinejad, Reza; Damadipour, Majid
2014-02-01
An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. Integration of the FDD and CWT methods is used to overcome their individual deficiencies and take advantage of the unique capabilities of each method. The FDD method is able to accurately estimate the natural frequencies and mode shapes of structures in the frequency domain. The CWT method, operating in the time-frequency domain, decomposes a signal at different frequencies and determines the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This enables accurate estimation of damping ratios from weak (noise) or strong (earthquake) vibrations and from long or short duration records. For this purpose, the modified Morlet wavelet, which has two free parameters, is used. The optimum values of these two parameters are obtained by employing a technique that minimizes the entropy of the wavelet coefficient matrix. The capabilities of the novel FDD-wavelet method in the system identification of various dynamic systems with regular or irregular distributions of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.
Wang, Zhifei; Xie, Yanming; Wang, Yongyan
2011-10-01
Computerized extraction of information from the Chinese medicine literature is more convenient than hand searching: it simplifies the search process and improves accuracy. Among the many automated extraction methods now in use, regular expressions are particularly efficient for extracting useful information in research. This article focuses on applying regular expressions to extract information from the Chinese medicine literature. Two practical examples are reported, extracting the "case number" (non-terminology) and the "efficacy rate" (subgroups for related information identification), which explore how to extract information from the Chinese medicine literature by means of this special research method.
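A brief, hedged illustration of the approach in Python follows; the sample sentence and the two patterns below are invented for demonstration and are not the expressions used in the study.

```python
import re

# Hypothetical sentence from a Chinese medicine trial report
text = "共纳入病例86例，治疗组总有效率为93.0%。"

case_number = re.search(r"病例\s*(\d+)\s*例", text)         # "case number" (non-terminology)
efficacy = re.search(r"总有效率为\s*([\d.]+)\s*%", text)     # overall "efficacy rate"

if case_number:
    print("cases:", case_number.group(1))        # -> cases: 86
if efficacy:
    print("efficacy rate:", efficacy.group(1))   # -> efficacy rate: 93.0
```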
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to be expressed completely and sparsely by using a single basis function set. Based on the redundant concatenated dictionary and weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for formulation of MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with a strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
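Since the abstract leans on FISTA with a weighted l1-norm penalty, a generic sketch of that solver for a least-squares data term is given below; the dictionary matrix, weights, and step size are generic assumptions, and the bridge-specific formulation of the paper is not reproduced.

```python
import numpy as np

def fista_weighted_l1(A, b, w, lam, n_iter=500):
    """FISTA for min_x 0.5*||A x - b||^2 + lam * sum_i w_i |x_i|,
    i.e. a weighted l1-regularized least-squares sketch of the MFI formulation.

    A : dictionary (system) matrix, b : measured responses, w : per-coefficient weights
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = y - (A.T @ (A @ y - b)) / L      # gradient step
        thresh = lam * w / L
        x_new = np.sign(g) * np.maximum(np.abs(g) - thresh, 0.0)  # weighted soft-thresholding
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
        x, t = x_new, t_new
    return x
```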
Application of L1/2 regularization logistic method in heart disease diagnosis.
Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying
2014-01-01
Heart disease has become the number one killer of human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and other dozens of physiological indicators. Although there are so many risk factors, doctors usually diagnose the disease depending on their intuition and experience, which requires a lot of knowledge and experience for correct determination. To find the hidden medical information in the existing clinical data is a noticeable and powerful approach in the study of heart disease diagnosis. In this paper, sparse logistic regression method is introduced to detect the key risk factors using L(1/2) regularization on the real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method achieves fewer but informative key features than Lasso, SCAD, MCP and Elastic net regularization approaches. Simultaneously, the proposed method can cut down the computational complexity, save cost and time to undergo medical tests and checkups, reduce the number of attributes needed to be taken from patients.
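For orientation, the sketch below selects key features with sparse logistic regression using scikit-learn; note that scikit-learn provides the convex L1 penalty, used here only as a stand-in, whereas the paper's nonconvex L(1/2) penalty requires a dedicated solver and typically yields even sparser solutions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def key_risk_factors(X, y, feature_names, C=0.1):
    """Return the features kept by a sparse (L1-penalized) logistic regression.
    The L1 penalty is a convex surrogate for the L(1/2) penalty discussed above."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    coefs = model.coef_.ravel()
    return [(name, c) for name, c in zip(feature_names, coefs) if abs(c) > 1e-8]
```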
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
NASA Astrophysics Data System (ADS)
Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven
2018-07-01
We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
An analytical method for the inverse Cauchy problem of Lame equation in a rectangle
NASA Astrophysics Data System (ADS)
Grigor’ev, Yu
2018-04-01
In this paper, we present an analytical computational method for the inverse Cauchy problem of Lame equation in the elasticity theory. A rectangular domain is frequently used in engineering structures and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
Zero-point energy constraint in quasi-classical trajectory calculations.
Xie, Zhen; Bowman, Joel M
2006-04-27
A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.
Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels
NASA Astrophysics Data System (ADS)
Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis
2013-01-01
We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), which is enabled to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.
Classical mutual information in mean-field spin glass models
NASA Astrophysics Data System (ADS)
Alba, Vincenzo; Inglis, Stephen; Pollet, Lode
2016-03-01
We investigate the classical Rényi entropy Sn and the associated mutual information In in the Sherrington-Kirkpatrick (S-K) model, which is the paradigm model of mean-field spin glasses. Using classical Monte Carlo simulations and analytical tools we investigate the S-K model in the n -sheet booklet. This is achieved by gluing together n independent copies of the model, and it is the main ingredient for constructing the Rényi entanglement-related quantities. We find a glassy phase at low temperatures, whereas at high temperatures the model exhibits paramagnetic behavior, consistent with the regular S-K model. The temperature of the paramagnetic-glassy transition depends nontrivially on the geometry of the booklet. At high temperatures we provide the exact solution of the model by exploiting the replica symmetry. This is the permutation symmetry among the fictitious replicas that are used to perform disorder averages (via the replica trick). In the glassy phase the replica symmetry has to be broken. Using a generalization of the Parisi solution, we provide analytical results for Sn and In and for standard thermodynamic quantities. Both Sn and In exhibit a volume law in the whole phase diagram. We characterize the behavior of the corresponding densities, Sn/N and In/N , in the thermodynamic limit. Interestingly, at the critical point the mutual information does not exhibit any crossing for different system sizes, in contrast with local spin models.
Quantum approach to classical statistical mechanics.
Somma, R D; Batista, C D; Ortiz, G
2007-07-20
We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates to assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(-c/N) for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.
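As a rough illustration of how such schedules behave, the sketch below simply evaluates the two asymptotic rates; the constants p and c, the unit choice k_B = 1, and the system size N are placeholders, not values from the paper.

```python
import numpy as np

def simulated_annealing_temperature(t, N, p=1.0, k_B=1.0):
    """Logarithmic temperature schedule T(t) ~ p*N / (k_B * log t)."""
    return p * N / (k_B * np.log(t))

def quantum_annealing_field(t, N, c=1.0):
    """Transverse-field schedule gamma(t) ~ (N*t)**(-c/N)."""
    return (N * t) ** (-c / N)

times = np.logspace(1, 6, 6)          # illustrative annealing times
print(simulated_annealing_temperature(times, N=64))
print(quantum_annealing_field(times, N=64))
```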
Representation of the exact relativistic electronic Hamiltonian within the regular approximation
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2003-12-01
The exact relativistic Hamiltonian for electronic states is expanded in terms of energy-independent linear operators within the regular approximation. An effective relativistic Hamiltonian has been obtained, which yields in lowest order directly the infinite-order regular approximation (IORA) rather than the zeroth-order regular approximation method. Further perturbational expansion of the exact relativistic electronic energy utilizing the effective Hamiltonian leads to new methods based on ordinary (IORAn) or double [IORAn(2)] perturbation theory (n: order of expansion), which provide improved energies in atomic calculations. Energies calculated with IORA4 and IORA3(2) are accurate up to c^(-20). Furthermore, IORA is improved by using the IORA wave function to calculate the Rayleigh quotient, which, if minimized, leads to the exact relativistic energy. The outstanding performance of this new IORA method, coined scaled IORA, is documented in atomic and molecular calculations.
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and with motion discontinuities, and produces accurate piecewise-smooth motion fields.
NASA Astrophysics Data System (ADS)
Wuthrich, Christian
My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.
Introducing Hurst exponent in pair trading
NASA Astrophysics Data System (ADS)
Ramos-Requena, J. P.; Trinidad-Segovia, J. E.; Sánchez-Granero, M. A.
2017-12-01
In this paper we introduce a new methodology for pair trading. This new method is based on the calculation of the Hurst exponent of a pair. Our approach is inspired by the classical concepts of co-integration and mean reversion but joined under a unique strategy. We will show how the Hurst approach yields better results than the classical Distance Method and Correlation strategies in different scenarios. The results obtained show that this new methodology is consistent and suitable, reducing the trading drawdown relative to the classical strategies and thereby achieving better performance.
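The paper's exact estimator is not reproduced here; as one common way to compute a Hurst exponent for the spread of a pair, the sketch below uses rescaled-range (R/S) analysis. All names and the synthetic spread series are purely illustrative.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a series x by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= N // 2:
        rs = []
        for start in range(0, N - n + 1, n):
            chunk = x[start:start + n]
            dev = chunk - chunk.mean()
            z = np.cumsum(dev)
            R = z.max() - z.min()                 # range of the cumulative deviations
            S = chunk.std(ddof=0)                 # standard deviation of the chunk
            if S > 0:
                rs.append(R / S)
        if rs:
            sizes.append(n)
            rs_vals.append(np.mean(rs))
        n *= 2
    # H is the slope of log(R/S) versus log(window size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# usage: a random-walk spread should give H near 0.5; mean-reverting spreads give smaller H
rng = np.random.default_rng(0)
spread = rng.normal(size=4096).cumsum()           # placeholder spread series
print("H =", hurst_rs(spread))
```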
RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING
Liu, Meizhu; Vemuri, Baba C.
2011-01-01
Boosting is a versatile machine learning technique that has numerous applications including but not limited to image processing, computer vision, data mining etc. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. There have been a number of boosting methods proposed in the literature, such as the AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses Riemannian distance between two square-root densities (in closed form) – used to represent the distribution over the training data and the classification error respectively – to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost compared to other regularized Boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LP-Boost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home grown Epilepsy database and the well known UCI repository. Results depict that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
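The closed-form distance referred to above is, in one common convention, the geodesic (great-circle) distance between the square roots of two discrete densities on the unit sphere, i.e. the arccosine of the Bhattacharyya coefficient (some conventions include an extra factor of 2). A minimal sketch under that assumption, with illustrative inputs, follows.

```python
import numpy as np

def riemannian_sqrt_density_distance(p, q):
    """Geodesic distance between two discrete densities represented by their
    square roots on the unit sphere: d(p, q) = arccos( sum_i sqrt(p_i * q_i) )."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    bc = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)   # Bhattacharyya coefficient
    return np.arccos(bc)

# usage: distance between a training-sample weight distribution and a
# hypothetical error distribution (both illustrative placeholders)
w = np.ones(10) / 10
e = np.array([0.3, 0.2, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])
print(riemannian_sqrt_density_distance(w, e))
```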
Composite SAR imaging using sequential joint sparsity
NASA Astrophysics Data System (ADS)
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.
2017-06-01
This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation in terms of real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as or more efficient than traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
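As a generic illustration of ℓ1-regularized reconstruction (not the paper's fast SAR operators or its sequential joint-sparsity model), the sketch below solves a small synthetic problem with the standard ISTA proximal-gradient iteration; all names, sizes, and the regularization weight are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient descent (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# usage on a small sparse-recovery toy problem
rng = np.random.default_rng(1)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
b = A @ x_true + 0.01 * rng.normal(size=80)
print(np.linalg.norm(ista(A, b, lam=0.1) - x_true))
```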
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
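The two simplest baselines mentioned, Tikhonov zero-order and first-order regularization, can be sketched for a generic linear inverse problem Ax ≈ b as follows; this is not the STRE model itself, and the matrices below are synthetic placeholders rather than torso-heart transfer operators.

```python
import numpy as np

def tikhonov(A, b, lam, order=0):
    """Solve min ||A x - b||^2 + lam*||L x||^2 with L = I (order 0)
    or L = first-difference matrix (order 1)."""
    n = A.shape[1]
    if order == 0:
        L = np.eye(n)
    else:
        L = np.diff(np.eye(n), axis=0)      # (n-1) x n first-difference operator
    lhs = A.T @ A + lam * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

# usage on a synthetic smooth signal
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 30))
x_true = np.sin(np.linspace(0, np.pi, 30))
b = A @ x_true + 0.05 * rng.normal(size=50)
for order in (0, 1):
    x = tikhonov(A, b, lam=1.0, order=order)
    print("order", order, "error", np.linalg.norm(x - x_true))
```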
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
Taking-On: A Grounded Theory of Addressing Barriers in Task Completion
ERIC Educational Resources Information Center
Austinson, Julie Ann
2011-01-01
This study of taking-on was conducted using classical grounded theory methodology (Glaser, 1978, 1992, 1998, 2001, 2005; Glaser & Strauss, 1967). Classical grounded theory is inductive, empirical, and naturalistic; it does not utilize manipulation or constrained time frames. Classical grounded theory is a systemic research method used to generate…
NASA Astrophysics Data System (ADS)
Raj, Xavier James
2016-07-01
Accurate orbit prediction of an artificial satellite under the influence of air drag is one of the most difficult and intractable problems in orbital dynamics. The orbital decay of these satellites is mainly controlled by atmospheric drag effects. The effects of the atmosphere are difficult to determine, since the atmospheric density undergoes large fluctuations. The classical Newtonian equations of motion, which are nonlinear, are not suitable for long-term integration. Many transformations have emerged in the literature to stabilize the equations of motion, either to reduce the accumulation of local numerical errors, to allow the use of large integration step sizes, or both, in the transformed space. One such transformation is the KS transformation of Kustaanheimo and Stiefel, who regularized the nonlinear Kepler equations of motion and reduced them to the linear differential equations of a harmonic oscillator of constant frequency. The method of KS total energy element equations has been found to be a very powerful method for obtaining numerical as well as analytical solutions with respect to any type of perturbing force, as the equations are less sensitive to round-off and truncation errors. The uniformly regular KS canonical equations are a particular canonical form of the KS differential equations, in which all ten KS canonical elements αi and βi are constant for unperturbed motion. These equations permit the uniform formulation of the basic laws of elliptic, parabolic and hyperbolic motion. Using these equations, an analytical solution was developed for short-term orbit predictions with respect to the Earth's zonal harmonic terms J2, J3 and J4. Further, these equations were utilized to include the canonical forces, and analytical theories with air drag were developed for low-eccentricity orbits (e < 0.2) with different atmospheric models. Using the uniformly regular KS canonical elements, an analytical theory was developed for high-eccentricity (e > 0.2) orbits by assuming the atmosphere to be oblate only. In this paper a new non-singular analytical theory is developed for the motion of high-eccentricity satellite orbits with an oblate, diurnally varying atmosphere in terms of the uniformly regular KS canonical elements. The analytical solutions are generated up to fourth-order terms using a new independent variable and c (a small parameter dependent on the flattening of the atmosphere). Due to symmetry, only two of the nine equations need to be solved analytically to compute the state vector and the change in energy at the end of each revolution. The theory is developed on the assumption that density is constant on the surfaces of spheroids of fixed ellipticity ɛ (equal to the Earth's ellipticity, 0.00335) whose axes coincide with the Earth's axis. Numerical experimentation with the analytical solution for a wide range of perigee heights, eccentricities, and orbital inclinations has been carried out for up to 100 revolutions. Comparisons are made with numerically integrated values, and they are found to match quite well. The effectiveness of the present analytical solution is demonstrated by comparing the results with other analytical solutions in the literature.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
NASA Astrophysics Data System (ADS)
Protasov, M.; Gadylshin, K.
2017-07-01
A numerical method is proposed for the calculation of exact frequency-dependent rays when the solution of the Helmholtz equation is known. The properties of frequency-dependent rays are analysed and compared with classical ray theory and with the method of finite-difference modelling for the first time. In this paper, we study the dependence of these rays on the frequency of signals and show the convergence of the exact rays to the classical rays with increasing frequency. A number of numerical experiments demonstrate the distinctive features of exact frequency-dependent rays, in particular, their ability to penetrate into shadow zones that are impenetrable for classical rays.
Duan, Jizhong; Liu, Yu; Jing, Peiguang
2018-02-01
Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Over Convex Sets (POCS) method was used to solve the formulated regularized SPIRiT problem. However, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, these methods always demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. Especially, our proposal is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
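The Barzilai-Borwein step-size rule mentioned above can be sketched generically as follows, as plain gradient descent on a toy quadratic rather than the SPIRiT objective; the BB1 formula alpha_k = (s^T s)/(s^T y) is standard, everything else here is illustrative.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3):
    """Gradient descent with the Barzilai-Borwein (BB1) step size
    alpha_k = (s^T s) / (s^T y), with s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev               # one fixed-step iteration to start
    for _ in range(n_iter):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# usage on the quadratic 0.5*x^T Q x - b^T x (gradient Q x - b)
Q = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 2.0, 3.0])
sol = bb_gradient_descent(lambda x: Q @ x - b, np.zeros(3))
print(sol, np.linalg.solve(Q, b))
```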
Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts
Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.
2013-01-01
To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
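The key computational point, that a circulant (periodic) blur model makes the inner least-squares systems diagonal in the Fourier domain, can be sketched as a simple quadratically regularized deconvolution; this is only the circulant baseline, not the paper's masked non-circulant model or its augmented Lagrangian algorithm, and the PSF and images are synthetic.

```python
import numpy as np

def circulant_deblur(blurred, psf, lam=1e-2):
    """Quadratically regularized deconvolution under a circulant (periodic) blur model:
    solves (H^T H + lam*I) x = H^T b diagonally in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# usage with a small Gaussian PSF and a synthetic square image
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
img = np.zeros((n, n)); img[20:44, 20:44] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
print(np.abs(circulant_deblur(blurred, psf) - img).mean())
```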
[Research and development strategies in classical herbal formulae].
Chen, Chang; Cheng, Jin-Tang; Liu, An
2017-05-01
As outstanding representatives of traditional Chinese medicine prescriptions, classical herbal formulae are the essence of the great treasure of traditional Chinese medicine. To support their development, the state and the relevant administrative departments have successively promulgated encouraging policies. However, some key issues in the development of classical herbal formulae have not yet reached a unified consensus and standard, and these problems are discussed in depth here. The authors discuss the registration requirements of classical herbal formulae, propose specific screening indicators, the basis for determining prescription and dosage, the screening method for the production process, and the basic principle of clinical localization, in order to offer valuable opinions and provide a reference for the development of classical herbal formulae and for policy formulation. Copyright© by the Chinese Pharmaceutical Association.
Unfolding sphere size distributions with a density estimator based on Tikhonov regularization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weese, J.; Korat, E.; Maier, D.
1997-12-01
This report proposes a method for unfolding sphere size distributions, given a sample of radii, that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed in this report: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; results based on simulations; and a comparison with experimental data.
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
ERIC Educational Resources Information Center
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…
On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have less relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
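The iteratively reweighted norm idea for an L1-norm data term can be sketched generically as follows, for min ||Ax - b||_1 + λ||Lx||_2^2; the matrices below are synthetic stand-ins for the body-surface-to-epicardium transfer problem, and the damping eps and iteration count are illustrative choices.

```python
import numpy as np

def irls_l1_data(A, b, L, lam, n_iter=30, eps=1e-6):
    """Approximately solve min_x ||A x - b||_1 + lam * ||L x||_2^2
    by iteratively reweighted least squares applied to the data term."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # L1 reweighting of the residuals
        AW = A * w[:, None]                    # rows of A scaled by the weights
        lhs = A.T @ AW + lam * (L.T @ L)
        x = np.linalg.solve(lhs, AW.T @ b)     # normal equations A^T W A x = A^T W b
    return x

# usage with a few gross outliers in the data (where L1 beats L2)
rng = np.random.default_rng(3)
A = rng.normal(size=(60, 20))
x_true = rng.normal(size=20)
b = A @ x_true
b[::10] += 5.0                                 # simulated corrupted electrodes
L = np.eye(20)
print(np.linalg.norm(irls_l1_data(A, b, L, lam=1e-3) - x_true))
```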
Pierre, Peggy; Despert, François; Tranquart, François; Coutant, Régis; Tardy, Véronique; Kerlan, Véronique; Sonnet, Emmanuel; Baron, Sabine; Lorcy, Yannick; Emy, Philippe; Delavierre, Dominique; Monceaux, Françoise; Morel, Yves; Lecomte, Pierre
2012-12-01
Several cases of testicular adrenal rest tumours have been reported in men with congenital adrenal hyperplasia (CAH) due to the classical form of 21-hydroxylase deficiency but the prevalence has not been established. The aims of this report were to evaluate the frequency of testicular adrenal rest tissue in this population in a retrospective multicentre study involving eight endocrinology centres, and to determine whether treatment or genetic background had an impact on the occurrence of adrenal rest tissue. Testicular adrenal rest tissue (TART) was sought clinically and with ultrasound examination in forty-five males with CAH due to the classical form of 21-hydroxylase deficiency. When the diagnosis of testicular adrenal rest tumours was sought, good observance of treatment was judged on biological concentrations of 17-hydroxyprogesterone (17OHP), delta4-androstenedione, active renin and testosterone. The results of affected and non-affected subjects were compared. TART was detected in none of the 18 subjects aged 1 to 15years but was detected in 14 of the 27 subjects aged more than 15years. Five patients with an abnormal echography result had no clinical signs. Therapeutic control evaluated at diagnosis of TART seemed less effective when diagnosis was made in patients with adrenal rest tissue compared to TART-free subjects. Various genotypes were observed in patients with or without TART. Due to the high prevalence of TART in classical CAH and the delayed clinical diagnosis, testicular ultrasonography must be performed before puberty and thereafter regularly during adulthood even if the clinical examination is normal. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
External funding of obstetrical publications: citation significance and trends over 2 decades.
Vintzileos, William S; Ananth, Cande V; Vintzileos, Anthony M
2013-08-01
The objective of the study was to identify the external funding status of the most frequently cited obstetrical publications (citation classics) and to assess trends in funded vs nonfunded manuscripts as well as each publication's type of external funding. For the first objective, the citation classics, which were reported in a previous publication, were reviewed to identify their funding status. For the second objective, all pregnancy-related and obstetrical publications from the 2 US-based leading journals, the American Journal of Obstetrics and Gynecology and Obstetrics and Gynecology, were reviewed to identify the funding status and trends between 1989 and 2012. Twenty-seven of 44 of the citation classics (61%) had external funding, whereas only 43% of the reviewed regular (non-citation classic) obstetrical publications had external funding. There was a decreasing trend in the number of obstetrical manuscripts associated with a decreasing trend in the number and proportion of nonfunded manuscripts and an increasing trend in the number and proportion of National Institutes of Health (NIH)-funded manuscripts. Relative to 1989, in 2012 there was a 34.8% decrease in the number of published obstetrical manuscripts, a 59.6% decrease in the number of nonfunded manuscripts, and a 6.8% increase in the number of funded manuscripts accompanied by an 8.2% increase in the number of NIH-funded publications. In the last 9 years (2004-2012), there was a 35.1% increase in the proportion of NIH-funded manuscripts accompanied by an 18.8% decrease in the proportion of non-NIH-funded manuscripts. Our findings provide useful data regarding the importance of securing NIH-based funding for physicians contemplating academic careers in obstetrics. Copyright © 2013 Mosby, Inc. All rights reserved.
van Gelder, C M; van Capelle, C I; Ebbink, B J; Moor-van Nugteren, I; van den Hout, J M P; Hakkesteegt, M M; van Doorn, P A; de Coo, I F M; Reuser, A J J; de Gier, H H W; van der Ploeg, A T
2012-05-01
Classic infantile Pompe disease is an inherited generalized glycogen storage disorder caused by deficiency of lysosomal acid α-glucosidase. If left untreated, patients die before one year of age. Although enzyme-replacement therapy (ERT) has significantly prolonged lifespan, it has also revealed new aspects of the disease. For up to 11 years, we investigated the frequency and consequences of facial-muscle weakness, speech disorders and dysphagia in long-term survivors. Sequential photographs were used to determine the timing and severity of facial-muscle weakness. Using standardized articulation tests and fibreoptic endoscopic evaluation of swallowing, we investigated speech and swallowing function in a subset of patients. This study included 11 patients with classic infantile Pompe disease. Median age at the start of ERT was 2.4 months (range 0.1-8.3 months), and median age at the end of the study was 4.3 years (range 7.7 months -12.2 years). All patients developed facial-muscle weakness before the age of 15 months. Speech was studied in four patients. Articulation was disordered, with hypernasal resonance and reduced speech intelligibility in all four. Swallowing function was studied in six patients, the most important findings being ineffective swallowing with residues of food (5/6), penetration or aspiration (3/6), and reduced pharyngeal and/or laryngeal sensibility (2/6). We conclude that facial-muscle weakness, speech disorders and dysphagia are common in long-term survivors receiving ERT for classic infantile Pompe disease. To improve speech and reduce the risk for aspiration, early treatment by a speech therapist and regular swallowing assessments are recommended.
Recent Advances and Perspectives on Nonadiabatic Mixed Quantum-Classical Dynamics.
Crespo-Otero, Rachel; Barbatti, Mario
2018-05-16
Nonadiabatic mixed quantum-classical (NA-MQC) dynamics methods form a class of computational theoretical approaches in quantum chemistry tailored to investigate the time evolution of nonadiabatic phenomena in molecules and supramolecular assemblies. NA-MQC is characterized by a partition of the molecular system into two subsystems: one to be treated quantum mechanically (usually but not restricted to electrons) and another to be dealt with classically (nuclei). The two subsystems are connected through nonadiabatic coupling terms to enforce self-consistency. A local approximation underlies the classical subsystem, implying that direct dynamics can be simulated, without needing precomputed potential energy surfaces. The NA-MQC split allows reducing computational costs, enabling the treatment of realistic molecular systems in diverse fields. Starting from the three most well-established methods (mean-field Ehrenfest, trajectory surface hopping, and multiple spawning), this review focuses on the NA-MQC dynamics methods and programs developed in the last 10 years. It stresses the relations between approaches and their domains of application. The electronic structure methods most commonly used together with NA-MQC dynamics are reviewed as well. The accuracy and precision of NA-MQC simulations are critically discussed, and general guidelines to choose an adequate method for each application are delivered.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2006-01-01
The radial return and Mendelson methods for integrating the equations of classical plasticity, which appear independently in the literature, are shown to be identical. Both methods are presented in detail as are the specifics of their algorithmic implementation. Results illustrate the methods' equivalence across a range of conditions and address the question of when the methods require iteration in order for the plastic state to remain on the yield surface. FORTRAN code implementations of the radial return and Mendelson methods are provided in the appendix.
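For the one-dimensional case with linear isotropic hardening, the radial return (elastic predictor / plastic corrector) update takes the textbook form sketched below; this is a generic illustration, not the report's FORTRAN implementations, and the material constants are placeholders.

```python
def radial_return_1d(strain_inc, state, E=200e3, H=2e3, sigma_y0=250.0):
    """One radial-return step for 1D elastoplasticity with linear isotropic hardening.
    state = (stress, plastic_strain, accumulated_plastic_strain); units are illustrative."""
    sigma, eps_p, alpha = state
    sigma_trial = sigma + E * strain_inc                  # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y0 + H * alpha)   # trial yield function
    if f_trial <= 0.0:
        return sigma_trial, eps_p, alpha                  # purely elastic step
    dgamma = f_trial / (E + H)                            # plastic corrector (consistency)
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma_new = sigma_trial - E * dgamma * sign           # return to the yield surface
    return sigma_new, eps_p + dgamma * sign, alpha + dgamma

# usage: load well beyond yield in small strain increments
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = radial_return_1d(5e-5, state)
print(state)
```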
NASA Astrophysics Data System (ADS)
Novaes, Douglas D.; Teixeira, Marco A.; Zeli, Iris O.
2018-05-01
Generic bifurcation theory was classically well developed for smooth differential systems, establishing results for k-parameter families of planar vector fields. In the present study we focus on a qualitative analysis of 2-parameter families of planar Filippov systems, assuming that the member Z_{0,0} presents a codimension-two minimal set. Such an object, named an elementary simple two-fold cycle, is characterized by a regular trajectory connecting a visible two-fold singularity to itself, for which the second derivative of the first return map is nonvanishing. We analyze the codimension-two scenario through the exhibition of its bifurcation diagram.
[A case of 63,X/64,XX mosaicism in a subfertile pony mare].
Pieńkowska-Schelling, A; Handler, J; Neuhauser, S; Schelling, C
2016-04-01
The present case report describes a 6-year old subfertile pony mare, which became pregnant after the eleventh artificial insemination. The examination of the ovaries and the uterus did not reveal any abnormal clinical findings and the mare showed a regular oestrous cycle. Based on cytogenetic and molecular genetic analyses it became possible to elucidate the observed subfertility. The mosaic karyotype of the mare consisted of 63,X (20%) and 64,XX (80%) cells. A PCR analysis failed to amplify sequences from the equine SRY gene. The observed classic 63,X/64,XX mosaicism is a plausible explanation for the subfertility of the mare.
First experimental test of a trace formula for billiard systems showing mixed dynamics.
Dembowski, C; Gräf, H D; Heine, A; Hesse, T; Rehfeld, H; Richter, A
2001-04-09
In general, trace formulas relate the density of states for a given quantum mechanical system to the properties of the periodic orbits of its classical counterpart. Here we report for the first time on a semiclassical description of microwave spectra taken from superconducting billiards of the Limaçon family showing mixed dynamics in terms of a generalized trace formula derived by Ullmo et al. [Phys. Rev. E 54, 136 (1996)]. This expression not only describes mixed-typed behavior but also the limiting cases of fully regular and fully chaotic systems and thus presents a continuous interpolation between the Berry-Tabor and Gutzwiller formulas.
Study of CP(N-1) theta-vacua by cluster simulation of SU(N) quantum spin ladders.
Beard, B B; Pepe, M; Riederer, S; Wiese, U-J
2005-01-14
D-theory provides an alternative lattice regularization of the 2D CP(N-1) quantum field theory in which continuous classical fields emerge from the dimensional reduction of discrete SU(N) quantum spins. Spin ladders consisting of n transversely coupled spin chains lead to a CP(N-1) model with a vacuum angle theta=npi. In D-theory no sign problem arises and an efficient cluster algorithm is used to investigate theta-vacuum effects. At theta=pi there is a first order phase transition with spontaneous breaking of charge conjugation symmetry for CP(N-1) models with N>2.
On the Analytical and Numerical Properties of the Truncated Laplace Transform
2014-05-01
This report extends the classical study of the truncated Fourier transform to the truncated Laplace transform L_{a,b}, and the resulting algorithms are applicable to the environments likely to be encountered in applications. Key relations include the eigenfunction identity ((L_{a,b})* ∘ L_{a,b})(u_n)(t) = ∫_a^b u_n(s)/(t + s) ds = α_n^2 u_n(t), an analogous statement for L_{a,b} ∘ (L_{a,b})* acting on functions g in L^2, and the parity property U_n(s) = (C_γ(u_n))(s) = (−1)^n U_n(−s), so that the U_n are even or odd in the regular sense; in particular U_{2j+1}(0) = 0.
Growth and development in children with classic congenital adrenal hyperplasia.
Bonfig, Walter
2017-02-01
Final height outcome in classic congenital adrenal hyperplasia (CAH) has been of interest for many years. With analysis of growth patterns and used glucocorticoid regimens, enhanced treatment strategies have been developed and are still under development. Most of the current reports on final height outcome are confirmative of previous results. Final height data is still reported in cohorts that were diagnosed clinically and not by newborn screening. Clinical diagnosis of CAH leads to delayed diagnosis especially of simple virilizing CAH with significantly advanced bone age resulting in early pubertal development and reduced final height. In contrast salt-wasting CAH is diagnosed at an earlier stage in most cases resulting in better final height outcome in some cohorts. Nevertheless, final height outcome in patients with CAH treated with glucocorticoids is lower than the population norm and also at the lower end of genetic potential. Achievement of regular adult height is still a challenge with conventional glucocorticoid treatment in patients with CAH, which is why new hydrocortisone formulations and new treatment options for CAH are underway.
Solving the patient zero inverse problem by using generalized simulated annealing
NASA Astrophysics Data System (ADS)
Menin, Olavo H.; Bauch, Chris T.
2018-01-01
Identifying patient zero - the initially infected source of a given outbreak - is an important step in epidemiological investigations of both existing and emerging infectious diseases. Here, the use of the Generalized Simulated Annealing algorithm (GSA) to solve the inverse problem of finding the source of an outbreak is studied. The classical disease natural histories susceptible-infected (SI), susceptible-infected-susceptible (SIS), susceptible-infected-recovered (SIR) and susceptible-infected-recovered-susceptible (SIRS) in a regular lattice are addressed. Both the position of patient zero and its time of infection are considered unknown. The algorithm performance with respect to the generalization parameter q̃_v and the fraction ρ of infected nodes for which infection was ascertained is assessed. Numerical experiments show the algorithm is able to retrieve the epidemic source with good accuracy, even when ρ is small, but present no evidence to support that GSA performs better than its classical version. Our results suggest that simulated annealing could be a helpful tool for identifying patient zero in an outbreak where not all cases can be ascertained.
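The generalized (Tsallis-statistics) acceptance rule commonly used in GSA can be sketched as below; the Gaussian proposal and the logarithmic cooling schedule are simplifying assumptions made for illustration, not the algorithm's full visiting distribution or the epidemic source-finding objective of the paper.

```python
import numpy as np

def gsa_accept(delta_E, T, q_a):
    """Generalized simulated annealing acceptance probability (Tsallis statistics).
    Reduces to the Metropolis rule exp(-delta_E/T) in the limit q_a -> 1."""
    if delta_E <= 0.0:
        return 1.0
    base = 1.0 + (q_a - 1.0) * delta_E / T
    if base <= 0.0:
        return 0.0
    return base ** (-1.0 / (q_a - 1.0))

def gsa_minimize(cost, x0, n_iter=5000, q_a=1.5, T0=10.0, sigma=0.5, seed=0):
    """Toy GSA-style search with a Gaussian proposal (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    E = cost(x)
    best_x, best_E = x.copy(), E
    for k in range(1, n_iter + 1):
        T = T0 / np.log(k + 2.0)                       # placeholder cooling schedule
        x_new = x + sigma * rng.normal(size=x.shape)
        E_new = cost(x_new)
        if rng.random() < gsa_accept(E_new - E, T, q_a):
            x, E = x_new, E_new
            if E < best_E:
                best_x, best_E = x.copy(), E
    return best_x, best_E

# usage on a small multimodal test function
print(gsa_minimize(lambda x: np.sum(x**2) + 2.0 * np.sum(np.sin(5.0 * x)), [3.0, -2.0]))
```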
Students with Chronic Conditions: Experiences and Challenges of Regular Education Teachers
ERIC Educational Resources Information Center
Selekman, Janice
2017-01-01
School nurses have observed the increasing prevalence of children with chronic conditions in the school setting; however, little is known about teacher experiences with these children in their regular classrooms. The purpose of this mixed-method study was to describe the experiences and challenges of regular education teachers when they have…
The Temporal Dynamics of Regularity Extraction in Non-Human Primates
ERIC Educational Resources Information Center
Minier, Laure; Fagot, Joël; Rey, Arnaud
2016-01-01
Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm (Saffran, Aslin, & Newport, [Saffran, J. R., 1996]) and the serial response time task (Nissen & Bullemer,…
Modifications of the PCPT method for HJB equations
NASA Astrophysics Data System (ADS)
Kossaczký, I.; Ehrhardt, M.; Günther, M.
2016-10-01
In this paper we revisit a modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification, called the piecewise predicted policy timestepping (PPPT) method, can be significantly faster if properly used. We briefly recapitulate the algorithms of the PCPT and PPPT methods and of the classical implicit method, and apply them to a passport option pricing problem with a non-standard payoff. We present the modifications needed to solve this problem effectively with the PPPT method and compare its performance with the PCPT method and the classical implicit method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kengne, Jacques; Kenmogne, Fabien
2014-12-15
The nonlinear dynamics of fourth-order Silva-Young type chaotic oscillators with flat power spectrum recently introduced by Tamaseviciute and collaborators is considered. In this type of oscillators, a pair of semiconductor diodes in an anti-parallel connection acts as the nonlinear component necessary for generating chaotic oscillations. Based on the Shockley diode equation and an appropriate selection of the state variables, a smooth mathematical model (involving hyperbolic sine and cosine functions) is derived for a better description of both the regular and chaotic dynamics of the system. The complex behavior of the oscillator is characterized in terms of its parameters by using time series, bifurcation diagrams, Lyapunov exponents' plots, Poincaré sections, and frequency spectra. It is shown that the onset of chaos is achieved via the classical period-doubling and symmetry restoring crisis scenarios. Some PSPICE simulations of the nonlinear dynamics of the oscillator are presented in order to confirm the ability of the proposed mathematical model to accurately describe/predict both the regular and chaotic behaviors of the oscillator.
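The hyperbolic-sine nonlinearity follows directly from summing the Shockley currents of the two antiparallel diodes; a minimal sketch, with illustrative device parameters rather than the values used in the paper, is given below.

```python
import numpy as np

def antiparallel_diode_current(v, i_s=2.7e-9, n=1.9, v_t=0.026):
    """Current through a pair of identical antiparallel Shockley diodes:
    i(v) = i_s*(exp(v/(n*v_t)) - 1) - i_s*(exp(-v/(n*v_t)) - 1)
         = 2*i_s*sinh(v/(n*v_t)),
    i.e. the smooth sinh nonlinearity used in the oscillator model."""
    return 2.0 * i_s * np.sinh(v / (n * v_t))

for v in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(v, antiparallel_diode_current(v))
```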
Breaking time reversal in a simple smooth chaotic system.
Tomsovic, Steven; Ullmo, Denis; Nagano, Tatsuro
2003-06-01
Within random matrix theory, the statistics of the eigensolutions depend fundamentally on the presence (or absence) of time reversal symmetry. Accepting the Bohigas-Giannoni-Schmit conjecture, this statement extends to quantum systems with chaotic classical analogs. For practical reasons, much of the supporting numerical studies of symmetry breaking have been done with billiards or maps, and little with simple, smooth systems. There are two main difficulties in attempting to break time reversal invariance in a continuous time system with a smooth potential. The first is avoiding false time reversal breaking. The second is locating a parameter regime in which the symmetry breaking is strong enough to transform the fluctuation properties fully to the broken symmetry case, and yet remain weak enough so as not to regularize the dynamics sufficiently that the system is no longer chaotic. We give an example of a system of two coupled quartic oscillators whose energy level statistics closely match with those of the Gaussian unitary ensemble, and which possesses only a minor proportion of regular motion in its phase space.
A Note on Weak Solutions of Conservation Laws and Energy/Entropy Conservation
NASA Astrophysics Data System (ADS)
Gwiazda, Piotr; Michálek, Martin; Świerczewska-Gwiazda, Agnieszka
2018-03-01
A common feature of systems of conservation laws of continuum physics is that they are endowed with natural companion laws which are in such cases most often related to the second law of thermodynamics. This observation easily generalizes to any symmetrizable system of conservation laws; they are endowed with nontrivial companion conservation laws, which are immediately satisfied by classical solutions. Not surprisingly, weak solutions may fail to satisfy companion laws, which are then often relaxed from equality to inequality and overtake the role of physical admissibility conditions for weak solutions. We want to answer the question: what is a critical regularity of weak solutions to a general system of conservation laws to satisfy an associated companion law as an equality? An archetypal example of such a result was derived for the incompressible Euler system in the context of Onsager's conjecture in the early nineties. This general result can serve as a simple criterion to numerous systems of mathematical physics to prescribe the regularity of solutions needed for an appropriate companion law to be satisfied.
The full Keller-Segel model is well-posed on nonsmooth domains
NASA Astrophysics Data System (ADS)
Horstmann, D.; Meinlschmidt, H.; Rehberg, J.
2018-04-01
In this paper we prove that the full Keller-Segel system, a quasilinear strongly coupled reaction-crossdiffusion system of four parabolic equations, is well-posed in the sense that it always admits a unique local-in-time solution in an adequate function space, provided that the initial values are suitably regular. The proof is done via an abstract solution theorem for nonlocal quasilinear equations by Amann and is carried out for general source terms. It is fundamentally based on recent nontrivial elliptic and parabolic regularity results which hold true even on rather general nonsmooth spatial domains. For space dimensions 2 and 3, this enables us to work in a nonsmooth setting which is not available in classical parabolic systems theory. Apparently, there exists no comparable existence result for the full Keller-Segel system up to now. Due to the large class of possibly nonsmooth domains admitted, we also obtain new results for the ‘standard’ Keller-Segel system consisting of only two equations as a special case. This work is dedicated to Prof Willi Jäger.
The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Chiun-Chang, E-mail: chlee@mail.nhcue.edu.tw
2014-05-15
The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we get the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that the global non-electroneutrality implies pointwise non-electroneutrality.
Radial accretion flows on static spherically symmetric black holes
NASA Astrophysics Data System (ADS)
Chaverra, Eliana; Sarbach, Olivier
2015-08-01
We analyze the steady radial accretion of matter into a nonrotating black hole. Neglecting the self-gravity of the accreting matter, we consider a rather general class of static, spherically symmetric and asymptotically flat background spacetimes with a regular horizon. In addition to the Schwarzschild metric, this class contains certain deformation of it, which could arise in alternative gravity theories or from solutions of the classical Einstein equations in the presence of external matter fields. Modeling the ambient matter surrounding the black hole by a relativistic perfect fluid, we reformulate the accretion problem as a dynamical system, and under rather general assumptions on the fluid equation of state, we determine the local and global qualitative behavior of its phase flow. Based on our analysis and generalizing previous work by Michel, we prove that for any given positive particle density number at infinity, there exists a unique radial, steady-state accretion flow which is regular at the horizon. We determine the physical parameters of the flow, including its accretion and compression rates, and discuss their dependency on the background metric.
Learning about the scale of the solar system using digital planetarium visualizations
NASA Astrophysics Data System (ADS)
Yu, Ka Chun; Sahami, Kamran; Dove, James
2017-07-01
We studied the use of a digital planetarium for teaching relative distances and sizes in introductory undergraduate astronomy classes. Inspired in part by the classic short film The Powers of Ten and large physical scale models of the Solar System that can be explored on foot, we created lectures using virtual versions of these two pedagogical approaches for classes that saw either an immersive treatment in the planetarium or a non-immersive version in the regular classroom (with N = 973 students participating in total). Students who visited the planetarium had not only the greatest learning gains, but their performance increased with time, whereas students who saw the same visuals projected onto a flat display in their classroom showed less retention over time. The gains seen in the students who visited the planetarium reveal that this medium is a powerful tool for visualizing scale over multiple orders of magnitude. However the modest gains for the students in the regular classroom also show the utility of these visualization approaches for the broader category of classroom physics simulations.
The transitional behaviour of avalanches in cohesive granular materials
NASA Astrophysics Data System (ADS)
Quintanilla, M. A. S.; Valverde, J. M.; Castellanos, A.
2006-07-01
We present a statistical analysis of avalanches of granular materials that partially fill a slowly rotated horizontal drum. For large sized noncohesive grains the classical coherent oscillation is reproduced, consisting of a quasi-periodic succession of regularly sized avalanches. As the powder cohesiveness is increased by decreasing the particle size, we observe a gradual crossover to a complex dynamics that resembles the transitional behaviour observed in fusion plasmas. For particle size below ~50 µm, avalanches lose a characteristic size, retain a short term memory and turn gradually decorrelated in the long term as described by a Markov process. In contrast, large grains made cohesive by coating them with adhesive microparticles display a distinct phenomenology, characterized by a quasi-regular succession of well defined small precursors and large relaxation events. The transition from a one-peaked distribution (noncohesive large beads) to a flattened distribution (fine cohesive beads) passing through the two-peaked distribution of cohesive large beads had already been predicted using a coupled-map lattice model, as the relaxation mechanism of grain reorganization becomes dominant to the detriment of inertia.
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
Class of regular bouncing cosmologies
NASA Astrophysics Data System (ADS)
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jinfeng; Cao, Ruifen; Dai, Yumei
Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model into the DPM code, and to extend the ability of DPM to calculate arbitrary incident angles and irregular-inhomogeneous fields. Methods: The dose distribution of the irradiated irregular-inhomogeneous field was calculated using the virtual source and the energy spectrum unfolded from the accelerator measurement data, combined with optimized intensity maps. The irradiation source model of the accelerator was substituted by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of the emitter was decided by the grid intensity. The direction of the emitter was decided by the combination of the virtual source and the emitter emitting position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating the contaminated electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data, and the differences were acceptable (<2% inside the field, 2–3 mm in the penumbra). The dose calculation of the irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of the gamma analysis was 95.1% for peripheral lung cancer. The regular field and the irregular rotational field were both within the range of permitted error. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy the clinical requirement, and it is expected to become a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the China Academy of Science (XDA03040000); National Natural Science Foundation of China (81101132)
Classical and Quantum-Mechanical State Reconstruction
ERIC Educational Resources Information Center
Khanna, F. C.; Mello, P. A.; Revzen, M.
2012-01-01
The aim of this paper is to present the subject of state reconstruction in classical and in quantum physics, a subject that deals with the experimentally acquired information that allows the determination of the physical state of a system. Our first purpose is to explain a method for retrieving a classical state in phase space, similar to that…
ERIC Educational Resources Information Center
Zhong, Zhenshan; Sun, Mengyao
2018-01-01
The power of general education curriculum comes from the enduring classics. The authors apply research methods such as questionnaire survey, interview, and observation to investigate the state of general education curriculum implementation at N University and analyze problems faced by incorporating classics. Based on this, the authors propose that…
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on the direct product of graphs is obtained by multiplying the probabilities on the corresponding sub-graphs; this method is useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also show that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
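The multiplication rule can be checked numerically: if the product graph's generator is the Kronecker sum of the factors' generators (the convention assumed here, which may differ in detail from the paper's definition of the direct product), then both the classical and the quantum walk probabilities factorize. A small sketch on the product of two cycles follows.

```python
import numpy as np
from scipy.linalg import expm

def cycle_adjacency(n):
    """Adjacency matrix of the n-cycle, the Cayley graph of Z_n with generators {+1, -1}."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
    return A

def classical_walk_prob(A, t, start=0):
    """Continuous-time classical random walk generated by the Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    p0 = np.zeros(len(A)); p0[start] = 1.0
    return expm(-L * t) @ p0

def quantum_walk_prob(A, t, start=0):
    """Continuous-time quantum walk generated by the adjacency matrix."""
    psi0 = np.zeros(len(A), dtype=complex); psi0[start] = 1.0
    psi = expm(-1j * A * t) @ psi0
    return np.abs(psi) ** 2

# factorization check on the product of two cycles (generator = Kronecker sum)
n1, n2, t = 4, 5, 1.3
A1, A2 = cycle_adjacency(n1), cycle_adjacency(n2)
A_prod = np.kron(A1, np.eye(n2)) + np.kron(np.eye(n1), A2)
print(np.allclose(quantum_walk_prob(A_prod, t),
                  np.kron(quantum_walk_prob(A1, t), quantum_walk_prob(A2, t))))
print(np.allclose(classical_walk_prob(A_prod, t),
                  np.kron(classical_walk_prob(A1, t), classical_walk_prob(A2, t))))
```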
NASA Astrophysics Data System (ADS)
Ghaderi, A. H.; Darooneh, A. H.
The behavior of nonlinear systems can be analyzed by artificial neural networks, and air temperature change is one example of such a nonlinear system. In this work, a new neural network method is proposed for forecasting the maximum air temperature in two cities. In this method, the regular graph concept is used to construct partially connected neural networks that have regular structures. The learning results of a fully connected ANN and of networks built with the proposed method are compared. In some cases, the proposed method gives better results than the conventional ANN. After selecting the best network, the effect of the number of input patterns on the prediction is studied, and the results show that increasing the number of input patterns has a direct effect on the prediction accuracy.
Phase retrieval using regularization method in intensity correlation imaging
NASA Astrophysics Data System (ADS)
Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin
2014-11-01
Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based, low-precision mirrors. In the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image, but the algorithms currently used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem for a multi-dimensional function. A new error function is designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and produce better images, especially under low-SNR conditions.
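The regularized error-function idea can be pictured, very schematically and in one dimension, as a data-fit term on the measured Fourier intensities plus a smoothness penalty, minimized under a non-negativity constraint. The sketch below is only an illustration of that idea; the authors' actual error function and priors are not reproduced, and the quadratic-smoothness regularizer and the weight `lam` are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 32
true_obj = np.zeros(n); true_obj[12:20] = 1.0            # simple 1D object
I_meas = np.abs(np.fft.fft(true_obj)) ** 2               # measured Fourier intensities
I_meas += rng.normal(scale=0.05 * I_meas.max(), size=n)  # additive noise

lam = 1e-2                                               # regularization weight (assumed)

def cost(x):
    data_fit = np.sum((np.abs(np.fft.fft(x)) ** 2 - I_meas) ** 2)
    smooth = np.sum(np.diff(x) ** 2)                     # quadratic smoothness prior
    return data_fit + lam * smooth

x0 = rng.random(n)
res = minimize(cost, x0, method="L-BFGS-B",
               bounds=[(0.0, None)] * n)                 # non-negativity constraint
print(res.fun, res.x.round(2))
```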
[Study on expression styles of meridian diseases in the Internal Classic].
Jia-Jie; Zhao, Jing-sheng
2007-01-01
To probe the expression styles of meridian diseases in the Internal Classic, the expression styles of meridian diseases in the Internal Classic were classified using literature study methods. They comprise four types: the twelve meridians, the six channels on the foot, the indications of acupoints, and the diseases of the zang and fu organs. The understanding of meridian diseases in the Lingshu Channels by later generations has certain historical limitations.
A quantum-classical theory with nonlinear and stochastic dynamics
NASA Astrophysics Data System (ADS)
Burić, N.; Popović, D. B.; Radonjić, M.; Prvanović, S.
2014-12-01
The method of constrained dynamical systems on the quantum-classical phase space is utilized to develop a theory of quantum-classical hybrid systems. The effects of the classical degrees of freedom on the quantum part are modeled using an appropriate constraint, and the interaction also includes the effects of neglected degrees of freedom. The dynamical law of the theory is given in terms of nonlinear stochastic differential equations with Hamiltonian and gradient terms. The theory provides a successful dynamical description of the collapse during quantum measurement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donangelo, R.J.
An integral representation for the classical limit of the quantum mechanical S-matrix is developed and applied to heavy-ion Coulomb excitation and Coulomb-nuclear interference. The method combines the quantum principle of superposition with exact classical dynamics to describe the projectile-target system. A detailed consideration of the classical trajectories and of the dimensionless parameters that characterize the system is carried out. The results are compared, where possible, to exact quantum mechanical calculations and to conventional semiclassical calculations. It is found that in the case of backscattering the classical limit S-matrix method is able to almost exactly reproduce the quantum-mechanical S-matrix elements, and therefore the transition probabilities, even for projectiles as light as protons. The results also suggest that this approach should be a better approximation for heavy-ion multiple Coulomb excitation than earlier semiclassical methods, due to a more accurate description of the classical orbits in the electromagnetic field of the target nucleus. Calculations using this method indicate that the rotational excitation probabilities in the Coulomb-nuclear interference region should be very sensitive to the details of the potential at the surface of the nucleus, suggesting that heavy-ion rotational excitation could constitute a sensitive probe of the nuclear potential in this region. The application to other problems as well as the present limits of applicability of the formalism are also discussed.
Comparison of adaptive critic-based and classical wide-area controllers for power systems.
Ray, Swakshar; Venayagamoorthy, Ganesh Kumar; Chaudhuri, Balarko; Majumder, Rajat
2008-08-01
An adaptive critic design (ACD)-based damping controller is developed for a thyristor-controlled series capacitor (TCSC) installed in a power system with multiple poorly damped interarea modes. The performance of this ACD computational-intelligence-based method is compared with two classical techniques: observer-based state-feedback (SF) control and linear matrix inequality (LMI)-H∞ robust control. Remote measurements are used as feedback signals to the wide-area damping controller for modulating the compensation of the TCSC. The classical methods use a linearized model of the system whereas the ACD method is purely measurement-based, leading to a nonlinear controller with fixed parameters. A comparative analysis of the controllers' performances is carried out under different disturbance scenarios. The ACD-based design has shown promising performance with very little knowledge of the system compared to classical model-based controllers. This paper also discusses the advantages and disadvantages of the ACD, SF, and LMI-H∞ approaches.
Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations
NASA Astrophysics Data System (ADS)
Poleshchikov, S. M.
2018-03-01
Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.
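For orientation only, the sketch below integrates the unregularized planar circular restricted three-body problem in the rotating frame with an embedded Runge-Kutta integrator (SciPy's RK45, of the same family as the Runge-Kutta-Fehlberg scheme cited in the abstract); the L-matrix regularized canonical equations themselves are not reproduced here, and the mass parameter and initial state are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.01215            # approximate Earth-Moon mass parameter

def cr3bp(t, s):
    """Planar CR3BP in the rotating frame; s = (x, y, vx, vy), nondimensional units."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)            # distance to the larger primary
    r2 = np.hypot(x - 1 + mu, y)        # distance to the smaller primary
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

s0 = [0.5, 0.0, 0.0, 0.8]               # illustrative initial state
sol = solve_ivp(cr3bp, (0.0, 20.0), s0, method="RK45",
                rtol=1e-10, atol=1e-12, dense_output=True)
print(sol.y[:2, -1])                     # final position in the rotating frame
```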
ERIC Educational Resources Information Center
Reetz, Linda J.; Hoover, John H.
Intended for use in preservice or inservice training of regular secondary educators, the module examines principles of communication, assessment, teaching methods, and classroom management through text, an annotated bibliography, and overhead masters. The first section covers communicating with handicapped students, their parents, and other…
Metal Ion Modeling Using Classical Mechanics
2017-01-01
Metal ions play significant roles in numerous fields including chemistry, geochemistry, biochemistry, and materials science. With computational tools increasingly becoming important in chemical research, methods have emerged to effectively face the challenge of modeling metal ions in the gas, aqueous, and solid phases. Herein, we review both quantum and classical modeling strategies for metal ion-containing systems that have been developed over the past few decades. This Review focuses on classical metal ion modeling based on unpolarized models (including the nonbonded, bonded, cationic dummy atom, and combined models), polarizable models (e.g., the fluctuating charge, Drude oscillator, and the induced dipole models), the angular overlap model, and valence bond-based models. Quantum mechanical studies of metal ion-containing systems at the semiempirical, ab initio, and density functional levels of theory are reviewed as well with a particular focus on how these methods inform classical modeling efforts. Finally, conclusions and future prospects and directions are offered that will further enhance the classical modeling of metal ion-containing systems. PMID:28045509
Spatio-Temporal Regularization for Longitudinal Registration to Subject-Specific 3d Template
Guizard, Nicolas; Fonov, Vladimir S.; García-Lorenzo, Daniel; Nakamura, Kunio; Aubert-Broche, Bérengère; Collins, D. Louis
2015-01-01
Neurodegenerative diseases such as Alzheimer's disease present subtle anatomical brain changes before the appearance of clinical symptoms. Manual structure segmentation is long and tedious and although automatic methods exist, they are often performed in a cross-sectional manner where each time-point is analyzed independently. With such analysis methods, bias, error and longitudinal noise may be introduced. Noise due to MR scanners and other physiological effects may also introduce variability in the measurement. We propose to use 4D non-linear registration with spatio-temporal regularization to correct for potential longitudinal inconsistencies in the context of structure segmentation. The major contribution of this article is the use of individual template creation with spatio-temporal regularization of the deformation fields for each subject. We validate our method with different sets of real MRI data, compare it to available longitudinal methods such as FreeSurfer, SPM12, QUARC, TBM, and KNBSI, and demonstrate that spatially local temporal regularization yields more consistent rates of change of global structures resulting in better statistical power to detect significant changes over time and between populations. PMID:26301716
NASA Astrophysics Data System (ADS)
Petržala, Jaromír
2018-07-01
The knowledge of the emission function of a city is crucial for simulating sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed; in principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving this inverse problem, in particular testing the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for the calculation of sky spectral radiance in the form of a functional of the emission spectral radiance. Subsequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and perturbed by random errors. The results demonstrate that second-order Tikhonov regularization, together with choosing the regularization parameter by the L-curve maximum-curvature criterion, provides solutions in good agreement with the assumed model emission functions.
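For a discretized linear version of such an inverse problem, second-order Tikhonov regularization and an L-curve scan look roughly as below. This is a generic sketch, not the paper's radiative model: the forward matrix `A`, the second-difference operator `D2`, and the grid of regularization parameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 60, 40
A = rng.normal(size=(m, n)) @ np.diag(1.0 / (1.0 + np.arange(n)))  # smoothing forward operator
x_true = np.sin(np.linspace(0, np.pi, n))                           # smooth "emission function"
b = A @ x_true + rng.normal(scale=1e-3, size=m)                     # noisy sky radiances

# Second-order Tikhonov: minimize ||A x - b||^2 + lam^2 ||D2 x||^2
D2 = np.diff(np.eye(n), n=2, axis=0)                                # second-difference operator

def solve(lam):
    lhs = A.T @ A + lam**2 * D2.T @ D2
    return np.linalg.solve(lhs, A.T @ b)

lams = np.logspace(-6, 1, 40)
residual = [np.linalg.norm(A @ solve(l) - b) for l in lams]
seminorm = [np.linalg.norm(D2 @ solve(l)) for l in lams]
# The L-curve is log(residual) vs log(seminorm); its corner (maximum curvature)
# indicates a reasonable value of the regularization parameter lam.
for l, r, s in zip(lams[::8], residual[::8], seminorm[::8]):
    print(f"lam={l:8.1e}  ||Ax-b||={r:.3e}  ||D2 x||={s:.3e}")
```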
Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui
2015-01-19
Due to limitations of the instrument and imaging optics, it is difficult to acquire high-spatial-resolution hyperspectral imagery (HSI). Super-resolution (SR) imaging aims at inferring high-quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high-resolution dictionary we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and the sensing matrix; thus, a sparsity- and incoherence-restricted dictionary learning method is proposed to achieve a more efficient sparse representation. Second, a variational regularization model combining a spatial sparsity regularization term and a new local spectral-similarity-preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high-spatial-resolution HSI reconstructed by the proposed method outperforms the results of other well-known methods in terms of both objective measurements and visual evaluation.
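As a much-reduced illustration of the dictionary-learning ingredient only (not the paper's coupled spatial-spectral model), one can learn an overcomplete patch dictionary and sparse-code new patches with scikit-learn; the patch size, dictionary size, and sparsity settings below are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(3)
image = rng.random((64, 64))                          # stand-in for one HSI band

patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                    # remove per-patch means

dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(X).transform(X)                      # sparse codes of the patches
recon = codes @ dico.components_                      # reconstructed patches
print("nonzero coefficients per patch:", (codes != 0).sum(axis=1).mean())
print("reconstruction MSE:", np.mean((recon - X) ** 2))
```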
Iterative image reconstruction that includes a total variation regularization for radial MRI.
Kojima, Shinya; Shinohara, Hiroyuki; Hashimoto, Takeyuki; Hirata, Masami; Ueno, Eiko
2015-07-01
This paper presents an iterative image reconstruction method for radial encodings in MRI based on a total variation (TV) regularization. The algebraic reconstruction method combined with total variation regularization (ART_TV) is implemented with a regularization parameter specifying the weight of the TV term in the optimization process. We used numerical simulations of a Shepp-Logan phantom, as well as experimental imaging of a phantom that included a rectangular-wave chart, to evaluate the performance of ART_TV, and to compare it with that of the Fourier transform (FT) method. The trade-off between spatial resolution and signal-to-noise ratio (SNR) was investigated for different values of the regularization parameter by experiments on a phantom and a commercially available MRI system. ART_TV was inferior to the FT with respect to the evaluation of the modulation transfer function (MTF), especially at high frequencies; however, it outperformed the FT with regard to the SNR. In accordance with the results of SNR measurement, visual impression suggested that the image quality of ART_TV was better than that of the FT for reconstruction of a noisy image of a kiwi fruit. In conclusion, ART_TV provides radial MRI with improved image quality for low-SNR data; however, the regularization parameter in ART_TV is a critical factor for obtaining improvement over the FT.
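The alternation described, an algebraic (ART/Kaczmarz) data-consistency update followed by a total-variation smoothing step weighted by the regularization parameter, can be sketched for a generic linear system as below. This is not the authors' implementation; the step sizes, iteration counts, and the smoothed TV gradient are assumptions.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of a 2D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])     # forward differences, Neumann boundary
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
    return -div

def art_tv(A, b, shape, n_iter=20, relax=0.5, tv_weight=0.02, tv_steps=5):
    """ART (Kaczmarz) sweeps alternated with TV gradient-descent steps."""
    x = np.zeros(A.shape[1])
    row_norms = (A**2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):                    # one ART sweep over all equations
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        img = x.reshape(shape)
        for _ in range(tv_steps):                      # TV regularization steps
            img -= tv_weight * tv_gradient(img)
        x = img.ravel()
    return x.reshape(shape)

rng = np.random.default_rng(4)
shape = (16, 16)
x_true = np.zeros(shape); x_true[5:11, 5:11] = 1.0     # piecewise-constant phantom
A = rng.normal(size=(200, x_true.size))                # stand-in system matrix
b = A @ x_true.ravel() + rng.normal(scale=0.5, size=200)
print("mean abs error:", np.abs(art_tv(A, b, shape) - x_true).mean())
```

In this sketch the parameter `tv_weight` plays the role the abstract assigns to the regularization parameter: larger values suppress noise at the cost of spatial resolution.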