Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
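As a concrete illustration of why the quadratic loss is attractive, the sketch below (in Python) applies proximal gradient descent to ℓ1-regularized score matching in the Gaussian special case, where the empirical loss reduces to 0.5*tr(K S K) - tr(K) for the precision matrix K and sample covariance S. This is a minimal, hypothetical sketch under those assumptions, not the estimator of Lin et al.; the function name, step-size rule, and symmetrization step are choices made here for illustration.

    import numpy as np

    def regularized_score_matching_gaussian(X, lam, n_iter=500, step=None):
        """Sketch: l1-regularized score matching for a Gaussian graphical model.
        Loss: 0.5*tr(K S K) - tr(K) + lam*||offdiag(K)||_1, with S the sample covariance."""
        n, p = X.shape
        S = X.T @ X / n                         # sample covariance (data assumed centered)
        K = np.eye(p)                           # initial precision-matrix estimate
        if step is None:
            step = 1.0 / np.linalg.norm(S, 2)   # 1/L, L = Lipschitz constant of the gradient
        offdiag = ~np.eye(p, dtype=bool)
        for _ in range(n_iter):
            grad = 0.5 * (S @ K + K @ S) - np.eye(p)   # gradient of the quadratic loss
            K = K - step * grad
            # soft-threshold off-diagonal entries (proximal step for the l1 penalty)
            K[offdiag] = np.sign(K[offdiag]) * np.maximum(np.abs(K[offdiag]) - step * lam, 0.0)
            K = 0.5 * (K + K.T)                 # keep the estimate symmetric
        return K

    # usage: K_hat = regularized_score_matching_gaussian(np.random.randn(200, 10), lam=0.1)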
Zhang, Jian-Hua; Böhme, Johann F
2007-11-01
In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.
Regularization method for large eddy simulations of shock-turbulence interactions
NASA Astrophysics Data System (ADS)
Braun, N. O.; Pullin, D. I.; Meiron, D. I.
2018-05-01
The rapid change in scales over a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k^{-5/3}, and is computed locally in physical space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post-shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.
Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes
NASA Astrophysics Data System (ADS)
Bachlechner, Martina E.
2009-03-01
Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.
Moncho, Salvador; Autschbach, Jochen
2010-01-12
A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S
2014-05-01
We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.
Reducing errors in the GRACE gravity solutions using regularization
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
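To illustrate the parameter-choice idea on a scale where no Lanczos projection is needed, the following hypothetical Python sketch traces the Tikhonov L-curve with a plain SVD on a small toy problem and picks the corner by maximum curvature; the GRACE-scale implementation described above replaces the SVD with Lanczos bidiagonalization and a tailored regularization matrix, neither of which is reproduced here.

    import numpy as np

    def tikhonov_l_curve(A, b, lambdas):
        """Sketch: trace the L-curve for Tikhonov regularization via the SVD and
        return residual norms, solution norms, and the max-curvature (corner) lambda."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        res, sol = [], []
        for lam in lambdas:
            x = Vt.T @ (s * beta / (s**2 + lam**2))   # filtered SVD solution
            res.append(np.linalg.norm(A @ x - b))
            sol.append(np.linalg.norm(x))
        rho, eta = np.log(np.array(res)), np.log(np.array(sol))
        d1r, d1e = np.gradient(rho), np.gradient(eta)
        d2r, d2e = np.gradient(d1r), np.gradient(d1e)
        curvature = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
        return res, sol, lambdas[int(np.argmax(curvature))]

    # toy ill-conditioned problem (not GRACE data)
    A = np.vander(np.linspace(0, 1, 50), 20)
    b = A @ np.ones(20) + 1e-3 * np.random.randn(50)
    _, _, lam_corner = tikhonov_l_curve(A, b, np.logspace(-8, 1, 60))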
Generic effective source for scalar self-force calculations
NASA Astrophysics Data System (ADS)
Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter
2012-05-01
A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.
ERIC Educational Resources Information Center
Lee, Young-Jin
2017-01-01
Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
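Because the abstract is truncated, only the general modeling step can be sketched; a hypothetical example of regularized (L2-penalized) logistic regression for predicting problem-solving success from invented activity features, written in Python with scikit-learn, might look as follows.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # hypothetical features: time on task, number of hints requested, number of attempts
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([0.8, -1.2, -0.5]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

    # L2-regularized logistic regression; C is the inverse of the regularization strength
    model = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000)
    print(cross_val_score(model, X, y, cv=5).mean())   # cross-validated accuracy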
Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction
Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.
2016-01-01
X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
Graded meshes in bio-thermal problems with transmission-line modeling method.
Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G
2014-10-01
In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques, including graded meshes, which made the computations 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, resulting in more realistic modeling of complex problems, is introduced. Also, a new way of calculating an error parameter is introduced. The calculated temperatures between nodes were compared against results from the literature and agreed within less than 1% difference. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential in heat transfer of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hegedűs, Árpád
2018-03-01
In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator \\overline{Ψ}Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators being located at neighboring sites of the lattice. The operator \\overline{Ψ}Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can be computed also from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.
A Computational Model of Early Argument Structure Acquisition
ERIC Educational Resources Information Center
Alishahi, Afra; Stevenson, Suzanne
2008-01-01
How children go about learning the general regularities that govern language, as well as keeping track of the exceptions to them, remains one of the challenging open questions in the cognitive science of language. Computational modeling is an important methodology in research aimed at addressing this issue. We must determine appropriate learning…
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction are to minimize the l2-norm of the desired force. However, these traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem in moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments including the small-scale or medium-scale single impact force reconstruction and the relatively large-scale consecutive impact force reconstruction are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust whether in the single impact force reconstruction or in the consecutive impact force reconstruction.
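For readers who want to see the sparse deconvolution formulation in code, the Python sketch below solves min_f 0.5*||H f - y||^2 + lam*||f||_1 with the simple ISTA iteration rather than the authors' PDIPM solver; the impulse response, signal length, and penalty weight are invented for illustration.

    import numpy as np
    from scipy.linalg import toeplitz

    def ista_l1_deconvolution(H, y, lam, n_iter=2000):
        """Sketch: solve min_f 0.5*||H f - y||^2 + lam*||f||_1 by ISTA."""
        L = np.linalg.norm(H, 2) ** 2              # Lipschitz constant of the gradient
        f = np.zeros(H.shape[1])
        for _ in range(n_iter):
            g = H.T @ (H @ f - y)                  # gradient of the quadratic data term
            f = f - g / L
            f = np.sign(f) * np.maximum(np.abs(f) - lam / L, 0.0)   # soft thresholding
        return f

    # toy example: two sparse impacts filtered through an assumed decaying impulse response
    n = 400
    h = np.exp(-np.arange(n) / 20.0) * np.sin(np.arange(n) / 5.0)
    H = toeplitz(h, np.zeros(n))                   # lower-triangular (causal) convolution matrix
    f_true = np.zeros(n); f_true[[60, 220]] = [1.0, 0.6]
    y = H @ f_true + 0.01 * np.random.randn(n)
    f_hat = ista_l1_deconvolution(H, y, lam=0.05)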
Miehe, C; Teichtmeister, S; Aldakheel, F
2016-04-28
This work outlines a novel variational-based theory for the phase-field modelling of ductile fracture in elastic-plastic solids undergoing large strains. The phase-field approach regularizes sharp crack surfaces within a pure continuum setting by a specific gradient damage modelling. It is linked to a formulation of gradient plasticity at finite strains. The framework includes two independent length scales which regularize both the plastic response as well as the crack discontinuities. This ensures that the damage zones of ductile fracture are inside of plastic zones, and guarantees on the computational side a mesh objectivity in post-critical ranges. © 2016 The Author(s).
The generative power of weighted one-sided and regular sticker systems
NASA Astrophysics Data System (ADS)
Siang, Gan Yee; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod
2014-06-01
Sticker systems were introduced in 1998 as one of the DNA computing models, based on the recombination behavior of DNA molecules. The Watson-Crick complementarity principle of DNA molecules is used abstractly in sticker systems to perform computations. In this paper, the generative power of weighted one-sided sticker systems and weighted regular sticker systems is investigated. Moreover, the relationship of the families of languages generated by these two variants of sticker systems to the Chomsky hierarchy is also presented.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited its computational efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited superior performance over both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
NASA Astrophysics Data System (ADS)
Uieda, Leonardo; Barbosa, Valéria C. F.
2017-01-01
Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worst in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
Chen, Ying; Pham, Tuan D
2013-05-15
We apply for the first time the sample entropy (SampEn) and the regularity dimension model for measuring signal complexity to quantify the structural complexity of the brain on MRI. The concept of the regularity dimension is based on the theory of chaos for studying nonlinear dynamical systems, where power laws and an entropy measure are adopted to develop the regularity dimension for modeling a mathematical relationship between the frequencies with which information about signal regularity changes at various scales. The sample entropy and regularity dimension of MRI-based brain structural complexity are computed for elderly adults with early Alzheimer's disease (AD) and age- and gender-matched non-demented controls, as well as for a wide range of ages from young people to elderly adults. A significantly higher global cortical structural complexity is detected in AD individuals (p<0.001). The increases of SampEn and the regularity dimension are also found to accompany aging, which might indicate an age-related exacerbation of cortical structural irregularity. The proposed model can potentially be used as an imaging biomarker for early prediction of AD and age-related cognitive decline. Copyright © 2013 Elsevier B.V. All rights reserved.
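For reference, a compact brute-force implementation of the sample entropy measure used here is sketched below in Python: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates that match within tolerance r (Chebyshev distance) and A counts the corresponding length-(m+1) matches. This is a generic illustration, not the authors' pipeline, and the default m and r values are conventional choices rather than those of the paper.

    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sketch: brute-force SampEn(m, r) of a 1-D signal, with r = r_factor * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            count = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
                count += np.sum(d <= r) - 1                           # exclude the self-match
            return count
        B = count_matches(m)
        A = count_matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # usage: sample_entropy(np.random.randn(500))   # higher values indicate less regularity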
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
NASA Astrophysics Data System (ADS)
Tucker, Laura Jane
Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis colonies in agar plates form two classes of patterns (morphotypes). The first class, called the dendritic morphotype, has radially directed branches. The second class, called the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, a suitable computational approach was not known for solving a continuum chiral model. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.
Schapiro, Anna C; Turk-Browne, Nicholas B; Botvinick, Matthew M; Norman, Kenneth A
2017-01-05
A growing literature suggests that the hippocampus is critical for the rapid extraction of regularities from the environment. Although this fits with the known role of the hippocampus in rapid learning, it seems at odds with the idea that the hippocampus specializes in memorizing individual episodes. In particular, the Complementary Learning Systems theory argues that there is a computational trade-off between learning the specifics of individual experiences and regularities that hold across those experiences. We asked whether it is possible for the hippocampus to handle both statistical learning and memorization of individual episodes. We exposed a neural network model that instantiates known properties of hippocampal projections and subfields to sequences of items with temporal regularities. We found that the monosynaptic pathway-the pathway connecting entorhinal cortex directly to region CA1-was able to support statistical learning, while the trisynaptic pathway-connecting entorhinal cortex to CA1 through dentate gyrus and CA3-learned individual episodes, with apparent representations of regularities resulting from associative reactivation through recurrence. Thus, in paradigms involving rapid learning, the computational trade-off between learning episodes and regularities may be handled by separate anatomical pathways within the hippocampus itself.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Modeling Regular Replacement for String Constraint Solving
NASA Technical Reports Server (NTRS)
Fu, Xiang; Li, Chung-Chih
2010-01-01
Bugs in user input sanitization of software systems often lead to vulnerabilities. Among them, many are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FST. A compact representation of FST is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
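The semantic distinctions being modeled can be seen even in an ordinary regex engine; the small Python example below (illustrative only, unrelated to the SUSHI solver) contrasts greedy and reluctant replacement on the same input.

    import re

    s = "<a><b><c>"
    print(re.sub(r"<.*>", "X", s))    # greedy:    ".*" spans the whole string -> "X"
    print(re.sub(r"<.*?>", "X", s))   # reluctant: matches each tag separately -> "XXX"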
ERIC Educational Resources Information Center
Kiraz, George Anton
This book presents a tractable computational model that can cope with complex morphological operations, especially in Semitic languages, and less complex morphological systems present in Western languages. It outlines a new generalized regular rewrite rule system that uses multiple finite-state automata to cater to root-and-pattern morphology,…
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining \overline{MS} parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction.
Program summary:
Program title: MSSMdreg2dred.mod
Catalogue identifier: AEKR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: LGPL-License [1]
No. of lines in distributed program, including test data, etc.: 7600
No. of bytes in distributed program, including test data, etc.: 197 629
Distribution format: tar.gz
Programming language: Mathematica, FeynArts
Computer: Any, capable of running Mathematica and FeynArts
Operating system: Any, with running Mathematica, FeynArts installation
Classification: 4.4, 5, 11.1
Subprograms used: ADOW_v1_0 (FeynArts), CPC 140 (2001) 418
Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
Running time: A few seconds to generate typical Feynman graphs with FeynArts.
Discovering Structural Regularity in 3D Geometry
Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.
2010-01-01
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292
20 CFR 226.35 - Deductions from regular annuity rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES; Computing a Spouse or Divorced Spouse Annuity; § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...
Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts
Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.
2013-01-01
To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
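As a minimal illustration of why the circulant assumption is computationally attractive, the Python sketch below performs closed-form Tikhonov-style deblurring in the Fourier domain under periodic convolution; it is a generic FFT deconvolution baseline, not the masked augmented-Lagrangian algorithm proposed above, and it exhibits exactly the wraparound artifacts that the proposed method is designed to avoid.

    import numpy as np

    def circulant_tikhonov_deblur(y, psf, lam):
        """Sketch: x = argmin ||h*x - y||^2 + lam*||x||^2 under periodic convolution,
        solved in closed form with FFTs (wraparound artifacts expected at the borders)."""
        H = np.fft.fft2(psf, s=y.shape)                       # transfer function of the blur
        X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft2(X))

    # usage with an assumed small box blur:
    # psf = np.ones((5, 5)) / 25.0
    # x_hat = circulant_tikhonov_deblur(blurred_image, psf, lam=1e-2)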
Understanding health care communication preferences of veteran primary care users.
LaVela, Sherri L; Schectman, Gordon; Gering, Jeffrey; Locatelli, Sara M; Gawron, Andrew; Weaver, Frances M
2012-09-01
To assess veterans' health communication preferences (in-person, telephone, or electronic) for primary care needs and the impact of computer use on those preferences. Structured patient interviews (n=448). Bivariate analyses examined preferences for primary care by 'infrequent' vs. 'regular' computer users. Only 54% were regular computer users, nearly all of whom had used the internet. 'Telephone' was preferred for 6 of 10 reasons (general medical questions, medication questions and refills, preventive care reminders, scheduling, and test results), although telephone was preferred by markedly fewer regular computer users. 'In-person' was preferred for new/ongoing conditions/symptoms, treatment instructions, and next care steps; these preferences were unaffected by computer use frequency. Among regular computer users, 1/3 preferred 'electronic' for preventive reminders (37%), test results (34%), and refills (32%). For most primary care needs, telephone communication was preferred, although by a greater proportion of infrequent vs. regular computer users. In-person communication was preferred for reasons that may require an exam or visual instructions. About 1/3 of regular computer users prefer electronic communication for routine needs, e.g., preventive reminders, test results, and refills. These findings can be used to plan patient-centered care that is aligned with veterans' preferred health communication methods. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
20 CFR 226.14 - Employee regular annuity rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES; Computing an Employee Annuity; § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...
20 CFR 226.34 - Divorced spouse regular annuity rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES; Computing a Spouse or Divorced Spouse Annuity; § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...
An Unified Multiscale Framework for Planar, Surface, and Curve Skeletonization.
Jalba, Andrei C; Sobiecki, Andre; Telea, Alexandru C
2016-01-01
Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model, and also produces a multiscale representation which allows to progressively simplify, or regularize, all skeleton types. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass transport process from a shape's boundary to its surface skeleton, next to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.
ERIC Educational Resources Information Center
Geigel, Joan; And Others
A self-paced program designed to integrate the use of computers and physics courseware into the regular classroom environment is offered for physics high school teachers in this module on projectile and circular motion. A diversity of instructional strategies including lectures, demonstrations, videotapes, computer simulations, laboratories, and…
NASA Astrophysics Data System (ADS)
Adavi, Zohre; Mashhadi-Hossainali, Masoud
2015-04-01
Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to atmospheric phenomena above the surface of the earth, depends both on space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. By analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used for modeling the water vapor in this approach. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution for this problem. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques. This method benefits from the advantages of both the iterative and direct techniques. Moreover, it is independent of initial values. Based on this property, and using an appropriate resolution for the model, first the number of model elements that are not constrained by GPS measurements is minimized, and then water vapor density is estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
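A tiny, hypothetical analogue of the damped LSQR step can be written in Python with SciPy, where the damp argument plays the role of the Tikhonov parameter; the sparse matrix and parameter values below are placeholders rather than the GNSS tomography design matrix.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    # placeholder sparse design matrix A (slant observations x voxels) and observation vector b
    rng = np.random.default_rng(1)
    A = sp.random(300, 120, density=0.05, random_state=1, format="csr")
    x_true = rng.normal(size=120)
    b = A @ x_true + 0.01 * rng.normal(size=300)

    # damped LSQR: minimizes ||A x - b||^2 + damp^2 * ||x||^2 (Tikhonov-type regularization)
    x_hat = lsqr(A, b, damp=0.1, atol=1e-8, btol=1e-8)[0]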
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternative least-square regression with Tikhonov regularization using a roughening matrix computing the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for the numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”
Code of Federal Regulations, 2012 CFR
2012-07-01
§ 778.208 Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...
29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 778.208 Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...
29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”
Code of Federal Regulations, 2013 CFR
2013-07-01
§ 778.208 Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...
29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 778.208 Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except seven specified types of payments...
29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 778.208 Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
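As a generic starting point for the kind of PSF model discussed above, the Python sketch below builds a rotated, elliptical 2-D Gaussian kernel; the parameter names are assumptions, and the paper's modified profile includes further degrees of freedom beyond this simple form.

    import numpy as np

    def elliptical_gaussian_psf(shape, fwhm_x, fwhm_y, theta):
        """Sketch: rotated elliptical 2-D Gaussian PSF, normalized to unit sum.
        theta is the rotation (radial distortion) angle in radians."""
        sx = fwhm_x / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        sy = fwhm_y / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        yy, xx = np.mgrid[:shape[0], :shape[1]]
        x0, y0 = (shape[1] - 1) / 2.0, (shape[0] - 1) / 2.0
        xr = (xx - x0) * np.cos(theta) + (yy - y0) * np.sin(theta)
        yr = -(xx - x0) * np.sin(theta) + (yy - y0) * np.cos(theta)
        psf = np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))
        return psf / psf.sum()

    # usage: kernel = elliptical_gaussian_psf((15, 15), fwhm_x=3.0, fwhm_y=5.0, theta=np.pi / 6)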
Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology
NASA Astrophysics Data System (ADS)
Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang
2018-03-01
In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
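For reference, the core regularization step for a plain single-phase D2Q9 population (without the color-gradient, forcing, or power-law extensions of the model above) can be sketched in Python as a projection of the non-equilibrium part onto its second-order moment; this is a generic illustration of the regularized collision idea, not the authors' code.

    import numpy as np

    # D2Q9 lattice: discrete velocities, weights, and squared sound speed
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1], [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
    cs2 = 1.0 / 3.0

    def regularize(f, feq):
        """Sketch: rebuild the non-equilibrium part of f from its momentum-flux moment.
        f, feq: arrays of shape (9,) at a single lattice node."""
        fneq = f - feq
        Pi = np.einsum("i,ia,ib->ab", fneq, c, c)              # second moment of f_neq
        Q = np.einsum("ia,ib->iab", c, c) - cs2 * np.eye(2)    # Q_i = c_i c_i - cs^2 I
        fneq_reg = w * np.einsum("iab,ab->i", Q, Pi) / (2.0 * cs2 ** 2)
        return feq + fneq_reg   # regularized populations, then fed to the BGK collision step

    # usage: f_reg = regularize(f_node, feq_node)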
Collective dynamics of 'small-world' networks.
Watts, D J; Strogatz, S H
1998-06-04
Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
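A minimal numerical illustration of the rewiring experiment, assuming the networkx implementation of the Watts-Strogatz model: as the rewiring probability p grows, the average clustering stays high at small p while the characteristic path length collapses, which is the small-world signature.

```python
# Clustering stays high while path length drops as rewiring probability p increases.
import networkx as nx

n, k = 1000, 10
for p in [0.0, 0.01, 0.1, 1.0]:
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"p={p:<5} clustering={C:.3f} path length={L:.2f}")
```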
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
NASA Astrophysics Data System (ADS)
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret as well as to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means to estimate PHM consists in using parametric or non-parametric exponential sine sweeps (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
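Because a Parallel Hammerstein model is linear in its branch impulse responses, a regularized LS estimate can be written in a few lines. The sketch below uses a plain ridge penalty on synthetic data; it is only a schematic stand-in for the impulse-response regularizers discussed in the article, and all sizes and names are illustrative.

```python
# Hedged sketch: a Parallel Hammerstein model y(t) = sum_d (h_d * u^d)(t) is linear in
# the impulse responses h_d, so regularized least squares applies. Plain ridge is used
# here for simplicity instead of kernel-based impulse-response regularizers.
import numpy as np

rng = np.random.default_rng(0)
N, M, D = 2000, 20, 3                       # samples, memory length, polynomial order
u = rng.standard_normal(N)

def regressor(u, M, D):
    # Columns are lagged powers u(t-m)^d for d = 1..D, m = 0..M-1
    cols = []
    for d in range(1, D + 1):
        for m in range(M):
            cols.append(np.roll(u**d, m))
    X = np.stack(cols, axis=1)
    X[:M] = 0.0                             # discard wrap-around rows from np.roll
    return X

h_true = rng.standard_normal(M * D) * np.exp(-0.3 * np.tile(np.arange(M), D))
X = regressor(u, M, D)
y = X @ h_true + 0.05 * rng.standard_normal(N)

lam = 1.0                                   # ridge penalty weight
h_hat = np.linalg.solve(X.T @ X + lam * np.eye(M * D), X.T @ y)
print("parameter error:", np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```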
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
Nonlinear refraction and reflection travel time tomography
Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.
1998-01-01
We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
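The Laplacian smoothing of the model parameters can be illustrated with a single linearized update step: stack a (here randomly generated) ray-path matrix with a scaled 2D Laplacian and solve the damped system. This toy sketch only conveys the regularization idea; the wave-front ray tracing, reflector parametrization and nonlinear conjugate-gradient scheme of the paper are not reproduced.

```python
# Schematic of one linearized, Laplacian-regularized update for travel-time tomography:
# minimize ||J dm - r||^2 + lam * ||L dm||^2 over slowness perturbations dm.
import numpy as np
from scipy.sparse import identity, kron, diags, vstack, csr_matrix
from scipy.sparse.linalg import lsqr

n_x, n_z = 20, 15                            # cells in x and z
ncell = n_x * n_z

def laplacian_2d(n_x, n_z):
    def lap1d(n):
        return diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
    return kron(identity(n_z), lap1d(n_x)) + kron(lap1d(n_z), identity(n_x))

rng = np.random.default_rng(0)
# Fake sparse ray-path-length matrix and travel-time residuals, for illustration only
J = csr_matrix(rng.random((300, ncell)) * (rng.random((300, ncell)) < 0.05))
r = 0.01 * rng.standard_normal(300)

lam = 2.0
L = laplacian_2d(n_x, n_z)
A = vstack([J, np.sqrt(lam) * L])            # stacked system enforces smoothness
b = np.concatenate([r, np.zeros(ncell)])
dm = lsqr(A.tocsr(), b)[0]                   # slowness update for this iteration
print(dm.shape)
```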
Wu, Haifeng; Sun, Tao; Wang, Jingjing; Li, Xia; Wang, Wei; Huo, Da; Lv, Pingxin; He, Wen; Wang, Keyang; Guo, Xiuhua
2013-08-01
The objective of this study was to investigate the combination of radiological and textural features for the differentiation of malignant from benign solitary pulmonary nodules by computed tomography. Features including 13 gray level co-occurrence matrix textural features and 12 radiological features were extracted from 2,117 CT slices, which came from 202 (116 malignant and 86 benign) patients. Lasso-type regularization applied to a nonlinear regression model was used to select predictive features, and a BP artificial neural network was used to build the diagnostic model. Eight radiological and two textural features were obtained after the Lasso-type regularization procedure. The 12 radiological features alone reached an area under the ROC curve (AUC) of 0.84 in differentiating between malignant and benign lesions; the 10 selected features improved the AUC to 0.91. The evaluation results showed that combining selected radiological and textural features appears to be more effective in distinguishing malignant from benign solitary pulmonary nodules by computed tomography.
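A schematic version of this two-stage pipeline on synthetic data might look like the following: an l1-penalized model selects features, and a small neural network is then trained on the selected subset and scored by AUC. The thresholds, sizes and network architecture are placeholders, not the study's actual settings.

```python
# Sketch: Lasso-type (l1) feature selection followed by a neural-network classifier
# evaluated by AUC. Synthetic data stands in for the radiological/textural features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=400, n_features=25, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1), threshold=1e-6
).fit(X_tr, y_tr)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1])
print("selected features:", selector.get_support().sum(), " AUC:", round(auc, 3))
```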
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited-angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited-angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projections suffer from serious data deficiency or various noises, obtaining reconstructed images of acceptable quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited-angle CT problem. The proposed method simultaneously uses a spatial- and Radon-domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data in the process of iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial- and Radon-domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using a spatial-domain regularization model alone. Quantitative evaluations of the results also indicate that the proposed algorithm, which applies a learning strategy, performs better than dual-domain algorithms without a learned regularization model.
ATHENA 3D: A finite element code for ultrasonic wave propagation
NASA Astrophysics Data System (ADS)
Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.
2014-04-01
The understanding of wave propagation phenomena requires use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming. However, advances in computing processor speed and memory allow them to be more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and in particular, heterogeneous and anisotropic materials like welds. It is based on solving elastodynamic equations in the calculation zone expressed in terms of stress and particle velocities. The particularity of the code relies on the fact that the discretization of the calculation domain uses a Cartesian regular 3D mesh while the defect of complex geometry can be described using a separate (2D) mesh using the fictitious domains method. This allows combining the rapidity of regular meshes computation with the capability of modelling arbitrary shaped defects. Furthermore, the calculation domain is discretized with a quasi-explicit time evolution scheme. Thereby only local linear systems of small size have to be solved. The final step to reduce the computation time relies on the fact that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. The performances in terms of calculation time are also presented in the cases of both local computer and computation cluster use.
Meshfree and efficient modeling of swimming cells
NASA Astrophysics Data System (ADS)
Gallagher, Meurig T.; Smith, David J.
2018-05-01
Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.
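As a point of reference, the velocity induced by a set of regularized Stokeslets can be evaluated directly from the kernel. The sketch below assumes one commonly used 3D blob (Cortez-type), for which S_ij = [delta_ij (r^2 + 2 eps^2) + x_i x_j] / (r^2 + eps^2)^{3/2}, scaled by 1/(8 pi mu); it does not reproduce the nearest-neighbour discretization that is the paper's main contribution.

```python
# Velocity field of 3D regularized Stokeslets for one commonly used blob choice
# (assumed Cortez-type kernel). Sketch only; not the paper's discretization.
import numpy as np

def reg_stokeslet_velocity(eval_pts, force_pts, forces, eps, mu=1.0):
    """eval_pts: (M,3), force_pts: (N,3), forces: (N,3) -> velocities (M,3)."""
    dx = eval_pts[:, None, :] - force_pts[None, :, :]        # (M, N, 3)
    r2 = np.sum(dx**2, axis=-1)                              # (M, N)
    denom = (r2 + eps**2) ** 1.5
    iso = (r2 + 2 * eps**2) / denom                          # coefficient of delta_ij
    dyadic = np.einsum('mni,mnj->mnij', dx, dx) / denom[..., None, None]
    S = iso[..., None, None] * np.eye(3) + dyadic            # (M, N, 3, 3) kernel
    return np.einsum('mnij,nj->mi', S, forces) / (8 * np.pi * mu)

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])           # regularized point forces
f = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
grid = np.array([[0.5, 0.5, 0.0], [2.0, 0.0, 0.0]])          # evaluation points
print(reg_stokeslet_velocity(grid, pts, f, eps=0.1))
```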
Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data
Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho
2017-01-01
With rapid urbanization, highly accurate and semantically rich 3D virtualization of building assets becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced in the building boundary retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of the building boundary in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark datasets. The results show that the proposed method can robustly produce accurate regularized 3D building rooftop models. PMID:28335486
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
Composite SAR imaging using sequential joint sparsity
NASA Astrophysics Data System (ADS)
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.
2017-06-01
This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation in terms of real time imaging remain a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as or more efficient than traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
ERIC Educational Resources Information Center
Molenaar, Ivo W.
The technical problems involved in obtaining Bayesian model estimates for the regression parameters in m similar groups are studied. The available computer programs, BPREP (BASIC), and BAYREG, both written in FORTRAN, require an amount of computer processing that does not encourage regular use. These programs are analyzed so that the performance…
Modelling Trial-by-Trial Changes in the Mismatch Negativity
Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.
2013-01-01
The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.
Engemann, Denis A; Gramfort, Alexandre
2015-03-01
Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
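The model-comparison logic can be mimicked with scikit-learn's covariance estimators: fit the empirical covariance, cross-validated shrinkage, Ledoit-Wolf and OAS on a small training set and rank them by the Gaussian log-likelihood of held-out data. The data below are synthetic and none of the M/EEG-specific preprocessing or whitening is included.

```python
# Compare covariance estimators by held-out Gaussian log-likelihood (sketch only).
import numpy as np
from sklearn.covariance import EmpiricalCovariance, ShrunkCovariance, LedoitWolf, OAS
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n_sensors, n_train, n_test = 60, 80, 200          # few samples relative to the dimension
A = rng.standard_normal((n_sensors, n_sensors))
true_cov = A @ A.T / n_sensors + np.eye(n_sensors)
X_train = rng.multivariate_normal(np.zeros(n_sensors), true_cov, size=n_train)
X_test = rng.multivariate_normal(np.zeros(n_sensors), true_cov, size=n_test)

cv_shrink = GridSearchCV(ShrunkCovariance(), {"shrinkage": np.logspace(-3, 0, 20)}, cv=5)
models = {
    "empirical": EmpiricalCovariance().fit(X_train),
    "cv shrinkage": cv_shrink.fit(X_train).best_estimator_,
    "Ledoit-Wolf": LedoitWolf().fit(X_train),
    "OAS": OAS().fit(X_train),
}
for name, est in models.items():
    print(f"{name:>12}: held-out log-likelihood = {est.score(X_test):.2f}")
```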
NASA Astrophysics Data System (ADS)
Susyanto, Nanang
2017-12-01
We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.
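A numerical version of this construction can be sketched under one standard expression for the constrained bound, in which the Fisher information is projected onto the tangent space of the constraint set: with U an orthonormal basis of the null space of the constraint Jacobian G, the constrained CRLB is U (U^T I U)^{-1} U^T. The matrices below are made-up illustrative values, not taken from the paper.

```python
# Constrained vs unconstrained CRLB under an equality constraint (illustrative values).
import numpy as np
from scipy.linalg import null_space

# Example: 3 parameters with Fisher information and one constraint theta_1 + theta_2 = const
fisher = np.array([[4.0, 1.0, 0.0],
                   [1.0, 3.0, 0.5],
                   [0.0, 0.5, 2.0]])
G = np.array([[1.0, 1.0, 0.0]])              # Jacobian of the equality constraint

U = null_space(G)                            # orthonormal basis of the tangent space
crlb_unconstrained = np.linalg.inv(fisher)
crlb_constrained = U @ np.linalg.inv(U.T @ fisher @ U) @ U.T

print(np.diag(crlb_unconstrained))           # per-parameter bounds without the constraint
print(np.diag(crlb_constrained))             # tighter (or equal) bounds under the constraint
```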
Contextuality as a Resource for Models of Quantum Computation with Qubits
NASA Astrophysics Data System (ADS)
Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert
2017-09-01
A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.
Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.
Hu, Yue; Allen, Genevera I
2015-12-01
Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, which greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize electroencephalogram in high spatial resolution. We investigated the inverse technique of cortical dipole imaging using a truncated total least squares (TTLS). The TTLS is a regularization technique to reduce the influence from both the measurement noise and the transfer matrix error caused by the head model distortion. The estimation of the regularization parameter was also investigated based on L-curve. The computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed the TTLS provided the high spatial resolution of cortical dipole imaging.
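For reference, a bare-bones TTLS solver can be written from the SVD of the augmented matrix [A | b]; the truncation level k plays the role of the regularization parameter (selected by an L-curve in the paper, fixed by hand here). This is a generic sketch of the technique, not the cortical-imaging implementation.

```python
# Minimal truncated total least squares (TTLS) for A x ~= b, where both the transfer
# matrix and the data are treated as noisy. Truncation level k is the regularizer.
import numpy as np

def ttls(A, b, k):
    m, n = A.shape
    C = np.hstack([A, b.reshape(-1, 1)])         # augmented matrix [A | b]
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T                                     # (n+1, n+1) right singular vectors
    V12 = V[:n, k:]                              # "noise" subspace, parameter block
    V22 = V[n:, k:]                              # "noise" subspace, data block (1 x (n+1-k))
    return -V12 @ V22.T.ravel() / (V22 @ V22.T).item()

rng = np.random.default_rng(0)
A_true = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
A_obs = A_true + 0.01 * rng.standard_normal((40, 10))   # perturbed transfer matrix
b = A_true @ x_true + 0.01 * rng.standard_normal(40)    # noisy measurements
x_hat = ttls(A_obs, b, k=8)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```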
A Structured Microprogram Set for the SUMC Computer to Emulate the IBM System/360, Model 50
NASA Technical Reports Server (NTRS)
Gimenez, Cesar R.
1975-01-01
Similarities between regular and structured microprogramming were examined. An explanation of machine branching architecture (particularly in the SUMC computer), required for ease of structured microprogram implementation is presented. Implementation of a structured microprogram set in the SUMC to emulate the IBM System/360 is described and a comparison is made between the structured set with a nonstructured set previously written for the SUMC.
ERIC Educational Resources Information Center
Levin, Sidney
1984-01-01
Presents the listing (TRS-80) for a computer program which derives the relativistic equation (employing as a model the concept of a moving clock which emits photons at regular intervals) and calculates transformations of time, mass, and length with increasing velocities (Einstein-Lorentz transformations). (JN)
29 CFR 548.100 - Introductory statement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... simplify bookkeeping and computation of overtime pay. The regular rate is the average hourly earnings of... § 548.100... requirements of computing overtime pay at the regular rate, and to allow, under specific conditions, the use...
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms which are only applicable to an isotropic network, therefore has a strong adaptability to the complex deployment environment. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In data acquisition stage, the training information between nodes of the given network is collected. In modeling stage, the model among the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to the different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
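The modeling stage can be illustrated with a generic regularized extreme learning machine: a random hidden layer followed by a ridge solve for the output weights. The toy 1D regression below stands in for the hop-count-to-distance mapping; the hidden-layer size and penalty are arbitrary placeholder values.

```python
# Regularized extreme learning machine: random hidden layer, ridge solve for output weights.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=100, lam=1e-2):
    W = rng.standard_normal((X.shape[1], n_hidden))      # random input weights (not trained)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                               # hidden-layer outputs
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.linspace(0, 1, 200).reshape(-1, 1)                # toy stand-in for hop-count features
T = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(200)
W, b, beta = elm_fit(X, T)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - T) ** 2)))
```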
20 CFR 226.33 - Spouse regular annuity rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the area-source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed from the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
Collective stochastic coherence in recurrent neuronal networks
NASA Astrophysics Data System (ADS)
Sancristóbal, Belén; Rebollo, Beatriz; Boada, Pol; Sanchez-Vives, Maria V.; Garcia-Ojalvo, Jordi
2016-09-01
Recurrent networks of dynamic elements frequently exhibit emergent collective oscillations, which can show substantial regularity even when the individual elements are considerably noisy. How noise-induced dynamics at the local level coexists with regular oscillations at the global level is still unclear. Here we show that a combination of stochastic recurrence-based initiation with deterministic refractoriness in an excitable network can reconcile these two features, leading to maximum collective coherence for an intermediate noise level. We report this behaviour in the slow oscillation regime exhibited by a cerebral cortex network under dynamical conditions resembling slow-wave sleep and anaesthesia. Computational analysis of a biologically realistic network model reveals that an intermediate level of background noise leads to quasi-regular dynamics. We verify this prediction experimentally in cortical slices subject to varying amounts of extracellular potassium, which modulates neuronal excitability and thus synaptic noise. The model also predicts that this effectively regular state should exhibit noise-induced memory of the spatial propagation profile of the collective oscillations, which is also verified experimentally. Taken together, these results allow us to construe the high regularity observed experimentally in the brain as an instance of collective stochastic coherence.
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
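For concreteness, a direct implementation of the SampEn statistic used to quantify regularity is sketched below (O(N^2), adequate for short schedules); the tolerance r defaults to 0.2 times the series standard deviation, a common but not universal choice, and the test signals are arbitrary.

```python
# Direct sample entropy (SampEn) computation; smaller values indicate a more regular series.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    N = len(x)

    def count_matches(length, n_templates):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1          # exclude the self-match
        return count

    B = count_matches(m, N - m)                  # matches of length m
    A = count_matches(m + 1, N - m)              # matches of length m + 1
    return -np.log(A / B)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = regular + 0.5 * rng.standard_normal(500)
print("regular :", sample_entropy(regular))
print("noisy   :", sample_entropy(noisy))
```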
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
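One simple way to realize this idea, shown below purely as a toy with an l2 regularization functional, is a fixed-point iteration on the stationarity condition of ||Ax - b||^2 * ||x||^2: each pass solves a Tikhonov-like subproblem whose weight is recomputed from the current iterate, so no parameter is chosen beforehand. The authors' formulation is more general than this sketch, and the system below is synthetic.

```python
# Toy fixed-point sketch of multiplicative regularization with R(x) = ||x||^2:
# the effective weight lam = ||A x - b||^2 / ||x||^2 is re-evaluated every iteration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 60))          # transfer matrix (forces -> responses)
x_true = np.zeros(60)
x_true[[5, 20, 45]] = [1.0, -0.5, 0.8]
b = A @ x_true + 0.05 * rng.standard_normal(100)

x = np.linalg.lstsq(A, b, rcond=None)[0]    # initial guess (ordinary least squares)
for _ in range(30):
    lam = np.sum((A @ x - b) ** 2) / np.sum(x ** 2)   # weight adjusts itself each pass
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

print("self-adjusted weight:", lam,
      " relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```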
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction
Huang, Li
2017-01-01
Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs’ potential roles of diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs/diseases’ statistical feature profile and graph theoretical feature profile to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and a L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model’s superior prediction accuracy; and the average AUC of 0.9181+/-0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA. Respectively, 98%, 88%, 96%, 98% and 98% out of the top 50 predictions were validated by experimental evidences. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885
NASA Astrophysics Data System (ADS)
Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé
2006-06-01
An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is to return the WDF that exhibits the largest entropy and to avoid the use of a priori models, which sometimes seem to be more accurate but without any justification.
High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries
NASA Astrophysics Data System (ADS)
Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.
The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles proportional to (d - 3)^{-1} that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
ERIC Educational Resources Information Center
Smith, Glenn Gordon
2012-01-01
This study compared books with embedded computer games (via pentop computers with microdot paper and audio feedback) with regular books with maps, in terms of fifth graders' comprehension and retention of spatial details from stories. One group read a story in hard copy with embedded computer games, the other group read it in regular book format…
Extraordinary Oscillations of an Ordinary Forced Pendulum
ERIC Educational Resources Information Center
Butikov, Eugene I.
2008-01-01
Several well-known and newly discovered counterintuitive regular and chaotic modes of the sinusoidally driven rigid planar pendulum are discussed and illustrated by computer simulations. The software supporting the investigation offers many interesting predefined examples that demonstrate various peculiarities of this famous physical model.…
Yi, Huangjian; Chen, Duofang; Li, Wei; Zhu, Shouping; Wang, Xiaorui; Liang, Jimin; Tian, Jie
2013-05-01
Fluorescence molecular tomography (FMT) is an important imaging technique of optical imaging. The major challenge of the reconstruction method for FMT is the ill-posed and underdetermined nature of the inverse problem. In past years, various regularization methods have been employed for fluorescence target reconstruction. A comparative study between the reconstruction algorithms based on l1-norm and l2-norm for two imaging models of FMT is presented. The first imaging model is adopted by most researchers, where the fluorescent target is of small size to mimic small tissue with fluorescent substance, as demonstrated by the early detection of a tumor. The second model is the reconstruction of distribution of the fluorescent substance in organs, which is essential to drug pharmacokinetics. Apart from numerical experiments, in vivo experiments were conducted on a dual-modality FMT/micro-computed tomography imaging system. The experimental results indicated that l1-norm regularization is more suitable for reconstructing the small fluorescent target, while l2-norm regularization performs better for the reconstruction of the distribution of fluorescent substance.
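The contrast between the two regularizers can be reproduced on a toy underdetermined linear system standing in for the FMT forward model; the sketch below (hypothetical system matrix, sparse vs. broad targets) illustrates why l1 favors the small localized target and l2 the broad distribution. It is not the reconstruction code used in the study.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)

# Hypothetical analogue of the FMT inverse problem: far fewer measurements than unknowns.
n_meas, n_vox = 60, 400
A = rng.standard_normal((n_meas, n_vox))

# Case 1: a small, localized fluorescent target -> sparse x
x_sparse = np.zeros(n_vox)
x_sparse[[50, 51, 52]] = 1.0

# Case 2: a broad distribution of fluorophore -> smooth, non-sparse x
x_broad = np.exp(-0.5 * ((np.arange(n_vox) - 200) / 40.0) ** 2)

for name, x_true in [("small target", x_sparse), ("broad distribution", x_broad)]:
    y = A @ x_true + 0.01 * rng.standard_normal(n_meas)
    x_l1 = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(A, y).coef_
    x_l2 = Ridge(alpha=1.0, fit_intercept=False).fit(A, y).coef_
    err = lambda xh: np.linalg.norm(xh - x_true) / np.linalg.norm(x_true)
    print(f"{name}: relative error l1 = {err(x_l1):.2f}, l2 = {err(x_l2):.2f}")
```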
Gençay, R; Qi, M
2001-01-01
We study the effectiveness of cross validation, Bayesian regularization, early stopping, and bagging in mitigating overfitting and improving generalization for pricing and hedging derivative securities, using daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error (HE) in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta hedging. Furthermore, the standard deviation of the mean squared prediction error (MSPE) of bagging is far less than that of the baseline model in all six years, and the standard deviation of the average HE of bagging is far less than that of the baseline model in five out of six years. We conclude that these techniques should be used at least in cases when no appropriate hints are available.
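For reference, the Black-Scholes benchmark against which the neural-network pricing and delta-hedging errors are measured can be computed in closed form; the sketch below uses illustrative parameter values, not data from the paper.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes European call price and delta (the benchmark model against
    which the neural-network pricing and hedging errors are measured)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    delta = norm.cdf(d1)
    return price, delta

# Illustrative numbers (not from the paper): index level 450, strike 450,
# 3 months to expiry, 5% risk-free rate, 15% volatility.
print(black_scholes_call(450.0, 450.0, 0.25, 0.05, 0.15))
```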
Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions
NASA Astrophysics Data System (ADS)
Leiderman, Karin; Olson, Sarah D.
2016-02-01
The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
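The least squares-QR idea maps directly onto a damped LSQR solve, in which the damping parameter plays the role of the Tikhonov regularization weight. The sketch below uses a random matrix as a stand-in for the acoustic system matrix and is only meant to show the structure of the computation, not the paper's implementation.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def reconstruct_initial_pressure(A, y, damp):
    """Damped LSQR solve of an assumed linear photoacoustic forward model
    A p0 = y, i.e. min ||A p0 - y||^2 + damp^2 ||p0||^2. The Lanczos
    bidiagonalization inside LSQR provides the dimensionality reduction; the
    choice of `damp` plays the role of the Tikhonov regularization parameter."""
    return lsqr(A, y, damp=damp, iter_lim=200)[0]

# Hypothetical small test: random system standing in for the acoustic model.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 1024))
p0_true = np.zeros(1024)
p0_true[400:430] = 1.0                      # "blood vessel" segment
y = A @ p0_true + 0.05 * rng.standard_normal(500)
p0_hat = reconstruct_initial_pressure(A, y, damp=1.0)
```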
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same method is deployed here via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an attractive technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
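One way to picture the simplex-based parameter search is as a Nelder-Mead minimization over log(lambda) of some quality criterion of the damped LSQR solution. In the sketch below the criterion is a simple held-out data misfit, which is an illustrative substitute for the criterion actually used in the paper; the Jacobian J is assumed to be a dense numpy array.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.linalg import lsqr

def holdout_misfit(log_lam, J, d, fit_idx, val_idx):
    """Solve the damped LSQR problem on a subset of measurements and score the
    solution on held-out measurements (illustrative criterion only)."""
    lam = np.exp(np.ravel(log_lam)[0])
    x = lsqr(J[fit_idx], d[fit_idx], damp=lam, iter_lim=100)[0]
    return np.linalg.norm(J[val_idx] @ x - d[val_idx])

def choose_lambda_simplex(J, d, lam0=1e-2, seed=0):
    """Nelder-Mead (simplex) search over log(lambda)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(d))
    split = len(d) * 4 // 5
    fit_idx, val_idx = idx[:split], idx[split:]
    res = minimize(holdout_misfit, x0=[np.log(lam0)],
                   args=(J, d, fit_idx, val_idx), method="Nelder-Mead")
    return float(np.exp(res.x[0]))
```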
Adolescent Marijuana Use Intentions: Using Theory to Plan an Intervention
ERIC Educational Resources Information Center
Sayeed, Sarah; Fishbein, Martin; Hornik, Robert; Cappella, Joseph; Kirkland Ahern, R.
2005-01-01
This paper uses an integrated model of behavior change to predict intentions to use marijuana occasionally and regularly in a US-based national sample of male and female 12 to 18 year olds (n = 600). The model combines key constructs from the theory of reasoned action and social cognitive theory. The survey was conducted on laptop computers, and…
ERIC Educational Resources Information Center
Rodriguez, Luis J.; Torres, M. Ines
2006-01-01
Previous works in English have revealed that disfluencies follow regular patterns and that incorporating them into the language model of a speech recognizer leads to lower perplexities and sometimes to a better performance. Although work on disfluency modeling has been applied outside the English community (e.g., in Japanese), as far as we know…
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2016-03-01
Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
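In the spirit of the first of the three methods (spatially varying certainty aimed at uniform spatial resolution), the classical certainty-based penalty weighting can be sketched as follows; the paper's actual designs, including the detectability-uniform one, are more involved than this.

```python
import numpy as np

def certainty_map(A, w):
    """Classical certainty-based penalty weighting:
    kappa_j = sqrt( sum_i a_ij^2 w_i / sum_i a_ij^2 ).
    Scaling the local regularization strength by kappa_j^2 counteracts the
    interaction between the statistical weights w and the penalty, giving more
    uniform spatial resolution. A is the (dense) system matrix, w the
    statistical weights; this is a sketch of the idea, not the paper's designs."""
    A2 = A ** 2
    num = A2.T @ w                   # sum_i a_ij^2 w_i
    den = A2.sum(axis=0) + 1e-12     # sum_i a_ij^2
    return np.sqrt(num / den)

# Usage (hypothetical A, w):
# kappa = certainty_map(A, w)
# beta_j = beta0 * kappa**2          # spatially varying regularization strength
```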
Electrooptical adaptive switching network for the hypercube computer
NASA Technical Reports Server (NTRS)
Chow, E.; Peterson, J.
1988-01-01
An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributes fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
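The decomposition into a gradient step on the data term and a closed-form update for the nonsmooth penalty is the hallmark of regularized dual averaging. The sketch below shows a Xiao-style RDA loop with an l1 penalty; the stochastic gradient callback standing in for the source-encoded wave-equation gradient is a placeholder, not the paper's solver.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def rda_l1(grad_fn, n_params, n_iter, lam, gamma):
    """Regularized dual averaging with an l1 penalty (one standard variant).

    Each iteration averages stochastic gradients of the data-fidelity term and
    applies a closed-form proximal-type update for the nonsmooth regularizer,
    which is never differentiated. grad_fn(x, t) should return a stochastic
    gradient of the data term at x (e.g. from a randomly source-encoded wave
    simulation in the USCT setting)."""
    x = np.zeros(n_params)
    g_bar = np.zeros(n_params)
    for t in range(1, n_iter + 1):
        g = grad_fn(x, t)
        g_bar += (g - g_bar) / t                         # running gradient average
        x = -(np.sqrt(t) / gamma) * soft_threshold(g_bar, lam)
    return x
```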
Boudreau, François; Walthouwer, Michel Jean Louis; de Vries, Hein; Dagenais, Gilles R; Turbide, Ginette; Bourlaud, Anne-Sophie; Moreau, Michel; Côté, José; Poirier, Paul
2015-10-09
The relationship between physical activity (PA) and cardiovascular disease (CVD) protection is well documented. Numerous factors (e.g. patient motivation, lack of facilities, physician time constraints) can contribute to poor PA adherence. Web-based computer-tailored interventions offer an innovative way to provide tailored feedback and to empower adults to engage in regular moderate- to vigorous-intensity PA. The objective is to describe the rationale, design and content of a web-based computer-tailored PA intervention for Canadian adults enrolled in a randomized controlled trial (RCT). A total of 244 men and women aged between 35 and 70 years, without CVD or physical disability, not participating in regular moderate- to vigorous-intensity PA, and familiar with and having access to a computer at home, were recruited from the Quebec City Prospective Urban and Rural Epidemiological (PURE) study centre. Participants were randomized into two study arms: 1) an experimental group receiving the intervention and 2) a waiting list control group. The fully automated web-based computer-tailored PA intervention consists of seven 10- to 15-min sessions over an 8-week period. The theoretical underpinning of the intervention is based on the I-Change Model. The aim of the intervention was to reach a total of 150 min per week of moderate- to vigorous-intensity aerobic PA. This study will provide useful information before engaging in a large RCT to assess the long-term participation and maintenance of PA, the potential impact of regular PA on CVD risk factors and the cost-effectiveness of a web-based computer-tailored intervention. ISRCTN36353353 registered on 24/07/2014.
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas
2015-12-01
Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end to end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
Yan, Zai You; Hung, Kin Chew; Zheng, Hui
2003-05-01
Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of discretized operator matrix, evaluation of the double surface integrals is reduced to calculate the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.
A gradient enhanced plasticity-damage microplane model for concrete
NASA Astrophysics Data System (ADS)
Zreid, Imadeddin; Kaliske, Michael
2018-03-01
Computational modeling of concrete poses two main types of challenges. The first is the mathematical description of local response for such a heterogeneous material under all stress states, and the second is the stability and efficiency of the numerical implementation in finite element codes. The paper at hand presents a comprehensive approach addressing both issues. Adopting the microplane theory, a combined plasticity-damage model is formulated and regularized by an implicit gradient enhancement. The plasticity part introduces a new microplane smooth 3-surface cap yield function, which provides a stable numerical solution within an implicit finite element algorithm. The damage part utilizes a split, which can describe the transition of loading between tension and compression. Regularization of the model by the implicit gradient approach eliminates the mesh sensitivity and numerical instabilities. Identification methods for model parameters are proposed and several numerical examples of plain and reinforced concrete are carried out for illustration.
Learning rational temporal eye movement strategies.
Hoppe, David; Rothkopf, Constantin A
2016-07-19
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
Evaluation of Computer Based Testing in lieu of Regular Examinations in Computer Literacy
NASA Astrophysics Data System (ADS)
Murayama, Koichi
Because computer based testing (CBT) has many advantages compared with the conventional paper and pencil testing (PPT) examination method, CBT has begun to be used in various situations in Japan, such as in qualifying examinations and in the TOEFL. This paper describes the usefulness and the problems of CBT applied to a regular college examination. The regular computer literacy examinations for first year students were held using CBT, and the results were analyzed. Responses to a questionnaire indicated many students accepted CBT with no unpleasantness and considered CBT a positive factor, improving their motivation to study. CBT also decreased the work of faculty in terms of marking tests and reducing data.
Renormalized stress-energy tensor for stationary black holes
NASA Astrophysics Data System (ADS)
Levi, Adam
2017-01-01
We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t -splitting variant of the method, which was first presented for ⟨ϕ2⟩ren , to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example for regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.
Computational analysis of nonlinearities within dynamics of cable-based driving systems
NASA Astrophysics Data System (ADS)
Anghelache, G. D.; Nastac, S.
2017-08-01
This paper deals with the computational nonlinear dynamics of mechanical systems that contain flexural parts within the actuating scheme, focusing in particular on cable-based driving systems. Both functional nonlinearities and the real characteristics of the power supply were considered, in order to obtain a realistic computer simulation model able to provide feasible results regarding the system dynamics. Transitory and stable regimes during a regular operating cycle were taken into account. The authors present the particular case of a lift system, taken as representative of the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or regular practice in the field. The analysis of the results and the final discussion reveal correlated dynamic aspects of the mechanical parts, the driving system, and the power supply, all of which constitute potential sources of particular resonances within some transitory phases of the working cycle that can affect structural and functional dynamics. In addition, the influence of the computational hypotheses on both the quantitative and qualitative behaviour of the system is underlined. The most significant outcome of this theoretical and computational research is the development of a unitary and feasible model, useful for characterizing the nonlinear dynamic effects in systems with cable-based driving schemes, and thereby helping to optimize the operating regime, including dynamic control measures.
Probability-based collaborative filtering model for predicting gene-disease associations.
Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan
2017-12-28
Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned model. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with those of four state-of-the-art approaches. The results show that PCFM performs better than the other advanced approaches. The PCFM model can be leveraged for prediction of disease genes, especially for new human genes or diseases with no known relationships.
A singularity free analytical solution of artificial satellite motion with drag
NASA Technical Reports Server (NTRS)
Mueller, A.
1978-01-01
An analytical satellite theory based on the regular, canonical Poincare-Similar (PS phi) elements is described, along with an accurate density model which can be incorporated into the drag theory. A computationally efficient manner in which to expand the equations of motion into a Fourier series is discussed.
Regularity Aspects in Inverse Musculoskeletal Biomechanics
NASA Astrophysics Data System (ADS)
Lund, Marie; Stâhl, Fredrik; Gulliksson, Mârten
2008-09-01
Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step towards ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
A Hierarchical Visualization Analysis Model of Power Big Data
NASA Astrophysics Data System (ADS)
Li, Yongjie; Wang, Zheng; Hao, Yang
2018-01-01
Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed targeting different abstraction modules such as transaction, engine, computation, control, and storage. The usually separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
Uterine Contraction Modeling and Simulation
NASA Technical Reports Server (NTRS)
Liu, Miao; Belfore, Lee A.; Shen, Yuzhong; Scerbo, Mark W.
2010-01-01
Building a training system for medical personnel to properly interpret fetal heart rate tracing requires developing accurate models that can relate various signal patterns to certain pathologies. In addition to modeling the fetal heart rate signal itself, the change of uterine pressure that bears strong relation to fetal heart rate and provides indications of maternal and fetal status should also be considered. In this work, we have developed a group of parametric models to simulate uterine contractions during labor and delivery. Through analysis of real patient records, we propose to model uterine contraction signals by three major components: regular contractions, impulsive noise caused by fetal movements, and low amplitude noise invoked by maternal breathing and measuring apparatus. The regular contractions are modeled by an asymmetric generalized Gaussian function and least squares estimation is used to compute the parameter values of the asymmetric generalized Gaussian function based on uterine contractions of real patients. Regular contractions are detected based on thresholding and derivative analysis of uterine contractions. Impulsive noise caused by fetal movements and low amplitude noise by maternal breathing and measuring apparatus are modeled by rational polynomial functions and Perlin noise, respectively. Experiment results show the synthesized uterine contractions can mimic the real uterine contractions realistically, demonstrating the effectiveness of the proposed algorithm.
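The regular-contraction component can be fit as described by nonlinear least squares; the asymmetric generalized Gaussian parameterization used in the sketch below (independent width and shape parameters on each side of the peak) is an assumed form that may differ in detail from the authors'.

```python
import numpy as np
from scipy.optimize import least_squares

def asym_gen_gaussian(t, amp, mu, sig_l, sig_r, p_l, p_r):
    """Asymmetric generalized Gaussian: independent width (sig) and shape (p)
    parameters on each side of the peak at mu (an assumed parameterization)."""
    left = np.exp(-(np.abs(t - mu) / sig_l) ** p_l)
    right = np.exp(-(np.abs(t - mu) / sig_r) ** p_r)
    return amp * np.where(t < mu, left, right)

def fit_contraction(t, u, x0):
    """Least-squares fit of one detected contraction segment u(t);
    x0 = (amp, mu, sig_l, sig_r, p_l, p_r) initial guess within the bounds."""
    resid = lambda x: asym_gen_gaussian(t, *x) - u
    lb = [0.0, t.min(), 1e-3, 1e-3, 0.5, 0.5]
    ub = [np.inf, t.max(), np.inf, np.inf, 10.0, 10.0]
    return least_squares(resid, x0, bounds=(lb, ub))
```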
SPILC: An expert student advisor
NASA Technical Reports Server (NTRS)
Read, D. R.
1990-01-01
The Lamar University Computer Science Department serves about 350 undergraduate C.S. majors, and 70 graduate majors. B.S. degrees are offered in Computer Science and Computer and Information Science, and an M.S. degree is offered in Computer Science. In addition, the Computer Science Department plays a strong service role, offering approximately sixteen service course sections per long semester. The department has eight regular full-time faculty members, including the Department Chairman and the Undergraduate Advisor, and from three to seven part-time faculty members. Due to the small number of regular faculty members and the resulting very heavy teaching loads, undergraduate advising has become a difficult problem for the department. There is a one week early registration period and a three-day regular registration period once each semester. The Undergraduate Advisor's regular teaching load of two classes, 6 - 8 semester hours, per semester, together with the large number of majors and small number of regular faculty, cause long queues and short tempers during these advising periods. The situation is aggravated by the fact that entering freshmen are rarely accompanied by adequate documentation containing the facts necessary for proper counselling. There has been no good method of obtaining necessary facts and documenting both the information provided by the student and the resulting advice offered by the counsellors.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
Gao, Fei; Wang, Guo-Bao; Xiang, Zhan-Wang; Yang, Bin; Xue, Jing-Bing; Mo, Zhi-Qiang; Zhong, Zhi-Hui; Zhang, Tao; Zhang, Fu-Jun; Fan, Wei-Jun
2016-05-03
This study sought to prospectively evaluate the feasibility and safety of a preoperative mathematical model for computed tomographic (CT)-guided microwave (MW) ablation treatment of hepatic dome tumors. This mathematical model was a regular cylinder quantifying appropriate puncture routes from the bottom up. A total of 103 patients with hepatic dome tumors were enrolled and randomly divided into 2 groups based on whether this model was used or not: Group A (using the model; n = 43) versus Group B (not using the model; n = 60). All tumors were treated by CT-guided MW ablation and follow-up contrast CT scans were reviewed. The average number of punctures required for success, average ablation time, and incidence of right shoulder pain were lower in Group A than in Group B (1.4 vs. 2.5, P = 0.001; 8.8 vs. 11.1 minutes, P = 0.003; and 4.7% vs. 20%, P = 0.039). The technical success rate was higher in Group A than Group B (97.7% vs. 85.0%, P = 0.032). There were no significant differences between the two groups in primary and secondary technique efficacy rates (97.7% vs. 88.3%, P = 0.081; 90.0% vs. 72.7%, P = 0.314). No major complications occurred in either group. The mathematical model of the regular cylinder is feasible and safe for CT-guided MW ablation in treating hepatic dome tumors.
Mu, Guangyu; Liu, Ying; Wang, Limin
2015-01-01
Spatial pooling methods such as spatial pyramid matching (SPM) are crucial in the bag-of-features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of all visual words obeys a uniform distribution over these regular grids. However, in practice, we consider that different visual words should obey different spatial layout distributions. To improve SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extension of the Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids of traditional SPM. Furthermore, we can compute grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms traditional spatial pooling methods, and is competitive with state-of-the-art classification accuracy on several challenging image datasets.
Efficient field-theoretic simulation of polymer solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106
2014-12-14
We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
Computed inverse resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective: This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods: The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from the MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results: Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions: The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
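Of the three inverse options listed in the CIMRI abstracts above, the Tikhonov-regularized inverse filtering route is simple enough to sketch in a few lines; the split Bregman TV solver favored in the results is considerably more involved and is not shown. The unit dipole kernel and the regularization value below are standard assumptions, not taken from the papers.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """k-space unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 (main field along z)."""
    kx, ky, kz = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - KZ**2 / k2
    D[k2 == 0] = 0.0
    return D

def tikhonov_susceptibility(fieldmap, voxel_size=(1.0, 1.0, 1.0), lam=1e-2):
    """Tikhonov-regularized inverse filtering of the fieldmap-to-susceptibility
    deconvolution. `fieldmap` is the relative field shift delta_B/B0; `lam` is
    the regularization parameter (an assumed, illustrative value)."""
    D = dipole_kernel(fieldmap.shape, voxel_size)
    B = np.fft.fftn(fieldmap)
    chi_k = np.conj(D) * B / (np.abs(D) ** 2 + lam)
    return np.real(np.fft.ifftn(chi_k))
```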
Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou
2011-06-01
As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed.
Zhang, Cheng; Zhang, Tao; Zheng, Jian; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effects of radiation, related to genetic or cancerous diseases, have caused great public concern. The problem is how to minimize the radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparsity constraint, which makes it possible to obtain reconstructed images of high quality in undersampled situations. On the other hand, a preliminary attempt at low-dose CT reconstruction based on dictionary learning appears to be another effective choice. However, some critical parameters, such as the regularization parameter, cannot be determined directly from the measured data. In this paper, we propose a reweighted objective function that leads to a numerical model for calculating the regularization parameter. A number of experiments demonstrate that this strategy performs well, yielding better reconstructed images and saving a large amount of time.
Regularized Filters for L1-Norm-Based Common Spatial Patterns.
Wang, Haixian; Li, Xiaomeng
2016-02-01
The l1-norm-based common spatial patterns (CSP-L1) approach is a recently developed technique for optimizing spatial filters in the field of electroencephalogram (EEG)-based brain-computer interfaces. The l1-norm-based expression of dispersion in CSP-L1 alleviates the negative impact of outliers. In this paper, we further improve the robustness of CSP-L1 by taking into account noise which does not necessarily have as large a deviation as outliers. The noise modelling is formulated using the waveform length of the EEG time course. With this noise modelling, we then regularize the objective function of CSP-L1, in which the l1-norm is used in two ways: for the dispersion and for the waveform length. An iterative algorithm is designed to solve the optimization problem of the regularized objective function. A toy illustration and classification experiments on real EEG data sets show the effectiveness of the proposed method.
A multi-resolution approach to electromagnetic modeling.
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-04-01
We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to that of the staggered-grid solution, while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by an adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way of quantifying the uncertainty of VCs through a confidence interval based on the truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
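The two volume rules named above are tensor-product quadratures over the regular grid; a minimal sketch follows, with a Gauss synthetic surface used only as a convenient check against a known integral.

```python
import numpy as np

def _trap_weights(n, h):
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0
    return w

def _simpson_weights(n, h):
    if n % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of grid nodes")
    w = np.ones(n)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return w * h / 3.0

def dem_volume(z, dx, dy, rule="trapezoid"):
    """Terrain volume above the zero datum from a regular grid DEM z[i, j],
    using the trapezoidal double rule (TDR) or Simpson's double rule (SDR)."""
    wfun = _trap_weights if rule == "trapezoid" else _simpson_weights
    wx, wy = wfun(z.shape[0], dx), wfun(z.shape[1], dy)
    return float(wx @ z @ wy)

# Check on a synthetic Gauss surface whose integral is known in closed form.
x = np.linspace(-3, 3, 121)
y = np.linspace(-3, 3, 121)
Z = np.exp(-(x[:, None] ** 2 + y[None, :] ** 2) / 2.0)
print(dem_volume(Z, x[1] - x[0], y[1] - y[0], "simpson"))  # close to 2*pi
```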
Computationally efficient algorithm for Gaussian Process regression in case of structured samples
NASA Astrophysics Data System (ADS)
Belyaev, M.; Burnaev, E.; Kapushev, Y.
2016-04-01
Surrogate modeling is widely used in many engineering problems. Data sets often have Cartesian product structure (for instance, factorial design of experiments with missing points). In such cases the size of the data set can be very large. Therefore, one of the most popular algorithms for approximation, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure that allows the anisotropy of the data set to be taken into account and avoids degeneracy of the regression model.
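For a full factorial (Cartesian product) design with no missing points, the tensor trick amounts to exploiting the Kronecker structure of the covariance matrix; the sketch below shows the eigendecomposition-based solve for a two-factor RBF kernel. The kernel choice, length-scales, and the absence of missing points are assumptions; the paper's algorithm also handles anisotropy via its regularization procedure.

```python
import numpy as np

def rbf(x1, x2, length):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def kron_gp_fit_predict(x1, x2, Y, noise=1e-2, l1=0.3, l2=0.5):
    """GP regression on a two-factor Cartesian-product design using the
    Kronecker structure K = K1 kron K2: the O(n^3) solve is replaced by two
    small eigendecompositions plus reshaped matrix products."""
    K1, K2 = rbf(x1, x1, l1), rbf(x2, x2, l2)
    w1, Q1 = np.linalg.eigh(K1)
    w2, Q2 = np.linalg.eigh(K2)
    # alpha = (K1 kron K2 + noise*I)^-1 vec(Y), without forming the Kronecker product
    T = Q1.T @ Y @ Q2
    T /= (w1[:, None] * w2[None, :] + noise)
    Alpha = Q1 @ T @ Q2.T
    # posterior mean on the training grid
    return K1 @ Alpha @ K2.T

# Hypothetical 30 x 40 factorial design
x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)
rng = np.random.default_rng(0)
Y = np.sin(6 * x1)[:, None] * np.cos(4 * x2)[None, :] + 0.05 * rng.standard_normal((30, 40))
F = kron_gp_fit_predict(x1, x2, Y)
```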
NASA Astrophysics Data System (ADS)
Caranicolas, Nicolaos D.; Zotos, Euaggelos E.
2013-02-01
We investigate the transition from regular to chaotic motion in a composite galaxy model with a disk-halo, a massive dense nucleus and a dark halo component. We obtain relationships connecting the critical value of the mass of the nucleus or the critical value of the angular momentum Lzc, with the mass Mh of the dark halo, where the transition from regular motion to chaos occurs. We also present 3D diagrams connecting the mass of the nucleus, the energy, and the percentage of stars that can show chaotic motion. The fraction of the chaotic orbits observed in the (r,pr) phase plane, as a function of the mass of the dark halo, is also computed. We use a semi-numerical method, that is, a combination of theoretical and numerical procedures. The theoretical results were obtained using version 8.0 of the Mathematica package, while all the numerical calculations were made using a Bulirsch-Stoer FORTRAN routine in double precision. The results can be obtained in semi-numerical or numerical form and give a good description of the connection between the physical quantities entering the model and the transition between regular and chaotic motion. We observe that the mass of the dark halo, the mass of the dense nucleus and the Lz component of the angular momentum are important physical quantities, as they are linked to the regular or chaotic character of orbits in disk galaxies described by the model. Our numerical experiments suggest that the amount of dark matter plays an important role in disk galaxies represented by the model, as the mass of the halo affects not only the regular or chaotic nature of motion but is also connected with the existence of the different families of regular orbits. Comparison of the present results with earlier work is also presented.
Innovation from a Computational Social Science Perspective: Analyses and Models
ERIC Educational Resources Information Center
Casstevens, Randy M.
2013-01-01
Innovation processes are critical for preserving and improving our standard of living. While innovation has been studied by many disciplines, the focus has been on qualitative measures that are specific to a single technological domain. I adopt a quantitative approach to investigate underlying regularities that generalize across multiple domains.…
A versatile model for soft patchy particles with various patch arrangements.
Li, Zhan-Wei; Zhu, You-Liang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2016-01-21
We propose a simple and general mesoscale soft patchy particle model, which can felicitously describe the deformable and surface-anisotropic characteristics of soft patchy particles. This model can be used in dynamics simulations to investigate the aggregation behavior and mechanism of various types of soft patchy particles with tunable number, size, direction, and geometrical arrangement of the patches. To improve the computational efficiency of this mesoscale model in dynamics simulations, we give the simulation algorithm that fits the compute unified device architecture (CUDA) framework of NVIDIA graphics processing units (GPUs). The validation of the model and the performance of the simulations using GPUs are demonstrated by simulating several benchmark systems of soft patchy particles with 1 to 4 patches in a regular geometrical arrangement. Because of its simplicity and computational efficiency, the soft patchy particle model will provide a powerful tool to investigate the aggregation behavior of soft patchy particles, such as patchy micelles, patchy microgels, and patchy dendrimers, over larger spatial and temporal scales.
Time-lapse Inversion of Electrical Resistivity Data
NASA Astrophysics Data System (ADS)
Nguyen, F.; Kemna, A.
2005-12-01
Time-lapse geophysical measurements (also known as monitoring, repeat, or multi-frame surveys) now play a critical role in non-destructively monitoring changes induced by human activity, such as reservoir compaction, and in studying natural processes, such as flow and transport in porous media. To invert such data sets into time-varying subsurface properties, several strategies are found in different engineering or scientific fields (e.g., in biomedical, process tomography, or geophysical applications). Indeed, for time-lapse surveys, the data sets and the models at each time frame have the particularity of being closely related to their "neighbors", provided the process does not induce chaotic or very large variations. Therefore, the information contained in the different frames can be used to constrain the inversion in the others. A first strategy consists in imposing constraints on the model based on a prior estimate, an a priori spatiotemporal or temporal behavior (arbitrary or based on a law describing the monitored process), restriction of changes to certain areas, or reproducibility of data changes. A second strategy aims to invert the model changes directly, where the objective function penalizes those models whose spatial, temporal, or spatiotemporal behavior differs from a prior assumption or from a computed a priori model. Clearly, the incorporation of time-lapse a priori information, determined from data sets or assumed, in the inversion process has been proven to significantly improve the resolving capability, mainly by removing artifacts. However, there is a lack of comparison of these methods. In this paper, we focus on Tikhonov-like inversion approaches for electrical tomography imaging to evaluate the capability of the different existing strategies, and to propose new ones. To evaluate the bias inevitably introduced by time-lapse regularization, we quantified the relative contribution of the different approaches to the resolving power of the method. Furthermore, we incorporated different noise levels and types (random and/or systematic) to determine the strategies' ability to cope with real data. Introducing additional regularization terms also yields more regularization parameters to compute. Since this is a difficult and computationally costly task, we propose that it should be proportional to the velocity of the process. To achieve these objectives, we tested the different methods using synthetic models and experimental data, taking noise and error propagation into account. Our study shows that the choice of the inversion strategy highly depends on the nature and magnitude of noise, whereas the choice of the regularization term strongly influences the resulting image according to the a priori assumption. This study was developed under the scope of the European project ALERT (GOCE-CT-2004-505329).
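A minimal version of the second strategy, in which the temporal tie between frames enters as an extra Tikhonov-style term, can be sketched for a linearized problem as follows; real ERT inversion involves an iteratively linearized nonlinear forward model, so this is only the skeleton of the idea.

```python
import numpy as np

def timelapse_step(G, d_t, m_ref, alpha, beta, L):
    """One time frame of a Tikhonov-style time-lapse inversion:
    minimize ||G m - d_t||^2 + alpha ||L m||^2 + beta ||m - m_ref||^2,
    where m_ref is the model of the previous frame (or a background model),
    alpha is the usual spatial smoothness weight, and beta controls how
    strongly consecutive frames are tied together. Solved via normal equations."""
    A = G.T @ G + alpha * (L.T @ L) + beta * np.eye(G.shape[1])
    b = G.T @ d_t + beta * m_ref
    return np.linalg.solve(A, b)

def invert_sequence(G, data_frames, m0, alpha, beta, L):
    """Invert a sequence of data frames, carrying each result forward as m_ref."""
    models, m_prev = [], m0
    for d_t in data_frames:
        m_prev = timelapse_step(G, d_t, m_prev, alpha, beta, L)
        models.append(m_prev)
    return models
```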
Effects of high-frequency damping on iterative convergence of implicit viscous solver
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko
2017-11-01
This paper discusses the effects of high-frequency damping on the iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. On both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and α = 4/3 are suitable values for robust and efficient computations, with α = 4/3 recommended for the diffusion equation, for which it achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, as it provides robust and efficient convergence for a wide range of α.
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors.
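For concreteness, here is a minimal Python sketch of one Gillespie-style trajectory on a toy continuous-time Markov chain; the states and rates are illustrative placeholders, not the Turner-energy RNA landscape treated in the paper, and the Monte Carlo variant would instead propose one neighbor per step and accept it with a Metropolis probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_trajectory(rates, x0, t_max):
    """Simulate a continuous-time Markov chain with the Gillespie algorithm.

    rates : dict mapping state -> list of (neighbor_state, rate) pairs
    x0    : initial state
    t_max : stop time
    Returns the visited states and the waiting time spent before each jump.
    """
    t, x = 0.0, x0
    states, times = [x], []
    while t < t_max:
        nbrs = rates[x]
        total = sum(r for _, r in nbrs)        # total exit rate of state x
        dt = rng.exponential(1.0 / total)      # exponential waiting time
        probs = [r / total for _, r in nbrs]   # jump probabilities
        x = nbrs[rng.choice(len(nbrs), p=probs)][0]
        t += dt
        states.append(x)
        times.append(dt)
    return states, times

# toy 3-state network (rates are illustrative, not RNA energies)
rates = {0: [(1, 2.0)], 1: [(0, 1.0), (2, 0.5)], 2: [(1, 0.2)]}
print(gillespie_trajectory(rates, 0, 10.0)[0][:10])
```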
Interprocedural Analysis and the Verification of Concurrent Programs
2009-01-01
The single-source path expression (SSPE) problem is to compute a regular expression that represents paths(s, v) for all vertices v in the graph. The syntax of regular expressions is as follows: r ::= ∅ | ε | e | r1 ∪ r2 | r1.r2 | r*, where e stands for an edge in G. Any algorithm for SSPE can be used to compute regular expressions for … a closed representation of loops provides an exponential speedup. Tarjan's path-expression algorithm solves the SSPE problem efficiently.
Subjective randomness as statistical inference.
Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B
2018-06-01
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
Impedance computed tomography using an adaptive smoothing coefficient algorithm.
Suzuki, A; Uchiyama, A
2001-01-01
In impedance computed tomography, a fixed-coefficient regularization algorithm has frequently been used to mitigate the ill-conditioning of the Newton-Raphson algorithm. However, a large amount of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it must be chosen manually from a number of candidates and is held constant over the iterations. As a result, the fixed-coefficient regularization algorithm sometimes distorts the information or fails to produce any improvement. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalues of the ill-conditioned matrix. Therefore, effective images can be obtained within a short computation time. The smoothing coefficient is also adjusted automatically according to information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth of the computation time required by the fixed-coefficient regularization algorithm. Compared with the fixed-coefficient regularization algorithm, the image is obtained more rapidly, making the method applicable to real-time monitoring of blood vessels.
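A minimal sketch of a regularized Newton/Gauss-Newton update in which the smoothing coefficient is derived from the spectrum of the ill-conditioned matrix, in the spirit of the adaptive scheme described above; the eigenvalue-based rule and the `scale` factor are assumptions, not the published formula.

```python
import numpy as np

def regularized_newton_step(J, residual, scale=1e-2):
    """One damped Gauss-Newton update for an ill-conditioned inverse problem.
    The smoothing coefficient lam is derived from the spectrum of J^T J
    (here: a fraction of its largest eigenvalue); the exact adaptive rule of
    the published algorithm may differ."""
    JtJ = J.T @ J
    eigvals = np.linalg.eigvalsh(JtJ)          # spectrum of the normal matrix
    lam = scale * eigvals.max()                # adaptive, data-driven coefficient
    delta = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ residual)
    return delta, lam

# illustrative call on a random sensitivity matrix
J = np.random.default_rng(0).standard_normal((30, 12))
delta, lam = regularized_newton_step(J, np.ones(30))
```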
NASA Astrophysics Data System (ADS)
Zhang, Yi-Qi; Paszkiewicz, Mateusz; Du, Ping; Zhang, Liding; Lin, Tao; Chen, Zhi; Klyatskaya, Svetlana; Ruben, Mario; Seitsonen, Ari P.; Barth, Johannes V.; Klappenberger, Florian
2018-03-01
Interfacial supramolecular self-assembly represents a powerful tool for constructing regular and quasicrystalline materials. In particular, complex two-dimensional molecular tessellations, such as semi-regular Archimedean tilings with regular polygons, promise unique properties related to their nontrivial structures. However, their formation is challenging, because current methods are largely limited to the direct assembly of precursors, that is, where structure formation relies on molecular interactions without using chemical transformations. Here, we have chosen ethynyl-iodophenanthrene (which features dissymmetry in both geometry and reactivity) as a single starting precursor to generate the rare semi-regular (3.4.6.4) Archimedean tiling with long-range order on an atomically flat substrate through a multi-step reaction. Intriguingly, the individual chemical transformations converge to form a symmetric alkynyl-Ag-alkynyl complex as the new tecton in high yields. Using a combination of microscopy and X-ray spectroscopy tools, as well as computational modelling, we show that in situ generated catalytic Ag complexes mediate the tecton conversion.
Macke, A; Mishchenko, M I
1996-07-20
We ascertain the usefulness of simple ice particle geometries for modeling the intensity distribution of light scattering by atmospheric ice particles. To this end, similarities and differences in light scattering by axis-equivalent, regular and distorted hexagonal cylindric, ellipsoidal, and circular cylindric ice particles are reported. All the results pertain to particles with sizes much larger than a wavelength and are based on a geometrical optics approximation. At a nonabsorbing wavelength of 0.55 µm, ellipsoids (circular cylinders) have a much (slightly) larger asymmetry parameter g than regular hexagonal cylinders. However, our computations show that only random distortion of the crystal shape leads to a closer agreement with g values as small as 0.7 as derived from some remote-sensing data analysis. This may suggest that scattering by regular particle shapes is not necessarily representative of real atmospheric ice crystals at nonabsorbing wavelengths. On the other hand, if real ice particles happen to be hexagonal, they may be approximated by circular cylinders at absorbing wavelengths.
A Computational Model of Linguistic Humor in Puns.
Kao, Justine T; Levy, Roger; Goodman, Noah D
2016-07-01
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures, ambiguity and distinctiveness, derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher level linguistic and cognitive phenomena. © 2015 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
NASA Technical Reports Server (NTRS)
Borysow, Aleksandra
1998-01-01
Accurate knowledge of certain collision-induced absorption continua of molecular pairs such as H2-H2, H2-He, H2-CH4, CO2-CO2, etc., is a prerequisite for most spectral analyses and modelling attempts of atmospheres of planets and cold stars. We collect and regularly update simple, state-of-the-art computer programs for the calculation of the absorption coefficient of such molecular pairs over a broad range of temperatures and frequencies, for the various rotovibrational bands. The computational results are in agreement with the existing laboratory measurements of such absorption continua, recorded with a spectral resolution of a few wavenumbers, but reliable computational results may be expected even in the far wings, and at temperatures for which laboratory measurements do not exist. Detailed information is given concerning the systems thus studied, the temperature and frequency ranges considered, the rotovibrational bands thus modelled, and how one may obtain copies of the FORTRAN77 computer programs by e-mail.
Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P
2017-04-01
Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI) data, promising to improve the clinical utility of challenging applications. However, lengthy computation times limit the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared with a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
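A minimal sketch of iterative soft-thresholding for an ℓ1-regularized least squares problem of the kind mentioned above; the dense matrix A stands in for the undersampled multichannel MRI encoding operator, and the step size and iteration count are illustrative choices rather than the paper's tuned settings.

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# synthetic sparse recovery example
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.05)
```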
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
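A small sketch of the convex ℓ1-regularized fault estimation idea, written with the generic modeling library cvxpy; the linear model, dimensions, and names (H, B, lam) are hypothetical stand-ins for illustration, not the ADAPT testbed circuit model.

```python
import cvxpy as cp
import numpy as np

# Illustrative linear measurement model y = H x + B f + noise, where f is a
# sparse fault vector; sizes and matrices are synthetic placeholders.
rng = np.random.default_rng(1)
m, n_state, n_fault = 40, 10, 25
H = rng.standard_normal((m, n_state))
B = rng.standard_normal((m, n_fault))
y = rng.standard_normal(m)

x = cp.Variable(n_state)          # hidden circuit states
f = cp.Variable(n_fault)          # fault indicators (expected sparse)
lam = 0.5
objective = cp.Minimize(cp.sum_squares(y - H @ x - B @ f) + lam * cp.norm1(f))
cp.Problem(objective).solve()
print(np.round(f.value, 3))       # most entries should be (near) zero
```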
A Regularized Volumetric Fusion Framework for Large-Scale 3D Reconstruction
NASA Astrophysics Data System (ADS)
Rajput, Asif; Funk, Eugen; Börner, Anko; Hellwich, Olaf
2018-07-01
Modern computational resources combined with low-cost depth sensing systems have enabled mobile robots to reconstruct 3D models of surrounding environments in real time. Unfortunately, low-cost depth sensors are prone to produce undesirable estimation noise in the depth measurements, which either produces depth outliers or introduces surface deformations in the reconstructed model. Conventional 3D fusion frameworks integrate multiple error-prone depth measurements over time to reduce noise effects; therefore, additional constraints such as steady sensor movement and high frame rates are required for high-quality 3D models. In this paper we propose a generic 3D fusion framework with a controlled regularization parameter which inherently reduces noise at the time of data fusion. This allows the proposed framework to generate high-quality 3D models without enforcing additional constraints. Evaluation of the reconstructed 3D models shows that the proposed framework outperforms state-of-the-art techniques in terms of both absolute reconstruction error and processing time.
Modelling and Holographic Visualization of Space Radiation-Induced DNA Damage
NASA Technical Reports Server (NTRS)
Plante, Ianik
2017-01-01
Space radiation is composed of a mixture of ions of different energies. Among these, heavy ions are of particular importance because their health effects are poorly understood. In recent years, a software package named RITRACKS (Relativistic Ion Tracks) was developed to simulate the detailed radiation track structure, several DNA models, and DNA damage. As the DNA structure is complex due to packing, it is difficult to visualize the damage using a regular computer screen.
Yang, Bin; Xue, Jing-Bing; Mo, Zhi-Qiang; Zhong, Zhi-Hui; Zhang, Tao; Zhang, Fu-Jun; Fan, Wei-Jun
2016-01-01
Purpose This study sought to prospectively evaluate the feasibility and safety of a preoperative mathematical model for computed tomography (CT)-guided microwave (MW) ablation treatment of hepatic dome tumors. Methods The mathematical model was a regular cylinder quantifying appropriate puncture routes from the bottom up. A total of 103 patients with hepatic dome tumors were enrolled and randomly divided into 2 groups based on whether this model was used: Group A (using the model; n = 43) versus Group B (not using the model; n = 60). All tumors were treated by CT-guided MW ablation, and follow-up contrast-enhanced CT scans were reviewed. Results The average number of attempts needed for a successful puncture, the average ablation time, and the incidence of right shoulder pain were lower in Group A than in Group B (1.4 vs. 2.5, P = 0.001; 8.8 vs. 11.1 minutes, P = 0.003; and 4.7% vs. 20%, P = 0.039). The technical success rate was higher in Group A than in Group B (97.7% vs. 85.0%, P = 0.032). There were no significant differences between the two groups in primary and secondary technique efficacy rates (97.7% vs. 88.3%, P = 0.081; 90.0% vs. 72.7%, P = 0.314). No major complications occurred in either group. Conclusion The mathematical model of a regular cylinder is feasible and safe for CT-guided MW ablation in treating hepatic dome tumors. PMID:27028994
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
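A minimal sketch of an extrapolated proximal iterative hard-thresholding step for a synthesis-form ℓ0-regularized least squares problem; the paper's balanced wavelet-frame model adds a frame-distance penalty, so this shows only the core hard-thresholding-plus-extrapolation mechanism, with illustrative parameters.

```python
import numpy as np

def hard_threshold(z, tau):
    """Proximal operator of tau*||.||_0: zero out entries with |z_i| <= sqrt(2*tau)."""
    out = z.copy()
    out[np.abs(z) <= np.sqrt(2.0 * tau)] = 0.0
    return out

def extrapolated_iht(A, b, lam, n_iter=300, beta=0.5):
    """Sketch of extrapolated proximal iterative hard thresholding for
    min_x 0.5*||Ax - b||^2 + lam*||x||_0 (synthesis form)."""
    L = np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)            # extrapolation (momentum) step
        grad = A.T @ (A @ y - b)
        x_prev, x = x, hard_threshold(y - grad / L, lam / L)
    return x

# illustrative run on a synthetic sparse problem
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x0 = np.zeros(200)
x0[:5] = 3.0
print(np.flatnonzero(extrapolated_iht(A, A @ x0, lam=0.5)))
```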
5 CFR 532.257 - Regular nonappropriated fund wage schedules in foreign areas.
Code of Federal Regulations, 2010 CFR
2010-01-01
.... These schedules will provide rates of pay for nonsupervisory, leader, and supervisory employees. (b) Schedules will be— (1) Computed on the basis of a simple average of all regular nonappropriated fund wage... each nonsupervisory grade will be derived by computing a simple average of each step 2 rate for each of...
5 CFR 532.257 - Regular nonappropriated fund wage schedules in foreign areas.
Code of Federal Regulations, 2011 CFR
2011-01-01
.... These schedules will provide rates of pay for nonsupervisory, leader, and supervisory employees. (b) Schedules will be— (1) Computed on the basis of a simple average of all regular nonappropriated fund wage... each nonsupervisory grade will be derived by computing a simple average of each step 2 rate for each of...
ERIC Educational Resources Information Center
Gurung, Binod
2013-01-01
Alternative high school students are students at risk of educational failure, lacking behavioral, emotional, and cognitive engagement with school and schoolwork. They are also generally considered at-risk computer users, who use technology for skill development and drill-and-practice when compared to their regular counterparts,…
Fast incorporation of optical flow into active polygons.
Unal, Gozde; Krim, Hamid; Yezzi, Anthony
2005-06-01
In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using an optical flow and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need of adding ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to a reduced number of parameters of this technique are additional and appealing features.
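For reference, the classical regularized optical-flow formulation that the authors contrast with is the Horn-Schunck energy built on the brightness-constancy constraint I_x u + I_y v + I_t = 0:

E(u,v) = \iint \left( I_x u + I_y v + I_t \right)^2 + \alpha^2 \left( |\nabla u|^2 + |\nabla v|^2 \right) \, dx\, dy,

where α weights an explicit smoothness (regularization) term that must be tuned by hand; in the proposed method, integrating flow measurements along the polygon edges plays that smoothing role implicitly.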
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
An experimental comparison of various methods of nearfield acoustic holography
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
Partially chaotic orbits in a perturbed cubic force model
NASA Astrophysics Data System (ADS)
Muzzio, J. C.
2017-11-01
Three types of orbits are theoretically possible in autonomous Hamiltonian systems with 3 degrees of freedom: fully chaotic (they only obey the energy integral), partially chaotic (they obey an additional isolating integral besides energy) and regular (they obey two isolating integrals besides energy). The existence of partially chaotic orbits has been denied by several authors, however, arguing either that there is a sudden transition from regularity to full chaoticity or that a long enough follow-up of a supposedly partially chaotic orbit would reveal a fully chaotic nature. This situation needs clarification, because partially chaotic orbits might play a significant role in the process of chaotic diffusion. Here we use numerically computed Lyapunov exponents to explore the phase space of a perturbed three-dimensional cubic force toy model, and a generalization of the Poincaré maps to show that partially chaotic orbits are actually present in that model. They turn out to be double orbits joined by a bifurcation zone, which is the most likely source of their chaos, and they are encapsulated in regions of phase space bounded by regular orbits similar to each one of the components of the double orbit.
Control of the transition between regular and mach reflection of shock waves
NASA Astrophysics Data System (ADS)
Alekseev, A. K.
2012-06-01
A control problem was considered that makes it possible to switch the flow between stationary Mach and regular reflection of shock waves within the dual solution domain. The sensitivity of the flow was computed by solving adjoint equations. A control disturbance was sought by applying gradient optimization methods. According to the computational results, the transition from regular to Mach reflection can be executed by raising the temperature. The transition from Mach to regular reflection can be achieved by lowering the temperature at moderate Mach numbers and is impossible at large numbers. The reliability of the numerical results was confirmed by verifying them with the help of a posteriori analysis.
Agrawal, M; Pardasani, K R; Adlakha, N
2014-08-01
Past investigators have developed models of temperature distribution in the human limb by assuming it to be a regular circular or elliptical tapered cylinder. In reality, however, the limb does not have a regular tapered cylindrical shape, and the radius and eccentricity are not the same throughout the limb. In view of the above, a model of temperature distribution in an irregular tapered elliptical human limb is proposed for the three-dimensional steady-state case in this paper. The limb is assumed to be composed of multiple cylindrical substructures with variable radius and eccentricity. The mathematical model incorporates the effects of blood mass flow rate, metabolic activity, and thermal conductivity. The outer surface is exposed to the environment, and appropriate boundary conditions have been framed. The finite element method has been employed to obtain the solution. The temperature profiles have been computed in the dermal layers of a human limb and used to study the effect of shape, microstructure, and biophysical parameters on temperature distribution in human limbs. The proposed model is more realistic than conventional models, as it can be effectively applied to regular and non-regular structures of the body with variable radius and eccentricity to study their thermal behaviour. Copyright © 2014 Elsevier Ltd. All rights reserved.
Delayed Acquisition of Non-Adjacent Vocalic Distributional Regularities
ERIC Educational Resources Information Center
Gonzalez-Gomez, Nayeli; Nazzi, Thierry
2016-01-01
The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two…
A generalized Condat's algorithm of 1D total variation regularization
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2017-09-01
A common way of solving the denoising problem is to use total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising referred to as the taut string algorithm. Condat's algorithm is based on a dual formulation of the 1D TV regularization problem. In this paper, we propose a variant of Condat's algorithm based on the direct (primal) 1D TV regularization problem. Combining Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for the restoration of degraded signals.
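For reference, the 1D total variation denoising problem addressed by both Condat's algorithm and the taut string algorithm has the standard form

\min_{x \in \mathbb{R}^n} \; \frac{1}{2} \sum_{i=1}^{n} (x_i - y_i)^2 + \lambda \sum_{i=1}^{n-1} |x_{i+1} - x_i|,

where y is the noisy 1D signal and λ is the regularization weight.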
Recent developments in the theory of protein folding: searching for the global energy minimum.
Scheraga, H A
1996-04-16
Statistical mechanical theories and computer simulation are being used to gain an understanding of the fundamental features of protein folding. A major obstacle in the computation of protein structures is the multiple-minima problem arising from the existence of many local minima in the multidimensional energy landscape of the protein. This problem has been surmounted for small open-chain and cyclic peptides, and for regular-repeating sequences of models of fibrous proteins. Progress is being made in resolving this problem for globular proteins.
Color Image Restoration Using Nonlocal Mumford-Shah Regularizers
NASA Astrophysics Data System (ADS)
Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.
We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired by recent work (the NL-means of Buades, Coll, and Morel, and the NL-TV of Gilboa and Osher), we extend the standard Ambrosio-Tortorelli and Shah approximations of the Mumford-Shah functional to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of the nonlocal variational models for image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.
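A typical choice for the nonlocal weight function w mentioned above is the patch-similarity kernel used in NL-means, shown here in its standard continuous form; the paper's preprocessing step for impulse noise modifies how the patches entering this weight are computed:

w(x,y) = \frac{1}{C(x)} \exp\!\left( -\frac{1}{h^2} \int G_a(t)\, \left| f(x+t) - f(y+t) \right|^2 dt \right),

where G_a is a Gaussian of standard deviation a, h is a filtering parameter, and C(x) normalizes the weights.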
Computation of repetitions and regularities of biologically weighted sequences.
Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K
2006-01-01
Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.
Regularization of Instantaneous Frequency Attribute Computations
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes. References: Fomel, S. (2007). Local seismic attributes. Geophysics, 72(3), A29-A33. Cohen, L. (1995). Time-Frequency Analysis: Theory and Applications. Prentice Hall. Farquharson, C. G., and Oldenburg, D. W. (2004). A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems. Geophysical Journal International, 156(3), 411-425. Taner, M. T., Koehler, F., and Sheriff, R. E. (1979). Complex seismic trace analysis. Geophysics, 44(6), 1041-1063.
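A short Python sketch of the two local-frequency attributes being compared, applied to a synthetic chirp rather than field data; the small constant added to the spectrogram power is a crude stand-in for the regularized division by a diagonal matrix discussed above.

```python
import numpy as np
from scipy.signal import hilbert, spectrogram

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * (10 * t + 8 * t**2))       # toy chirp signal

# 1) instantaneous frequency from the analytic signal (derivative of phase)
phase = np.unwrap(np.angle(hilbert(sig)))
f_inst = np.gradient(phase) * fs / (2 * np.pi)

# 2) time-variant centroid frequency of a time-frequency decomposition
f, tt, S = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)
power = S + 1e-12                                    # stabilize the division
f_centroid = (f[:, None] * power).sum(axis=0) / power.sum(axis=0)
```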
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surface based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
Room Use for Group Instruction in Regularly Scheduled Classes.
ERIC Educational Resources Information Center
Phay, John E.; McCary, Arthur D.
A method is furnished by which accurate computer accounting might be made of space and room use by regularly scheduled classes in institutions of higher learning. Based on well-defined terms, a master room schedule and a master course schedule are prepared on computer cards. This information is then compared with the reported individual room…
Deformation-based augmented reality for hepatic surgery.
Haouchine, Nazim; Dequidt, Jérémie; Berger, Marie-Odile; Cotin, Stéphane
2013-01-01
In this paper we introduce a method for augmenting the laparoscopic view during hepatic tumor resection. Using augmented reality techniques, vessels, tumors and cutting planes computed from pre-operative data can be overlaid onto the laparoscopic video. Compared to current techniques, which are limited to a rigid registration of the pre-operative liver anatomy with the intra-operative image, we propose a real-time, physics-based, non-rigid registration. The main strength of our approach is that the deformable model can also be used to regularize the data extracted from the computer vision algorithms. We show preliminary results on a video sequence which clearly highlights the interest of using physics-based model for elastic registration.
Zhang, Cheng; Zhang, Tao; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effects of radiation, which relate to genetic and cancerous diseases, have caused great public concern. The problem is how to minimize the radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparsity constraint, which makes it possible and effective to obtain high-quality reconstructed images in the undersampling situation. On the other hand, a preliminary attempt at low-dose CT reconstruction based on dictionary learning appears to be another effective choice. However, some critical parameters, such as the regularization parameter, cannot be determined from the detected data sets. In this paper, we propose a reweighted objective function that leads to a numerical calculation model for the regularization parameter. A number of experiments demonstrate that this strategy performs well, yielding better reconstructed images and saving a large amount of time. PMID:26550024
Nonrigid iterative closest points for registration of 3D biomedical surfaces
NASA Astrophysics Data System (ADS)
Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee
2018-01-01
Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on Iterative Closest Point (ICP) method with multiple correspondences. Our method, called the Nonrigid Iterative Closest Points (NICPs), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation. Finally, it does not require parametrization of input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike the standard ICP, the distance term is determined based on multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performances when no global pose differences or significant amount of bending exists in the models, for example, families of similar shapes, like human femur and vertebrae models.
Symmetries for Light-Front Quantization of Yukawa Model with Renormalization
NASA Astrophysics Data System (ADS)
Żochowski, Jan; Przeszowski, Jerzy A.
2017-12-01
In this work we discuss the Yukawa model with an extra self-interaction term for the scalar field in D = 1+3 dimensions. We present a method for deriving the light-front commutators and anti-commutators from the Heisenberg equations induced by the kinematical generating operator of translations P+. These Heisenberg equations are the starting point for obtaining the algebra of (anti-)commutators. Some discrepancies between the existing and the proposed methods of quantization are revealed. The Lorentz and CPT symmetries, together with some features of the quantum theory, were applied to obtain the two-point Wightman function for free fermions. Notably, these Wightman functions were computed without referring to the Fock expansion. The Gaussian effective potential for the Yukawa model was found in terms of the Wightman functions. It was regularized by the space-like point-splitting method. The coupling constants within the model were redefined. The optimum mass parameters remain regularization independent. Finally, the Gaussian effective potential was renormalized.
New second order Mumford-Shah model based on Γ-convergence approximation for image processing
NASA Astrophysics Data System (ADS)
Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li
2016-05-01
In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneously image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
Computational Dysfunctions in Anxiety: Failure to Differentiate Signal From Noise.
Huang, He; Thompson, Wesley; Paulus, Martin P
2017-09-15
Differentiating whether an action leads to an outcome by chance or by an underlying statistical regularity that signals environmental change profoundly affects adaptive behavior. Previous studies have shown that anxious individuals may not appropriately differentiate between these situations. This investigation aims to precisely quantify the process deficit in anxious individuals and determine the degree to which these process dysfunctions are specific to anxiety. One hundred twenty-two subjects recruited as part of an ongoing large clinical population study completed a change point detection task. Reinforcement learning models were used to explicate observed behavioral differences in low anxiety (Overall Anxiety Severity and Impairment Scale score ≤ 8) and high anxiety (Overall Anxiety Severity and Impairment Scale score ≥ 9) groups. High anxiety individuals used a suboptimal decision strategy characterized by a higher lose-shift rate. Computational models and simulations revealed that this difference was related to a higher base learning rate. These findings are better explained in a context-dependent reinforcement learning model. Anxious subjects' exaggerated response to uncertainty leads to a suboptimal decision strategy that makes it difficult for these individuals to determine whether an action is associated with an outcome by chance or by some statistical regularity. These findings have important implications for developing new behavioral intervention strategies using learning models. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.
Zhao, Junhong; Zurada, Jacek M; Yang, Jie; Wu, Wei
2018-07-01
Unlike first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel SpikeProp-type algorithm for SNNs obtained by introducing a smoothing L1/2 regularization term into the error function. This algorithm makes the network structure sparse, with some smaller weights that can eventually be removed. Meanwhile, the convergence of the algorithm is proved under some reasonable conditions. The proposed algorithm has been tested for convergence speed, convergence rate, and generalization on the classical XOR problem, the Iris problem, and Wisconsin Breast Cancer classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Hong-Yong; Lu, Lan; Cao, Ke-Cai; Zhang, Si-Ying
2010-04-01
In this paper, the relations between network topology and the moving consensus of multi-agent systems are studied. A consensus-prestissimo scale-free network model with static preferential-consensus attachment is presented, based on rewiring links of the regular network. The effects of the static preferential-consensus BA network on the algebraic connectivity of the topology graph are compared with those of the regular network. The robustness gain to delay is analyzed for variable network topologies of the same scale. The time to reach consensus is studied for the dynamic network with and without communication delays. Computer simulations validate that the speed of convergence of multi-agent systems can be greatly improved in the preferential-consensus BA network model with different configurations.
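A small Python sketch of the algebraic connectivity (Fiedler value) comparison between a regular ring network and the same ring with one rewired link; the 6-node graphs are toy examples, not the scale-free networks generated by the preferential-consensus attachment rule.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian (Fiedler value),
    a standard measure of consensus speed for a connected network."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

# regular ring on 6 nodes versus the same ring with one rewired link
ring = np.zeros((6, 6))
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1
rewired = ring.copy()
rewired[0, 1] = rewired[1, 0] = 0          # remove a nearest-neighbor link
rewired[0, 3] = rewired[3, 0] = 1          # add a long-range shortcut
print(algebraic_connectivity(ring), algebraic_connectivity(rewired))
```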
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem onto a Krylov subspace, so that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all subsequent damping parameters. These computational techniques significantly improve the efficiency of our new inverse modeling algorithm. We apply the method to invert for a random transmissivity field, and the algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Our new inverse modeling method is therefore a powerful tool for large-scale applications.
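A plain dense Levenberg-Marquardt iteration is sketched below on a toy curve-fitting problem; the paper's contribution replaces the dense linear solve with a Krylov-subspace projection that is recycled across damping parameters, which this sketch does not attempt.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, n_iter=50):
    """Plain dense Levenberg-Marquardt: solve (J^T J + lam*I) step = -J^T r,
    accepting steps that reduce the residual norm and adapting lam."""
    x, lam = x0.copy(), 1e-3
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                     # reject step, increase damping
    return x

# toy nonlinear least squares: fit y = exp(a*t) + b
t_obs = np.linspace(0, 1, 20)
y_obs = np.exp(0.7 * t_obs) + 0.3
res = lambda p: np.exp(p[0] * t_obs) + p[1] - y_obs
jac = lambda p: np.column_stack([t_obs * np.exp(p[0] * t_obs), np.ones_like(t_obs)])
print(levenberg_marquardt(res, jac, np.array([0.0, 0.0])))   # approx [0.7, 0.3]
```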
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called interpolation. Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data and then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weights of the neighborhood points. Because it is based on a linear formulation of the best estimate, kriging is the optimal interpolation method in statistical terms. The kriging interpolation algorithm produces an unbiased prediction, as well as the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard kriging techniques. In this paper, improvements to a directional kriging implementation are introduced by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of memory required, which makes the technique feasible on almost any computer processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates on less dense data files.
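A minimal Python sketch of ordinary kriging at a single unsampled grid location; the exponential covariance model, its length scale, and the synthetic field values are assumptions for illustration, not the geoid or datum-shift grids discussed above.

```python
import numpy as np

def ordinary_kriging(coords, values, target, length_scale=2.0):
    """Ordinary kriging prediction at one target point with an exponential
    covariance model; on a regular grid the distances can be vectorized or
    precomputed instead of searched."""
    cov = lambda h: np.exp(-h / length_scale)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    n = len(values)
    A = np.ones((n + 1, n + 1))            # kriging system with Lagrange row/column
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]          # kriging weights (Lagrange term dropped)
    return w @ values

# neighbors on a regular 3x3 grid around the unknown point (1.5, 1.5)
xx, yy = np.meshgrid(np.arange(3.0), np.arange(3.0))
coords = np.column_stack([xx.ravel(), yy.ravel()])
vals = coords[:, 0] + 0.5 * coords[:, 1]   # synthetic field values
print(ordinary_kriging(coords, vals, np.array([1.5, 1.5])))
```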
NASA Astrophysics Data System (ADS)
Bianchi, Eugenio; De Lorenzo, Tommaso; Smerlak, Matteo
2015-06-01
We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole "exterior entropy" and "radiation entropy." For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the "black hole fireworks" model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that (i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, (ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the "purifying" phase, (iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
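A toy Python illustration of evaluating a Green's-function convolution over a regular grid of elements with an FFT instead of a double loop, which is the ingredient that lowers the cost of the finest-level interactions; the kernel, wavelength, and source block are synthetic placeholders rather than the paper's FMM bookkeeping.

```python
import numpy as np
from scipy.signal import fftconvolve

n, h, wavelength = 64, 0.01, 0.3
x = np.arange(-n + 1, n) * h                     # kernel support covering all offsets
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
r[r == 0] = h / 2                                # crude regularization of the self term
kernel = np.exp(1j * 2 * np.pi * r / wavelength) / (4 * np.pi * r)

source = np.zeros((n, n))
source[20:44, 20:44] = 1.0                       # a square block of scattering elements
field = h**2 * fftconvolve(source, kernel, mode="same")   # O(N log N) instead of O(N^2)
print(field.shape)                               # (64, 64) complex field samples
```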
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-01-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
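The Soft-Impute iteration described above (fill the missing entries with the current estimate, then soft-threshold the singular values of the completed matrix) can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation; in particular it uses a dense SVD rather than the low-rank, warm-started SVD that gives the method its scalability.

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=200, tol=1e-4):
    """Minimal Soft-Impute sketch: alternate between filling missing entries
    with the current low-rank estimate and soft-thresholding the SVD."""
    Z = np.where(mask, X, 0.0)                       # observed entries kept, missing start at 0
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt  # soft-thresholded SVD
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1e-12):
            return Z_new
        Z = Z_new
    return Z

# Warm starts over a decreasing grid of lam values trace out a regularization path.
```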
NASA Astrophysics Data System (ADS)
Schachtschneider, R.; Rother, M.; Lesur, V.
2013-12-01
We introduce a method that enables us to account for existing correlations between Gauss coefficients in core field modelling. The information about the correlations is obtained from a highly accurate field model based on CHAMP data, e.g. the GRIMM-3 model. We compute the covariance matrices of the geomagnetic field, the secular variation, and the acceleration up to degree 18 and use these in the regularization scheme of the core field inversion. For testing our method we followed two different approaches by applying it to two different synthetic satellite data sets. The first is a short data set with a time span of only three months. Here we test how the information about correlations helps to obtain an accurate model when only very little information is available. The second data set is a large one covering several years. In this case, besides reducing the residuals in general, we focus on the improvement of the model near the boundaries of the data set, where the acceleration is generally more difficult to handle. In both cases the obtained covariance matrices are included in the damping scheme of the regularization. That way, information from scales that could otherwise not be resolved by the data can be extracted. We show that by using this technique we are able to improve the models of the field and the secular variation for both the short and the long-term data sets, compared to approaches using more conventional regularization techniques.
NASA Astrophysics Data System (ADS)
Aono, Masashi; Gunji, Yukio-Pegio
2004-08-01
How can non-algorithmic/non-deterministic computational syntax be computed? "The hyperincursive system" introduced by Dubois is an anticipatory system embracing the contradiction/uncertainty. Although it may provide a novel viewpoint for the understanding of complex systems, conventional digital computers cannot run faithfully as the hyperincursive computational syntax specifies, in a strict sense. Then is it an imaginary story? In this paper we try to argue that it is not. We show that a model of complex systems "Elementary Conflictable Cellular Automata (ECCA)" proposed by Aono and Gunji is embracing the hyperincursivity and the nonlocality. ECCA is based on locality-only type settings basically as well as other CA models, and/but at the same time, each cell is required to refer to globality-dominant regularity. Due to this contradictory locality-globality loop, the time evolution equation specifies that the system reaches the deadlock/infinite-loop. However, we show that there is a possibility of the resolution of these problems if the computing system has parallel and/but non-distributed property like an amoeboid organism. This paper is an introduction to "the slime mold computing" that is an attempt to cultivate an unconventional notion of computation.
NASA Astrophysics Data System (ADS)
Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus
2018-04-01
This is a revised and updated version of a modern Fortran 90 code to compute the regular Plm(x) and irregular Qlm(x) associated Legendre functions for all x ∈ (−1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of some comments of Prof. James Bremer of the UC Davis Mathematics Department, who discovered that there were errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
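For modest degree and order, the same quantities can be cross-checked with SciPy's built-in routines. This is an independent check, not the paper's Fortran 90 code; double-precision routines like these are precisely what lose accuracy in the large-degree regime the revised code targets.

```python
import numpy as np
from scipy.special import lpmn, lqmn

x = 0.3                          # a point on the cut, -1 < x < 1
m_max, l_max = 2, 5
P, dP = lpmn(m_max, l_max, x)    # P[m, l] = P_l^m(x); dP holds the derivatives
Q, dQ = lqmn(m_max, l_max, x)    # Q[m, l] = Q_l^m(x)
print(P[2, 5], Q[2, 5])          # regular and irregular values for l = 5, m = 2
```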
29 CFR 541.701 - Customarily and regularly.
Code of Federal Regulations, 2012 CFR
2012-07-01
... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...
29 CFR 541.701 - Customarily and regularly.
Code of Federal Regulations, 2013 CFR
2013-07-01
... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...
29 CFR 541.701 - Customarily and regularly.
Code of Federal Regulations, 2014 CFR
2014-07-01
... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...
29 CFR 541.701 - Customarily and regularly.
Code of Federal Regulations, 2011 CFR
2011-07-01
... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...
Neural correlates of auditory scene analysis and perception
Cohen, Yale E.
2014-01-01
The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354
Application of real rock pore-throat statistics to a regular pore network model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakibul, M.; Sarker, H.; McIntyre, D.
2011-01-01
This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, ground water flow, etc. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but for the non-wetting phase the simulation results were found to deviate from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for regular core analysis, which can be time consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results. In applied engineering, a quick result with reasonable accuracy is sometimes preferable to more time-consuming methods. The present work is an effort to check the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.
Application of real rock pore-throat statistics to a regular pore network model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, M.R.; McIntyre, D.; Ferer, M.
2011-01-01
This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, ground water flow, etc. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but for the non-wetting phase the simulation results were found to deviate from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for regular core analysis, which can be time consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results. In applied engineering, a quick result with reasonable accuracy is sometimes preferable to more time-consuming methods. The present work is an effort to check the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.
Puleo, Elaine; Bennett, Gary G; Emmons, Karen M
2007-01-01
Background As advances in computer access continue to be made, there is a need to better understand the challenges of increasing access for racial/ethnic minorities, particularly among those with lower incomes. Larger social contextual factors, such as social networks and neighborhood factors, may influence computer ownership and the number of places where individuals have access to computers. Objectives We examined the associations of sociodemographic and social contextual factors with computer ownership and frequency of use among 1554 adults living in urban public housing. Methods Bivariate associations between dependent variables (computer ownership and regular computer use) and independent variables were used to build multivariable logistic models adjusted for age and site clusters. Results Participants (N = total weighted size of 2270) were on average 51.0 (± 21.4) years old, primarily African American or Hispanic, and earned less than US $20000 per year. More than half owned a computer, and 42% were regular computer users. Reporting computer ownership was more likely if participants lived above the poverty level (OR = 1.78, 95% CI = 1.39-2.29), completed high school (OR = 2.46, 95% CI = 1.70-3.55), were in financial hardship (OR = 1.38, 95% CI = 1.06-1.81), were employed and supervised others (OR = 1.94, 95% CI = 1.08-3.46), and had multiple role responsibilities (OR = 2.18, 95% CI = 1.31-3.61). Regular computer use was more likely if participants were non-Hispanic (OR = 1.94, 95% CI = 1.30-2.91), lived above the poverty level (OR = 2.84, 95% CI = 1.90-4.24), completed high school (OR = 4.43, 95% CI = 3.04-6.46), were employed and supervised others (OR = 2.41, 95% CI = 1.37-4.22), felt safe in their neighborhood (OR = 1.57, 95% CI = 1.08-2.30), and had greater social network ties (OR = 3.09, 95% CI = 1.26-7.59). Conclusions Disparities in computer ownership and use are narrowing, even among those with very low incomes; however, identifying factors that contribute to disparities in access for these groups will be necessary to ensure the efficacy of future technology-based interventions. A unique finding of our study is that it may be equally as important to consider specific social contextual factors when trying to increase access and use among low-income minorities, such as social network ties, household responsibilities, and neighborhood safety. PMID:18093903
Ullman, Michael T; Pancheva, Roumyana; Love, Tracy; Yee, Eiling; Swinney, David; Hickok, Gregory
2005-05-01
Are the linguistic forms that are memorized in the mental lexicon and those that are specified by the rules of grammar subserved by distinct neurocognitive systems or by a single computational system with relatively broad anatomic distribution? On a dual-system view, the productive -ed-suffixation of English regular past tense forms (e.g., look-looked) depends upon the mental grammar, whereas irregular forms (e.g., dig-dug) are retrieved from lexical memory. On a single-mechanism view, the computation of both past tense types depends on associative memory. Neurological double dissociations between regulars and irregulars strengthen the dual-system view. The computation of real and novel, regular and irregular past tense forms was investigated in 20 aphasic subjects. Aphasics with non-fluent agrammatic speech and left frontal lesions were consistently more impaired at the production, reading, and judgment of regular than irregular past tenses. Aphasics with fluent speech and word-finding difficulties, and with left temporal/temporo-parietal lesions, showed the opposite pattern. These patterns held even when measures of frequency, phonological complexity, articulatory difficulty, and other factors were held constant. The data support the view that the memorized words of the mental lexicon are subserved by a brain system involving left temporal/temporo-parietal structures, whereas aspects of the mental grammar, in particular the computation of regular morphological forms, are subserved by a distinct system involving left frontal structures.
NASA Astrophysics Data System (ADS)
Castellanos, Víctor; Castillo-Santos, Francisco Eduardo; Dela-Rosa, Miguel Angel; Loreto-Hernández, Iván
In this paper, we analyze the Hopf and Bautin bifurcation of a given system of differential equations, corresponding to a tritrophic food chain model with Holling functional response types III and IV for the predator and superpredator, respectively. We distinguish two cases, when the prey has linear or logistic growth. In both cases we guarantee the existence of a limit cycle bifurcating from an equilibrium point in the positive octant of ℝ3. In order to do so, for the Hopf bifurcation we compute explicitly the first Lyapunov coefficient, the transversality Hopf condition, and for the Bautin bifurcation we also compute the second Lyapunov coefficient and verify the regularity conditions.
On configurational forces for gradient-enhanced inelasticity
NASA Astrophysics Data System (ADS)
Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth
2018-04-01
In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both, the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian egeinvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
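A rough flavor of the update can be conveyed by a single step that (i) takes a regularized quasi-Newton direction from stochastic gradients and (ii) refreshes the curvature estimate with a modified gradient variation. The sketch below is illustrative only: the step size, the damping constants delta and gamma, and the safeguards are assumptions, not the paper's precise RES update.

```python
import numpy as np

def stochastic_bfgs_step(w, H, grad_fn, batch, lr=0.05, delta=1e-3, gamma=1e-3):
    """One illustrative regularized stochastic BFGS step (H approximates the
    inverse Hessian); grad_fn(w, batch) returns a stochastic gradient."""
    g = grad_fn(w, batch)
    w_new = w - lr * (H @ g + gamma * g)              # regularized descent direction
    g_new = grad_fn(w_new, batch)                     # same mini-batch for the curvature pair
    v = w_new - w
    r = (g_new - g) - delta * v                       # modified gradient variation
    rho = float(r @ v)
    if rho > 1e-12:                                   # keep the update well defined
        rho = 1.0 / rho
        I = np.eye(w.size)
        H = (I - rho * np.outer(v, r)) @ H @ (I - rho * np.outer(r, v)) + rho * np.outer(v, v)
    return w_new, H
```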
NASA Astrophysics Data System (ADS)
Geng, Weihua; Zhao, Shan
2017-12-01
We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintains the same second order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere on which the analytical solutions are available and on a series of proteins with various sizes.
Computational inverse methods of heat source in fatigue damage problems
NASA Astrophysics Data System (ADS)
Chen, Aizhou; Li, Yuan; Yan, Bo
2018-04-01
Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing the inverse method of heat source into parameter identification of the fatigue dissipation energy model is a new approach to calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat source solution methods for the fatigue process; it discusses the prospects of applying heat source inverse methods in the fatigue damage field and lays a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
Cawley, Gavin C; Talbot, Nicola L C
2006-10-01
Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffreys prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. BLogReg also demonstrates better estimates of conditional probability than the RVM, which are of great importance in medical applications, with similar computational expense. A MATLAB implementation of the sparse logistic regression algorithm with Bayesian regularization (BLogReg) is available from http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/
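For context, the model-selection burden that BLogReg removes is the cross-validated tuning of the l1 penalty in a conventional sparse logistic regression, as in the scikit-learn sketch below. The toy data and the use of scikit-learn are assumptions for illustration; the paper's MATLAB code is linked above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))                 # toy stand-in for a gene-expression matrix
y = (rng.random(60) > 0.5).astype(int)

# Grid search over the penalty strength via cross-validation -- the step BLogReg avoids.
clf = LogisticRegressionCV(Cs=20, penalty="l1", solver="liblinear", cv=5).fit(X, y)
print(int((np.abs(clf.coef_) > 1e-8).sum()), "genes selected")
```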
Flow in curved ducts of varying cross-section
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Patel, V. C.
1992-07-01
Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.
A study of modelling simplifications in ground vibration predictions for railway traffic at grade
NASA Astrophysics Data System (ADS)
Germonpré, M.; Degrande, G.; Lombaert, G.
2017-10-01
Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.
Feedback Enhances Feedforward Figure-Ground Segmentation by Changing Firing Mode
Supèr, Hans; Romeo, August
2011-01-01
In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogenous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons. PMID:21738747
Feedback enhances feedforward figure-ground segmentation by changing firing mode.
Supèr, Hans; Romeo, August
2011-01-01
In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogenous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.
Calculation of Rate Spectra from Noisy Time Series Data
Voelz, Vincent A.; Pande, Vijay S.
2011-01-01
As the resolution of experiments to measure folding kinetics continues to improve, it has become imperative to avoid bias that may come with fitting data to a predetermined mechanistic model. Towards this end, we present a rate spectrum approach to analyze timescales present in kinetic data. Computing rate spectra of noisy time series data via numerical discrete inverse Laplace transform is an ill-conditioned inverse problem, so a regularization procedure must be used to perform the calculation. Here, we show the results of different regularization procedures applied to noisy multi-exponential and stretched exponential time series, as well as data from time-resolved folding kinetics experiments. In each case, the rate spectrum method recapitulates the relevant distribution of timescales present in the data, with different priors on the rate amplitudes naturally corresponding to common biases toward simple phenomenological models. These results suggest an attractive alternative to the “Occam’s razor” philosophy of simply choosing models with the fewest number of relaxation rates. PMID:22095854
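The core computation is a regularized discrete inverse Laplace transform: expand the signal on a fixed grid of decay rates and solve a penalized least-squares problem for the amplitudes. The sketch below uses a simple ridge (Tikhonov) penalty as one illustrative regularization procedure; the rate grid and noise level are assumptions, not the paper's settings.

```python
import numpy as np

def rate_spectrum(t, y, rates, alpha=1e-2):
    """Amplitudes over a fixed grid of decay rates via ridge-regularized
    least squares (a discrete inverse Laplace transform)."""
    K = np.exp(-np.outer(t, rates))               # kernel: column j is exp(-rates[j] * t)
    A = K.T @ K + alpha * np.eye(rates.size)      # Tikhonov-regularized normal equations
    return np.linalg.solve(A, K.T @ y)

t = np.linspace(0.0, 10.0, 400)
y = 0.7 * np.exp(-0.5 * t) + 0.3 * np.exp(-3.0 * t) + 0.01 * np.random.randn(t.size)
amps = rate_spectrum(t, y, np.logspace(-2, 2, 100))   # peaks should appear near rates 0.5 and 3
```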
5 CFR 550.1305 - Treatment as basic pay.
Code of Federal Regulations, 2010 CFR
2010-01-01
... within the regular tour of duty, but outside the basic 40-hour workweek, is basic pay only for the... hours that are part of a firefighter's regular tour of duty (as computed under § 550.1303) and the... overtime pay for hours in a firefighter's regular tour of duty is derived by multiplying the applicable...
Verifying Three-Dimensional Skull Model Reconstruction Using Cranial Index of Symmetry
Kung, Woon-Man; Chen, Shuo-Tsung; Lin, Chung-Hsiang; Lu, Yu-Mei; Chen, Tzu-Hsuan; Lin, Muh-Shi
2013-01-01
Background Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implant in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using cranial index of symmetry (CIS). Materials and methods From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of skull defects were reconstructed using commercial software. These models were checked in terms of symmetry by CIS scores. Results CIS scores of CAD reconstructions were 99.24±0.004% (range 98.47–99.84). CIS scores of these CAD models were statistically significantly greater than 95%, identical to 99.5%, but lower than 99.6% (p<0.001, p = 0.064, p = 0.021 respectively, Wilcoxon matched pairs signed rank test). These data evidenced the highly accurate symmetry of these CAD models with regular contours. Conclusions CIS calculation is beneficial to assess aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation. PMID:24204566
Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.
He, A; Deepan, B; Quan, C
2017-09-01
A regularized phase tracker (RPT) is an effective method for demodulation of single closed-fringe patterns. However, lengthy calculation time, the need for a specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, first and second phase derivatives are pre-determined by the density-direction-combined method and the discrete higher-order demodulation algorithm, respectively. Hence, the cost function is effectively simplified to reduce the computation time significantly. Moreover, the pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specially designed scanning strategy is needed; nevertheless, the method is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison of the proposed method with existing RPT methods is carried out. The simulation results show that the proposed method achieves the highest accuracy with less computational time. The experimental results prove the robustness and the accuracy of the proposed method for demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.
Liu, Fang; Eugenio, Evercita C
2018-04-01
Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
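As a concrete instance of the likelihood-based route, a bare-bones beta regression with a logit mean link and constant precision can be fit with a generic optimizer, as sketched below. The toy data, starting values, and the omission of the zero/one inflation components are simplifying assumptions, not a substitute for the packages reviewed in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def neg_loglik(params, X, y):
    """Negative log-likelihood of a beta regression: logit link for the mean,
    log link for the precision phi."""
    beta, phi = params[:-1], np.exp(params[-1])
    mu = expit(X @ beta)
    a, b = mu * phi, (1.0 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1.0) * np.log(y) + (b - 1.0) * np.log(1.0 - y))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = np.clip(rng.beta(2.0, 5.0, size=200), 1e-6, 1 - 1e-6)   # toy outcome strictly in (0, 1)
fit = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), args=(X, y), method="BFGS")
print(fit.x)   # [intercept, slope, log(phi)]
```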
Ab initio calculation of excess properties of La1−x(Ln,An)xPO4 solid solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yan; JARA High-Performance Computing, Schinkelstrasse 2, 52062 Aachen; Kowalski, Piotr M., E-mail: p.kowalski@fz-juelich.de
2014-12-15
We used an ab initio computational approach to predict the excess enthalpy of mixing and the corresponding regular/subregular model parameters for La1−xLnxPO4 (Ln = Ce, …, Tb) and La1−xAnxPO4 (An = Pu, Am and Cm) monazite-type solid solutions. We found that the regular model interaction parameter W computed for La1−xLnxPO4 solid solutions matches the few existing experimental data. Within the lanthanide series W increases quadratically with the volume mismatch between the LaPO4 and LnPO4 endmembers (ΔV = V_LaPO4 − V_LnPO4), so that W (kJ/mol) = 0.618 (ΔV (cm^3/mol))^2. We demonstrate that this relationship also fits the interaction parameters computed for La1−xAnxPO4 solid solutions. This shows that lanthanides can be used as surrogates for investigation of the thermodynamic mixing properties of actinide-bearing solid solutions. - Highlights: • The excess enthalpies of mixing for monazite-type solid solutions are computed. • The excess enthalpies increase with the endmembers volume mismatch. • The relationship derived for lanthanides is transferable to La1−xAnxPO4 systems.
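The reported quadratic fit can be applied directly; for example, a hypothetical endmember volume mismatch of 2 cm^3/mol gives an interaction parameter of about 2.5 kJ/mol.

```python
def interaction_parameter(delta_v_cm3_per_mol):
    """Regular-model interaction parameter from the reported fit,
    W (kJ/mol) = 0.618 * (dV (cm^3/mol))**2."""
    return 0.618 * delta_v_cm3_per_mol ** 2

print(interaction_parameter(2.0))   # hypothetical dV = 2 cm^3/mol -> ~2.47 kJ/mol
```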
ERIC Educational Resources Information Center
Boudreau, Francois; Godin, Gaston; Poirier, Paul
2011-01-01
The promotion of regular physical activity for people with type 2 diabetes poses a challenge for public health authorities. The purpose of this study was to evaluate the efficiency of a computer-tailoring print-based intervention to promote the adoption of regular physical activity among people with type 2 diabetes. An experimental design was…
Tonelli, Paul; Mouret, Jean-Baptiste
2013-01-01
A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, networks that are the more regular are statistically those that have the best learning abilities. PMID:24236099
Duan, Jizhong; Liu, Yu; Jing, Peiguang
2018-02-01
Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Over Convex Sets (POCS) method was used to solve the formulated regularized SPIRiT problem. However, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, these methods always demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. Especially, our proposal is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
Accelerating Large Data Analysis By Exploiting Regularities
NASA Technical Reports Server (NTRS)
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
Minatel, Lurian; Verri, Fellippo Ramos; Kudo, Guilherme Abu Halawa; de Faria Almeida, Daniel Augusto; de Souza Batista, Victor Eduardo; Lemos, Cleidiel Aparecido Araujo; Pellizzer, Eduardo Piza; Santiago, Joel Ferreira
2017-02-01
A biomechanical analysis of different types of implant connections is relevant to clinical practice because it may impact the longevity of the rehabilitation treatment. Therefore, the objective of this study is to evaluate Morse taper connections and the stress distribution of structures associated with the platform switching (PSW) concept, obtaining data on the biomechanical behavior of the main structure in relation to the dental implant using the 3-dimensional finite element methodology. Four models were simulated (each containing a single prosthesis over the implant) in the molar region, with the following specifications: M1 and M2 are external hexagonal implants on a regular platform; M3 is an external hexagonal implant using the PSW concept; and M4 is a Morse taper implant. The modeling process involved the use of images from InVesalius CT (computed tomography) processing software, which were refined using Rhinoceros 4.0 and SolidWorks 2011 CAD software. The models were then exported into the finite element program (FEMAP 11.0) to configure the meshes. The models were processed using NeiNastram software. The main results are that M1 (regular diameter, 4 mm) had the highest stress concentration area and highest microstrain concentration for bone tissue, dental implants, and the retaining screw (P<0.05). Using the PSW concept increases the area of stress concentrations in the retaining screw (P<0.05) more than in the regular platform implant. It was concluded that the increase in diameter is beneficial for stress distribution and that the PSW concept had higher stress concentrations in the retaining screw and the crown compared to the regular platform implant. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, N.; Yue, X. Y.
2018-03-01
Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. As the water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume RDDF to be constant with depth and time or dependent on only depth for simplification. However, under field conditions, this function varies with type of soil and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of the Tikhonov regularization theory, adding additional constraint to the objective function. Then the formulated nonlinear optimization problem is numerically solved with an efficient algorithm on the basis of the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity in calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method features optimization of RDDF without any prior form, which is applicable to a more general root water uptake model. Numerical examples are performed to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
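In a linearized setting, a Tikhonov-regularized step has the familiar closed form shown below; the second-difference smoothing operator stands in for the additional constraint on the RDDF, and the forward operator A is assumed to come from a finite element discretization. This is a generic sketch, not the paper's variational implementation.

```python
import numpy as np

def tikhonov_step(A, d, alpha):
    """Solve min ||A q - d||^2 + alpha ||L q||^2 with L a discrete second
    derivative: a generic smoothness-regularized least-squares step."""
    n = A.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)            # second-difference smoothing operator
    return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ d)

A = np.random.randn(50, 30)                        # toy forward operator
d = A @ np.linspace(0.0, 1.0, 30) + 0.01 * np.random.randn(50)
q_hat = tikhonov_step(A, d, alpha=1.0)             # regularized estimate of the profile
```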
Coherence resonance in bursting neural networks
NASA Astrophysics Data System (ADS)
Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.
2015-10-01
Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.
On a full Bayesian inference for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability in mathematically accounting for experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
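The statistical summaries mentioned (credible intervals, mean, median, mode) are all read off posterior samples; a generic random-walk Metropolis sampler of the kind sketched below suffices for illustration. The Gaussian target and step size are placeholders, not the paper's sampler or likelihood.

```python
import numpy as np

def metropolis(log_post, x0, n_samples=5000, step=0.1, seed=0):
    """Generic random-walk Metropolis sampler returning posterior draws."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    out = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

samples = metropolis(lambda f: -0.5 * np.sum((f - 1.0) ** 2), np.zeros(3))
print(np.percentile(samples, [2.5, 50, 97.5], axis=0))   # credible intervals and median
```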
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
Yagi, T; Ohshima, S; Funahashi, Y
1997-09-01
A linear analogue network model is proposed to describe the neuronal circuit of the outer retina consisting of cones, horizontal cells, and bipolar cells. The model reflects previous physiological findings on the spatial response properties of these neurons to dim illumination and is expressed by physiological mechanisms, i.e., membrane conductances, gap-junctional conductances, and strengths of chemical synaptic interactions. Using the model, we characterized the spatial filtering properties of the bipolar cell receptive field with the standard regularization theory, in which the early vision problems are attributed to minimization of a cost function. The cost function accompanying the present characterization is derived from the linear analogue network model, and one can gain intuitive insights on how physiological mechanisms contribute to the spatial filtering properties of the bipolar cell receptive field. We also elucidated a quantitative relation between the Laplacian of Gaussian operator and the bipolar cell receptive field. From the computational point of view, the dopaminergic modulation of the gap-junctional conductance between horizontal cells is inferred to be a suitable neural adaptation mechanism for transition between photopic and mesopic vision.
A Behavioral Study of Regularity, Irregularity and Rules in the English Past Tense
ERIC Educational Resources Information Center
Magen, Harriet S.
2014-01-01
Opposing views of storage and processing of morphologically complex words (e.g., past tense) have been suggested: the dual system, whereby regular forms are not in the lexicon but are generated by rule, while irregular forms are explicitly represented; the single system, whereby regular and irregular forms are computed by a single system, using…
Orbital theory in terms of KS elements with luni-solar perturbations
NASA Astrophysics Data System (ADS)
Sellamuthu, Harishkumar; Sharma, Ram
2016-07-01
Precise orbit computation of Earth orbiting satellites is essential for efficient mission planning of planetary exploration, navigation and satellite geodesy. The third-body perturbations of the Sun and the Moon predominantly affect the satellite motion in high altitude and elliptical orbits, where the effect of atmospheric drag is negligible. The physics of the luni-solar gravity effect on Earth satellites has been studied extensively over the years. The combined luni-solar gravitational attraction induces a cumulative effect on the dynamics of satellite orbits, which mainly oscillates the perigee altitude. Though accurate orbital parameters are computed by numerical integration with respect to complex force models, analytical theories are highly valued for the manifold of solutions restricted to relatively simple force models. During close approach, the classical equations of motion in celestial mechanics are almost singular and are unstable for long-term orbit propagation. A new singularity-free analytical theory in terms of KS (Kustaanheimo and Stiefel) regular elements with respect to the luni-solar perturbation is developed. These equations are regular everywhere, and the eccentric anomaly is the independent variable. The Plataforma Solar de Almería (PSA) algorithm and a Fourier series algorithm are used to compute the accurate positions of the Sun and the Moon, respectively. Numerical studies are carried out for a wide range of initial parameters, and the analytical solutions are found to be satisfactory when compared with numerically integrated values. The symmetrical nature of the equations allows only two of the nine equations to be solved for computing the state vectors and the time. Only a change in the initial conditions is required to solve the other equations. This theory will find multiple applications, including on-board software packages and mission analysis purposes.
Santaniello, Sabato; McCarthy, Michelle M; Montgomery, Erwin B; Gale, John T; Kopell, Nancy; Sarma, Sridevi V
2015-02-10
High-frequency deep brain stimulation (HFS) is clinically recognized to treat parkinsonian movement disorders, but its mechanisms remain elusive. Current hypotheses suggest that the therapeutic merit of HFS stems from increasing the regularity of the firing patterns in the basal ganglia (BG). Although this is consistent with experiments in humans and animal models of Parkinsonism, it is unclear how the pattern regularization would originate from HFS. To address this question, we built a computational model of the cortico-BG-thalamo-cortical loop in normal and parkinsonian conditions. We simulated the effects of subthalamic deep brain stimulation both proximally to the stimulation site and distally through orthodromic and antidromic mechanisms for several stimulation frequencies (20-180 Hz) and, correspondingly, we studied the evolution of the firing patterns in the loop. The model closely reproduced experimental evidence for each structure in the loop and showed that neither the proximal effects nor the distal effects individually account for the observed pattern changes, whereas the combined impact of these effects increases with the stimulation frequency and becomes significant for HFS. Perturbations evoked proximally and distally propagate along the loop, rendezvous in the striatum, and, for HFS, positively overlap (reinforcement), thus causing larger poststimulus activation and more regular patterns in striatum. Reinforcement is maximal for the clinically relevant 130-Hz stimulation and restores a more normal activity in the nuclei downstream. These results suggest that reinforcement may be pivotal to achieve pattern regularization and restore the neural activity in the nuclei downstream and may stem from frequency-selective resonant properties of the loop.
Santaniello, Sabato; McCarthy, Michelle M.; Montgomery, Erwin B.; Gale, John T.; Kopell, Nancy; Sarma, Sridevi V.
2015-01-01
High-frequency deep brain stimulation (HFS) is clinically recognized to treat parkinsonian movement disorders, but its mechanisms remain elusive. Current hypotheses suggest that the therapeutic merit of HFS stems from increasing the regularity of the firing patterns in the basal ganglia (BG). Although this is consistent with experiments in humans and animal models of Parkinsonism, it is unclear how the pattern regularization would originate from HFS. To address this question, we built a computational model of the cortico-BG-thalamo-cortical loop in normal and parkinsonian conditions. We simulated the effects of subthalamic deep brain stimulation both proximally to the stimulation site and distally through orthodromic and antidromic mechanisms for several stimulation frequencies (20–180 Hz) and, correspondingly, we studied the evolution of the firing patterns in the loop. The model closely reproduced experimental evidence for each structure in the loop and showed that neither the proximal effects nor the distal effects individually account for the observed pattern changes, whereas the combined impact of these effects increases with the stimulation frequency and becomes significant for HFS. Perturbations evoked proximally and distally propagate along the loop, rendezvous in the striatum, and, for HFS, positively overlap (reinforcement), thus causing larger poststimulus activation and more regular patterns in striatum. Reinforcement is maximal for the clinically relevant 130-Hz stimulation and restores a more normal activity in the nuclei downstream. These results suggest that reinforcement may be pivotal to achieve pattern regularization and restore the neural activity in the nuclei downstream and may stem from frequency-selective resonant properties of the loop. PMID:25624501
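Pattern regularity of the kind discussed above is commonly quantified by the coefficient of variation (CV) of interspike intervals; the snippet below is a generic illustration of that metric and is not claimed to be the specific measure used in the model.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals; lower CV means more regular firing."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return float(np.std(isi) / np.mean(isi))

print(isi_cv([0.0, 7.7, 15.4, 23.1, 30.8]))  # perfectly periodic spikes -> 0.0
```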
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions via the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be computed easily and quickly through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data studies are evaluated qualitatively and quantitatively to validate the accuracy, efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
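As an illustration of the generalized p-shrinkage mapping mentioned above, here is a minimal Chartrand-style form; the exact operator, its parameterization and the small epsilon guard are assumptions, and the authors' implementation may differ.

```python
import numpy as np

def p_shrinkage(x, lam, p, eps=1e-12):
    """Generalized p-shrinkage: reduces to ordinary soft thresholding at p = 1,
    and shrinks small coefficients more aggressively for p < 1 (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(np.abs(x) - lam**(2.0 - p) * (np.abs(x) + eps)**(p - 1.0), 0.0)
    return np.sign(x) * mag

print(p_shrinkage([-2.0, -0.3, 0.0, 0.3, 2.0], lam=0.5, p=0.5))
```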
Mesoscopic Effects in an Agent-Based Bargaining Model in Regular Lattices
Poza, David J.; Santos, José I.; Galán, José M.; López-Paredes, Adolfo
2011-01-01
The effect of spatial structure has proved highly relevant in repeated games. In this work we propose an agent-based model in which a fixed finite population of tagged agents iteratively plays the Nash demand game on a regular lattice. The model extends the multiagent bargaining model by Axtell, Epstein and Young [1] by modifying the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the persistent regimes possible in the global interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause communities to reach different persistent regimes as a consequence of the diffusion-hindering effect of fluctuating agents at their borders. PMID:21408019
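A minimal sketch of the memory-based best-reply rule described above, for a Nash demand game with three demand levels; the demand values, the pie size of 100 and the tie-breaking are illustrative assumptions, not the paper's exact payoff structure.

```python
import random
from collections import Counter

DEMANDS = (30, 50, 70)  # low, medium, high shares of a pie of 100 (illustrative values)

def best_reply(opponent_memory):
    """Best reply against the opponent's most frequently remembered demand:
    the largest own demand that still keeps the joint demand within the pie."""
    if not opponent_memory:
        return random.choice(DEMANDS)
    expected = Counter(opponent_memory).most_common(1)[0][0]
    feasible = [d for d in DEMANDS if d + expected <= 100]
    return max(feasible) if feasible else min(DEMANDS)

print(best_reply([70, 70, 50]))  # most frequent opponent demand is 70 -> reply 30
```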
Deep learning for computational chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields now regularly eschew prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of non-neural-network state-of-the-art models across disparate research topics, and deep neural network based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.
Constructing a logical, regular axis topology from an irregular topology
Faraj, Daniel A.
2014-07-22
Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.
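Read loosely, the per-dimension procedure resembles a depth-first extension of a path through nearest-neighbor lists. The sketch below is an illustrative reading of the steps above; the function names and the toy topology are assumptions, not the patented implementation.

```python
def build_logical_line(first_node, nearest_neighbors):
    """Build a logical line for one axial dimension: each newly added node tries to
    extend the line with a not-yet-used neighbor from its nearest-neighbor list;
    when it cannot, control returns to the node that called it."""
    line = [first_node]

    def extend(node):
        for nbr in nearest_neighbors[node]:
            if nbr not in line:
                line.append(nbr)   # add the neighbor to the logical line
                extend(nbr)        # "call" the added compute node
                return
        # no neighbor available: return to the calling compute node

    extend(first_node)
    return line

# toy irregular topology: node -> nearest-neighbor list
topology = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(build_logical_line(0, topology))  # e.g. [0, 1, 2, 3]
```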
2016-11-22
structure of the graph, we replace the ℓ1-norm by the nonconvex Capped-ℓ1 norm, and obtain the Generalized Capped-ℓ1 regularized logistic regression... X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301... better approximations of the ℓ0-norm theoretically and computationally beyond the ℓ1-norm, for example, in compressive sensing (Xiao et al., 2011).
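For reference, the capped-ℓ1 penalty that this fragment refers to has the simple elementwise form below; this is a generic sketch, and the graph-structured "Generalized" variant (which groups weights according to the graph) is not shown.

```python
import numpy as np

def capped_l1(w, theta):
    """Capped-l1 penalty sum_i min(|w_i|, theta): acts like l1 on small weights but
    stops growing beyond the cap, giving a tighter surrogate of the l0 "norm"."""
    return float(np.sum(np.minimum(np.abs(w), theta)))

print(capped_l1([-3.0, 0.2, 0.0, 5.0], theta=1.0))  # 1 + 0.2 + 0 + 1 = 2.2
```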
Computer Games as Instructional Tools.
ERIC Educational Resources Information Center
Bright, George W.; Harvey, John G.
1984-01-01
Defines games, instructional games, and computer instructional games; discusses several unique capabilities that facilitate game playing and may make computer games more attractive to students than noncomputer alternatives; and examines the potential disadvantages of using instructional computer games on a regular basis. (MBR)
When benefits outweigh costs: reconsidering "automatic" phonological recoding when reading aloud.
Robidoux, Serje; Besner, Derek
2011-06-01
Skilled readers are slower to read aloud exception words (e.g., PINT) than regular words (e.g., MINT). In the case of exception words, sublexical knowledge competes with the correct pronunciation driven by lexical knowledge, whereas no such competition occurs for regular words. The dominant view is that the cost of this "regularity" effect is evidence that sublexical spelling-sound conversion is impossible to prevent (i.e., is "automatic"). This view has become so reified that the field rarely questions it. However, the results of simulations from the most successful computational models on the table suggest that the claim of "automatic" sublexical phonological recoding is premature given that there is also a benefit conferred by sublexical processing. Taken together with evidence from skilled readers that sublexical phonological recoding can be stopped, we suggest that the field is too narrowly focused when it asserts that sublexical phonological recoding is "automatic" and that a broader, more nuanced and contextually driven approach provides a more useful framework.
Stability of Tumor Growth Under Immunotherapy: A Computational Study
NASA Astrophysics Data System (ADS)
Singh, Sandeep; Sharma, Prabha; Singh, Phool
We present a mathematical model to study the growth of a solid tumor in the presence of regular doses of lymphocytes. We further extend it to account for the periodic behavior of the lymphocytes, which are used to stimulate the immune system. A cell carrying capacity is specified, and a cell kill rate under immunotherapy is used to capture how different metabolisms react to the treatment. We analyze our model with respect to its stability and its sensitivity to the various parameters used.
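Purely as an illustration of the ingredients named above (a carrying capacity, a kill rate under immunotherapy, and periodically dosed lymphocytes), one could write a logistic growth law with a kill term; the equation form, parameter names and numbers below are assumptions, not the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K, k = 0.18, 1e9, 2.5e-10   # growth rate, carrying capacity, kill rate (illustrative)

def lymphocytes(t, L0=1e7, period=7.0):
    """Periodically dosed lymphocyte level (illustrative waveform)."""
    return L0 * (1.0 + 0.5 * np.cos(2.0 * np.pi * t / period))

def rhs(t, y):
    T = y[0]  # tumour cell count
    return [r * T * (1.0 - T / K) - k * lymphocytes(t) * T]

sol = solve_ivp(rhs, (0.0, 120.0), [1e6], max_step=0.1)
print(f"tumour cells after 120 days: {sol.y[0, -1]:.3e}")
```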
Space - A unique environment for process modeling R&D
NASA Technical Reports Server (NTRS)
Overfelt, Tony
1991-01-01
Process modeling, the application of advanced computational techniques to simulate real processes as they occur in regular use, e.g., welding, casting and semiconductor crystal growth, is discussed. Using the low-gravity environment of space will accelerate the technical validation of the procedures and enable extremely accurate determinations of the many necessary thermophysical properties. Attention is given to NASA's centers for the commercial development of space: joint ventures of universities, industries, and government agencies to study the unique attributes of space that offer potential for applied R&D and eventual commercial exploitation.
Analysis models for the estimation of oceanic fields
NASA Technical Reports Server (NTRS)
Carter, E. F.; Robinson, A. R.
1987-01-01
A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include the analysis, that is, the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimation of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.
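The core of such a statistically optimal (Gauss-Markov) estimate is the familiar optimal-interpolation update; the sketch below shows only the textbook form and omits the paper's treatment of anisotropy and space-time correlation modelling.

```python
import numpy as np

def oi_update(xb, B, H, R, y):
    """Gauss-Markov / optimal-interpolation analysis:
    xb: background (prior) field, B: background error covariance,
    H: observation operator, R: observation error covariance, y: observations."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain weighting data against the prior
    return xb + K @ (y - H @ xb)
```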
The Volume of the Regular Octahedron
ERIC Educational Resources Information Center
Trigg, Charles W.
1974-01-01
Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
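For reference, one standard derivation (not necessarily among the article's five) splits the regular octahedron of edge length $a$ into two square pyramids with base area $a^2$ and height $a/\sqrt{2}$, giving

$$V = 2\cdot\frac{1}{3}\,a^{2}\cdot\frac{a}{\sqrt{2}} = \frac{\sqrt{2}}{3}\,a^{3} \approx 0.4714\,a^{3}.$$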
Spark formation as a moving boundary process
NASA Astrophysics Data System (ADS)
Ebert, Ute
2006-03-01
The growth process of spark channels has recently become accessible through complementary methods. First, I will review experiments with nanosecond photographic resolution and with fast and well-defined power supplies that appropriately resolve the dynamics of electric breakdown [1]. Second, I will discuss the elementary physical processes as well as present computations of spark growth and branching with adaptive grid refinement [2]. These computations resolve three well separated scales of the process that emerge dynamically. Third, this scale separation motivates a hierarchy of models on different length scales. In particular, I will discuss a moving boundary approximation for the ionization fronts that generate the conducting channel. The resulting moving boundary problem shows strong similarities with classical viscous fingering. For viscous fingering, it is known that the simplest model forms unphysical cusps within finite time that are suppressed by a regularizing condition on the moving boundary. For ionization fronts, we derive a new condition on the moving boundary of mixed Dirichlet-Neumann type (φ = ε ∂φ/∂n) that indeed regularizes all structures investigated so far. In particular, we present compact analytical solutions with regularization, both for uniformly translating shapes and for their linear perturbations [3]. These solutions are so simple that they may acquire a paradigmatic role in the future. Within linear perturbation theory, they explicitly show the convective stabilization of a curved front while planar fronts are linearly unstable against perturbations of arbitrary wavelength. [1] T.M.P. Briels, E.M. van Veldhuizen, U. Ebert, TU Eindhoven. [2] C. Montijn, J. Wackers, W. Hundsdorfer, U. Ebert, CWI Amsterdam. [3] B. Meulenbroek, U. Ebert, L. Schäfer, Phys. Rev. Lett. 95, 195004 (2005).
NASA Astrophysics Data System (ADS)
Loris, Ignace; Simons, Frederik J.; Daubechies, Ingrid; Nolet, Guust; Fornasier, Massimo; Vetter, Philip; Judd, Stephen; Voronin, Sergey; Vonesch, Cédric; Charléty, Jean
2010-05-01
Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows for the models to be represented with an effective spatial resolution that varies as a function of position within the Earth. (2) This property can be used to great advantage in the regularization of seismic inversion schemes by seeking the most sparse solution vector, in wavelet space, through iterative minimization of a combination of the ℓ2 (to fit the data) and ℓ1 norms (to promote sparsity in wavelet space). (3) With the continuing increase in high-quality seismic data, our focus is also on numerical efficiency and the ability to use parallel computing in reconstructing the model. In this presentation we propose a new wavelet basis to take advantage of these three properties. To form the numerical grid we begin with a surface tessellation known as the 'cubed sphere', a construction popular in fluid dynamics and computational seismology, coupled with a semi-regular radial subdivision that honors the major seismic discontinuities between the core-mantle boundary and the surface. This mapping first divides the volume of the mantle into six portions. In each 'chunk' two angular and one radial variable are used for parametrization. In the new variables standard 'Cartesian' algorithms can more easily be used to perform the wavelet transform (or other common transforms). Edges between chunks are handled by special boundary filters. We highlight the benefits of this construction and use it to analyze the information present in several published seismic compressional-wavespeed models of the mantle, paying special attention to the statistics of wavelet and scaling coefficients across scales. We also focus on the likely gains of future inversions of finite-frequency seismic data using a sparsity-promoting penalty in combination with our new wavelet approach.
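The ℓ2-plus-ℓ1 minimization described in goal (2) is typically attacked with iterative soft thresholding; the sketch below is a generic ISTA loop for a linear forward operator and is not the authors' solver.

```python
import numpy as np

def ista(A, d, lam, n_iter=200):
    """Minimise 0.5*||A m - d||_2^2 + lam*||m||_1 over wavelet coefficients m."""
    m = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = m - step * A.T @ (A @ m - d)                            # gradient step on the l2 misfit
        m = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)    # soft threshold promotes sparsity
    return m
```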
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, G; Xing, L
2016-06-15
Purpose: Cone beam X-ray luminescence computed tomography (CB-XLCT), which aims to achieve molecular and functional imaging with X-rays, has recently been proposed as a new imaging modality. However, the inverse problem of CB-XLCT is seriously ill-conditioned, which hinders good image quality. In this work, a novel reconstruction method based on Bayesian theory is proposed to tackle this problem. Methods: Bayesian theory provides a natural framework for utilizing various kinds of available prior information to improve reconstruction quality. A generalized Gaussian Markov random field (GGMRF) model is proposed here to construct the prior model within the Bayesian framework. The most important feature of the GGMRF model is the adjustable shape parameter p, which can be varied continuously from 1 to 2. The reconstructed image tends to be more edge-preserving as p approaches 1 and more noise-tolerant as p approaches 2, mirroring the behavior of L1 and L2 regularization methods, respectively. The proposed method thus provides a flexible regularization framework that adapts to a wide range of applications. Results: Numerical simulations were implemented to test the performance of the proposed method. The Digimouse atlas was employed to construct a three-dimensional mouse model, and two small cylinders were placed inside to serve as the targets. Reconstruction results show that the proposed method tends to obtain better spatial resolution with a smaller shape parameter and a better signal-to-noise ratio with a larger shape parameter. Quantitative indexes, the contrast-to-noise ratio (CNR) and the full-width at half-maximum (FWHM), were used to assess the performance of the proposed method and confirmed its effectiveness in CB-XLCT reconstruction. Conclusion: A novel reconstruction method for CB-XLCT is proposed based on the GGMRF model, which enables an adjustable performance tradeoff between L1 and L2 regularization. Numerical simulations were conducted to demonstrate its performance.
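The GGMRF prior referred to above penalizes differences between neighbouring voxels raised to an adjustable power p in [1, 2]; the one-dimensional sketch below is only meant to show that behaviour, not the authors' 3D implementation.

```python
import numpy as np

def ggmrf_penalty(x, p, b=1.0):
    """Generalized Gaussian MRF roughness penalty on a 1-D signal:
    b * sum_j |x_{j+1} - x_j|**p, edge-preserving as p -> 1, smoothing as p -> 2."""
    return float(b * np.sum(np.abs(np.diff(np.asarray(x, dtype=float))) ** p))

sharp, ramp = [0, 0, 2, 2], [0, 1, 2, 2]
for p in (1.1, 2.0):
    print(p, ggmrf_penalty(sharp, p), ggmrf_penalty(ramp, p))
# near p = 1 the sharp edge costs about the same as the gradual ramp (edge preserving);
# at p = 2 the sharp edge costs twice as much (smoothing).
```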
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
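For readers who want the arithmetic, the OLP point estimates have a simple closed form (slope = sign(r) times SD(y)/SD(x), with the line passing through the means); the sketch below gives only those point estimates and omits the bootstrap 95% CIs recommended above. The function name is illustrative.

```python
import numpy as np

def olp_regression(x, y):
    """Ordinary least products (reduced major axis) regression point estimates."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept
```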
29 CFR 548.500 - Methods of computation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Computation of Overtime Pay § 548.500 Methods of computation. The methods of computing overtime pay on the basic rates for piece... pay at the regular rate. Example 1. Under an employment agreement the basic rate to be used in...
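Purely as a generic illustration of overtime arithmetic (the familiar time-and-a-half rule for hours over 40), and not the specific basic-rate provisions of § 548.500:

```python
def weekly_pay(hours, rate):
    """Generic illustration: straight time up to 40 hours, 1.5x the rate beyond that."""
    straight = min(hours, 40.0) * rate
    overtime = max(hours - 40.0, 0.0) * 1.5 * rate
    return straight + overtime

print(weekly_pay(46, 18.00))  # 40*18 + 6*27 = 882.0
```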
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and obtain a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
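The building block behind such trace (nuclear) norm minimization is singular-value soft-thresholding, the proximal operator of the trace norm; the generic sketch below is illustrative and, in TNCP, would be applied to the much smaller factor matrices rather than to full tensor unfoldings.

```python
import numpy as np

def trace_norm_prox(X, tau):
    """Proximal operator of tau*||X||_*: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```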
Failure Analysis in Platelet Molded Composite Systems
NASA Astrophysics Data System (ADS)
Kravchenko, Sergii G.
Long-fiber discontinuous composite systems in the form of chopped prepreg tapes provide an advanced, structural grade, molding compound allowing for fabrication of complex three-dimensional components. Understanding of the process-structure-property relationship is essential for the application of prepreg platelet molded components, especially because of their possible irregular, disordered, heterogeneous morphology. Herein, a structure-property relationship was analyzed in composite systems of many platelets. Regular and irregular morphologies were considered. Platelet-based systems with a more ordered morphology possess superior mechanical performance. While regular morphologies allow for a careful inspection of failure mechanisms derived from the morphological characteristics, irregular morphologies are representative of the composite architectures resulting from uncontrolled deposition and molding with chopped prepregs. Progressive failure analysis (PFA) was used to study the deformation with damage up to ultimate failure in a platelet-based composite system. Computational damage mechanics approaches were utilized to conduct the PFA. The developed computational models provided understanding of how the composite structure details, meaning the platelet geometry and system morphology (geometrical arrangement and orientation distribution of platelets), define the effective mechanical properties of a platelet-molded composite system: its stiffness, strength and variability in properties.
NASA Technical Reports Server (NTRS)
Homemdemello, Luiz S.
1992-01-01
An assembly planner for tetrahedral truss structures is presented. To overcome the difficulties due to the large number of parts, the planner exploits the simplicity and uniformity of the shapes of the parts and the regularity of their interconnection. The planning automation is based on the computational formalism known as a production system. The global database consists of a hexagonal grid representation of the truss structure. This representation captures the regularity of tetrahedral truss structures and their multiple hierarchies. It maps into quadratic grids and can be implemented in a computer by using a two-dimensional array data structure. By maintaining the multiple hierarchies explicitly in the model, the choice of a particular hierarchy is only made when needed, thus allowing a more informed decision. Furthermore, testing the preconditions of the production rules is simple because the patterned way in which the struts are interconnected is incorporated into the topology of the hexagonal grid. A directed graph representation of assembly sequences allows the use of both graph search and backtracking control strategies.
GPU-accelerated element-free reverse-time migration with Gauss points partition
NASA Astrophysics Data System (ADS)
Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong
2018-06-01
An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
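For concreteness, the compressed sparse row (CSR) storage mentioned above keeps only the non-zero values, their column indices, and per-row pointers; the toy example below (using SciPy for illustration, not the paper's GPU code) shows the three arrays.

```python
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[4., 0., 0.],
              [0., 0., 2.],
              [1., 0., 3.]])
S = csr_matrix(A)
print(S.data)     # [4. 2. 1. 3.]   non-zero values, row by row
print(S.indices)  # [0 2 0 2]       their column indices
print(S.indptr)   # [0 1 2 4]       where each row starts in `data`
```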
NASA Astrophysics Data System (ADS)
Billard, Aude
2000-10-01
This paper summarizes a number of experiments in biologically inspired robotics. The common feature to all experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers, and for modeling neurological processes. I present 1) DRAMA, a novel connectionist architecture, which has general property for learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot, which imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots learn dynamically the topography of an environment whose features change frequently; 4) an abstract, computational model of primate ability to learn by imitation; 5) a model for the control of locomotor gaits in a quadruped legged robot.
Computational modeling and prototyping of a pediatric airway management instrument.
Gonzalez-Cota, Alan; Kruger, Grant H; Raghavan, Padmaja; Reynolds, Paul I
2010-09-01
Anterior retraction of the tongue is used to enhance upper airway patency during pediatric fiberoptic intubation. This can be achieved by the use of Magill forceps as a tongue retractor, but lingual grip can become unsteady and traumatic. Our objective was to modify this instrument using computer-aided engineering for the purpose of stable tongue retraction. We analyzed the geometry and mechanical properties of standard Magill forceps with a combination of analytical and empirical methods. This design was captured using computer-aided design techniques to obtain a 3-dimensional model allowing further geometric refinements and mathematical testing for rapid prototyping. On the basis of our experimental findings we adjusted the design constraints to optimize the device for tongue retraction. Stereolithography prototyping was used to create a partially functional plastic model to further assess the functional and ergonomic effectiveness of the design changes. To reduce pressure on the tongue by regular Magill forceps, we incorporated (1) a larger diameter tip for better lingual tissue pressure profile, (2) a ratchet to stabilize such pressure, and (3) a soft molded tip with roughened surface to improve grip. Computer-aided engineering can be used to redesign and prototype a popular instrument used in airway management. On a computational model, our modified Magill forceps demonstrated stable retraction forces, while maintaining the original geometry and versatility. Its application in humans and utility during pediatric fiberoptic intubation are yet to be studied.
NASA Astrophysics Data System (ADS)
Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.
2017-10-01
We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP for a synthetic test case. Results show that for sub-critical flow conditions, discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin to not only test the framework thoroughly, but also to perform a first-ever benchmark of flexible and regular grids on a large-scale. Both DFM and LFP produce comparable results in terms of simulated discharge with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82 compared to 0.76 for DFM. However, benchmarking inundation extent between DFM and LFP over the entire study area, a critical success index of 0.46 was obtained, indicating that the models disagree as often as they agree. Differences between models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent as we control for model forcing and boundary conditions. This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed as open access and easily extendable, and thus we hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, more locally relevant processes would be captured and more robust model inter-comparison, benchmarking, and ensemble simulations of flood hazard on a large scale would be allowed for.
Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan
2005-08-01
This paper addresses an important issue raised for the clinical relevance of Computer-Assisted Surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform related to the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computer tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn
Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.
Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A
2018-04-01
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provides plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rule out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
Modeling the Cerebellar Microcircuit: New Strategies for a Long-Standing Issue
D’Angelo, Egidio; Antonietti, Alberto; Casali, Stefano; Casellato, Claudia; Garrido, Jesus A.; Luque, Niceto Rafael; Mapelli, Lisa; Masoli, Stefano; Pedrocchi, Alessandra; Prestori, Francesca; Rizza, Martina Francesca; Ros, Eduardo
2016-01-01
The cerebellar microcircuit has been the work bench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement in anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies, so as to include the multitude of elementary aspects of the network into an integrated and easily updatable computational framework. Recently, biophysically accurate “realistic” models using a bottom-up strategy accounted for both detailed connectivity and neuronal non-linear membrane dynamics. In this perspective review, we will consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we will consider how embodied neurorobotic models including spiking cerebellar networks could help explaining the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closed-loop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems. PMID:27458345
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed, and in full generality, a nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function - Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
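In its simplest form, the Lq-Lp cost referred to above combines an Lq data-discrepancy term with an Lp regularization term; the sketch below shows only that generic objective, with the forward model and weighting left out, and the normalization by q and p is an assumption.

```python
import numpy as np

def lq_lp_objective(residual, x, q, p, lam):
    """Generic F(x) = (1/q)*||r||_q^q + lam*(1/p)*||x||_p^p for a precomputed residual r."""
    residual, x = np.asarray(residual, dtype=float), np.asarray(x, dtype=float)
    return np.sum(np.abs(residual) ** q) / q + lam * np.sum(np.abs(x) ** p) / p
```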
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
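For a damped (Tikhonov) least-squares solution the two quantities being traded off have textbook closed forms; the sketch below computes them for a given regularization parameter and is only a generic illustration of the trade-off, not the authors' singular-value selection procedure.

```python
import numpy as np

def resolution_and_covariance(G, reg):
    """Model resolution R and unit covariance C for m = (G^T G + reg^2 I)^{-1} G^T d,
    assuming uncorrelated, unit-variance data errors."""
    Ginv = np.linalg.inv(G.T @ G + reg**2 * np.eye(G.shape[1])) @ G.T  # generalized inverse
    R = Ginv @ G        # identity would mean perfectly resolved model parameters
    C = Ginv @ Ginv.T   # grows as reg -> 0, shrinks as reg increases
    return R, C
```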
NASA Astrophysics Data System (ADS)
Yaparova, N.
2017-10-01
We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material, such as specific heat, thermal conductivity and material density, depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface inside the cylinder, but it is impossible to measure the temperature on the axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and technical diagnostics of operating equipment. The mathematical model of heating is represented as a nonlinear parabolic PDE with an unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose a numerical method based on finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and the error level of the measurement results. To obtain experimental temperature error estimates, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.
The Computational Infrastructure for Geodynamics as a Community of Practice
NASA Astrophysics Data System (ADS)
Hwang, L.; Kellogg, L. H.
2016-12-01
Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically-sound software is impossible to sustain, duplicates effort, and makes it difficult for scientists to adopt state-of-the art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in Figure Modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and the spline evaluates the required number of surface points so that, when displayed on a computer screen, the result is a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures such as human and animal bodies, whose complex structure cannot be defined on a regular rectangular grid. Surface subdivision methods, by contrast, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The main cost of software that reads control points and evaluates the surface is its run-time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs implementing subdivision surfaces have been developed; however, few algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark scheme, the most popular of the subdivision methods, is employed to illustrate the algorithm.
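For orientation, one refinement step of the standard Catmull-Clark averaging rules can be written as below; this generic sketch shows only the three point rules and none of the mesh data structures whose efficient handling is the subject of the paper.

```python
import numpy as np

def face_point(face_vertex_positions):
    """New face point: average of the face's vertices."""
    return np.mean(face_vertex_positions, axis=0)

def edge_point(v0, v1, fp0, fp1):
    """New edge point: average of the edge endpoints and the two adjacent face points."""
    return (v0 + v1 + fp0 + fp1) / 4.0

def vertex_point(S, Q, R, n):
    """Repositioned old vertex: (Q + 2R + (n - 3)S) / n, where Q is the average of the
    n adjacent face points, R the average of the n adjacent edge midpoints,
    S the old position and n the vertex valence."""
    return (Q + 2.0 * R + (n - 3.0) * S) / n
```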
Folding Proteins at 500 ns/hour with Work Queue
Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.
2014-01-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799
ERIC Educational Resources Information Center
Bailey, Suzanne Powers; Jeffers, Marcia
Eighteen interrelated, sequential lesson plans and supporting materials for teaching computer literacy at the elementary and secondary levels are presented. The activities, intended to be infused into the regular curriculum, do not require the use of a computer. The introduction presents background information on computer literacy, suggests a…
Computational Investigation of Effects of Grain Size on Ballistic Performance of Copper
NASA Astrophysics Data System (ADS)
He, Ge; Dou, Yangqing; Guo, Xiang; Liu, Yucheng
2018-01-01
Numerical simulations were conducted to compare the ballistic performance and penetration mechanisms of copper (Cu) with four representative grain sizes. Ballistic limit velocities for coarse-grained (CG) copper (grain size ≈ 90 µm), regular copper (grain size ≈ 30 µm), fine-grained (FG) copper (grain size ≈ 890 nm), and ultrafine-grained (UG) copper (grain size ≈ 200 nm) were determined for the first time through the simulations. It was found that copper with reduced grain size offers higher strength and better ductility, and therefore better ballistic performance than the CG and regular copper. High-speed impact and penetration behavior of the FG and UG copper was also compared with that of CG copper strengthened by nanotwinned (NT) regions. The comparison showed that the impact and penetration resistance of UG copper is comparable to that of CG copper strengthened by NT regions with the minimum twin spacing. Therefore, besides NT-strengthened copper, single-phase copper with nanoscale grain size could also be a strong candidate material for better ballistic protection. A computational modeling and simulation framework was proposed for this study, in which the Johnson-Cook (JC) constitutive model is used to predict the plastic deformation of Cu, the JC damage model is used to capture the penetration and fragmentation behavior of Cu, the Bao-Wierzbicki (B-W) failure criterion defines the material's failure mechanisms, and the temperature increase during this adiabatic penetration process is given by the Taylor-Quinney method.
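For reference, the Johnson-Cook flow stress and the Taylor-Quinney estimate of the adiabatic temperature rise named above are commonly written in the following form (generic material parameters; the values calibrated in the study are not reproduced here):

```latex
\sigma = \bigl(A + B\,\varepsilon_p^{\,n}\bigr)
         \Bigl(1 + C \ln \tfrac{\dot{\varepsilon}_p}{\dot{\varepsilon}_0}\Bigr)
         \bigl(1 - {T^{*}}^{m}\bigr),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}},
\qquad
\Delta T = \frac{\beta}{\rho\, c_p} \int \sigma \, d\varepsilon_p ,
```

where A, B, n, C and m are material constants, the reference strain rate is the normalizing rate in the logarithm, and β is the Taylor-Quinney coefficient, commonly taken to be about 0.9.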
Norton, Tomás; Sun, Da-Wen; Grant, Jim; Fallon, Richard; Dodd, Vincent
2007-09-01
The application of computational fluid dynamics (CFD) in the agricultural industry is becoming ever more important. Over the years, the versatility, accuracy and user-friendliness offered by CFD have led to its increased take-up by the agricultural engineering community. CFD is now regularly employed to solve environmental problems of greenhouses and animal production facilities. Moreover, owing to a combination of increased computing power and advanced numerical techniques, the realism of these simulations has been greatly enhanced in recent years. This study provides a state-of-the-art review of CFD, its current applications in the design of ventilation systems for agricultural production systems, and the outstanding challenges that confront CFD modellers. The current status of greenhouse CFD modelling was found to be at a higher standard than that of animal housing, owing to the incorporation of user-defined routines that simulate crop biological responses as a function of local environmental conditions. Nevertheless, the most recent animal housing simulations have addressed this issue and in turn have become more physically realistic.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
de la Hunty, Anne; Gibson, Sigrid; Ashwell, Margaret
2013-01-01
Objective To review systematically the evidence on breakfast cereal consumption and obesity in children and adolescents and assess whether the regular consumption of breakfast cereals could help to prevent excessive weight gain. Methods A systematic review and meta-analysis of studies relating breakfast cereal consumption to BMI, BMI z-scores and prevalence of obesity as the outcomes. Results 14 papers met the inclusion criteria. The computed effect size for mean BMI between high consumers and low or non-consumers over all 25 study subgroups was −1.13 kg/m2 (95% CI −0.81, −1.46, p < 0.0001) in the random effects model, which is equivalent to a standardised mean difference of 0.24. Adjustment for age and publication bias attenuated the effect sizes somewhat but they remained statistically significant. The prevalence and risk of overweight were lower in children and adolescents who consume breakfast cereals regularly compared to those who consume them infrequently. Energy intakes tended to be higher in regular breakfast cereal consumers. Conclusion Overall, the evidence reviewed suggests that regular consumption of breakfast cereals results in a lower BMI and a reduced likelihood of being overweight in children and adolescents. However, more evidence from long-term trials and investigations into mechanisms is needed to eliminate possible confounding factors and determine causality. PMID:23466487
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Total variation-based method for radar coincidence imaging with model mismatch for extended target
NASA Astrophysics Data System (ADS)
Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang
2017-11-01
Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as desired; unfortunately, such precision is almost impossible to achieve due to the existence of model mismatch in practical applications. Although some conventional sparse recovery algorithms have been proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore derive the signal model of RCI with model mismatch by replacing the sparsity constraint term with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm, called TV-TLS, is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, due to the ability of TV regularization to recover a sparse signal or an image with a sparse gradient, the TV-TLS method is also applicable to sparse recovery. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm achieves preferred imaging performance both in suppressing noise and in adapting to model mismatch.
Sharma, Nandita; Gedeon, Tom
2012-12-01
Stress is a major and growing concern in our day and age, adversely impacting both individuals and society. Stress research has a wide range of benefits, from improving personal operations, learning, and work productivity to benefiting society, making it an interesting and socially beneficial area of research. This survey reviews sensors that have been used to measure stress and investigates techniques for modelling stress. It discusses non-invasive and unobtrusive sensors for measuring computed stress, a term we coin in the paper. The discussion focuses on sensors that do not impede everyday activities and that could be used by those who would like to monitor stress levels on a regular basis (e.g. vehicle drivers, patients with illnesses linked to stress). Computational techniques have the capacity to determine optimal sensor fusion and automate data analysis for stress recognition and classification. Several computational techniques have been developed to model stress, based on methods such as Bayesian networks, artificial neural networks, and support vector machines, which this survey investigates. The survey concludes with a summary and provides possible directions for further computational stress research. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show a lower skewness and sparseness than natural scenes. We account for this by considering the range of luminances found in the environment compared to the range available in the medium of paint. A painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' sparseness and the modeled responses to the paintings showed the same or greater sparseness compared to the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
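The two image statistics that carry most of the argument above, the amplitude-spectrum slope and the intensity skewness, are straightforward to compute; the following numpy/scipy sketch (function and variable names are hypothetical, and the paper's sparseness measure is more elaborate than the simple skewness proxy shown here) illustrates them for a grayscale image:

```python
import numpy as np
from scipy import stats

def amplitude_spectrum_slope(img):
    """Log-log slope of the radially averaged spatial-frequency amplitude spectrum."""
    img = img - img.mean()
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)

    # Radially average the amplitude over integer frequency bins.
    radial = np.bincount(r.ravel(), weights=amp.ravel()) / np.bincount(r.ravel())

    # Fit the slope over mid frequencies, excluding DC and the spectrum corners.
    f = np.arange(1, min(nx, ny) // 2)
    slope, _ = np.polyfit(np.log(f), np.log(radial[f]), 1)
    return slope            # natural scenes typically give slopes near -1

def intensity_skewness(img):
    """Skewness of the pixel-intensity distribution."""
    return stats.skew(img.ravel())
```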
The origins of computer weather prediction and climate modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Peter
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
The origins of computer weather prediction and climate modeling
NASA Astrophysics Data System (ADS)
Lynch, Peter
2008-03-01
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
Modeling topology formation during laser ablation
NASA Astrophysics Data System (ADS)
Hodapp, T. W.; Fleming, P. R.
1998-07-01
Micromachining high aspect-ratio structures can be accomplished through ablation of surfaces with high-powered lasers. Industrial manufacturers now use these methods to form complex and regular surfaces at the 10-1000 μm feature size range. Despite its increasingly wide acceptance on the manufacturing floor, the underlying photochemistry of the ablation mechanism, and hence the dynamics of the machining process, is still a question of considerable debate. We have constructed a computer model to investigate and predict the topological formation of ablated structures. Qualitative as well as quantitative agreement with excimer-laser machined polyimide substrates has been demonstrated. This model provides insights into the drilling process for high-aspect-ratio holes.
Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel
2014-03-01
Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy obtains the local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.
NASA Astrophysics Data System (ADS)
Chiarucci, Simone; Wijnholds, Stefan J.
2018-02-01
Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.
EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.
Hadinia, M; Jafari, R; Soleimani, M
2016-06-01
This paper presents the application of the hybrid finite element-element-free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique suffers from meshing difficulties, and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods: the complete electrode model of the forward problem is solved, and an iteratively regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Results of image reconstruction are presented for a human chest experimental phantom.
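For orientation, one common form of the regularized Gauss-Newton iteration used for EIT-type inverse problems, with forward map F(σ), measured voltages V_meas, Jacobian J and regularization matrix L, is (the paper's exact variant may add a line search or a prior term):

```latex
\sigma_{k+1} = \sigma_k +
\bigl( J_k^{\mathsf T} J_k + \lambda L^{\mathsf T} L \bigr)^{-1}
J_k^{\mathsf T} \bigl( V_{\mathrm{meas}} - F(\sigma_k) \bigr),
\qquad
J_k = \left. \frac{\partial F}{\partial \sigma} \right|_{\sigma_k}.
```

In the hybrid scheme above, the forward solves F(σ_k) and the Jacobian J_k are the quantities that the FE-EFG discretization supplies at each iteration.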
An irregular lattice method for elastic wave propagation
NASA Astrophysics Data System (ADS)
O'Brien, Gareth S.; Bean, Christopher J.
2011-12-01
Lattice methods are a class of numerical schemes that represent a medium as a collection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's law including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D or cubic in 3-D. Here, we present a method for isotropic elastic wave propagation in which this lattice restriction is removed. The methodology is outlined and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is compared with an analytical solution for wave propagation in an infinite homogeneous body and with a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane-wave analysis, showing that the scheme is more dispersive than a regular lattice method. Therefore, the computational costs of using an irregular lattice are higher. However, by removing the regular lattice structure, the anisotropic nature of fracture propagation in such methods can be removed.
NCAR global model topography generation software for unstructured grids
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.
2015-06-01
It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
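For the simplest case of a regular latitude-longitude target grid, the gridbox mean elevation and sub-grid variance described above reduce to a binning operation over the high-resolution source topography; the sketch below (hypothetical names, no land fraction or unstructured control volumes) shows the idea:

```python
import numpy as np

def bin_topography(lon, lat, elev, lon_edges, lat_edges):
    """Gridbox mean elevation and sub-grid variance from high-res point topography."""
    ix = np.digitize(lon, lon_edges) - 1          # target column of each source point
    iy = np.digitize(lat, lat_edges) - 1          # target row of each source point
    nlat, nlon = len(lat_edges) - 1, len(lon_edges) - 1

    mean = np.zeros((nlat, nlon))
    var = np.zeros((nlat, nlon))
    for j in range(nlat):
        for i in range(nlon):
            sel = (ix == i) & (iy == j)
            if sel.any():
                mean[j, i] = elev[sel].mean()
                var[j, i] = elev[sel].var()       # feeds gravity-wave/TMS schemes
    return mean, var
```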
On a turbulent wall model to predict hemolysis numerically in medical devices
NASA Astrophysics Data System (ADS)
Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung
2017-11-01
Analyzing the degradation of red blood cells is very important for medical devices with blood flows. Blood shear stress has been recognized as the most dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress values in the regions near the wall. When predicting hemolysis numerically, this phenomenon can require a very fine mesh and large computational resources. To resolve this issue, the purpose of this study is to develop a turbulent wall model that predicts hemolysis more efficiently. In order to decrease the numerical error of hemolysis prediction at a coarse grid resolution, we divided the computational domain into two regions and applied different approaches to each region. In the near-wall region with a steep velocity gradient, an analytic approach using a modeled velocity profile is applied to reduce the numerical error and allow a coarse grid resolution. We adopt the Van Driest law as the model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for several turbulent flows inside a cannula and centrifugal pumps. The results show that the proposed turbulent wall model for hemolysis significantly improves computational efficiency for engineering applications.
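The Van Driest law adopted for the near-wall region can be tabulated by integrating the damped mixing-length relation; a brief sketch with the usual constants κ = 0.41 and A+ = 26 (illustrative values, not ones quoted by the authors) is:

```python
import numpy as np

def van_driest_profile(y_plus_max=300.0, n=3000, kappa=0.41, a_plus=26.0):
    """Mean-velocity profile u+(y+) from du+/dy+ = 2 / (1 + sqrt(1 + 4 k^2 y+^2 D^2)),
    with the Van Driest damping factor D = 1 - exp(-y+/A+)."""
    y_plus = np.linspace(0.0, y_plus_max, n)
    damping = 1.0 - np.exp(-y_plus / a_plus)
    dudy = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * kappa**2 * y_plus**2 * damping**2))
    # Trapezoidal integration of du+/dy+ from the wall outwards.
    u_plus = np.concatenate(([0.0],
                             np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y_plus))))
    return y_plus, u_plus   # recovers u+ = y+ near the wall and the log law far from it
```

Evaluating near-wall shear stress from such a modeled profile, rather than from an under-resolved numerical gradient, is what permits the coarse near-wall grid described in the abstract.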
Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations
NASA Astrophysics Data System (ADS)
Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.
2018-07-01
Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models
Jiang, Dingfeng; Huang, Jian
2013-01-01
Recent studies have demonstrated the theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish the theoretical convergence property of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
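For reference, the minimax concave penalty (MCP) used above and its univariate thresholding operator, here written for a standardized covariate in the least-squares case (the logistic-regression update in the paper carries an additional majorization scale factor), are:

```latex
p_{\lambda,\gamma}(t) =
\begin{cases}
\lambda |t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[6pt]
\tfrac{1}{2}\gamma\lambda^2, & |t| > \gamma\lambda,
\end{cases}
\qquad
\hat\beta(z) =
\begin{cases}
\dfrac{S(z,\lambda)}{1 - 1/\gamma}, & |z| \le \gamma\lambda,\\[6pt]
z, & |z| > \gamma\lambda,
\end{cases}
```

where S(z, λ) = sign(z) max(|z| − λ, 0) is the soft-thresholding operator, γ > 1, and z is the univariate least-squares solution for the coordinate being updated.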
Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations
NASA Astrophysics Data System (ADS)
Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.
2018-04-01
Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black-hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalised waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalised model for extreme-mass-ratio inspirals constructed on deformed black-hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
Enthalpy of Mixing in Al–Tb Liquid
Zhou, Shihuai; Tackes, Carl; Napolitano, Ralph
2017-06-21
The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy is determined relative to the unmixed pure (Al and Tb) components. The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Finally, based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of mixing enthalpy.
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The accuracy and efficiency on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
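One commonly used form of a generalized p-shrinkage mapping, in the spirit of the operator invoked above (the paper's exact mapping may differ in normalization; this is only a hedged sketch), reduces to ordinary soft thresholding at p = 1:

```python
import numpy as np

def p_shrink(x, tau, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - tau^(2-p) * |x|^(p-1), 0)."""
    mag = np.abs(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        thresh = tau ** (2.0 - p) * np.where(mag > 0, mag ** (p - 1.0), 0.0)
    return np.sign(x) * np.maximum(mag - thresh, 0.0)

# p = 1 recovers soft thresholding: p_shrink(x, tau, 1) == sign(x) * max(|x| - tau, 0).
```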
Fukumitsu, Nobuyoshi; Ishida, Masaya; Terunuma, Toshiyuki; Mizumoto, Masashi; Hashimoto, Takayuki; Moritake, Takashi; Okumura, Toshiyuki; Sakae, Takeji; Tsuboi, Koji; Sakurai, Hideyuki
2012-01-01
Investigating the reproducibility of computed tomography (CT) image quality in respiratory-gated radiation treatment planning is essential for radiotherapy of movable tumors. Seven series of regular and six series of irregular respiratory motions were performed using a dynamic thorax phantom. For the regular respiratory motions, the respiratory cycle was varied from 2.5 to 4 s and the amplitude from 4 to 10 mm. For the irregular respiratory motions, a cycle of 2.5 to 4 s or an amplitude of 4 to 10 mm was added to the base data (i.e. 3.5-s cycle, 6-mm amplitude) every three cycles. Images of the object were acquired six times using respiratory-gated data acquisition. The volume of the object was calculated, and the reproducibility of the volume was assessed based on its variability. The registered images of the object were summed, and the reproducibility of the shape was assessed based on the degree of overlap among the objects. The variability in volumes and shapes differed significantly as the respiratory cycle changed for the regular respiratory motions. For irregular respiratory motion, shape reproducibility was even poorer, and the percentage of overlap among the six images was 35.26% in the 2.5- and 3.5-s mixed-cycle group. Amplitude changes did not produce significant differences in the variability of the volumes and shapes. Respiratory cycle changes reduced the reproducibility of image quality in respiratory-gated CT. PMID:22966173
How does the Structure of Spherical Dark Matter Halos Affect the Types of Orbits in Disk Galaxies?
NASA Astrophysics Data System (ADS)
Zotos, Euaggelos E.
The main objective of this work is to determine the character of orbits of stars moving in the meridional (R,z) plane of an axially symmetric time-independent disk galaxy model with a central massive nucleus and an additional spherical dark matter halo component. In particular, we try to reveal the influence of the scale length of the dark matter halo on the different families of orbits of stars, by monitoring how the percentage of chaotic orbits, as well as the percentages of orbits of the main regular resonant families, evolve when this parameter varies. The smaller alignment index (SALI) was computed by numerically integrating the equations of motion as well as the variational equations for extensive samples of orbits, in order to distinguish safely between ordered and chaotic motion. In addition, a method based on the concept of spectral dynamics that utilizes the Fourier transform of the time series of each coordinate is used to identify the various families of regular orbits and also to recognize the secondary resonances that bifurcate from them. Our numerical computations reveal that when the dark matter halo is highly concentrated, that is, when the scale length has low values, the vast majority of stars move in regular orbits, while on the other hand, in less concentrated dark matter halos the percentage of chaos increases significantly. We also compared our results with earlier related work.
Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2016-01-01
Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associated radiation dose unavoidably increases as compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model by incorporating the low-rank and sparse matrix characteristics of sequential PCT images. Specifically, the sequential PCT images were first stacked into a matrix (i.e., a low-rank matrix), and a non-convex spectral norm/regularization and a spatio-temporal total variation norm/regularization were then imposed on the low-rank matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method was adopted to minimize the associated objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method can achieve images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
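As an illustration of the low-rank ingredient in such models, the singular value thresholding step that typically appears inside a split Bregman/ADMM loop for the stacked frame matrix can be sketched as follows; this is a generic building block, not the authors' full algorithm, which combines a non-convex spectral penalty with spatio-temporal TV:

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Soft-threshold the singular values of M (proximal operator of tau * ||.||_*)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Usage sketch: stack the T sequential PCT frames (N voxels each) as an N x T
# matrix, then alternate this low-rank update with the TV/sparsity update and
# the data-fidelity step inside the split Bregman iterations.
X = np.random.rand(500, 20)                  # hypothetical stacked frames
X_lowrank = singular_value_threshold(X, 1.0)
```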
On the inverse Magnus effect in free molecular flow
NASA Astrophysics Data System (ADS)
Weidman, Patrick D.; Herczynski, Andrzej
2004-02-01
A Newton-inspired particle interaction model is introduced to compute the sideways force on spinning projectiles translating through a rarefied gas. The simple model reproduces the inverse Magnus force on a sphere reported by Borg, Söderholm and Essén [Phys. Fluids 15, 736 (2003)] using probability theory. Further analyses given for cylinders and parallelepipeds of rectangular and regular polygon section point to a universal law for this class of geometric shapes: when the inverse Magnus force is steady, it is proportional to one-half the mass M of gas displaced by the body.
2012-01-01
Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides needed for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of the step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding the initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding the initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
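A minimal sketch of the autocorrelation computation behind step/stride regularity indices of this kind is given below; it follows the common recipe of taking dominant peaks of the unbiased, normalized autocorrelation of the vertical acceleration, with the peak search simplified to fixed lag windows, so it only approximates the Ad1/Ad2 definitions used in the study (variable names are hypothetical):

```python
import numpy as np

def regularity_indices(acc, fs, max_step_s=1.0, max_stride_s=2.0):
    """Step (Ad1-like) and stride (Ad2-like) regularity from an acceleration signal."""
    acc = np.asarray(acc, dtype=float)
    acc = acc - acc.mean()
    n = len(acc)

    # Unbiased, normalized autocorrelation at non-negative lags.
    full = np.correlate(acc, acc, mode="full")[n - 1:]
    rho = full / (n - np.arange(n))
    rho = rho / rho[0]

    # Step regularity: dominant peak within one step period;
    # stride regularity: dominant peak between one step and one stride period.
    step_lim = int(max_step_s * fs)
    stride_lim = int(max_stride_s * fs)
    ad1 = rho[1:step_lim].max()
    ad2 = rho[step_lim:stride_lim].max()
    return ad1, ad2
```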
29 CFR 541.701 - Customarily and regularly.
Code of Federal Regulations, 2010 CFR
2010-07-01
29 Labor 3 (2010-07-01): Customarily and regularly. Section 541.701, Regulations Relating to Labor (Continued), WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR, REGULATIONS DEFINING AND DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES...
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test; hence, reducing the radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization
Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.
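The recursive least-squares ingredient shared by these algorithms can be illustrated with a plain (non-kernel) recursive LSTD update based on the Sherman-Morrison identity; the kernelized, sparsified and regularized variants proposed in the paper build on the same recursion (class and variable names below are hypothetical):

```python
import numpy as np

class RecursiveLSTD:
    """Recursive LSTD(0): maintains P ~ A^{-1} and b, with weights theta = P b."""

    def __init__(self, n_features, gamma=0.99, ridge=1.0):
        self.gamma = gamma
        self.P = np.eye(n_features) / ridge       # inverse of the initial ridge term
        self.b = np.zeros(n_features)

    def update(self, phi, phi_next, reward):
        u = phi - self.gamma * phi_next           # temporal-difference feature
        P_phi = self.P @ phi
        denom = 1.0 + u @ P_phi
        # Sherman-Morrison: invert A + phi u^T from the previous inverse P.
        self.P -= np.outer(P_phi, u @ self.P) / denom
        self.b += reward * phi
        return self.P @ self.b                    # current value-function weights
```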
A new computational growth model for sea urchin skeletons.
Zachos, Louis G
2009-08-07
A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
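Two of the model's building blocks, the dual Voronoi polygons that define plate topology on a sphere and the Bertalanffy growth of an individual plate, can be sketched with standard tools; the point placement and parameter values below are hypothetical and purely illustrative:

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

# Plate growth centers scattered on the unit sphere (hypothetical positions).
rng = np.random.default_rng(0)
centers = rng.normal(size=(40, 3))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)

# The dual Voronoi polygons of the growth centers give the plate outlines/topology.
sv = SphericalVoronoi(centers, radius=1.0)
sv.sort_vertices_of_regions()
plate_polygons = [sv.vertices[region] for region in sv.regions]

# Bertalanffy growth of an individual plate dimension L(t).
def bertalanffy(t, L_inf=5.0, k=0.3, t0=0.0):
    return L_inf * (1.0 - np.exp(-k * (t - t0)))
```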
Casimir self-entropy of a spherical electromagnetic δ -function shell
NASA Astrophysics Data System (ADS)
Milton, Kimball A.; Kalauni, Pushpa; Parashar, Prachi; Li, Yang
2017-10-01
In this paper we continue our program of computing Casimir self-entropies of idealized electrical bodies. Here we consider an electromagnetic δ -function sphere ("semitransparent sphere") whose electric susceptibility has a transverse polarization with arbitrary strength. Dispersion is incorporated by a plasma-like model. In the strong-coupling limit, a perfectly conducting spherical shell is realized. We compute the entropy for both low and high temperatures. The transverse electric self-entropy is negative as expected, but the transverse magnetic self-entropy requires ultraviolet and infrared renormalization (subtraction), and, surprisingly, is only positive for sufficiently strong coupling. Results are robust under different regularization schemes. These rather surprising findings require further investigation.
Conformal blocks from Wilson lines with loop corrections
NASA Astrophysics Data System (ADS)
Hikida, Yasuaki; Uetoko, Takahiro
2018-04-01
We compute the conformal blocks of the Virasoro minimal model or its WN extension with large central charge from Wilson line networks in a Chern-Simons theory including loop corrections. In our previous work, we offered a prescription to regularize divergences from loops attached to Wilson lines. In this paper, we generalize our method with the prescription by dealing with more general operators for N =3 and apply it to the identity W3 block. We further compute general light-light blocks and heavy-light correlators for N =2 with the Wilson line method and compare the results with known ones obtained using a different prescription. We briefly discuss general W3 blocks.
Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.
Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C
2017-10-01
The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (Palabos).
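The core of a regularized collision step of this kind, shown here for the D2Q9 lattice with a single-relaxation-time (BGK) background, projects the non-equilibrium populations onto the second-order Hermite contribution before relaxation. This is the standard regularization recipe in a much-simplified form; the operator described in the paper extends the Hermite expansion further and is coupled with non-reflective boundaries and grid refinement.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and lattice sound speed cs^2 = 1/3.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0

def equilibrium(rho, u):
    cu = c @ u
    return w * rho * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - (u @ u) / (2 * cs2))

def regularized_bgk_collision(f, tau):
    rho = f.sum()
    u = (f[:, None] * c).sum(axis=0) / rho
    feq = equilibrium(rho, u)

    # Second-order non-equilibrium moment Pi_neq = sum_i c_i c_i (f_i - feq_i).
    pi_neq = np.einsum('i,ia,ib->ab', f - feq, c, c)

    # Project onto the second Hermite polynomial Q_i = c_i c_i - cs^2 * I.
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)
    fneq_reg = w / (2 * cs2**2) * np.einsum('iab,ab->i', Q, pi_neq)

    # BGK relaxation applied to the regularized populations only.
    return feq + (1.0 - 1.0 / tau) * fneq_reg
```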
R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization
Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil
2015-01-01
We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senesi, Andrew; Lee, Byeongdu
Herein, a general method to calculate the scattering functions of polyhedra, including both regular and semi-regular polyhedra, is presented. These calculations may be achieved by breaking a polyhedron into sets of congruent pieces, thereby reducing computation time by taking advantage of Fourier transforms and inversion symmetry. Each piece belonging to a set or subunit can be generated by either rotation or translation. Further, general strategies to compute truncated, concave and stellated polyhedra are provided. Using this method, the asymptotic behaviors of the polyhedral scattering functions are compared with those of a sphere. It is shown that, for a regular polyhedron, the form factor oscillation at high q is correlated with the face-to-face distance. In addition, polydispersity affects the Porod constant. The ideas presented herein will be important for the characterization of nanomaterials using small-angle scattering.
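For comparison, the reference case mentioned above, a homogeneous sphere of radius R, has the normalized form-factor amplitude and the usual Porod-regime falloff

```latex
F_{\mathrm{sphere}}(q) = \frac{3\,\bigl[\sin(qR) - qR\cos(qR)\bigr]}{(qR)^{3}},
\qquad
I(q) = \lvert F_{\mathrm{sphere}}(q)\rvert^{2} \sim q^{-4}
\quad \text{for } qR \gg 1,
```

against which the face-to-face oscillations and Porod constants of the polyhedral form factors are compared.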
Using Adjoint Methods to Improve 3-D Velocity Models of Southern California
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.
2006-12-01
We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.
Design, Development and Implementation of a Middle School Computer Applications Curriculum.
ERIC Educational Resources Information Center
Pina, Anthony A.
This report documents the design, development, and implementation of computer applications curricula in a pilot program augmenting the regular curriculum for eighth graders at a private middle school. In assessing the needs of the school, a shift in focus was made from computer programming to computer application. The basic objectives of the…
NASA Astrophysics Data System (ADS)
das Neves, Flávio Domingues; de Almeida Prado Naves Carneiro, Thiago; do Prado, Célio Jesus; Prudente, Marcel Santana; Zancopé, Karla; Davi, Letícia Resende; Mendonça, Gustavo; Soares, Carlos José
2014-08-01
The current study used micro-computed tomography to compare the marginal fit of prosthetic dental crowns obtained by optical scanning and a computer-aided design/computer-aided manufacturing system. The virtual models were obtained from four different scanning surfaces: typodont (T), regular impressions (RI), master casts (MC), and powdered master casts (PMC). Five virtual models were obtained for each group. For each model, a crown was designed in the software and milled from feldspathic ceramic blocks. Micro-CT images were obtained for marginal gap measurements, and the data were statistically analyzed by one-way analysis of variance followed by Tukey's test. The mean vertical misfit was T=62.6±65.2 μm; MC=60.4±38.4 μm; PMC=58.1±38.0 μm, and RI=89.8±62.8 μm. Considering a percentage of vertical marginal gap of up to 75 μm, the results were T=71.5%, RI=49.2%, MC=69.6%, and PMC=71.2%. The percentages of horizontal overextension were T=8.5%, RI=0%, MC=0.8%, and PMC=3.8%. Based on the results, virtual model acquisition by scanning the typodont (simulated mouth) or MC, with or without powder, showed acceptable values for the marginal gap. The higher marginal gap of the RI group suggests that it is preferable to scan directly from the mouth or from the MC.
Deep learning for computational chemistry.
Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav
2017-06-15
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationship, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.
New Developments in Modeling MHD Systems on High Performance Computing Architectures
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.
2009-04-01
Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two fluid effects like Hall term, electron inertia, electron pressure gradient) is challenging even on state-of-the-art supercomputers. In the last years, HPC capacity has continued to grow exponentially, but at the expense of making the computer systems more and more difficult to program in order to get maximum performance. In this paper, we will present a new approach to managing the complexity caused by the need to write efficient codes: Separating the numerical description of the problem, in our case a discretized right hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form while leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
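As a toy illustration of this separation of a symbolic r.h.s. description from its generated numerical implementation (the flux term, variable names, and grid below are invented for the example and are unrelated to OpenGGCM's actual equations or code generator):

```python
import numpy as np
import sympy as sp

# Quasi-symbolic description of a 1-D advection r.h.s. term, d(rho)/dt = -d(rho*v)/dx,
# discretized with a centered difference at an interior point (placeholder equation only).
rho_m, rho_p, v_m, v_p, dx = sp.symbols('rho_m rho_p v_m v_p dx')
rhs_expr = -(rho_p * v_p - rho_m * v_m) / (2 * dx)

# "Code generation" step: translate the symbolic form into a vectorized NumPy kernel.
rhs_kernel = sp.lambdify((rho_m, rho_p, v_m, v_p, dx), rhs_expr, 'numpy')

# Apply the generated kernel to the interior points of a sample grid.
x = np.linspace(0.0, 1.0, 101)
rho = np.exp(-100.0 * (x - 0.5) ** 2)
v = np.ones_like(x)
drho_dt = rhs_kernel(rho[:-2], rho[2:], v[:-2], v[2:], x[1] - x[0])
print(drho_dt.shape)  # (99,) values at the interior points
```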
New Computational Approach to Electron Transport in Irregular Graphene Nanostructures
NASA Astrophysics Data System (ADS)
Mason, Douglas; Heller, Eric; Prendergast, David; Neaton, Jeffrey
2009-03-01
For novel graphene devices of nanoscale-to-macroscopic scale, many aspects of their transport properties are not easily understood due to difficulties in fabricating devices with regular edges. Here we develop a framework to efficiently calculate and potentially screen electronic transport properties of arbitrary nanoscale graphene device structures. A generalization of the established recursive Green's function method is presented, providing access to arbitrary device and lead geometries with substantial computer-time savings. Using single-orbital nearest-neighbor tight-binding models and the Green's function-Landauer scattering formalism, we will explore the transmission function of irregular two-dimensional graphene-based nanostructures with arbitrary lead orientation. Prepared by LBNL under contract DE-AC02-05CH11231 and supported by the U.S. Dept. of Energy Computer Science Graduate Fellowship under grant DE-FG02-97ER25308.
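A minimal sketch of the non-recursive core of such a calculation (Landauer transmission of a short single-orbital tight-binding chain with wide-band-limit leads; the hopping value, lead coupling, and chain length are arbitrary illustrative choices, and none of the recursive or arbitrary-geometry machinery described in the abstract is attempted here):

```python
import numpy as np

t = -1.0          # nearest-neighbor hopping (arbitrary units)
N = 8             # number of sites in the scattering region
gamma = 0.5       # lead broadening in a wide-band-limit approximation
eta = 1e-9        # small imaginary part for the retarded Green's function

# Single-orbital nearest-neighbor tight-binding Hamiltonian of a short chain
H = t * (np.eye(N, k=1) + np.eye(N, k=-1))

# Wide-band-limit self-energies: leads couple to the first and last site only
Sigma_L = np.zeros((N, N), dtype=complex); Sigma_L[0, 0] = -0.5j * gamma
Sigma_R = np.zeros((N, N), dtype=complex); Sigma_R[-1, -1] = -0.5j * gamma
Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)

def transmission(E):
    """Landauer transmission T(E) = Tr[Gamma_L G^r Gamma_R G^a]."""
    G_r = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - Sigma_L - Sigma_R)
    return np.trace(Gamma_L @ G_r @ Gamma_R @ G_r.conj().T).real

energies = np.linspace(-2.5, 2.5, 11)
print([round(transmission(E), 3) for E in energies])
```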
On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01
Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
Characterizing the functional MRI response using Tikhonov regularization.
Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E
2007-09-20
The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. 2007 John Wiley & Sons, Ltd.
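A rough sketch of this style of penalized least-squares smoothing with a generalized cross-validation (GCV) choice of the regularization parameter, assuming a simple second-difference roughness penalty on the sampled signal rather than an actual B-spline expansion (all names, sizes, and the synthetic "response" are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 20, n)
signal = np.exp(-0.5 * (t - 6) ** 2) - 0.4 * np.exp(-0.5 * (t - 12) ** 2)
y = signal + 0.1 * rng.standard_normal(n)          # noisy fMRI-like response

# Second-difference roughness penalty D; Tikhonov-smoothed fit x = (I + lam*D^T D)^{-1} y
D = np.diff(np.eye(n), n=2, axis=0)
I = np.eye(n)

def gcv_score(lam):
    # Hat (influence) matrix of the linear smoother for this lambda
    H = np.linalg.solve(I + lam * D.T @ D, I)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

lams = np.logspace(-2, 4, 30)
lam_best = lams[np.argmin([gcv_score(l) for l in lams])]
x_hat = np.linalg.solve(I + lam_best * D.T @ D, y)
print(f"GCV-selected lambda: {lam_best:.3g}")
```

The whitening criterion discussed in the abstract would replace `gcv_score` with a test of the fit residuals against a white-noise hypothesis; only the GCV branch is sketched here.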
NASA Astrophysics Data System (ADS)
Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien
2017-09-01
We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error of less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact on the assessment of diastolic function.
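A generic sketch of this kind of regularized least-squares reconstruction (a data-fidelity term plus a smoothness penalty yielding a sparse symmetric positive-definite normal-equation system; the one-dimensional operators, weights, and "Doppler-like" sampling below are simplified stand-ins, not the iVFM cost function with its mass-conservation and boundary terms):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400                                   # unknown velocity samples along one component
rng = np.random.default_rng(1)
u_true = np.sin(np.linspace(0, 4 * np.pi, n))

# Data fidelity: only a subset of samples is "observed" (Doppler-like projection), with noise
obs = rng.choice(n, size=150, replace=False)
A = sp.csr_matrix((np.ones(obs.size), (np.arange(obs.size), obs)), shape=(obs.size, n))
b = A @ u_true + 0.05 * rng.standard_normal(obs.size)

# Smoothness regularizer: first-difference operator
D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
lam = 1.0

# Normal equations (A^T A + lam * D^T D) u = A^T b: sparse, symmetric positive definite
K = (A.T @ A + lam * (D.T @ D)).tocsc()
u_hat, info = spla.cg(K, A.T @ b, atol=1e-10)
print("CG converged:", info == 0,
      "relative error:", np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))
```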
Delamination detection using methods of computational intelligence
NASA Astrophysics Data System (ADS)
Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata
2012-11-01
A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e., changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
Exploring the spectrum of regularized bosonic string theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambjørn, J., E-mail: ambjorn@nbi.dk; Makeenko, Y., E-mail: makeenko@nbi.dk
2015-03-15
We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.
Algorithms for Hidden Markov Models Restricted to Occurrences of Regular Expressions
Tataru, Paula; Sand, Andreas; Hobolth, Asger; Mailund, Thomas; Pedersen, Christian N. S.
2013-01-01
Hidden Markov Models (HMMs) are widely used probabilistic models, particularly for annotating sequential data with an underlying hidden structure. Patterns in the annotation are often more relevant to study than the hidden structure itself. A typical HMM analysis consists of annotating the observed data using a decoding algorithm and analyzing the annotation to study patterns of interest. For example, given an HMM modeling genes in DNA sequences, the focus is on occurrences of genes in the annotation. In this paper, we define a pattern through a regular expression and present a restriction of three classical algorithms to take the number of occurrences of the pattern in the hidden sequence into account. We present a new algorithm to compute the distribution of the number of pattern occurrences, and we extend the two most widely used existing decoding algorithms to employ information from this distribution. We show experimentally that the expectation of the distribution of the number of pattern occurrences gives a highly accurate estimate, while the typical procedure can be biased in the sense that the identified number of pattern occurrences does not correspond to the true number. We furthermore show that using this distribution in the decoding algorithms improves the predictive power of the model. PMID:24833225
Synthesis of MCMC and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method, which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows us to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
One-dimensional QCD in thimble regularization
NASA Astrophysics Data System (ADS)
Di Renzo, F.; Eruzzi, G.
2018-01-01
QCD in 0+1 dimensions is numerically solved via thimble regularization. In the context of this toy model, a general formalism is presented for SU(N) theories. The sign problem that the theory displays is a genuine one, stemming from a (quark) chemical potential. Three stationary points are present in the original (real) domain of integration, so that contributions from all the thimbles associated with them are to be taken into account: we show how semiclassical computations can provide hints on the regions of parameter space where this is absolutely crucial. Known analytical results for the chiral condensate and the Polyakov loop are correctly reproduced: this is in particular trivial at high values of the number of flavors Nf. In this regime we notice that the single-thimble dominance scenario takes place (the dominant thimble is the one associated with the identity). At low values of Nf computations can be more difficult. It is important to stress that this is not at all a consequence of the original sign problem (not even via the residual phase). The latter is always under control, while accidental, delicate cancellations of contributions coming from different thimbles can be in place in (restricted) regions of the parameter space.
Infrared radiative transfer through a regular array of cuboidal clouds
NASA Technical Reports Server (NTRS)
HARSHVARDHAN; Weinman, J. A.
1981-01-01
Infrared radiative transfer through a regular array of cuboidal clouds is studied and the interaction of the sides of the clouds with each other and the ground is considered. The theory is developed for black clouds and is extended to scattering clouds using a variable azimuth two-stream approximation. It is shown that geometrical considerations often dominate over the microphysical aspects of radiative transfer through the clouds. For example, the difference in simulated 10 micron brightness temperature between black isothermal cubic clouds and cubic clouds of optical depth 10, is less than 2 deg for zenith angles less than 50 deg for all cloud fractions when viewed parallel to the array. The results show that serious errors are made in flux and cooling rate computations if broken clouds are modeled as planiform. Radiances computed by the usual practice of area-weighting cloudy and clear sky radiances are in error by 2 to 8 K in brightness temperature for cubic clouds over a wide range of cloud fractions and zenith angles. It is also shown that the lapse rate does not markedly affect the exiting radiances for cuboidal clouds of unit aspect ratio and optical depth 10.
Systematic Size Study of an Insect Antifreeze Protein and Its Interaction with Ice
Liu, Kai; Jia, Zongchao; Chen, Guangju; Tung, Chenho; Liu, Ruozhuang
2005-01-01
Because of their remarkable ability to depress the freezing point of aqueous solutions, antifreeze proteins (AFPs) play a critical role in helping many organisms survive subzero temperatures. The β-helical insect AFP structures solved to date, consisting of multiple repeating circular loops or coils, are perhaps the most regular protein structures discovered thus far. Taking an exceptional advantage of the unusually high structural regularity of insect AFPs, we have employed both semiempirical and quantum mechanics computational approaches to systematically investigate the relationship between the number of AFP coils and the AFP-ice interaction energy, an indicator of antifreeze activity. We generated a series of AFP models with varying numbers of 12-residue coils (sequence TCTxSxxCxxAx) and calculated their interaction energies with ice. Using several independent computational methods, we found that the AFP-ice interaction energy increased as the number of coils increased, until an upper bound was reached. The increase of interaction energy was significant for each of the first five coils, and there was a clear synergism that gradually diminished and even decreased with further increase of the number of coils. Our results are in excellent agreement with the recently reported experimental observations. PMID:15713600
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
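To make the soft-thresholding step concrete, here is a minimal iterative soft-thresholding (ISTA) sketch for an ℓ1 penalty on coefficients in an orthonormal basis, using an orthonormal DCT as a stand-in for the wavelet basis; the random forward operator, threshold value, and iteration count are placeholder choices, not the paper's controlled wavelet-domain scheme:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n, m = 128, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)       # toy "tomographic" forward operator

# Ground truth that is sparse in the orthonormal transform domain
c_true = np.zeros(n); c_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
x_true = idct(c_true, norm='ortho')                # synthesis: x = W^T c
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the data term

def soft(z, tau):
    """Soft-thresholding (proximal operator of the l1 penalty)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

c = np.zeros(n)
for _ in range(500):                               # ISTA iterations
    grad = dct(A.T @ (A @ idct(c, norm='ortho') - y), norm='ortho')
    c = soft(c - grad / L, lam / L)

print("recovered support:", np.nonzero(np.abs(c) > 0.1)[0])
```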
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the relevant total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computation compared with ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
NASA Astrophysics Data System (ADS)
Basak, Subhash C.; Mills, Denise; Hawkins, Douglas M.
2008-06-01
A hierarchical classification study was carried out based on a set of 70 chemicals—35 which produce allergic contact dermatitis (ACD) and 35 which do not. This approach was implemented using a regular ridge regression computer code, followed by conversion of regression output to binary data values. The hierarchical descriptor classes used in the modeling include topostructural (TS), topochemical (TC), and quantum chemical (QC), all of which are based solely on chemical structure. The concordance, sensitivity, and specificity are reported. The model based on the TC descriptors was found to be the best, while the TS model was extremely poor.
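A minimal sketch of this kind of pipeline (ridge regression on structure-derived descriptors followed by thresholding the continuous output into binary class labels, then tabulating concordance, sensitivity, and specificity); the descriptor matrix, labels, ridge parameter, and the 0.5 cutoff are synthetic placeholders, not the 70-chemical dataset or the hierarchical TS/TC/QC descriptors:

```python
import numpy as np

rng = np.random.default_rng(3)
n_chem, n_desc = 70, 20
X = rng.standard_normal((n_chem, n_desc))          # stand-in structural descriptors
w_true = np.zeros(n_desc); w_true[:4] = [1.0, -0.8, 0.6, 0.5]
y = (X @ w_true + 0.3 * rng.standard_normal(n_chem) > 0).astype(float)  # 0/1 ACD labels

# Ridge regression on the 0/1 response: w = (X^T X + alpha I)^{-1} X^T y
alpha = 5.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_desc), X.T @ y)

# Convert the continuous regression output to binary predictions
y_hat = (X @ w >= 0.5).astype(float)

tp = np.sum((y_hat == 1) & (y == 1)); tn = np.sum((y_hat == 0) & (y == 0))
concordance = (tp + tn) / n_chem
sensitivity = tp / np.sum(y == 1)
specificity = tn / np.sum(y == 0)
print(f"concordance={concordance:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

The metrics here are in-sample only; a leave-one-out or hold-out split would be needed for estimates comparable to those reported in the abstract.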
MODIS Solar Diffuser: Modelled and Actual Performance
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Xiong, Xiao-Xiong; Esposito, Joe; Wang, Xin-Dong; Krebs, Carolyn (Technical Monitor)
2001-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument's solar diffuser is used in its radiometric calibration for the reflective solar bands (VIS, NIR, and SWIR) ranging from 0.41 to 2.1 micron. The sun illuminates the solar diffuser either directly or through an attenuation screen. The attenuation screen consists of a regular array of pin holes. The attenuated illumination pattern on the solar diffuser is not uniform, but consists of a multitude of pin-hole images of the sun. This non-uniform illumination produces small but noticeable radiometric effects. A description of the computer model used to simulate the effects of the attenuation screen is given, and the predictions of the model are compared with actual, on-orbit calibration measurements.
Improvements in GRACE Gravity Fields Using Regularization
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
Lennon, William; Hecht-Nielsen, Robert; Yamazaki, Tadashi
2014-01-01
While the anatomy of the cerebellar microcircuit is well-studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs)—the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional to the amount of feedforward inhibition from an MLI on average. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function. PMID:25520646
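A toy sketch of the two ingredients highlighted above, written as a pair of leaky integrate-and-fire units rather than the paper's actual neuron model: both units fire spontaneously from an endogenous depolarizing drive, and the MLI unit delivers feedforward inhibition that lengthens the interval between PKJ spikes. All parameter values are invented for illustration.

```python
import numpy as np

dt, T = 0.1e-3, 2.0                       # time step (s), simulated duration (s)
steps = int(T / dt)
tau, v_rest, v_thresh, v_reset = 20e-3, -65e-3, -50e-3, -70e-3
i_endog = 20e-3                           # endogenous depolarizing drive (in volts, for simplicity)
w_inh, tau_inh = 5e-3, 10e-3              # MLI->PKJ inhibitory weight and synaptic decay

v_mli, v_pkj, g_inh = v_rest, v_rest, 0.0
pkj_spikes = []

for k in range(steps):
    # MLI: spontaneously active leaky integrate-and-fire unit
    v_mli += dt / tau * (v_rest - v_mli) + dt / tau * i_endog
    mli_spiked = v_mli >= v_thresh
    if mli_spiked:
        v_mli = v_reset

    # Feedforward inhibition onto the PKJ, decaying exponentially between MLI spikes
    g_inh += (-dt / tau_inh) * g_inh + (w_inh if mli_spiked else 0.0)

    # PKJ: same endogenous drive, reduced by the inhibitory input
    v_pkj += dt / tau * (v_rest - v_pkj) + dt / tau * (i_endog - g_inh)
    if v_pkj >= v_thresh:
        v_pkj = v_reset
        pkj_spikes.append(k * dt)

isi = np.diff(pkj_spikes)
print(f"PKJ spikes: {len(pkj_spikes)}, mean inter-spike interval: {isi.mean()*1e3:.1f} ms")
```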
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohrer, Brandon Robinson; Rothganger, Fredrick H.; Verzi, Stephen J.
2010-09-01
The neocortex is perhaps the highest region of the human brain, where audio and visual perception takes place along with many important cognitive functions. An important research goal is to describe the mechanisms implemented by the neocortex. There is an apparent regularity in the structure of the neocortex [Brodmann 1909, Mountcastle 1957] which may help simplify this task. The work reported here addresses the problem of how to describe the putative repeated units ('cortical circuits') in a manner that is easily understood and manipulated, with the long-term goal of developing a mathematical and algorithmic description of their function. The approach is to reduce each algorithm to an enhanced perceptron-like structure and describe its computation using difference equations. We organize this algorithmic processing into larger structures based on physiological observations, and implement key modeling concepts in software which runs on parallel computing hardware.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
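A generic illustration of the numerical difficulty involved (not the authors' ad-hoc procedure): a first-kind Volterra equation ∫₀ᵗ k(t−s) f(s) ds = g(t), discretized by a rectangle rule, becomes an increasingly ill-conditioned lower-triangular system in which data noise is amplified roughly like 1/h; a small Tikhonov term damps this amplification. The kernel and test function below are arbitrary choices:

```python
import numpy as np

n, T = 200, 2.0
h = T / n
t = h * np.arange(1, n + 1)

k = lambda u: np.exp(-2.0 * u)                     # arbitrary smooth convolution kernel
f_true = np.sin(3.0 * t)                           # "unknown" to be recovered

# Rectangle-rule discretization of the first-kind Volterra operator (lower triangular)
A = np.array([[h * k(t[i] - t[j]) if j <= i else 0.0 for j in range(n)] for i in range(n)])
g = A @ f_true + 1e-3 * np.random.default_rng(4).standard_normal(n)   # noisy data

# Direct triangular solve versus a Tikhonov-regularized solve
f_naive = np.linalg.solve(A, g)
lam = 1e-4
f_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)

print("relative error, direct solve :", np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true))
print("relative error, Tikhonov     :", np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true))
```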
Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.
2015-01-01
Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766
The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eden, H.F.; Mooers, C.N.K.
1990-06-01
The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description from a user's and programmer's perspective of the highly modular, flexible and extendable software package ASKI-Analysis of Sensitivity and Kernel Inversion-recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low costs applying different kinds of model regularization or re-selecting/weighting the inverted dataset without need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under terms of the GNU General Public License (http://www.rub.de/aski).
Species distribution modeling based on the automated identification of citizen observations.
Botella, Christophe; Joly, Alexis; Bonnet, Pierre; Monestiez, Pascal; Munoz, François
2018-02-01
A species distribution model computed with automatically identified plant observations was developed and evaluated to contribute to future ecological studies. We used deep learning techniques to automatically identify opportunistic plant observations made by citizens through a popular mobile application. We compared species distribution modeling of invasive alien plants based on these data to inventories made by experts. The trained models have a reasonable predictive effectiveness for some species, but they are biased by the massive presence of cultivated specimens. The method proposed here allows for fine-grained and regular monitoring of some species of interest based on opportunistic observations. More in-depth investigation of the typology of the observations and the sampling bias should help improve the approach in the future.
Identification of Vehicle Axle Loads from Bridge Dynamic Responses
NASA Astrophysics Data System (ADS)
ZHU, X. Q.; LAW, S. S.
2000-09-01
A method is presented to identify moving loads on a bridge deck modelled as an orthotropic rectangular plate. The dynamic behavior of the bridge deck under moving loads is analyzed using orthotropic plate theory and the modal superposition principle, and a Tikhonov regularization procedure is applied to provide bounds to the identified forces in the time domain. The identified results using a beam model and a plate model of the bridge deck are compared, and the conditions under which the bridge deck can be simplified as an equivalent beam model are discussed. Computer simulation and laboratory tests show the effectiveness and the validity of the proposed method in identifying forces travelling along the central line or at an eccentric path on the bridge deck.
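In the time domain, force identification of this kind typically reduces to a discrete deconvolution: measured responses y are modelled as a convolution of the unknown force history f with impulse-response functions, y ≈ H f, and the ill-conditioned system is stabilized with a Tikhonov penalty. A generic sketch using the SVD filter-factor form of the Tikhonov solution (a single-channel toy impulse response and an arbitrary regularization parameter, not the orthotropic-plate model of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
dt = 0.01
t = dt * np.arange(n)

# Toy impulse response of a lightly damped mode (stand-in for the plate/beam transfer function)
h = np.exp(-1.5 * t) * np.sin(12.0 * t)
H = dt * np.array([[h[i - j] if j <= i else 0.0 for j in range(n)] for i in range(n)])

f_true = np.where((t > 0.5) & (t < 1.5), 1.0, 0.0)          # moving-load-like force history
y = H @ f_true + 0.002 * rng.standard_normal(n)             # noisy "measured" response

# Tikhonov solution via SVD filter factors: f = sum_i s_i/(s_i^2 + lam) (u_i . y) v_i
U, s, Vt = np.linalg.svd(H)
lam = 1e-4
f_tik = Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ y))

print("relative identification error:", np.linalg.norm(f_tik - f_true) / np.linalg.norm(f_true))
```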
NASA Astrophysics Data System (ADS)
Deng, Shaohui; Wang, Xiaoling; Yu, Jia; Zhang, Yichi; Liu, Zhen; Zhu, Yushan
2018-06-01
Grouting plays a crucial role in dam safety. Due to the concealment of grouting activities, complexity of fracture distribution in rock masses and rheological properties of cement grout, it is difficult to analyze the effects of grouting. In this paper, a computational fluid dynamics (CFD) simulation approach of dam foundation grouting based on a 3D fracture network model is proposed. In this approach, the 3D fracture network model, which is based on an improved bootstrap sampling method and established by VisualGeo software, can provide a reliable and accurate geometric model for CFD simulation of dam foundation grouting. Based on the model, a CFD simulation is performed, in which the Papanastasiou regularized model is used to express the grout rheological properties, and the volume of fluid technique is utilized to capture the grout fronts. Two sets of tests are performed to verify the effectiveness of the Papanastasiou regularized model. When applying the CFD simulation approach for dam foundation grouting, three technical issues can be solved: (1) collapsing potential of the fracture samples, (2) inconsistencies in the geometric model in actual fractures under complex geological conditions, and (3) inappropriate method of characterizing the rheological properties of cement grout. The applicability of the proposed approach is demonstrated by an illustrative case study—a hydropower station dam foundation in southwestern China.
NASA Astrophysics Data System (ADS)
Annaby, M. H.; Asharabi, R. M.
2018-01-01
In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work continues and improves the results of Chadan and other related studies remarkably. Several worked examples are given with illustrations and comparisons with existing methods.
CAT scan - sinus; Computed axial tomography scan - sinus; Computed tomography scan - sinus; CT scan - sinus ... Risks for a CT scan includes: Being exposed to radiation Allergic reaction to contrast dye CT scans expose you to more radiation than regular ...
Computational Models Reveal a Passive Mechanism for Cell Migration in the Crypt
Dunn, Sara-Jane; Näthke, Inke S.; Osborne, James M.
2013-01-01
Cell migration in the intestinal crypt is essential for the regular renewal of the epithelium, and the continued upward movement of cells is a key characteristic of healthy crypt dynamics. However, the driving force behind this migration is unknown. Possibilities include mitotic pressure, active movement driven by motility cues, or negative pressure arising from cell loss at the crypt collar. It is possible that a combination of factors together coordinate migration. Here, three different computational models are used to provide insight into the mechanisms that underpin cell movement in the crypt, by examining the consequence of eliminating cell division on cell movement. Computational simulations agree with existing experimental results, confirming that migration can continue in the absence of mitosis. Importantly, however, simulations allow us to infer mechanisms that are sufficient to generate cell movement, which is not possible through experimental observation alone. The results produced by the three models agree and suggest that cell loss due to apoptosis and extrusion at the crypt collar relieves cell compression below, allowing cells to expand and move upwards. This finding suggests that future experiments should focus on the role of apoptosis and cell extrusion in controlling cell migration in the crypt. PMID:24260407
ERIC Educational Resources Information Center
Joyner, Amy
2003-01-01
Handheld computers provide students with tremendous computing and learning power at about a tenth of the cost of a regular computer. Describes the evolution of handhelds; provides some examples of their uses; and cites research indicating they are effective classroom tools that can improve efficiency and instruction. A sidebar lists handheld resources.…
5 CFR 550.707 - Computation of severance pay fund.
Code of Federal Regulations, 2011 CFR
2011-01-01
... pay for standby duty regularly varies throughout the year, compute the average standby duty premium...), compute the weekly average percentage, and multiply that percentage by the weekly scheduled rate of pay in... hours in a pay status (excluding overtime hours) and multiply that average by the hourly rate of basic...
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type as well as other regularization methods in Banach spaces is the use of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and on the nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of the approximate inverse to the practically highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with a general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tupek, Michael R.
2016-06-30
In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.
Identification of subsurface structures using electromagnetic data and shape priors
NASA Astrophysics Data System (ADS)
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
Abdomen and spinal cord segmentation with augmented active shape models.
Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A
2016-07-01
Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed-tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multiatlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the searching range of correspondent landmarks while reducing sensitivity to the image contexts, improving the segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables the subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in SC.
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage
NASA Astrophysics Data System (ADS)
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution.
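As a rough, generic illustration of a Tikhonov-regularized reconstruction step of the kind referred to above (not the authors' CB-XLCT implementation), the sketch below solves a linear model y = A x with an assumed heuristic update of the regularization weight; the system matrix and data are synthetic placeholders.

    # Minimal sketch of Tikhonov-regularized reconstruction for a linear
    # imaging model y = A x, with a simple heuristic update of the
    # regularization weight (an assumption, not the paper's exact scheme).
    import numpy as np

    def tikhonov_reconstruct(A, y, lam0=1.0, n_outer=5):
        n = A.shape[1]
        lam = lam0
        x = np.zeros(n)
        for _ in range(n_outer):
            # Solve (A^T A + lam I) x = A^T y for the current weight.
            x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
            # Heuristic "adaptive" update: shrink lam as the residual drops.
            residual = np.linalg.norm(A @ x - y)
            lam = lam0 * residual / (np.linalg.norm(y) + 1e-12)
        return x

    # Toy usage with a random system matrix and noisy measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50); x_true[[5, 20, 33]] = [1.0, 0.5, 0.8]
    y = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = tikhonov_reconstruct(A, y)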
Garrido, Marta I; Rowe, Elise G; Halász, Veronika; Mattingley, Jason B
2018-05-01
Predictive coding posits that the human brain continually monitors the environment for regularities and detects inconsistencies. It is unclear, however, what effect attention has on expectation processes, as there have been relatively few studies and the results of these have yielded contradictory findings. Here, we employed Bayesian model comparison to adjudicate between 2 alternative computational models. The "Opposition" model states that attention boosts neural responses equally to predicted and unpredicted stimuli, whereas the "Interaction" model assumes that attentional boosting of neural signals depends on the level of predictability. We designed a novel, audiospatial attention task that orthogonally manipulated attention and prediction by playing oddball sequences in either the attended or unattended ear. We observed sensory prediction error responses, with electroencephalography, across all attentional manipulations. Crucially, posterior probability maps revealed that, overall, the Opposition model better explained scalp and source data, suggesting that attention boosts responses to predicted and unpredicted stimuli equally. Furthermore, Dynamic Causal Modeling showed that these Opposition effects were expressed in plastic changes within the mismatch negativity network. Our findings provide empirical evidence for a computational model of the opposing interplay of attention and expectation in the brain.
Chemical interactions and thermodynamic studies in aluminum alloy/molten salt systems
NASA Astrophysics Data System (ADS)
Narayanan, Ramesh
The recycling of aluminum and aluminum alloys such as Used Beverage Containers (UBC) is done under a cover of molten salt flux based on (NaCl-KCl+fluorides). The reactions of aluminum alloys with molten salt fluxes have been investigated. Thermodynamic calculations are performed in the alloy/salt flux systems which allow quantitative predictions of the equilibrium compositions. There is preferential reaction of Mg in Al-Mg alloy with molten salt fluxes, especially those containing fluorides like NaF. An exchange reaction between Al-Mg alloy and molten salt flux has been demonstrated: Mg from the Al-Mg alloy transfers into the salt flux while Na from the salt flux transfers into the metal. Thermodynamic calculations indicate that the amount of Na in the metal increases as the Mg content in the alloy and/or the NaF content in the reacting flux increases. This is an important point because small amounts of Na have a detrimental effect on the mechanical properties of the Al-Mg alloy. The reactions of Al alloys with molten salt fluxes result in the formation of bluish purple colored "streamers". It was established that the streamer is liquid alkali metal (Na and K in the case of NaCl-KCl-NaF systems) dissipating into the melt. The melts in which such streamers were observed are identified. The metal losses occurring due to reactions have been quantified, both by thermodynamic calculations and experimentally. A computer program has been developed to calculate ternary phase diagrams in molten salt systems from the constituent binary phase diagrams, based on a regular solution model. The extent of deviation of the binary systems from regular solution behavior has been quantified. The systems investigated in which good agreement was found between the calculated and experimental phase diagrams included NaF-KF-LiF, NaCl-NaF-NaI and KNO3-TlNO3-LiNO3. Furthermore, an insight has been provided into the interrelationship between the regular solution parameters and the topology of the phase diagram. The isotherms are flat (i.e. no skewness) when the regular solution parameters are zero. When the regular solution parameters are non-zero, the isotherms are skewed. A regular solution model is not adequate to accurately model the molten salt systems used in recycling such as NaCl-KCl-LiF and NaCl-KCl-NaF.
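A minimal sketch of the regular solution free energy of mixing underlying such phase-diagram calculations is given below; the binary interaction parameters Omega_ij are assumed illustrative values, not fitted constants for any of the salt systems named above. When all Omega_ij are zero, only the ideal (symmetric) mixing term remains, consistent with the flat isotherms noted in the abstract.

    # Minimal sketch of the regular solution free energy of mixing for a
    # ternary salt system, using illustrative (made-up) binary interaction
    # parameters Omega_ij in J/mol.
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def g_mix_regular(x, omega, T):
        """x: mole fractions (x1, x2, x3) summing to 1;
        omega: dict of binary interaction parameters."""
        x1, x2, x3 = x
        g_ideal = R * T * sum(xi * np.log(xi) for xi in x if xi > 0)
        g_excess = (omega[(0, 1)] * x1 * x2 +
                    omega[(0, 2)] * x1 * x3 +
                    omega[(1, 2)] * x2 * x3)
        return g_ideal + g_excess

    omega = {(0, 1): -2000.0, (0, 2): 1500.0, (1, 2): 500.0}  # assumed values
    print(g_mix_regular((0.3, 0.3, 0.4), omega, T=1000.0))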
Planning, creating and documenting a NASTRAN finite element model of a modern helicopter
NASA Technical Reports Server (NTRS)
Gabal, R.; Reed, D.; Ricks, R.; Kesack, W.
1985-01-01
Mathematical models based on the finite element method of structural analysis as embodied in the NASTRAN computer code are widely used by the helicopter industry to calculate static internal loads and vibration of airframe structure. The internal loads are routinely used for sizing structural members. The vibration predictions are not yet relied on during design. NASA's Langley Research Center sponsored a program to conduct an application of the finite element method with emphasis on predicting structural vibration. The Army/Boeing CH-47D helicopter was used as the modeling subject. The objective was to engender the needed trust in vibration predictions using these models and establish a body of modeling guides which would enable confident future prediction of airframe vibration as part of the regular design process.
Lam, Desmond; Mizerski, Richard
2017-06-01
The objective of this study is to explore the gambling participation and game purchase duplication of light regular, heavy regular and pathological gamblers by applying the Duplication of Purchase Law. The current study uses data collected by the Australian Productivity Commission for eight different types of games. Key behavioral statistics on light regular, heavy regular, and pathological gamblers were computed and compared. The key finding is that pathological gambling, just like regular gambling, follows the Duplication of Purchase Law, which states that the dominant factor of purchase duplication between two brands is their market shares. This means that gambling between any two games at the pathological level, like any regular consumer purchases, exhibits "law-like" regularity based on the pathological gamblers' participation rate of each game. Additionally, pathological gamblers tend to gamble more frequently across all games except lotteries and instant games, and to make greater cross-purchases compared to heavy regular gamblers. A better understanding of the behavioral traits of regular (particularly heavy regular) and pathological gamblers can be useful to public policy makers and social marketers in order to more accurately identify such gamblers and better manage the negative impacts of gambling.
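A toy numerical reading of the Duplication of Purchase Law described above is sketched below: the share of one game's players who also play a second game is taken to be roughly proportional to the second game's overall participation rate. The participation rates and the duplication coefficient D are invented, not values from the Productivity Commission data.

    # Toy illustration of the Duplication of Purchase Law with invented
    # participation rates and an assumed duplication coefficient D.
    def expected_duplication(penetration_b, D=1.2):
        """Expected share of game A's players who also play game B."""
        return min(D * penetration_b, 1.0)

    participation = {"lotteries": 0.60, "scratch": 0.30,
                     "pokies": 0.25, "casino": 0.10}  # assumed rates
    for game, p in participation.items():
        print(f"share of any game's players also playing {game}: "
              f"{expected_duplication(p):.2f}")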
5 CFR 532.255 - Regular appropriated fund wage schedules in foreign areas.
Code of Federal Regulations, 2010 CFR
2010-01-01
... schedules shall provide rates of pay for nonsupervisory, leader, supervisory, and production facilitating employees. (b) Schedules shall be— (1) Computed on the basis of a simple average of all regular appropriated fund wage area schedules in effect on December 31; and (2) Effective on the first day of the first pay...
5 CFR 532.255 - Regular appropriated fund wage schedules in foreign areas.
Code of Federal Regulations, 2011 CFR
2011-01-01
... schedules shall provide rates of pay for nonsupervisory, leader, supervisory, and production facilitating employees. (b) Schedules shall be— (1) Computed on the basis of a simple average of all regular appropriated fund wage area schedules in effect on December 31; and (2) Effective on the first day of the first pay...
Student, Teacher, Professor: Three Perspectives on Online Education
ERIC Educational Resources Information Center
Pearcy, Mark
2014-01-01
Today, a third of American children regularly use computer tablets, while over 40% use smartphones and 53% regularly use laptops in their homes. While this is encouraging, there is still considerable debate about the shape and direction technology should take in school, particularly online education, making it necessary for educators to change in…
Cortical basis of communication: local computation, coordination, attention.
Alexandre, Frederic
2009-03-01
Human communication emerges from cortical processing, known to be implemented on a regular repetitive neuronal substratum. The supposed genericity of cortical processing has elicited a series of modeling works in computational neuroscience that underline the information flows driven by the cortical circuitry. In the minimalist framework underlying the current theories for the embodiment of cognition, such a generic cortical processing is exploited for the coordination of poles of representation, as is reported in this paper for the case of visual attention. Interestingly, this case emphasizes how abstract internal referents are built to conform to memory requirements. This paper proposes that these referents are the basis for communication in humans, which is firstly a coordination and an attentional procedure with regard to their congeners.
Larios, Adam; Petersen, Mark R.; Titi, Edriss S.; ...
2017-04-29
We report the results of a computational investigation of two blow-up criteria for the 3D incompressible Euler equations. One criterion was proven in a previous work, and a related criterion is proved here. These criteria are based on an inviscid regularization of the Euler equations known as the 3D Euler-Voigt equations, which are known to be globally well-posed. Moreover, simulations of the 3D Euler-Voigt equations also require less resolution than simulations of the 3D Euler equations for fixed values of the regularization parameter α > 0. Therefore, the new blow-up criteria allow one to gain information about possible singularity formation in the 3D Euler equations indirectly; namely, by simulating the better-behaved 3D Euler-Voigt equations. The new criteria are only known to be sufficient for blow-up. Therefore, to test the robustness of the inviscid-regularization approach, we also investigate analogous criteria for blow-up of the 1D Burgers equation, where blow-up is well-known to occur.
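For the 1D Burgers benchmark mentioned at the end of the abstract, the blow-up time of the inviscid equation can be estimated directly from the initial slope; the short sketch below does this for a sine initial condition on a periodic grid (an illustration of the classical result t* = -1/min u0', not the authors' Euler-Voigt code).

    # Sketch: for the 1D inviscid Burgers equation u_t + u u_x = 0, the
    # classical solution blows up at t* = -1 / min_x u0'(x) when the
    # initial slope is somewhere negative. Estimate t* for u0 = sin(x).
    import numpy as np

    N = 1024
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    u0 = np.sin(x)

    # Spectral derivative of the initial condition on the periodic domain.
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)
    du0 = np.real(np.fft.ifft(1j * k * np.fft.fft(u0)))

    t_blowup = -1.0 / du0.min()   # for u0 = sin(x), this is 1.0
    print(f"estimated blow-up time: {t_blowup:.4f}")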
Influence of initial seed distribution on the pattern formation of the phase field crystals
NASA Astrophysics Data System (ADS)
Starodumov, Ilya; Galenko, Peter; Kropotin, Nikolai; Alexandrov, Dmitri V.
2017-11-01
The process of crystal growth can be expressed as a transition of the atomic structure to a finally stable state or to a metastable state. In the Phase Field Crystal Model (PFC-model) these states are described by regular distributions of the atomic density. The system may settle into a metastable condition because of peculiarities of the computational domain or of the initial and boundary conditions. However, an important factor in the formation of the crystal structure can be the initial disturbance. In this report we show how different types of initial disturbance can change the finally stable state of the crystal structure in equilibrium.
SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jinfeng; Cao, Ruifen; Dai, Yumei
Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model into the DPM code, and to extend the ability of DPM to handle arbitrary incident angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from accelerator measurement data, combined with optimized intensity maps, were used to calculate the dose distribution of the irradiated irregular, inhomogeneous field. The irradiation source model of the accelerator was substituted by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was decided by the grid intensity. The direction of each emitter was decided by the combination of the virtual source and the emitter's emitting position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminant electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data, and the differences were acceptable (<2% inside the field, 2-3 mm in the penumbra). The dose calculation of the irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of gamma analysis was 95.1% for peripheral lung cancer. The regular field and the irregular rotational field were all within the range of permitted error. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to serve as a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the China Academy of Science (XDA03040000); National Natural Science Foundation of China (81101132)
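The gamma analysis quoted above (95.1% passing rate) compares dose distributions using combined dose-difference and distance-to-agreement criteria. A minimal 1D global gamma sketch with 3%/3 mm criteria is given below for illustration; clinical tools operate on 2D/3D dose grids with interpolation, which is omitted here.

    # Minimal 1D global gamma-analysis sketch (3%/3 mm criteria).
    import numpy as np

    def gamma_pass_rate(x, dose_eval, dose_ref, dd=0.03, dta=3.0):
        """x: positions in mm; doses are 1D arrays on the same grid."""
        d_norm = dd * dose_ref.max()          # global dose criterion
        gammas = np.empty_like(dose_eval)
        for i, (xi, de) in enumerate(zip(x, dose_eval)):
            dist2 = ((x - xi) / dta) ** 2
            dose2 = ((dose_ref - de) / d_norm) ** 2
            gammas[i] = np.sqrt(np.min(dist2 + dose2))
        return np.mean(gammas <= 1.0)

    x = np.arange(0.0, 100.0, 1.0)                   # mm grid
    ref = np.exp(-((x - 50.0) / 15.0) ** 2)          # toy reference dose
    ev = np.exp(-((x - 51.0) / 15.0) ** 2) * 1.01    # slightly shifted copy
    print(f"gamma passing rate: {gamma_pass_rate(x, ev, ref):.3f}")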
Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction
NASA Astrophysics Data System (ADS)
Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong
2018-12-01
The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with the conventional CT in which energy integrating detectors are used to acquire polychromatic projections of an object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain a better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic-based regularizations are not often satisfactory as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.
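As an example of the total-variation regularization evaluated in the comparison above, the following sketch applies a smoothed-TV gradient-descent denoiser to a single noisy energy-channel image; it is a simplified stand-in for the full iterative spectral CT reconstructions, with step size, smoothing parameter, and test image chosen arbitrarily.

    # Sketch of a smoothed total-variation (TV) denoiser applied to one
    # noisy energy channel; boundary handling is deliberately simple.
    import numpy as np

    def tv_denoise(f, lam=0.1, eps=1e-3, step=0.2, n_iter=200):
        u = f.copy()
        for _ in range(n_iter):
            # Forward differences with the last row/column replicated.
            ux = np.diff(u, axis=1, append=u[:, -1:])
            uy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(ux**2 + uy**2 + eps)
            px, py = ux / mag, uy / mag
            # Approximate divergence of (px, py) by backward differences.
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            # Gradient step on 0.5||u - f||^2 + lam * smoothed TV(u).
            u -= step * ((u - f) - lam * div)
        return u

    rng = np.random.default_rng(1)
    channel = np.zeros((64, 64)); channel[20:44, 20:44] = 1.0
    noisy = channel + 0.2 * rng.standard_normal(channel.shape)
    denoised = tv_denoise(noisy)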
Stretching of passive tracers and implications for mantle mixing
NASA Astrophysics Data System (ADS)
Conjeepuram, N.; Kellogg, L. H.
2007-12-01
Mid ocean ridge basalts (MORB) and ocean island basalts (OIB) have fundamentally different geochemical signatures. Understanding this difference requires a fundamental knowledge of the mixing processes that led to their formation. Quantitative methods used to assess mixing include examining the distribution of passive tracers, attaching time-evolution information to simulate decay of radioactive isotopes, and, for chaotic flows, calculating the Lyapunov exponent, which characterizes whether two nearby particles diverge at an exponential rate. Although effective, these methods are indirect measures of the two fundamental processes associated with mixing, namely stretching and folding. Building on work done by Kellogg and Turcotte, we present a method to compute the stretching and thinning of a passive, ellipsoidal tracer in three orthogonal directions in isoviscous, incompressible three-dimensional flows. We also compute the Lyapunov exponents associated with the given system based on the quantitative measures of stretching and thinning. We test our method with two analytical and three numerical flow fields which exhibit Lagrangian turbulence. The ABC and STF analytical flows are three- and two-parameter families of flows, respectively, and have been well studied for fast dynamo action. Since they generate both periodic and chaotic particle paths depending either on the starting point or on the choice of parameters, they provide a good foundation for understanding mixing. The numerical flow fields are similar to the geometries used by Ferrachat and Ricard (1998) and emulate a ridge-transform system. We also compute the stable and unstable manifolds associated with the numerical flow fields to illustrate the directions of rapid and slow mixing. We find that stretching in chaotic flow fields is significantly more effective than in regular or periodic flow fields. Consequently, chaotic mixing is far more efficient than regular mixing. We also find that in the numerical flow field there is a fundamental topological difference in the regions exhibiting slow or regular mixing for different model geometries.
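The Lyapunov-exponent diagnostic described above can be illustrated with a finite-time estimate in the steady ABC flow, obtained by integrating two nearby tracers and measuring their separation growth; the parameter values and integration time in the sketch below are illustrative choices, not those of the study.

    # Sketch: finite-time Lyapunov exponent estimate in the steady ABC flow
    # by tracking the separation of two nearby passive tracers.
    import numpy as np
    from scipy.integrate import solve_ivp

    A, B, C = 1.0, np.sqrt(2.0 / 3.0), np.sqrt(1.0 / 3.0)  # common choice

    def abc_flow(t, p):
        x, y, z = p
        return [A * np.sin(z) + C * np.cos(y),
                B * np.sin(x) + A * np.cos(z),
                C * np.sin(y) + B * np.cos(x)]

    def ftle(p0, delta=1e-8, T=50.0):
        p1 = np.asarray(p0, dtype=float)
        p2 = p1 + np.array([delta, 0.0, 0.0])
        s1 = solve_ivp(abc_flow, (0.0, T), p1, rtol=1e-9, atol=1e-12)
        s2 = solve_ivp(abc_flow, (0.0, T), p2, rtol=1e-9, atol=1e-12)
        sep = np.linalg.norm(s1.y[:, -1] - s2.y[:, -1])
        return np.log(sep / delta) / T

    print(f"FTLE near (0.1, 0.1, 0.1): {ftle([0.1, 0.1, 0.1]):.3f}")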
Holmes, William R; Huwe, Janice A; Williams, Barbara; Rowe, Michael H; Peterson, Ellengene H
2017-05-01
Vestibular bouton afferent terminals in turtle utricle can be categorized into four types depending on their location and terminal arbor structure: lateral extrastriolar (LES), striolar, juxtastriolar, and medial extrastriolar (MES). The terminal arbors of these afferents differ in surface area, total length, collecting area, number of boutons, number of bouton contacts per hair cell, and axon diameter (Huwe JA, Logan CJ, Williams B, Rowe MH, Peterson EH. J Neurophysiol 113: 2420-2433, 2015). To understand how differences in terminal morphology and the resulting hair cell inputs might affect afferent response properties, we modeled representative afferents from each region, using reconstructed bouton afferents. Collecting area and hair cell density were used to estimate hair cell-to-afferent convergence. Nonmorphological features were held constant to isolate effects of afferent structure and connectivity. The models suggest that all four bouton afferent types are electrotonically compact and that excitatory postsynaptic potentials are two to four times larger in MES afferents than in other afferents, making MES afferents more responsive to low input levels. The models also predict that MES and LES terminal structures permit higher spontaneous firing rates than those in striola and juxtastriola. We found that differences in spike train regularity are not a consequence of differences in peripheral terminal structure, per se, but that a higher proportion of multiple contacts between afferents and individual hair cells increases afferent firing irregularity. The prediction that afferents having primarily one bouton contact per hair cell will fire more regularly than afferents making multiple bouton contacts per hair cell has implications for spike train regularity in dimorphic and calyx afferents. NEW & NOTEWORTHY Bouton afferents in different regions of turtle utricle have very different morphologies and afferent-hair cell connectivities. Highly detailed computational modeling provides insights into how morphology impacts excitability and also reveals a new explanation for spike train irregularity based on relative numbers of multiple bouton contacts per hair cell. This mechanism is independent of other proposed mechanisms for spike train irregularity based on ionic conductances and can explain irregularity in dimorphic units and calyx endings. Copyright © 2017 the American Physiological Society.
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields and parametric sensitivities for multiple sources. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained using the inexact conjugate gradient iteration method. Synthetic test inversions are presented.
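A generic, stripped-down version of one regularized Gauss-Newton update with a conjugate-gradient inner solve, in the spirit of the inversion scheme described above, is sketched below; the forward model and Jacobian are toy stand-ins, not a CSEM simulator, and the damping weight is an assumed constant.

    # Generic sketch of a regularized Gauss-Newton update solved with a
    # conjugate-gradient inner iteration on a toy nonlinear forward model.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def gauss_newton_step(m, forward, jacobian, d_obs, lam=1e-2):
        J = jacobian(m)                       # sensitivity matrix
        r = d_obs - forward(m)                # data residual
        n = m.size
        # Gauss-Newton Hessian (J^T J + lam I) applied matrix-free.
        H = LinearOperator((n, n),
                           matvec=lambda v: J.T @ (J @ v) + lam * v)
        dm, _ = cg(H, J.T @ r - lam * m, atol=1e-10)
        return m + dm

    # Toy quadratic forward model d_i = (G m)_i^2 with known Jacobian.
    rng = np.random.default_rng(2)
    G = rng.standard_normal((30, 10))
    forward = lambda m: (G @ m) ** 2
    jacobian = lambda m: 2.0 * (G @ m)[:, None] * G
    m_true = rng.standard_normal(10)
    d_obs = forward(m_true)
    m = np.zeros(10)
    for _ in range(10):
        m = gauss_newton_step(m, forward, jacobian, d_obs)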
Lattice Boltzmann Model of 3D Multiphase Flow in Artery Bifurcation Aneurysm Problem
Abas, Aizat; Mokhtar, N. Hafizah; Ishak, M. H. H.; Abdullah, M. Z.; Ho Tian, Ang
2016-01-01
This paper simulates and predicts the laminar flow inside the 3D aneurysm geometry, since the hemodynamic situation in the blood vessels is difficult to determine and visualize using standard imaging techniques, for example, magnetic resonance imaging (MRI). Three different types of Lattice Boltzmann (LB) models are computed, namely, single relaxation time (SRT), multiple relaxation time (MRT), and regularized BGK models. The results obtained using these different versions of the LB-based code will then be validated with ANSYS FLUENT, a commercially available finite volume- (FV-) based CFD solver. The simulated flow profiles that include velocity, pressure, and wall shear stress (WSS) are then compared between the two solvers. The predicted outcomes show that all the LB models are comparable and in good agreement with the FVM solver for complex blood flow simulation. The findings also show minor differences in their WSS profiles. The performance of the parallel implementation for each solver is also included and discussed in this paper. In terms of parallelization, it was shown that LBM-based code performed better in terms of the computation time required. PMID:27239221
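The collide-and-stream structure shared by the SRT/MRT/regularized variants compared above can be illustrated with a minimal D2Q9 single-relaxation-time (BGK) loop on a periodic box, sketched below with an assumed relaxation time and a trivial initial state; it is far simpler than the aneurysm simulations in the paper.

    # Minimal D2Q9 single-relaxation-time (BGK) lattice Boltzmann sketch
    # on a periodic box, showing only the collide-and-stream structure.
    import numpy as np

    # D2Q9 lattice velocities and weights.
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    tau = 0.8                     # relaxation time (assumed)

    def equilibrium(rho, u):
        cu = np.einsum('qd,xyd->xyq', c, u)
        usq = np.sum(u**2, axis=-1)[..., None]
        return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    nx, ny = 64, 32
    f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
    for step in range(100):
        rho = f.sum(axis=-1)
        u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
        f += -(f - equilibrium(rho, u)) / tau              # BGK collision
        for q in range(9):                                 # streaming
            f[:, :, q] = np.roll(f[:, :, q], shift=tuple(c[q]), axis=(0, 1))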
Simplified Phase Diversity algorithm based on a first-order Taylor expansion.
Zhang, Dong; Zhang, Xiaobin; Xu, Shuyan; Liu, Nannan; Zhao, Luoxin
2016-10-01
We present a simplified solution to phase diversity when the observed object is a point source. It utilizes an iterative linearization of the point spread function (PSF) at two or more diverse planes by first-order Taylor expansion to reconstruct the initial wavefront. To enhance the influence of the PSF in the defocal plane, which is usually very dim compared to that in the focal plane, we build a new model with a Tikhonov regularization function. The new model can not only increase the computational speed, but also reduce the influence of noise. By using the PSFs obtained from Zemax, we reconstruct the wavefront of the Hubble Space Telescope (HST) at the edge of the field of view (FOV) when the telescope is in either the nominal state or the misaligned state. We also set up an experiment, which consists of an imaging system and a deformable mirror, to validate the correctness of the presented model. The result shows that the new model can improve the computational speed with high wavefront detection accuracy.
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
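The Kesten-McKay law cited above has a closed form for random d-regular graphs; the sketch below evaluates it and checks numerically that it integrates to one (comparing against the empirical spectrum of a generated random regular graph would be a natural extension, omitted here).

    # Kesten-McKay spectral density for random d-regular graphs,
    # supported on |lambda| <= 2*sqrt(d-1), checked to integrate to one.
    import numpy as np

    def kesten_mckay(lam, d):
        lam = np.asarray(lam, dtype=float)
        support = np.abs(lam) <= 2.0 * np.sqrt(d - 1)
        rho = np.zeros_like(lam)
        rho[support] = (d * np.sqrt(4.0 * (d - 1) - lam[support]**2)
                        / (2.0 * np.pi * (d**2 - lam[support]**2)))
        return rho

    d = 4
    lam = np.linspace(-2*np.sqrt(d-1), 2*np.sqrt(d-1), 20001)
    rho = kesten_mckay(lam, d)
    print("integral of density:", (rho * (lam[1] - lam[0])).sum())  # ~1.0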
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
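A schematic of the perturbation-resampling idea described above is sketched below: the penalized model is re-fitted many times with independent mean-one random weights on the subjects' contributions, and confidence limits are read off the empirical distribution of the perturbed estimates. The solver fit_weighted_adaptive_lasso_cox is a hypothetical placeholder supplied by the user, not a function from any particular library.

    # Schematic perturbation-resampling loop for interval estimation of a
    # regularized Cox model. The weighted solver is a user-supplied,
    # hypothetical callable; no specific library API is assumed.
    import numpy as np

    def perturbation_ci(X, time, event, fit_weighted_adaptive_lasso_cox,
                        n_perturb=500, alpha=0.05, rng=None):
        rng = np.random.default_rng(rng)
        n = X.shape[0]
        betas = []
        for _ in range(n_perturb):
            w = rng.exponential(scale=1.0, size=n)   # mean-one weights
            betas.append(fit_weighted_adaptive_lasso_cox(X, time, event, w))
        betas = np.vstack(betas)
        lower = np.percentile(betas, 100 * alpha / 2, axis=0)
        upper = np.percentile(betas, 100 * (1 - alpha / 2), axis=0)
        return lower, upper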
Efficient robust doubly adaptive regularized regression with applications.
Karunamuni, Rohana J; Kong, Linglong; Tu, Wei
2018-01-01
We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
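The Zassenhaus-type splitting referred to above can be checked numerically on small matrices: expm(A + B) is approximated by expm(A) @ expm(B), and more accurately by appending expm(-0.5*[A, B]). The sketch below uses random matrices scaled by a small factor purely for illustration.

    # Small numerical check of a Zassenhaus-type splitting of the matrix
    # exponential; random matrices here are purely illustrative.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    t = 0.1
    A = t * rng.standard_normal((5, 5))
    B = t * rng.standard_normal((5, 5))

    exact = expm(A + B)
    order1 = expm(A) @ expm(B)
    comm = A @ B - B @ A
    order2 = expm(A) @ expm(B) @ expm(-0.5 * comm)

    print("first-order error: ", np.linalg.norm(exact - order1))
    print("second-order error:", np.linalg.norm(exact - order2))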
Ghaderi, Forouzan; Ghaderi, Amir H.; Ghaderi, Noushin; Najafi, Bijan
2017-01-01
Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at the confined levels of density, and there is no specific computational method for calculating thermal conductivity in the wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152 was predicted by these methods and the effectiveness of models was specified and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density. It means that this model cannot be used at density levels which are higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of ANN is achieved when the number of units is increased in the hidden layer. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density at higher densities is eliminated. It can develop a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity in wide ranges of density. Instead, a nonlinear approach such as, ANN is a valuable method for this purpose. PMID:29188217
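An illustrative sketch of the ANN route described above is given below: a small multilayer perceptron fitted to (density, temperature) -> thermal conductivity pairs. The synthetic training data and network size are assumptions standing in for the refrigerant data and architecture used in the study.

    # Toy sketch of an ANN regression for thermal conductivity over a wide
    # density range; the training data are synthetic, not refrigerant data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    density = rng.uniform(0.1, 12.0, size=500)
    temperature = rng.uniform(250.0, 400.0, size=500)
    # Made-up smooth target with a nonlinear density dependence.
    conductivity = 0.01 * np.sqrt(temperature) + 0.002 * density**1.8
    X = np.column_stack([density, temperature])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 20),
                                       max_iter=5000, random_state=0))
    model.fit(X, conductivity)
    print(model.predict([[8.0, 300.0]]))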
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-01-01
Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
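The PWLS idea described above can be reduced to a minimal example: denoising a 1D profile with noise-variance weights and a quadratic first-difference (MRF-type) roughness penalty, solvable in closed form. The sketch below is a toy stand-in for the sinogram- and image-domain implementations compared in the paper, with an arbitrary penalty weight.

    # Minimal penalized weighted least-squares (PWLS) sketch: denoise a 1D
    # profile with statistical weights and a quadratic roughness penalty.
    import numpy as np

    def pwls_denoise(y, variance, beta=50.0):
        n = y.size
        W = np.diag(1.0 / variance)               # statistical weights
        D = np.diff(np.eye(n), axis=0)            # first-difference operator
        R = D.T @ D                               # quadratic MRF-type penalty
        # Minimizes (y - x)^T W (y - x) + beta * x^T R x in closed form.
        return np.linalg.solve(W + beta * R, W @ y)

    rng = np.random.default_rng(5)
    truth = np.sin(np.linspace(0, np.pi, 200)) * 100.0
    variance = truth + 10.0                       # variance grows with signal
    noisy = truth + rng.standard_normal(200) * np.sqrt(variance)
    restored = pwls_denoise(noisy, variance)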
Measuring, Enabling and Comparing Modularity, Regularity and Hierarchy in Evolutionary Design
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2005-01-01
For computer-automated design systems to scale to complex designs they must be able to produce designs that exhibit the characteristics of modularity, regularity and hierarchy - characteristics that are found both in man-made and natural designs. Here we claim that these characteristics are enabled by implementing the attributes of combination, control-flow and abstraction in the representation. To support this claim we use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with different combinations of modularity, regularity and hierarchy enabled, and show that the best performance occurs when all three of these attributes are enabled. We also define metrics for modularity, regularity and hierarchy in design encodings and demonstrate that high fitness values are achieved with high values of modularity, regularity and hierarchy, and that there is a positive correlation between increases in fitness and increases in modularity, regularity and hierarchy.
Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II
2016-09-01
of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for...plays a central role in many applications, including image processing, computer vision and statistics etc. [13, 17, 20, 24]. The EMD is a metric defined
Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I
2016-09-01
which EMD can be reformulated as a familiar homogeneous degree 1 regularized minimization. The new minimization problem is very similar to problems which...which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing, computer vision
When Do Memory Limitations Lead to Regularization? An Experimental and Computational Investigation
ERIC Educational Resources Information Center
Perfors, Amy
2012-01-01
The Less is More hypothesis suggests that one reason adults and children differ in their ability to learn language is that they also differ in other cognitive capacities. According to one version of this hypothesis, children's relatively poor memory may make them more likely to regularize inconsistent input (Hudson Kam & Newport, 2005, 2009). This…
29 CFR 778.115 - Employees working at two or more rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Overtime Pay Requirements Principles for Computing Overtime Pay Based on the "Regular Rate" § 778.115... different types of work for which different nonovertime rates of pay (of not less than the applicable minimum wage) have been established, his regular rate for that week is the weighted average of such rates...
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
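For concreteness, the sketch below shows how a GCV score can be used to pick a regularization weight, here for a Tikhonov-regularized linear problem rather than the TV functional of the paper (the TV case requires the iterative scheme the authors describe); the test matrix and noise level are arbitrary.

    # Generalized cross-validation (GCV) for selecting a Tikhonov
    # regularization weight on a toy linear restoration problem.
    import numpy as np

    def gcv_score(A, b, lam):
        n = A.shape[1]
        A_lam = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)  # regularized inverse
        influence = A @ A_lam                                    # "hat" matrix
        residual = (np.eye(A.shape[0]) - influence) @ b
        denom = np.trace(np.eye(A.shape[0]) - influence)
        return (np.linalg.norm(residual) ** 2) / (denom ** 2)

    rng = np.random.default_rng(6)
    A = rng.standard_normal((80, 40))
    x_true = rng.standard_normal(40)
    b = A @ x_true + 0.05 * rng.standard_normal(80)
    lams = 10.0 ** np.arange(-6, 2)
    best = min(lams, key=lambda l: gcv_score(A, b, l))
    print("GCV-selected lambda:", best)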
Accurate orbit propagation in the presence of planetary close encounters
NASA Astrophysics Data System (ADS)
Amato, Davide; Baù, Giulio; Bombardelli, Claudio
2017-09-01
We present an efficient strategy for the numerical propagation of small Solar system objects undergoing close encounters with massive bodies. The trajectory is split into several phases, each of them being the solution of a perturbed two-body problem. Formulations regularized with respect to different primaries are employed in two subsequent phases. In particular, we consider the Kustaanheimo-Stiefel regularization and a novel set of non-singular orbital elements pertaining to the Dromo family. In order to test the proposed strategy, we perform ensemble propagations in the Earth-Sun Circular Restricted 3-Body Problem (CR3BP) using a variable step size and order multistep integrator and an improved version of Everhart's 15th-order RADAU solver. By combining the trajectory splitting with regularized equations of motion in short-term propagations (1 year), we gain up to six orders of magnitude in accuracy with respect to the classical Cowell's method for the same computational cost. Moreover, in the propagation of asteroid (99942) Apophis through its 2029 Earth encounter, the position error stays within 100 metres after 100 years. In general, to improve the performance of regularized formulations, the trajectory must be split between 1.2 and 3 Hill radii from the Earth. We also devise a robust iterative algorithm to stop the integration of regularized equations of motion at a prescribed physical time. The results rigorously hold in the CR3BP, and similar considerations may apply when considering more complex models. The methods and algorithms are implemented in the NAPLES Fortran 2003 code, which is available online as a GitHub repository.
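The Kustaanheimo-Stiefel regularization mentioned above rests on a map from a 4-vector u to the physical position x in R^3; a minimal sketch of that map and its defining property |x| = |u|^2 is given below, with the inverse map and the transformed equations of motion omitted.

    # Kustaanheimo-Stiefel (KS) map from the regularizing 4-vector u to
    # the physical position x in R^3.
    import numpy as np

    def ks_position(u):
        u1, u2, u3, u4 = u
        return np.array([u1**2 - u2**2 - u3**2 + u4**2,
                         2.0 * (u1 * u2 - u3 * u4),
                         2.0 * (u1 * u3 + u2 * u4)])

    u = np.array([0.3, 0.1, -0.2, 0.4])
    x = ks_position(u)
    # The KS map satisfies |x| = |u|^2.
    print(np.linalg.norm(x), np.dot(u, u))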
The effect of aging on the brain network for exception word reading.
Provost, Jean-Sebastien; Brambati, Simona M; Chapleau, Marianne; Wilson, Maximiliano A
2016-11-01
Cognitive and computational models of reading aloud agree on the existence of two procedures for reading. Pseudowords (e.g., atendier) are correctly read through subword processes only, while exception words (e.g., pint) are correctly read only via whole-word processes. Regular words can be correctly read by either procedure. Previous behavioral studies showed that older adults relied more on whole-word processing for reading. The aim of the present fMRI study was to verify whether this larger whole-word reliance for reading in older adults was reflected by changes in the pattern of brain activation. Both young and elderly participants read aloud pseudowords, exception words and regular words in the scanner. Behavioral results reproduced those of previous studies showing that older adults made significantly fewer errors when reading exception words. Neuroimaging results showed significant activation of the left anterior temporal lobe (ATL), a key region implicated in whole-word reading, for exception word reading in both young and elderly participants. Critically, ATL activation was also found for regular word reading in the elderly. No differences were observed in the pattern of activation between regular words and pseudowords in the young. In conclusion, these results extend evidence on the critical role of the left ATL for exception word reading to elderly participants. Additionally, our study shows for the first time from a developmental point of view that the behavioral changes found in reading during normal aging also have a brain counterpart in the reading network changes that sustain exception and regular word reading in the elderly. Copyright © 2016 Elsevier Ltd. All rights reserved.
Exploring social structure effect on language evolution based on a computational model
NASA Astrophysics Data System (ADS)
Gong, Tao; Minett, James; Wang, William
2008-06-01
A compositionality-regularity coevolution model is adopted to explore the effect of social structure on language emergence and maintenance. Based on this model, we explore language evolution in three experiments, and discuss the role of a popular agent in language evolution, the relationship between mutual understanding and social hierarchy, and the effect of inter-community communications and that of simple linguistic features on convergence of communal languages in two communities. This work embodies several important interactions during social learning, and introduces a new approach that manipulates individuals' probabilities to participate in social interactions to study the effect of social structure. We hope it will stimulate further theoretical and empirical explorations on language evolution in a social environment.
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
NASA Astrophysics Data System (ADS)
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.
Incorporating Linguistic Knowledge for Learning Distributed Word Representations
Wang, Yan; Liu, Zhiyuan; Sun, Maosong
2015-01-01
Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, does not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate this prior knowledge for learning distributed word representations. Experiment results demonstrate that our estimated word representations achieve better performance in the task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining. PMID:25874581
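The link-based regularization described above can be sketched as a quadratic penalty that pulls the vectors of words related in a knowledge base toward each other, applied on top of whatever corpus-based objective produced the initial vectors; the vocabulary, links, learning rate, and weight below are invented for illustration and do not reproduce the KRWR training procedure.

    # Toy sketch of link-based knowledge regularization on word vectors:
    # a quadratic penalty pulls KB-linked words toward each other.
    import numpy as np

    rng = np.random.default_rng(7)
    vocab = ["car", "automobile", "banana", "fruit"]
    emb = {w: rng.standard_normal(50) for w in vocab}
    links = [("car", "automobile"), ("banana", "fruit")]   # KB relations
    lam, lr, steps = 0.1, 0.05, 200

    for _ in range(steps):
        for a, b in links:
            # Gradient of lam * ||w_a - w_b||^2 with respect to each vector.
            diff = emb[a] - emb[b]
            emb[a] -= lr * 2.0 * lam * diff
            emb[b] += lr * 2.0 * lam * diff

    print(np.dot(emb["car"], emb["automobile"]))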
A Simplified Biosphere Model for Global Climate Studies.
NASA Astrophysics Data System (ADS)
Xue, Y.; Sellers, P. J.; Kinter, J. L.; Shukla, J.
1991-03-01
The Simple Biosphere Model (SiB) as described in Sellers et al. is a biophysically based model of land surface-atmosphere interaction. For some general circulation model (GCM) climate studies, further simplifications are desirable to achieve greater computational efficiency and, more importantly, to consolidate the parametric representation. Three major reductions in the complexity of SiB have been achieved in the present study. The diurnal variation of surface albedo is computed in SiB by means of a comprehensive yet complex calculation. Since the diurnal cycle is quite regular for each vegetation type, this calculation can be simplified considerably. The effect of root zone soil moisture on stomatal resistance is substantial, but the computation in SiB is complicated and expensive. We have developed approximations which simulate the effects of reduced soil moisture more simply, keeping the essence of the biophysical concepts used in SiB. The surface stress and the fluxes of heat and moisture between the top of the vegetation canopy and an atmospheric reference level have been parameterized in an off-line version of SiB based upon the studies by Businger et al. and Paulson. We have developed a linear relationship between the Richardson number and aerodynamic resistance. Finally, the second vegetation layer of the original model does not appear explicitly after simplification. Compared to the model of Sellers et al., we have reduced the number of input parameters from 44 to 21. A comparison of results using the reduced-parameter biosphere with those from the original formulation in a GCM and a zero-dimensional model shows the simplified version to reproduce the original results quite closely. After simplification, the computational requirement of SiB was reduced by about 55%.
New insights into chromatin folding and dynamics from multi-scale modeling
NASA Astrophysics Data System (ADS)
Olson, Wilma
The dynamic organization of chromatin plays an essential role in the regulation of gene expression and in other fundamental cellular processes. The underlying physical basis of these activities lies in the sequential positioning, chemical composition, and intermolecular interactions of the nucleosomes (the familiar assemblies of roughly 150 DNA base pairs and eight histone proteins) found on chromatin fibers. We have developed a mesoscale model of short nucleosomal arrays and a computational framework that make it possible to incorporate detailed structural features of DNA and histones in simulations of short chromatin constructs with 3-25 evenly spaced nucleosomes. The correspondence between the predicted and observed effects of nucleosome composition, spacing, and numbers on long-range communication between regulatory proteins bound to the ends of designed nucleosome arrays lends credence to the model and to the molecular insights gleaned from the simulated structures. We have extracted effective nucleosome-nucleosome potentials from the mesoscale simulations and introduced the potentials in a larger-scale computational treatment of regularly repeating chromatin fibers. Our results reveal a remarkable influence of nucleosome spacing on chromatin flexibility. Small changes in the length of the DNA fragments linking successive nucleosomes introduce marked changes in the local interactions of the nucleosomes and in the spatial configurations of the fiber as a whole. The changes in nucleosome positioning influence the statistical properties of longer chromatin constructs with 100-10,000 nucleosomes. We are investigating the extent to which the "local" interactions of regularly spaced nucleosomes contribute to the corresponding interactions in chains with mixed spacings as a step toward the treatment of fibers with nucleosomes positioned at the sites mapped at base-pair resolution on genomic sequences. Support of the work by USPHS R01 GM 34809 is gratefully acknowledged.
Friedel, M.J.
2008-01-01
A regularized joint inverse procedure is presented and used to estimate the magnitude of extreme rainfall events in ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. Since streamflow measurements reflect temporal and spatial rainfall information, peak-flow discharge is hypothesized to represent a similarity measure suitable for regionalization. To test this hypothesis, peak-flow discharge values determined from streamflow recurrence information (10-year, 25-year, and 100-year) collected outside the study basins are used to develop regional (country-wide) regression equations. Peak-flow discharge derived from these equations together with preferred spatial parameter relations as soft prior information are used to constrain the simultaneous calibration of 20 tributary basin models. The nonlinear range of uncertainty in estimated parameter values (1 curve number and 3 recurrent rainfall amounts for each model) is determined using an inverse calibration-constrained Monte Carlo approach. Cumulative probability distributions for rainfall amounts indicate differences among basins for a given return period and an increase in magnitude and range among basins with increasing return interval. Comparisons of the estimated median rainfall amounts for all return periods were reasonable but larger (3.2-26%) than rainfall estimates computed using the frequency-duration (traditional) approach and individual rain gauge data. The observed 25-year recurrence rainfall amount at La Hachadura in the Paz River basin during Hurricane Mitch (1998) is similar in value to, but outside and slightly less than, the estimated rainfall confidence limits. The similarity in joint inverse and traditionally computed rainfall events, however, suggests that the rainfall observation is likely due to under-catch and not model bias. © Springer Science+Business Media B.V. 2007.
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves much of the potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used compared with a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.
Hensman, James; Lawrence, Neil D; Rattray, Magnus
2013-08-20
Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data that are missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
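The hierarchical structure described above (a shared gene-level function plus replicate-level deviations) can be illustrated with a minimal covariance construction in numpy; the kernel hyperparameters below are arbitrary illustrative values, not those of the authors' implementation, and the sketch omits inference entirely.

    import numpy as np

    def rbf(t1, t2, variance, lengthscale):
        """Squared-exponential covariance between two sets of time points."""
        d = t1[:, None] - t2[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def hierarchical_cov(times, replicate_ids, var_g=1.0, len_g=5.0,
                         var_r=0.3, len_r=2.0, noise=0.05):
        """Covariance of a two-layer hierarchical GP: a shared gene-level function
        plus independent replicate-level deviations (illustrative hyperparameters)."""
        K = rbf(times, times, var_g, len_g)                      # shared gene layer
        same_rep = replicate_ids[:, None] == replicate_ids[None, :]
        K += same_rep * rbf(times, times, var_r, len_r)          # replicate layer
        return K + noise * np.eye(len(times))

    # Two replicates observed on irregular, non-matching time grids
    times = np.array([0.0, 1.0, 2.5, 0.5, 1.5, 3.0])
    reps = np.array([0, 0, 0, 1, 1, 1])
    print(hierarchical_cov(times, reps).shape)  # (6, 6)

Because the replicate term only correlates points within the same replicate, evenly spaced or matched sampling across replicates is not required, which is the property highlighted in the abstract.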
Numerical simulation on pollutant dispersion from vehicle exhaust in street configurations.
Yassin, Mohamed F; Kellnerová, R; Janour, Z
2009-09-01
The impact of street configuration on pollutant dispersion from vehicle exhaust within urban canyons was numerically investigated using a computational fluid dynamics (CFD) model. Three-dimensional flow and dispersion of gaseous pollutants were modeled using the standard k-ε turbulence model, which was numerically solved based on the Reynolds-averaged Navier-Stokes equations by the commercial CFD code FLUENT. The concentration fields in the urban canyons were examined in three cases of street configurations: (1) a regular-shaped intersection, (2) a T-shaped intersection and (3) a skew-shaped crossing intersection. Vehicle emissions were simulated as double line sources along the street. The numerical model was validated against wind tunnel results in order to optimize the turbulence model. Numerical predictions agreed reasonably well with wind tunnel results. The results obtained indicate that the mean horizontal velocity was very small in the center near the lower region of the street canyon. The lowest turbulent kinetic energy was found at the separation and reattachment points associated with the corners of the lower parts of the upwind and downwind buildings in the street canyon. The pollutant concentration at the upwind side in the regular-shaped street intersection was higher than that in the T-shaped and skew-shaped street intersections. Moreover, the results reveal that the street intersections are important factors in predicting the flow patterns and pollutant dispersion in street canyons.
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but using the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines are freely available on a public website. PMID:29200994
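For orientation, the penalized-regression ENET baseline mentioned above can be sketched with scikit-learn as a generic stand-in; the lead field L, data v and the penalty weights alpha and l1_ratio are synthetic placeholders, and this is not the authors' Structured Sparse Bayesian Learning algorithm, which learns the hyperparameters by Empirical Bayes.

    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(0)
    n_sensors, n_sources = 64, 500
    L = rng.standard_normal((n_sensors, n_sources))        # synthetic lead field
    j_true = np.zeros(n_sources)
    j_true[100:110] = 1.0                                   # a small active patch
    v = L @ j_true + 0.01 * rng.standard_normal(n_sensors)  # synthetic EEG data

    # ENET: the L1 part promotes sparsity, the L2 part favors smooth nonzero patches.
    enet = ElasticNet(alpha=0.01, l1_ratio=0.5, fit_intercept=False, max_iter=5000)
    enet.fit(L, v)
    print(np.flatnonzero(np.abs(enet.coef_) > 1e-3))        # recovered active sources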
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
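A single regularized Gauss-Newton model update of the general kind used in Occam-style inversion can be sketched as follows; the Jacobian J, roughness operator R, data and trade-off parameter lam are small synthetic placeholders, and the sketch omits the Occam line search over the regularization trade-off and all of the parallel machinery described above.

    import numpy as np

    def gauss_newton_step(J, d_obs, d_pred, m, R, lam):
        """One regularized Gauss-Newton update:
        minimize ||d_obs - d_pred - J dm||^2 + lam ||R (m + dm)||^2 over dm."""
        A = J.T @ J + lam * R.T @ R
        b = J.T @ (d_obs - d_pred) - lam * R.T @ (R @ m)
        return np.linalg.solve(A, b)

    rng = np.random.default_rng(1)
    n_data, n_model = 30, 20
    J = rng.standard_normal((n_data, n_model))      # sensitivity (Jacobian) matrix
    m = np.zeros(n_model)                           # current conductivity model (log units)
    d_obs = rng.standard_normal(n_data)             # observed responses (synthetic)
    d_pred = J @ m                                  # linearized predicted responses
    R = np.eye(n_model) - np.eye(n_model, k=1)      # simple first-difference roughness
    dm = gauss_newton_step(J, d_obs, d_pred, m, R, lam=1.0)
    print(dm[:5])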
ERIC Educational Resources Information Center
Nikolic, B.; Radivojevic, Z.; Djordjevic, J.; Milutinovic, V.
2009-01-01
Courses in Computer Architecture and Organization are regularly included in Computer Engineering curricula. These courses are usually organized in such a way that students obtain not only a purely theoretical experience, but also a practical understanding of the topics lectured. This practical work is usually done in a laboratory using simulators…
Nature-Computer Camp 1991. Chapter 2 Program Evaluation Report.
ERIC Educational Resources Information Center
District of Columbia Public Schools, Washington, DC. Dept. of Research and Evaluation.
The District of Columbia Public Schools Nature Computer Camp (NCC) is an environmental/computer program which has been operating in the Catoctin Mountain Park (Maryland) since 1983. The camp operates for five one-week sessions serving a total of 406 regular sixth-grade students representing 84 elementary schools with an average of 81 students per…
Robust Computation of Linear Models, or How to Find a Needle in a Haystack
2012-02-17
robustly, project it onto a sphere, and then apply standard PCA. This approach is due to [LMS+99]. Maronna et al. [MMY06] recommend it as a preferred ... of this form is due to Chandrasekaran et al. [CSPW11]. Given an observed matrix X, they propose to solve the semidefinite problem minimize ‖P‖_S1 + γ ... regularization parameter γ negotiates a tradeoff between the two goals. Candès et al. [CLMW11] study the performance of (2.1) for robust linear
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in space, but also incorporates spatial and temporal regularization to improve prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as the Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
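As a point of reference for the baselines mentioned above, the zero-order Tikhonov solution of a body-surface-to-heart-surface inverse problem has a closed form, sketched below; the transfer matrix, measurements and the regularization weight lam are synthetic placeholders, not the geometry or parameters used in the paper.

    import numpy as np

    def tikhonov_zero_order(A, y, lam):
        """Closed-form zero-order Tikhonov estimate:
        argmin_x ||A x - y||^2 + lam ||x||^2."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    rng = np.random.default_rng(2)
    A = rng.standard_normal((120, 300))      # body-surface transfer matrix (synthetic)
    x_true = rng.standard_normal(300)        # heart-surface potentials (synthetic)
    y = A @ x_true + 0.05 * rng.standard_normal(120)
    x_hat = tikhonov_zero_order(A, y, lam=0.1)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))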
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on the nonlinear conjugate gradient method with a restart strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
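The effect of an L1 penalty in a linearized FMT-type reconstruction can be illustrated with a basic iterative soft-thresholding loop; this is a generic low-memory L1 solver used only as a stand-in, not the restarted nonlinear conjugate gradient algorithm proposed in the paper, and the sensitivity matrix and penalty weight lam are synthetic.

    import numpy as np

    def ista(A, y, lam, n_iter=200):
        """Iterative soft-thresholding for argmin_x 0.5||A x - y||^2 + lam ||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - y)
            z = x - step * g
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(3)
    A = rng.standard_normal((80, 400))             # linearized sensitivity matrix (synthetic)
    x_true = np.zeros(400)
    x_true[[50, 51, 200]] = 1.0                    # sparse fluorescent targets
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    print(np.flatnonzero(np.abs(ista(A, y, lam=0.05)) > 0.1))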
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
NASA Astrophysics Data System (ADS)
Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi
2016-10-01
Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit-solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to the study of Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. With the improved VAUS, larger biological systems become amenable to study.
Coulomb Impurity Problem of Graphene in Strong Coupling Regime in Magnetic Fields.
Kim, S C; Yang, S-R Eric
2015-10-01
We investigate the Coulomb impurity problem of graphene in the strong coupling limit in the presence of magnetic fields. When the strength of the Coulomb potential is sufficiently strong, the electron of the lowest-energy bound state of the n = 0 Landau level may fall to the center of the potential. To prevent this spurious effect, the Coulomb potential must be regularized. The scaling function for the inverse probability density of this state at the center of the impurity potential is computed in the strong coupling regime. The dependence of the computed scaling function on the regularization parameter changes significantly as the strong coupling regime is approached.
NASA Astrophysics Data System (ADS)
Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando
2016-04-01
Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ models, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions based on the selected turbulence model in order to analyze grid independence. Three different grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain and nx × nz = 26 × 32, 13 × 16 and 6 × 8, which correspond to the number of grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependency in the fine and medium grid resolutions in both regular and irregular structured meshes. On the other hand, despite the very good performance of the RNG k-ɛ model in the fine resolution and in the regular structured grids, its disappointing performance in the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution. These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.
Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.
Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin
2015-08-21
Aiming at the limitations of the simplified spherical harmonics approximation (SPN) and diffusion equation (DE) in describing the light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and then the organs are divided into high-scattering tissues and other tissues. DE and SPN are employed to describe the light propagation in these two kinds of tissues, respectively, and the two descriptions are coupled using the established boundary coupling condition. The HSDE model exploits the advantages of SPN and DE while avoiding their disadvantages, so that it can provide a good balance between accuracy and computation time. Using the finite element method, the HSDE model is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with simulations based on both regular geometries and a digital mouse model. The results show that the HSDE model achieves accuracy comparable to the SPN model with much less computation time, as well as much better accuracy than the DE model.
Gibbon, John D; Pal, Nairita; Gupta, Anupam; Pandit, Rahul
2016-12-01
We consider the three-dimensional (3D) Cahn-Hilliard equations coupled to, and driven by, the forced, incompressible 3D Navier-Stokes equations. The combination, known as the Cahn-Hilliard-Navier-Stokes (CHNS) equations, is used in statistical mechanics to model the motion of a binary fluid. The potential development of singularities (blow-up) in the contours of the order parameter ϕ is an open problem. To address this we have proved a theorem that closely mimics the Beale-Kato-Majda theorem for the 3D incompressible Euler equations [J. T. Beale, T. Kato, and A. J. Majda, Commun. Math. Phys. 94, 61 (1984); doi:10.1007/BF01212349]. By taking an L^{∞} norm of the energy of the full binary system, designated as E_{∞}, we have shown that ∫_{0}^{t}E_{∞}(τ)dτ governs the regularity of solutions of the full 3D system. Our direct numerical simulations (DNSs) of the 3D CHNS equations for (a) a gravity-driven Rayleigh-Taylor instability and (b) a constant-energy-injection forcing, with 128^{3} to 512^{3} collocation points and over the duration of our DNSs confirm that E_{∞} remains bounded as far as our computations allow.
3D shape decomposition and comparison for gallbladder modeling
NASA Astrophysics Data System (ADS)
Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen
2011-03-01
This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.
Critical Behavior of the Annealed Ising Model on Random Regular Graphs
NASA Astrophysics Data System (ADS)
Can, Van Hao
2017-11-01
In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors have defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, where n is the number of vertices of the random regular graph.
Computer screen photo-excited surface plasmon resonance imaging.
Filippini, Daniel; Winquist, Fredrik; Lundström, Ingemar
2008-09-12
Angle- and spectrally-resolved surface plasmon resonance (SPR) images of gold and silver thin films with protein deposits are demonstrated using a regular computer screen as the light source and a web camera as the detector. The screen provides multiple-angle illumination, p-polarized light and controlled spectral radiances to excite surface plasmons in a Kretschmann configuration. A model of the SPR reflectances incorporating the particularities of the source and detector explains the observed signals, and the generation of distinctive SPR landscapes is demonstrated. The sensitivity and resolution of the method, determined in air and solution, are 0.145 nm pixel^-1, 0.523 nm, 5.13x10^-3 RIU degree^-1 and 6.014x10^-4 RIU, respectively; these are encouraging results at this proof-of-concept stage, considering the ubiquity of the instrumentation.
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Stephens, G. L.; Campbell, G. G.
1980-01-01
The annually and seasonally averaged Earth-atmosphere radiation budgets derived from the most complete set of satellite observations available are presented. The budgets were derived from a composite of 48 monthly mean radiation budget maps. Annually and seasonally averaged radiation budgets are presented as global averages and zonal averages. The geographic distribution of the various radiation budget quantities is described. The annual cycle of the radiation budget was analyzed and the annual variability of net flux was shown to be largely dominated by the regular semiannual and annual cycles forced by external Earth-Sun geometry variations. Radiative transfer calculations were compared to the observed budget quantities, and surface budgets were additionally computed with particular emphasis on discrepancies that exist between the present computations and previous surface budget estimates.
RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING
Liu, Meizhu; Vemuri, Baba C.
2011-01-01
Boosting is a versatile machine learning technique that has numerous applications including, but not limited to, image processing, computer vision and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. There have been a number of boosting methods proposed in the literature, such as AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities, available in closed form, which represent the distribution over the training data and the classification error respectively, to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much lower computational cost than other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LP-Boost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home-grown epilepsy database and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
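The closed-form distance referred to above, the geodesic distance between two discrete densities represented by their square roots on the unit sphere, can be computed directly; the sketch below shows only that distance on toy weight vectors, not the full RBoost weight update.

    import numpy as np

    def riemannian_distance(p, q):
        """Geodesic distance between two discrete probability densities,
        represented as points sqrt(p), sqrt(q) on the unit sphere:
        d(p, q) = arccos( sum_i sqrt(p_i * q_i) )."""
        p = np.asarray(p, float) / np.sum(p)
        q = np.asarray(q, float) / np.sum(q)
        bc = np.sum(np.sqrt(p * q))              # Bhattacharyya coefficient
        return np.arccos(np.clip(bc, -1.0, 1.0))

    w_data = np.array([0.25, 0.25, 0.25, 0.25])  # distribution over training samples
    w_err = np.array([0.10, 0.40, 0.40, 0.10])   # error-driven distribution
    print(riemannian_distance(w_data, w_err))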
A multi-resolution approach to electromagnetic modelling
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-07-01
We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which results in similar accuracy as the staggered grid solution, while retaining the symmetry of coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that multi-resolution approach improves on computational efficiency without compromising the accuracy of the solution.
NASA Astrophysics Data System (ADS)
Marrero, Carlos Sosa; Aubert, Vivien; Ciferri, Nicolas; Hernández, Alfredo; de Crevoisier, Renaud; Acosta, Oscar
2017-11-01
Understanding the response to irradiation in cancer radiotherapy (RT) may help in devising new strategies with improved local tumor control. Computational models may allow us to unravel the underlying radiosensitivity mechanisms intervening in the dose-response relationship. By using extensive simulations, a wide range of parameters may be evaluated, providing insights into tumor response and generating useful data for planning modified treatments. We propose in this paper a computational model of tumor growth and radiation response which allows a whole RT protocol to be simulated. Proliferation of tumor cells, cell life-cycle, oxygen diffusion, radiosensitivity, RT response and resorption of killed cells were implemented in a multiscale framework. The model was developed in C++, using the Multi-formalism Modeling and Simulation Library (M2SL). Radiosensitivity parameters extracted from the literature enabled us to simulate prostate cell tissue on a regular (voxel-wise) grid. Histopathological specimens with different aggressiveness levels extracted from patients after prostatectomy were used to initialize in silico simulations. Results on tumor growth show good agreement with data from in vitro studies. Moreover, a standard fractionation of 2 Gy/fraction with a total dose of 80 Gy, as in a real RT treatment, was applied with varying radiosensitivity and oxygen diffusion parameters. As expected, the high influence of these parameters was observed by measuring the percentage of surviving tumor cells after RT. This work paves the way for further models that simulate increased doses in modified hypofractionated schemes and for the development of new patient-specific combined therapies.
NASA Astrophysics Data System (ADS)
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more flexible sparse representation framework for images is still needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is formulated as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to consistency with the observed data and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method, which shows promising capabilities compared with conventional regularization.
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
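The kind of optimizer comparison described above can be reproduced in miniature on a smooth toy misfit using SciPy's generic implementations; the Rosenbrock function here is only a schematic stand-in for a waveform misfit, and the specialized preconditioners, line searches and restart conditions discussed in the paper are not represented.

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    # Toy "misfit": the Rosenbrock function stands in for a waveform misfit.
    x0 = np.full(50, -1.0)
    for method in ("L-BFGS-B", "CG"):      # limited-memory BFGS vs nonlinear CG
        res = minimize(rosen, x0, jac=rosen_der, method=method,
                       options={"maxiter": 2000})
        print(method, "misfit:", round(res.fun, 6),
              "iterations:", res.nit, "function evaluations:", res.nfev)

On this toy problem, as in the test cases reported above, the interesting quantity is the number of function and gradient evaluations needed to reach a given misfit level, since each evaluation corresponds to an expensive wavefield simulation in the seismic setting.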
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.
W-phase estimation of first-order rupture distribution for megathrust earthquakes
NASA Astrophysics Data System (ADS)
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2014-05-01
Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave that arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour following the origin time.
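The regularization-parameter selection step described above, Tikhonov regularization with the parameter chosen by the discrepancy principle over a grid, can be sketched as follows; the linear operator G, data and noise-level estimate are synthetic placeholders rather than W-phase Green's functions.

    import numpy as np

    def tikhonov(G, d, lam):
        return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ d)

    def discrepancy_grid_search(G, d, noise_norm, lams):
        """Pick the largest lambda whose residual norm does not exceed the
        estimated noise norm (discrepancy principle), scanning a grid."""
        for lam in sorted(lams, reverse=True):
            m = tikhonov(G, d, lam)
            if np.linalg.norm(G @ m - d) <= noise_norm:
                return lam, m
        lam_min = float(np.min(lams))
        return lam_min, tikhonov(G, d, lam_min)

    rng = np.random.default_rng(4)
    G = rng.standard_normal((200, 60))         # Green's functions (synthetic)
    m_true = rng.standard_normal(60)
    noise = 0.1 * rng.standard_normal(200)     # noise level assumed known here
    d = G @ m_true + noise
    lam, m_hat = discrepancy_grid_search(G, d, np.linalg.norm(noise),
                                         lams=np.logspace(-4, 2, 25))
    print("chosen lambda:", lam)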
Verveer, P. J; Gemkow, M. J; Jovin, T. M
1999-01-01
We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified within a Bayesian framework according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness, and no regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z.; Bessa, M. A.; Liu, W.K.
A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed-up estimated to be about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
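A minimal stand-in for the data-driven offline stage described above is k-means clustering over per-point response data, sketched below with synthetic strain-concentration tensors; this is only an illustration of the agglomeration step, not the full SCA implementation, which also builds the cluster interaction tensor and runs the self-consistent online stage.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    n_points = 10000                 # material points (voxels) in the RVE
    # Each row: a flattened 3x3 strain concentration tensor from elastic pre-analyses
    A = rng.standard_normal((n_points, 9))

    # Offline stage: agglomerate points with similar local response into clusters;
    # the online stage then solves the Lippmann-Schwinger equation cluster-wise.
    k = 16                           # number of clusters (illustrative)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A)
    print("points per cluster:", np.bincount(labels))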
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem that is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-Memory Quasi-Newton (OWL-QN) method extends the widely-used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of prior model information obtained from sonic logs and geological knowledge, we implement the OWL-QN algorithm for ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also is robust to noise.
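The role of the prior model in the ℓ1 penalty can be illustrated with the proximal operator of λ||m − m_prior||_1, which shrinks the current model toward the prior; this is an illustrative building block of generic proximal solvers, not an implementation of OWL-QN, and the model vectors and threshold tau below are synthetic.

    import numpy as np

    def prox_l1_about_prior(m, m_prior, tau):
        """Proximal operator of tau * ||m - m_prior||_1:
        shrink each component of m toward the corresponding prior value."""
        d = m - m_prior
        return m_prior + np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)

    m = np.array([1.50, 2.30, 2.00, 3.10])        # current velocity model (synthetic)
    m_prior = np.array([1.50, 2.00, 2.00, 3.00])  # prior model from sonic logs (synthetic)
    print(prox_l1_about_prior(m, m_prior, tau=0.05))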
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause a loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
Oh, Eun-Yeong; Lerwill, Melinda F.; Brachtel, Elena F.; Jones, Nicholas C.; Knoblauch, Nicholas W.; Montaser-Kouhsari, Laleh; Johnson, Nicole B.; Rao, Luigi K. F.; Faulkner-Jones, Beverly; Wilbur, David C.; Schnitt, Stuart J.; Beck, Andrew H.
2014-01-01
The categorization of intraductal proliferative lesions of the breast based on routine light microscopic examination of histopathologic sections is in many cases challenging, even for experienced pathologists. The development of computational tools to aid pathologists in the characterization of these lesions would have great diagnostic and clinical value. As a first step to address this issue, we evaluated the ability of computational image analysis to accurately classify ductal carcinoma in situ (DCIS) and usual ductal hyperplasia (UDH) and to stratify nuclear grade within DCIS. Using 116 breast biopsies diagnosed as DCIS or UDH from the Massachusetts General Hospital (MGH), we developed a computational method to extract 392 features corresponding to the mean and standard deviation in nuclear size and shape, intensity, and texture across 8 color channels. We used L1-regularized logistic regression to build classification models to discriminate DCIS from UDH. The top-performing model contained 22 active features and achieved an AUC of 0.95 in cross-validation on the MGH dataset. We applied this model to an external validation set of 51 breast biopsies diagnosed as DCIS or UDH from the Beth Israel Deaconess Medical Center, and the model achieved an AUC of 0.86. The top-performing model contained active features from all color spaces and from the three classes of features (morphology, intensity, and texture), suggesting the value of each for prediction. We built models to stratify grade within DCIS and obtained strong performance for stratifying low nuclear grade vs. high nuclear grade DCIS (AUC = 0.98 in cross-validation), with only moderate performance for discriminating low nuclear grade vs. intermediate nuclear grade and intermediate nuclear grade vs. high nuclear grade DCIS (AUC = 0.83 and 0.69, respectively). These data show that computational pathology models can robustly discriminate benign from malignant intraductal proliferative lesions of the breast and may aid pathologists in the diagnosis and classification of these lesions. PMID:25490766
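The classification step described above, L1-regularized logistic regression over nuclear morphology, intensity and texture features, can be sketched with scikit-learn; the feature matrix, labels and the penalty strength C below are synthetic placeholders, not the MGH data or the tuned model from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.standard_normal((116, 392))   # 392 nuclear features per biopsy (synthetic)
    y = rng.integers(0, 2, 116)           # 0 = UDH, 1 = DCIS (synthetic labels)

    # The L1 penalty drives most of the 392 coefficients to zero, selecting a small
    # "active" feature set; C is illustrative, not the value tuned in the paper.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5,
                                                  scoring="roc_auc").mean())
    clf.fit(X, y)
    print("active features:", int(np.sum(clf.coef_ != 0)))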
Scargiali, F; Grisafi, F; Busciglio, A; Brucato, A
2011-12-15
The formation of toxic heavy clouds as a result of sudden accidental releases from mobile containers, such as road tankers or railway tank cars, may occur inside urban areas, so the problem of evaluating their consequences arises. Due to the semi-confined nature of the dispersion site, simplified models may often be inappropriate. As an alternative, computational fluid dynamics (CFD) has the potential to provide realistic simulations even for geometrically complex scenarios, since the heavy gas dispersion process is described by basic conservation equations with a reduced number of approximations. In the present work a commercial general purpose CFD code (CFX 4.4 by Ansys®) is employed for the simulation of dense cloud dispersion in urban areas. The simulation strategy proposed involves a stationary pre-release flow field simulation followed by dynamic after-release flow and concentration field simulations. In order to allow some generalization of the results, the computational domain is modeled as a simple network of straight roads with regularly distributed blocks mimicking the buildings. Results show that the presence of buildings lowers concentration maxima and enlarges the lateral spread of the cloud. Dispersion dynamics is also found to be strongly affected by the quantity of heavy gas released. Copyright © 2011 Elsevier B.V. All rights reserved.
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine a highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits adjoint sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem set up. To parameterize an inverse domain a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
Optimal gains for a single polar orbiting satellite
NASA Technical Reports Server (NTRS)
Banfield, Don; Ingersoll, A. P.; Keppenne, C. L.
1993-01-01
Gains are the spatial weighting of an observation in its neighborhood versus the local values of a model prediction. They are the key to data assimilation, as they are the direct measure of how the data are used to guide the model. As derived in the broad context of data assimilation by Kalman and in the context of meteorology, for example, by Rutherford, the optimal gains are functions of the prediction error covariances between the observation and analysis points. Kalman introduced a very powerful technique that allows one to calculate these optimal gains at the time of each observation. Unfortunately, this technique is both computationally expensive and often numerically unstable for dynamical systems of the magnitude of meteorological models, and thus is unsuited for use in PMIRR data assimilation. However, the optimal gains as calculated by a Kalman filter do reach a steady state for regular observing patterns like that of a satellite. In this steady state, the gains are constants in time, and thus could conceivably be computed off-line. These steady-state Kalman gains (i.e., Wiener gains) would yield optimal performance without the computational burden of true Kalman filtering. We proposed to use this type of constant-in-time Wiener gain for the assimilation of data from PMIRR and Mars Observer.
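The steady-state (Wiener) gain described above can be computed offline by iterating the discrete Kalman filter covariance recursion until the gain converges; the system matrices below are small synthetic placeholders rather than a meteorological model, and the iteration count and tolerance are arbitrary.

    import numpy as np

    def steady_state_gain(F, H, Q, R, n_iter=500):
        """Iterate the Kalman filter Riccati recursion until the gain converges."""
        P = np.eye(F.shape[0])
        K = np.zeros((F.shape[0], H.shape[0]))
        for _ in range(n_iter):
            P_pred = F @ P @ F.T + Q                       # forecast error covariance
            S = H @ P_pred @ H.T + R                       # innovation covariance
            K_new = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
            P = (np.eye(F.shape[0]) - K_new @ H) @ P_pred  # analysis error covariance
            if np.allclose(K_new, K, atol=1e-10):
                break
            K = K_new
        return K

    F = np.array([[1.0, 0.1], [0.0, 0.95]])   # toy dynamics (synthetic)
    H = np.array([[1.0, 0.0]])                # observe the first state only
    Q = 0.01 * np.eye(2)                      # model error covariance
    R = np.array([[0.1]])                     # observation error covariance
    print(steady_state_gain(F, H, Q, R))

Once converged, this constant gain can be stored and applied at every observation time without further Riccati updates, which is the computational saving argued for in the abstract.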
Staff, Michael; Roberts, Christopher; March, Lyn
2016-10-01
The aim was to describe the completeness of routinely collected primary care data that could be used by computer models to predict clinical outcomes among patients with Type 2 Diabetes (T2D). Data on blood pressure, weight, total cholesterol, HDL-cholesterol and glycated haemoglobin levels for regular patients were electronically extracted from the medical record software of 12 primary care practices in Australia for the period 2000-2012. The data were analysed for temporal trends and for associations between patient characteristics and completeness. General practitioners were surveyed to identify barriers to recording data and strategies to improve its completeness. Over the study period, data completeness improved to around 80%, although the recording of weight remained lower at 55%. T2D patients with Ischaemic Heart Disease were more likely to have their blood pressure recorded (OR 1.6, p=0.02). Practitioners reported not experiencing any major barriers to using their computer medical record system but did agree with some suggested strategies to improve record completeness. The completeness of routinely collected data suitable for input into computerised predictive models is improving, although other dimensions of data quality need to be addressed. Copyright © 2016 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to mitigate the ill-posedness. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
Simulated annealing in networks for computing possible arrangements for red and green cones
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1987-01-01
Attention is given to network models in which each of the cones of the retina is given a provisional color at random, and then the cones are allowed to determine the colors of their neighbors through an iterative process. A symmetric-structure spin-glass model has allowed arrays to be generated that range from completely random arrangements of red and green to arrays with approximately as much disorder as the parafoveal cones. Simulated annealing has also been added to the process in an attempt to generate color arrangements with greater regularity and hence more revealing moiré patterns than the arrangements yielded by quenched spin-glass processes. Attention is given to the perceptual implications of these results.
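The quenched versus annealed distinction above can be illustrated with a minimal Metropolis-style annealing loop on a square lattice of red and green cones; the neighbor-interaction rule, temperature schedule and lattice size are hypothetical illustrations, not the published model.

    import numpy as np

    rng = np.random.default_rng(6)
    N = 32
    cones = rng.choice([-1, +1], size=(N, N))   # -1 = red, +1 = green, assigned at random

    def local_energy(s, i, j):
        """Like-colored nearest neighbors raise the energy, favoring regular mixing."""
        nb = s[(i - 1) % N, j] + s[(i + 1) % N, j] + s[i, (j - 1) % N] + s[i, (j + 1) % N]
        return s[i, j] * nb

    T = 2.0
    for sweep in range(200):                    # simulated annealing: slowly lower T
        for _ in range(N * N):
            i, j = rng.integers(0, N, 2)
            dE = -2.0 * local_energy(cones, i, j)   # energy change if cone (i, j) flips
            if dE < 0 or rng.random() < np.exp(-dE / T):
                cones[i, j] *= -1
        T *= 0.98                               # geometric cooling schedule
    print("fraction green:", (cones == 1).mean())

Quenching corresponds to running the same loop at a fixed low temperature from the random start; slow cooling instead settles into more regular color arrangements, which is the behavior contrasted in the abstract.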
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias-corrected MRI, we present a high-order and L0-regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, which is constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we apply an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different rodent brain tissues. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate automatic processing of large-scale brain studies.
NASA Astrophysics Data System (ADS)
Fitch, W. Tecumseh
2014-09-01
Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Sam R.; Barack, Leor
2011-01-15
To model the radiative evolution of extreme mass-ratio binary inspirals (a key target of the LISA mission), the community needs efficient methods for computation of the gravitational self-force (SF) on the Kerr spacetime. Here we further develop a practical 'm-mode regularization' scheme for SF calculations, and give the details of a first implementation. The key steps in the method are (i) removal of a singular part of the perturbation field with a suitable 'puncture' to leave a sufficiently regular residual within a finite worldtube surrounding the particle's worldline, (ii) decomposition in azimuthal (m) modes, (iii) numerical evolution of the m modes in 2+1D with a finite-difference scheme, and (iv) reconstruction of the SF from the mode sum. The method relies on a judicious choice of puncture, based on the Detweiler-Whiting decomposition. We give a working definition for the "order" of the puncture, and show how it determines the convergence rate of the m-mode sum. The dissipative piece of the SF displays an exponentially convergent mode sum, while the m-mode sum for the conservative piece converges with a power law. In the latter case, the individual modal contributions fall off at large m as m^(-n) for even n and as m^(-n+1) for odd n, where n is the puncture order. We describe an m-mode implementation with a 4th-order puncture to compute the scalar-field SF along circular geodesics on Schwarzschild. In a forthcoming companion paper we extend the calculation to the Kerr spacetime.
38 CFR 3.271 - Computation of income.
Code of Federal Regulations, 2012 CFR
2012-07-01
... income. Recurring income means income which is received or anticipated in equal amounts and at regular... computation purposes. (2) Irregular income. Irregular income means income which is received or anticipated during a 12-month annualization period, but which is received in unequal amounts or at irregular...
Development of seismic tomography software for hybrid supercomputers
NASA Astrophysics Data System (ADS)
Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton
2015-04-01
Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
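The linearized inversion step described above, in which a tomographic matrix relates model adjustments to travel-time residuals and the system is regularized before solving, can be sketched generically as damped least squares; the ray-path matrix G and residual vector below are random placeholders, not output of the authors' software.

    import numpy as np
    from scipy.sparse import random as sprandom, identity, vstack
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(2)
    n_rays, n_cells = 500, 200
    G = sprandom(n_rays, n_cells, density=0.05, random_state=2)   # hypothetical ray-path matrix
    dt = rng.standard_normal(n_rays)                              # travel-time residuals

    lam = 0.5                                   # damping (regularization) weight
    A = vstack([G, lam * identity(n_cells)])    # augmented system for damped least squares
    b = np.concatenate([dt, np.zeros(n_cells)])
    dm = lsqr(A, b)[0]                          # model (slowness) adjustments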
Competition of information channels in the spreading of innovations
NASA Astrophysics Data System (ADS)
Kocsis, Gergely; Kun, Ferenc
2011-08-01
We study the spreading of information on technological developments in socioeconomic systems where the social contacts of agents are represented by a network of connections. In the model, agents get informed about the existence and advantages of new innovations through advertising activities of producers, which are then followed by an interagent information transfer. Computer simulations revealed that varying the strength of external driving and of interagent coupling, furthermore, the topology of social contacts, the model presents a complex behavior with interesting novel features: On the macrolevel the system exhibits logistic behavior typical for the diffusion of innovations. The time evolution can be described analytically by an integral equation that captures the nucleation and growth of clusters of informed agents. On the microlevel, small clusters are found to be compact with a crossover to fractal structures with increasing size. The distribution of cluster sizes has a power-law behavior with a crossover to a higher exponent when long-range social contacts are present in the system. Based on computer simulations we construct an approximate phase diagram of the model on a regular square lattice of agents.
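A hedged, simplified reimplementation of the two information channels on a square lattice (illustrative rates, not the paper's parameters): at each step an agent may become informed through external advertising with a small probability, or through each informed nearest neighbor with an interagent transfer probability; the recorded adoption curve exhibits the logistic-like macrolevel growth mentioned above.

    import numpy as np

    rng = np.random.default_rng(3)
    L, steps = 100, 200
    p_ad, q_peer = 0.0005, 0.02          # advertising and inter-agent transfer rates (illustrative)
    informed = np.zeros((L, L), dtype=bool)
    adoption_curve = []

    for t in range(steps):
        # Channel 1: external driving (advertising) hits every agent independently.
        newly = rng.random((L, L)) < p_ad
        # Channel 2: transfer from informed nearest neighbours on the square lattice.
        nb = sum(np.roll(informed, s, axis=a) for a in (0, 1) for s in (-1, 1))
        newly |= rng.random((L, L)) < 1.0 - (1.0 - q_peer) ** nb
        informed |= newly
        adoption_curve.append(informed.mean())   # logistic-like growth on the macrolevel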
Arrangement and Applying of Movement Patterns in the Cerebellum Based on Semi-supervised Learning.
Solouki, Saeed; Pooyan, Mohammad
2016-06-01
Biological control systems have long been studied as a possible inspiration for the construction of robotic controllers. The cerebellum is known to be involved in the production and learning of smooth, coordinated movements. Therefore, the highly regular structure of the cerebellum has been at the core of attention in theoretical and computational modeling. However, most of these models reflect only particular features of the cerebellum without considering the whole motor command computation process. In this paper, we try to establish a logical relation between the most significant models of the cerebellum and introduce a new learning strategy for arranging movement patterns: cerebellar modular arrangement and applying of movement patterns based on semi-supervised learning (CMAPS). We regard the cerebellum as a large archive of patterns with an efficient organization for classifying and recalling them. The main idea is to achieve optimal use of memory locations through more than just a supervised learning and classification algorithm. Certainly, more experimental and physiological research is needed to confirm our hypothesis.
Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.
Pendola, Martin; Saha, Subrata
2015-01-01
Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, common 3D display software such as MeshLab has difficulty achieving real-time display of, and interaction with, large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize the real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage.
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows the user to comfortably compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
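For reference, here is a Python sketch of the standard forward-column recursion for fully normalized associated Legendre functions, i.e., approach (i) above (coefficients follow the commonly used Holmes-Featherstone-style convention, written from memory, so the normalization should be checked against GrafLab itself). Because it works in ordinary double precision, it is subject to the same underflow limitation near degree 1800-2000 that motivates approaches (ii) and (iii).

    import numpy as np

    def fnalf(theta, nmax):
        """Fully normalized associated Legendre functions Pbar_nm(cos theta),
        via the standard forward-column recursion in double precision."""
        t, u = np.cos(theta), np.sin(theta)
        P = np.zeros((nmax + 1, nmax + 1))
        P[0, 0] = 1.0
        if nmax >= 1:
            P[1, 1] = np.sqrt(3.0) * u
        for m in range(2, nmax + 1):                      # sectorial terms Pbar_mm
            P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
        for m in range(0, nmax):                          # non-sectorial terms Pbar_nm, n > m
            for n in range(m + 1, nmax + 1):
                a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
                b = 0.0
                if n - m >= 2:
                    b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                                / ((n - m) * (n + m) * (2.0 * n - 3.0)))
                P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
        return P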
Abdomen and spinal cord segmentation with augmented active shape models
Xu, Zhoubing; Conrad, Benjamin N.; Baucom, Rebeccah B.; Smith, Seth A.; Poulose, Benjamin K.; Landman, Bennett A.
2016-01-01
Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed-tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multiatlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the searching range of correspondent landmarks while reducing sensitivity to the image contexts and improves the segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdominal CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables the subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in SC. PMID:27610400
Percolation in education and application in the 21st century
NASA Astrophysics Data System (ADS)
Adler, Joan; Elfenbaum, Shaked; Sharir, Liran
2017-03-01
Percolation, "so simple you could teach it to your wife" (Chuck Newman, last century), is an ideal system to introduce young students to phase transitions. Two recent projects in the Computational Physics group at the Technion make this easy. One is a set of analog models to be mounted on our walls that enable visitors to switch between samples to see which mixtures of glass and metal objects have a percolating current. The second is a website enabling the creation of stereo samples of two- and three-dimensional clusters (suited for viewing with Oculus Rift) on desktops, tablets and smartphones. Although there have been many physical applications for regular percolation in the past, for Bootstrap Percolation, where only sites with sufficient occupied neighbours remain active, there has not been a surfeit of condensed matter applications. We have found that the creation of diamond membranes for quantum computers can be modeled with a bootstrap process of graphitization in diamond, enabling prediction of optimal processing procedures.
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computational cost and solutions lacking geological sense have hindered the wide application of Full Waveform Inversion (FWI). Source encoding is a technique that can dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and converges slowly because cross-talk must be suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of all shots for each iteration. By applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. A stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
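A generic, hedged sketch of the optimization loop described above: a random subset (mini-batch) of shots is drawn at each iteration, their gradients are accumulated, the result is smoothed, and a normalized descent step is taken. The per-shot adjoint-state gradient is left as a hypothetical user-supplied callable, and an isotropic Gaussian filter stands in for the paper's structure-oriented smoothing.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def minibatch_fwi(model, all_shots, gradient_for_shot, n_iters=50,
                      batch_size=8, step=1e-2, smooth_sigma=2.0, rng=None):
        """Generic mini-batch FWI loop.  `gradient_for_shot(model, shot)` is a
        hypothetical user-supplied adjoint-state gradient for a single shot; the
        Gaussian filter is a placeholder for structure-oriented smoothing."""
        if rng is None:
            rng = np.random.default_rng(0)
        m = model.copy()
        for it in range(n_iters):
            batch = rng.choice(len(all_shots), size=batch_size, replace=False)
            g = np.zeros_like(m)
            for k in batch:                                   # accumulate over the mini-batch
                g += gradient_for_shot(m, all_shots[k])
            g = gaussian_filter(g, sigma=smooth_sigma)        # placeholder smoothing
            m -= step * g / (np.max(np.abs(g)) + 1e-12)       # normalized descent step
        return m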
TRPM8-Dependent Dynamic Response in a Mathematical Model of Cold Thermoreceptor
Olivares, Erick; Salgado, Simón; Maidana, Jean Paul; Herrera, Gaspar; Campos, Matías; Madrid, Rodolfo; Orio, Patricio
2015-01-01
Cold-sensitive nerve terminals (CSNTs) encode steady temperatures with regular, rhythmic temperature-dependent firing patterns that range from irregular tonic firing to regular bursting (static response). During abrupt temperature changes, CSNTs show a dynamic response, transiently increasing their firing frequency as temperature decreases and silencing when the temperature increases (dynamic response). To date, mathematical models that simulate the static response are based on two depolarizing/repolarizing pairs of membrane ionic conductance (slow and fast kinetics). However, these models fail to reproduce the dynamic response of CSNTs to rapid changes in temperature and notoriously they lack a specific cold-activated conductance such as the TRPM8 channel. We developed a model that includes TRPM8 as a temperature-dependent conductance with a calcium-dependent desensitization. We show by computer simulations that it appropriately reproduces the dynamic response of CSNTs from mouse cornea, while preserving their static response behavior. In this model, the TRPM8 conductance is essential to display a dynamic response. In agreement with experimental results, TRPM8 is also needed for the ongoing activity in the absence of stimulus (i.e. neutral skin temperature). Free parameters of the model were adjusted by an evolutionary optimization algorithm, allowing us to find different solutions. We present a family of possible parameters that reproduce the behavior of CSNTs under different temperature protocols. The detection of temperature gradients is associated to a homeostatic mechanism supported by the calcium-dependent desensitization. PMID:26426259
ERIC Educational Resources Information Center
Chamberlain, Ed
A cost benefit study was conducted to determine the effectiveness of a computer assisted instruction/computer management system (CAI/CMS) as an alternative to conventional methods of teaching reading within Chapter 1 and DPPF funded programs of the Columbus (Ohio) Public Schools. The Chapter 1 funded Compensatory Language Experiences and Reading…
Mincholé, Ana; Martínez, Juan Pablo; Laguna, Pablo; Rodriguez, Blanca
2018-01-01
Widely developed for clinical screening, electrocardiogram (ECG) recordings capture the cardiac electrical activity from the body surface. ECG analysis can therefore be a crucial first step to help diagnose, understand and predict cardiovascular disorders responsible for 30% of deaths worldwide. Computational techniques, and more specifically machine learning techniques and computational modelling are powerful tools for classification, clustering and simulation, and they have recently been applied to address the analysis of medical data, especially ECG data. This review describes the computational methods in use for ECG analysis, with a focus on machine learning and 3D computer simulations, as well as their accuracy, clinical implications and contributions to medical advances. The first section focuses on heartbeat classification and the techniques developed to extract and classify abnormal from regular beats. The second section focuses on patient diagnosis from whole recordings, applied to different diseases. The third section presents real-time diagnosis and applications to wearable devices. The fourth section highlights the recent field of personalized ECG computer simulations and their interpretation. Finally, the discussion section outlines the challenges of ECG analysis and provides a critical assessment of the methods presented. The computational methods reported in this review are a strong asset for medical discoveries and their translation to the clinical world may lead to promising advances. PMID:29321268
Numerical simulation of aerobic exercise as a countermeasure in human spaceflight
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni
The objective of this work is to analyse the efficacy of long-term regular exercise on relevant cardiovascular parameters when the human body is also exposed to microgravity. Computer simulations are an important tool which may be used to predict and analyse these possible effects, and compare them with in-flight experiments. We based our study on an electrical-like computer model (NELME: Numerical Evaluation of Long-term Microgravity Effects) which was developed in our laboratory and validated with the available data, focusing on the cardiovascular parameters affected by changes in gravity exposure. NELME is based on an electrical-like control system model of the physiological changes that are known to take place when gravity changes are applied. The computer implementation has a modular architecture. Hence, different output parameters, potential effects, organs and countermeasures can be easily implemented and evaluated. We added to the previous cardiovascular system module a perturbation module to evaluate the effect of regular exercise on the output parameters previously studied. Therefore, we simulated a well-known countermeasure with different protocols of exercising, as a pattern of input electric-like perturbations on the basic module. Different scenarios have been numerically simulated for both men and women, in different patterns of microgravity, reduced gravity and time exposure. Also EVAs were simulated as perturbations to the system. Results show slight differences between genders, with more risk reduction for women than for men after following an aerobic exercise pattern during a simulated mission. Also, risk reduction of a cardiovascular malfunction is evaluated, with a ceiling effect found in all scenarios. A turning point in vascular resistance for long-term exposure to microgravity below 0.4g was found to be of particular interest. In conclusion, we show that computer simulations are a valuable tool to analyse different effects of long-term microgravity exposure on the human body. Potential countermeasures such as physical exercise can also be evaluated as an induced perturbation into the system. Relevant results are compatible with existing data, and are of valuable interest as an assessment of the efficacy of aerobic exercise as a countermeasure in future missions to Mars.
Computational approaches to understand cardiac electrophysiology and arrhythmias
Roberts, Byron N.; Yang, Pei-Chi; Behrens, Steven B.; Moreno, Jonathan D.
2012-01-01
Cardiac rhythms arise from electrical activity generated by precisely timed opening and closing of ion channels in individual cardiac myocytes. These impulses spread throughout the cardiac muscle to manifest as electrical waves in the whole heart. Regularity of electrical waves is critically important since they signal the heart muscle to contract, driving the primary function of the heart to act as a pump and deliver blood to the brain and vital organs. When electrical activity goes awry during a cardiac arrhythmia, the pump does not function, the brain does not receive oxygenated blood, and death ensues. For more than 50 years, mathematically based models of cardiac electrical activity have been used to improve understanding of basic mechanisms of normal and abnormal cardiac electrical function. Computer-based modeling approaches to understand cardiac activity are uniquely helpful because they allow for distillation of complex emergent behaviors into the key contributing components underlying them. Here we review the latest advances and novel concepts in the field as they relate to understanding the complex interplay between electrical, mechanical, structural, and genetic mechanisms during arrhythmia development at the level of ion channels, cells, and tissues. We also discuss the latest computational approaches to guiding arrhythmia therapy. PMID:22886409
One-loop calculations in Supersymmetric Lattice QCD
NASA Astrophysics Data System (ADS)
Costa, M.; Panagopoulos, H.
2017-03-01
We study the self-energies of all particles which appear in a lattice regularization of supersymmetric QCD (N = 1). We compute, perturbatively to one loop, the relevant two-point Green's functions using both the dimensional and the lattice regularizations. Our lattice formulation employs the Wilson fermion action for the gluino and quark fields. The gauge group that we consider is SU(Nc), while the number of colors, Nc, and the number of flavors, Nf, are kept as generic parameters. We have also searched for relations among the propagators which are computed from our one-loop results. We have obtained analytic expressions for the renormalization functions of the quark field (Zψ), gluon field (Zu), gluino field (Zλ) and squark field (ZA±). We present here results from dimensional regularization, relegating to a forthcoming publication [1] our results along with a more complete list of references. Part of the lattice study also concerns the renormalization of quark bilinear operators which, unlike in the nonsupersymmetric case, exhibit a rich pattern of operator mixing at the quantum level.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
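The black-box Monte-Carlo divergence estimate at the heart of this kind of SURE-based parameter tuning can be sketched as follows (real-valued images and white Gaussian noise for simplicity, rather than the complex-valued, k-space-weighted setting of the paper); the Gaussian denoiser in the usage example is just a stand-in reconstruction algorithm.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mc_sure(recon, y, sigma, eps=1e-3, rng=None):
        """Monte-Carlo SURE for a black-box reconstruction recon(y) of noisy data y
        with white Gaussian noise of standard deviation sigma (real-valued sketch)."""
        if rng is None:
            rng = np.random.default_rng(0)
        n = y.size
        b = rng.choice([-1.0, 1.0], size=y.shape)              # random probe vector
        f = recon(y)
        div = np.sum(b * (recon(y + eps * b) - f)) / eps       # Monte-Carlo divergence estimate
        return np.sum((f - y) ** 2) / n - sigma ** 2 + 2.0 * sigma ** 2 * div / n

    # Example: pick the Gaussian smoothing width that minimizes the SURE estimate.
    rng = np.random.default_rng(4)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    sigma = 0.2
    y = clean + sigma * rng.standard_normal(clean.shape)
    best = min(np.linspace(0.5, 3.0, 11),
               key=lambda s: mc_sure(lambda z: gaussian_filter(z, s), y, sigma))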
Zeng, Dong; Xie, Qi; Cao, Wenfei; Lin, Jiahui; Zhang, Hao; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Meng, Deyu; Xu, Zongben; Liang, Zhengrong; Chen, Wufan
2017-01-01
Dynamic cerebral perfusion computed tomography (DCPCT) has the ability to evaluate the hemodynamic information throughout the brain. However, due to multiple 3-D image volume acquisitions protocol, DCPCT scanning imposes high radiation dose on the patients with growing concerns. To address this issue, in this paper, based on the robust principal component analysis (RPCA, or equivalently the low-rank and sparsity decomposition) model and the DCPCT imaging procedure, we propose a new DCPCT image reconstruction algorithm to improve low dose DCPCT and perfusion maps quality via using a powerful measure, called Kronecker-basis-representation tensor sparsity regularization, for measuring low-rankness extent of a tensor. For simplicity, the first proposed model is termed tensor-based RPCA (T-RPCA). Specifically, the T-RPCA model views the DCPCT sequential images as a mixture of low-rank, sparse, and noise components to describe the maximum temporal coherence of spatial structure among phases in a tensor framework intrinsically. Moreover, the low-rank component corresponds to the “background” part with spatial–temporal correlations, e.g., static anatomical contribution, which is stationary over time about structure, and the sparse component represents the time-varying component with spatial–temporal continuity, e.g., dynamic perfusion enhanced information, which is approximately sparse over time. Furthermore, an improved nonlocal patch-based T-RPCA (NL-T-RPCA) model which describes the 3-D block groups of the “background” in a tensor is also proposed. The NL-T-RPCA model utilizes the intrinsic characteristics underlying the DCPCT images, i.e., nonlocal self-similarity and global correlation. Two efficient algorithms using alternating direction method of multipliers are developed to solve the proposed T-RPCA and NL-T-RPCA models, respectively. Extensive experiments with a digital brain perfusion phantom, preclinical monkey data, and clinical patient data clearly demonstrate that the two proposed models can achieve more gains than the existing popular algorithms in terms of both quantitative and visual quality evaluations from low-dose acquisitions, especially as low as 20 mAs. PMID:28880164
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th order anisotropic viscosity tensor, facilitates efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton’s method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solution of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
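Schematically, the inverse problem described above has the following structure (generic notation, not the authors' exact formulation):

    \begin{align*}
    \min_{m}\; J(m) &= \tfrac{1}{2}\,\big\| \mathcal{B}\,u(m) - u^{\mathrm{obs}} \big\|^{2} + \beta\, R(m), \\
    R_{\mathrm{Tikhonov}}(m) &= \tfrac{1}{2}\int_{\Omega} |\nabla m|^{2}\,dx, \qquad
    R_{\mathrm{TV}}(m) = \int_{\Omega} |\nabla m|\,dx,
    \end{align*}

where u(m) solves the nonlinear Stokes equations for the rheological parameter field m, the operator B restricts the computed velocity to the surface observation points, and the gradient of J is obtained by solving the linearized (adjoint) Stokes equations.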
NASA Astrophysics Data System (ADS)
Li, Tao
2018-06-01
The complexity of the aluminum electrolysis process makes the temperature of aluminum reduction cells hard to measure directly, yet temperature is central to the control of aluminum production. To address this problem, using practical data from an aluminum plant, this paper presents a soft-sensing model of temperature for the aluminum electrolysis process based on Improved Twin Support Vector Regression (ITSVR). ITSVR avoids the slow learning speed of Support Vector Regression (SVR) and the over-fitting risk of Twin Support Vector Regression (TSVR) by introducing a regularization term into the objective function of TSVR, which enforces the structural risk minimization principle at lower computational complexity. Finally, the model, with several other process parameters as auxiliary variables, predicts the temperature via ITSVR. Simulation results show that the ITSVR-based soft-sensing model is less time-consuming and generalizes better.
Experimental validation of a linear model for data reduction in chirp-pulse microwave CT.
Miyakawa, M; Orikasa, K; Bertero, M; Boccacci, P; Conte, F; Piana, M
2002-04-01
Chirp-pulse microwave computerized tomography (CP-MCT) is an imaging modality developed at the Department of Biocybernetics, University of Niigata (Niigata, Japan), which intends to reduce the microwave-tomography problem to an X-ray-like situation. We have recently shown that data acquisition in CP-MCT can be described in terms of a linear model derived from scattering theory. In this paper, we validate this model by showing that the theoretically computed response function is in good agreement with the one obtained from a regularized multiple deconvolution of three data sets measured with the prototype of CP-MCT. Furthermore, the reliability of the model, as far as image restoration is concerned, is tested under space-invariant conditions by considering the reconstruction of simple on-axis cylindrical phantoms.
Closure properties of Watson-Crick grammars
NASA Astrophysics Data System (ADS)
Zulkufli, Nurul Liyana binti Mohamad; Turaev, Sherzod; Tamrin, Mohd Izzuddin Mohd; Azeddine, Messikh
2015-12-01
In this paper, we define Watson-Crick context-free grammars as an extension of Watson-Crick regular grammars and Watson-Crick linear grammars with context-free grammar rules. We show the relation of Watson-Crick (regular and linear) grammars to sticker systems, and study some of the important closure properties of the Watson-Crick grammars. We establish that the Watson-Crick regular grammars are closed under almost all of the main closure operations, while the differences between the other Watson-Crick grammars and their corresponding Chomsky grammars depend on the computational power of the Watson-Crick grammars, which still needs to be studied.
Chen, Wei J.; Ting, Te-Tien; Chang, Chao-Ming; Liu, Ying-Chun; Chen, Chuan-Yu
2014-01-01
The popularity of ketamine for recreational use among young people began to increase, particularly in Asia, in 2000. To gain more knowledge about the use of ketamine among high risk individuals, a respondent-driven sampling (RDS) was implemented among regular alcohol and tobacco users in the Taipei metropolitan area from 2007 to 2010. The sampling was initiated in three different settings (i.e., two in the community and one in a clinic) to recruit seed individuals. Each participant was asked to refer one to five friends known to be regular tobacco smokers and alcohol drinkers to participate in the present study. Incentives were offered differentially upon the completion of an interview and successful referral. Information pertaining to drug use experience was collected by an audio computer-assisted self-interview instrument. Software built for RDS analyses was used for data analyses. Of the 1,115 subjects recruited, about 11.7% of the RDS respondents reported ever having used ketamine. Positive expectancy of ketamine use was positively associated with ketamine use; in contrast, negative expectancy inversely associated with ketamine use. Decision-making characteristics as measured on the Iowa Gambling Task using reinforcement learning models revealed that ketamine users learned less from the most recent event than both tobacco- and drug-naïve controls and regular tobacco and alcohol users. These findings about ketamine use among young people have implications for its prevention and intervention. PMID:25264412
Cyber-T web server: differential analysis of high-throughput data.
Kayala, Matthew A; Baldi, Pierre
2012-07-01
The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001: 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options including logarithmic and (Variance Stabilizing Normalization) VSN transforms are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple tests correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
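The core of the regularized t-test can be sketched as follows, in the spirit of the Baldi-Long variance shrinkage described above (the background variances would come from pooling probes in the same intensity neighborhood; the pseudo-count nu0 and the simple degrees-of-freedom choice are illustrative, not Cyber-T's exact defaults).

    import numpy as np
    from scipy import stats

    def regularized_ttest(x1, x2, bg_var1, bg_var2, nu0=10):
        """Two-sample t-test with Bayesian-regularized variances: each group's
        empirical variance is shrunk toward a background variance estimated from
        neighboring probes (supplied here directly as bg_var1/bg_var2)."""
        n1, n2 = len(x1), len(x2)
        v1 = (nu0 * bg_var1 + (n1 - 1) * np.var(x1, ddof=1)) / (nu0 + n1 - 2)
        v2 = (nu0 * bg_var2 + (n2 - 1) * np.var(x2, ddof=1)) / (nu0 + n2 - 2)
        se = np.sqrt(v1 / n1 + v2 / n2)
        t = (np.mean(x1) - np.mean(x2)) / se
        df = n1 + n2 - 2                      # simplified degrees-of-freedom choice
        return t, 2 * stats.t.sf(abs(t), df)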
Nonlinear Solver Approaches for the Diffusive Wave Approximation to the Shallow Water Equations
NASA Astrophysics Data System (ADS)
Collier, N.; Knepley, M.
2015-12-01
The diffusive wave approximation to the shallow water equations (DSW) is a doubly-degenerate, nonlinear, parabolic partial differential equation used to model overland flows. Despite its challenges, the DSW equation has been extensively used to model the overland flow component of various integrated surface/subsurface models. The equation's complications become increasingly problematic when ponding occurs, a feature which becomes pervasive when solving on large domains with realistic terrain. In this talk I discuss the various forms and regularizations of the DSW equation and highlight their effect on the solvability of the nonlinear system. In addition to this analysis, I present results of a numerical study which tests the applicability of a class of composable nonlinear algebraic solvers recently added to the Portable, Extensible, Toolkit for Scientific Computation (PETSc).
A Demons algorithm for image registration with locally adaptive regularization.
Cahill, Nathan D; Noble, J Alison; Hawkes, David J
2009-01-01
Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
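A compact 2-D sketch of the classic (additive) Demons iteration referenced above: compute Thirion's force from the intensity difference and the fixed-image gradient, add it to the displacement field, and smooth the field with a Gaussian. The locally adaptive regularization proposed in the paper would replace the single global sigma with an image-driven, spatially varying smoothing; this sketch keeps the original constant-sigma form.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_2d(fixed, moving, n_iters=100, sigma=2.0):
        """Classic additive Demons registration sketch for 2-D float images;
        returns the row (u) and column (v) displacement fields."""
        gy, gx = np.gradient(fixed)                       # fixed-image gradient
        ys, xs = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
        u = np.zeros(fixed.shape)                         # row displacements
        v = np.zeros(fixed.shape)                         # column displacements
        for _ in range(n_iters):
            warped = map_coordinates(moving, [ys + u, xs + v], order=1, mode='nearest')
            diff = warped - fixed
            denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-12
            u += -diff * gy / denom                       # Thirion's demons force
            v += -diff * gx / denom
            u = gaussian_filter(u, sigma)                 # diffusion-like regularization
            v = gaussian_filter(v, sigma)
        return u, v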
On the kinematics of scalar iso-surfaces in turbulent flow
NASA Astrophysics Data System (ADS)
Blakeley, Brandon C.; Riley, James J.; Storti, Duane W.; Wang, Weirong
2017-11-01
The behavior of scalar iso-surfaces in turbulent flows is of fundamental interest and importance in a number of problems, e.g., the stoichiometric surface in non-premixed reactions, and the turbulent/non-turbulent interface in localized turbulent shear flows. Of particular interest here is the behavior of the average surface area per unit volume, Σ. We report on the use of direct numerical simulations and sophisticated surface tracking techniques to directly compute Σ and model its evolution. We consider two different scalar configurations in decaying, isotropic turbulence: first, the iso-surface is initially homogenous and isotropic in space, second, the iso-surface is initially planar. A novel method of computing integral properties from regularly-sampled values of a scalar function is leveraged to provide accurate estimates of Σ. Guided by simulation results, modeling is introduced from two perspectives. The first approach models the various terms in the evolution equation for Σ, while the second uses Rice's theorem to model Σ directly. In particular, the two principal effects on the evolution of Σ, i.e., the growth of the surface area due to local surface stretching, and the ultimate decay due to molecular destruction, are addressed.
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Hui
2018-05-01
The Gaussian beam is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in subsurfaces with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, dynamic ray tracing is based on the paraxial ray approximation, and evanescent wave tracking cannot describe strongly evanescent fields. This leads to inaccurate computed wave fields in regions of strongly inhomogeneous media. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is handling the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.
Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.
Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P
2015-10-01
Human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data which collectively provide consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise to signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
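A simplified, SVD-based stand-in for the procedure described above (the actual scheme uses the generalized SVD for a general regularization operator): compute standard-form Tikhonov solutions over a grid of regularization parameters, record the L-curve of residual norm versus solution norm, and pick the corner as the point of maximum curvature in log-log coordinates. The matrix, data and corner heuristic are illustrative.

    import numpy as np

    def lcurve_tikhonov(A, b, lambdas):
        """Standard-form Tikhonov via SVD; return solutions plus the L-curve
        (residual norm, solution norm) for each candidate regularization parameter."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        rho, eta, xs = [], [], []
        for lam in lambdas:
            f = s ** 2 / (s ** 2 + lam ** 2)        # Tikhonov filter factors
            x = Vt.T @ (f * beta / s)
            xs.append(x)
            rho.append(np.linalg.norm(A @ x - b))   # residual norm
            eta.append(np.linalg.norm(x))           # solution norm
        return np.array(xs), np.array(rho), np.array(eta)

    def lcurve_corner(rho, eta):
        """Pick the corner as the point of maximum curvature of (log rho, log eta);
        the sign convention may need flipping depending on parameter ordering."""
        lr, le = np.log(rho), np.log(eta)
        d1r, d1e = np.gradient(lr), np.gradient(le)
        d2r, d2e = np.gradient(d1r), np.gradient(d1e)
        kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
        return int(np.argmax(kappa))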
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
Seismic velocity estimation from time migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, Maria Kourkina
2007-01-01
This is concerned with imaging and wave propagation in nonhomogeneous media, and includes a collection of computational techniques, such as level set methods with material transport, Dijkstra-like Hamilton-Jacobi solvers for first arrival Eikonal equations and techniques for data smoothing. The theoretical components include aspects of seismic ray theory, and the results rely on careful comparison with experiment and incorporation as input into large production-style geophysical processing codes. Producing an accurate image of the Earth's interior is a challenging aspect of oil recovery and earthquake analysis. The ultimate computational goal, which is to accurately produce a detailed interior map of the Earth's makeup on the basis of external soundings and measurements, is currently out of reach for several reasons. First, although vast amounts of data have been obtained in some regions, this has not been done uniformly, and the data contain noise and artifacts. Simply sifting through the data is a massive computational job. Second, the fundamental inverse problem, namely to deduce the local sound speeds of the earth that give rise to measured reflected signals, is exceedingly difficult: shadow zones and complex structures can make for ill-posed problems, and require vast computational resources. Nonetheless, seismic imaging is a crucial part of the oil and gas industry. Typically, one makes assumptions about the earth's substructure (such as laterally homogeneous layering), and then uses this model as input to an iterative procedure to build perturbations that more closely satisfy the measured data. Such models often break down when the material substructure is significantly complex: not surprisingly, this is often where the most interesting geological features lie. Data often come in a particular, somewhat non-physical coordinate system, known as time migration coordinates. The construction of substructure models from these data is less and less reliable as the earth becomes horizontally nonconstant. Even mild lateral velocity variations can significantly distort subsurface structures on the time migrated images. Conversely, depth migration provides the potential for more accurate reconstructions, since it can handle significant lateral variations. However, this approach requires good input data, known as a 'velocity model'. We address the problem of estimating seismic velocities inside the earth, i.e., the problem of constructing a velocity model, which is necessary for obtaining seismic images in regular Cartesian coordinates. The main goals are to develop algorithms to convert time-migration velocities to true seismic velocities, and to convert time-migrated images to depth images in regular Cartesian coordinates. Our main results are three-fold. First, we establish a theoretical relation between the true seismic velocities and the 'time migration velocities' using the paraxial ray tracing. Second, we formulate an appropriate inverse problem describing the relation between time migration velocities and depth velocities, and show that this problem is mathematically ill-posed, i.e., unstable to small perturbations. Third, we develop numerical algorithms to solve regularized versions of these equations which can be used to recover smoothed velocity variations. Our algorithms consist of efficient time-to-depth conversion algorithms, based on Dijkstra-like Fast Marching Methods, as well as level set and ray tracing algorithms for transforming Dix velocities into seismic velocities.
Our algorithms are applied to both two-dimensional and three-dimensional problems, and we test them on a collection of both synthetic examples and field data.
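As a point of reference for the "Dix velocities" mentioned above, the classical Dix relation that produces interval velocities from time-migration (RMS) velocities can be sketched as follows. This is a generic illustration under the usual flat-layer assumptions, not the authors' time-to-depth conversion algorithm; the function name and sample values are invented for the example.

```python
import numpy as np

def dix_interval_velocities(t0, v_rms):
    """Convert RMS (stacking/time-migration) velocities to interval
    velocities with the classical Dix relation.

    t0    : 1-D array of two-way zero-offset times (s), increasing
    v_rms : 1-D array of RMS velocities (m/s) at those times
    """
    t0 = np.asarray(t0, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    num = np.diff(t0 * v_rms**2)   # t_k v_k^2 - t_{k-1} v_{k-1}^2
    den = np.diff(t0)              # t_k - t_{k-1}
    v_int_sq = num / den
    # Negative values signal that the Dix assumptions (flat, laterally
    # homogeneous layers) are violated; clip as a crude safeguard.
    return np.sqrt(np.clip(v_int_sq, 0.0, None))

# Illustrative picks from a time-migrated section (hypothetical values)
t0 = np.array([0.0, 0.8, 1.6, 2.4])                  # s
v_rms = np.array([1500.0, 1800.0, 2100.0, 2400.0])   # m/s
print(dix_interval_velocities(t0, v_rms))
```

In laterally varying media these interval velocities are only a starting point, which is why the abstract's ray-tracing and level set machinery is needed to turn them into true seismic velocities.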
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
Image deblurring based on nonlocal regularization with a non-convex sparsity constraint
NASA Astrophysics Data System (ADS)
Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi
2018-04-01
In recent years, nonlocal regularization methods for image restoration (IR) have drawn increasing attention due to the promising results they obtain compared with traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional for computational efficiency, which is equivalent to imposing a convex prior on the output of the nonlocal difference operator. However, our experiments show that the empirical distribution of the nonlocal difference operator's output, notably in the seminal work of Kheradmand et al., is better characterized by an extremely heavy-tailed (non-convex) prior than by a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.
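As a rough illustration of what a non-convex (ℓp with p < 1) sparsity constraint does compared with a convex ℓ1 penalty, the sketch below solves a toy denoising problem by iteratively reweighted soft-thresholding. This is not the deblurring algorithm proposed in the paper; the function names, parameter values, and the simple quadratic data term are assumptions made purely for the example.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lp_denoise(y, lam=0.5, p=0.5, n_iter=20, eps=1e-8):
    """Approximate argmin_x 0.5*||x - y||^2 + lam*sum_i |x_i|^p (0 < p < 1)
    by iteratively reweighted l1: at each step the concave penalty is
    majorized by a weighted l1 term, whose minimizer is a soft-threshold."""
    x = y.copy()
    for _ in range(n_iter):
        w = p / (np.abs(x) + eps) ** (1.0 - p)   # local l1 weights
        x = soft_threshold(y, lam * w)
    return x

y = np.array([3.0, 0.4, -0.2, -2.5, 0.05])
print(lp_denoise(y))   # large entries are barely shrunk, small ones go to zero
```

The heavier-tailed penalty keeps large coefficients almost untouched while suppressing small ones aggressively, which is the qualitative behavior the abstract attributes to nonlocal differences in natural images.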
Patterned corneal collagen crosslinking for astigmatism: Computational modeling study
Seven, Ibrahim; Roy, Abhijit Sinha; Dupps, William J.
2014-01-01
PURPOSE To test the hypothesis that spatially selective corneal stromal stiffening can alter corneal astigmatism and assess the effects of treatment orientation, pattern, and material model complexity in computational models using patient-specific geometries. SETTING Cornea and Refractive Surgery Service, Academic Eye Institute, Cleveland, Ohio, USA. DESIGN Computational modeling study. METHODS Three-dimensional corneal geometries from 10 patients with corneal astigmatism were exported from a clinical tomography system (Pentacam). Corneoscleral finite element models of each eye were generated. Four candidate treatment patterns were simulated, and the effects of treatment orientation and magnitude of stiffening on anterior curvature and aberrations were studied. The effect of material model complexity on simulated outcomes was also assessed. RESULTS Pretreatment anterior corneal astigmatism ranged from 1.22 to 3.92 diopters (D) in a series that included regular and irregular astigmatic patterns. All simulated treatment patterns oriented on the flat axis resulted in mean reductions in corneal astigmatism and depended on the pattern geometry. The linear bow-tie pattern produced a greater mean reduction in astigmatism (1.08 D ± 0.13 [SD]; range 0.74 to 1.23 D) than other patterns tested under an assumed 2-times increase in corneal stiffness, and it had a nonlinear relationship to the degree of stiffening. The mean astigmatic effect did not change significantly with a fiber- or depth-dependent model, but it did affect the coupling ratio. CONCLUSIONS In silico simulations based on patient-specific geometries suggest that clinically significant reductions in astigmatism are possible with patterned collagen crosslinking. Effect magnitude was dependent on patient-specific geometry, effective stiffening pattern, and treatment orientation. PMID:24767795
Towards a Universal Calving Law: Modeling Ice Shelves Using Damage Mechanics
NASA Astrophysics Data System (ADS)
Whitcomb, M.; Bassis, J. N.; Price, S. F.; Lipscomb, W. H.
2017-12-01
Modeling iceberg calving from ice shelves and ice tongues is a particularly difficult problem in glaciology because of the wide range of observed calving rates. Ice shelves naturally calve large tabular icebergs at infrequent intervals, but may instead calve smaller bergs regularly or disintegrate due to hydrofracturing in warmer conditions. Any complete theory of iceberg calving in ice shelves must be able to generate realistic calving rate values depending on the magnitudes of the external forcings. Here we show that a simple damage evolution law, which represents crevasse distributions as a continuum field, produces reasonable estimates of ice shelf calving rates when added to the Community Ice Sheet Model (CISM). Our damage formulation is based on a linear stability analysis and depends upon the bulk stress and strain rate in the ice shelf, as well as the surface and basal melt rates. The basal melt parameter in our model enhances crevasse growth near the ice shelf terminus, leading to an increased iceberg production rate. This implies that increasing ocean temperatures underneath ice shelves will drive ice shelf retreat, as has been observed in the Amundsen and Bellingshausen Seas. We show that our model predicts broadly correct calving rates for ice tongues ranging in length from 10 km (Erebus) to over 100 km (Drygalski), by matching the computed steady state lengths to observations. In addition, we apply the model to idealized Antarctic ice shelves and show that we can also predict realistic ice shelf extents. Our damage mechanics model provides a promising, computationally efficient way to compute calving fluxes and links ice shelf stability to climate forcing.
Eikonal-Based Inversion of GPR Data from the Vaucluse Karst Aquifer
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; van Vorst, D.; Guglielmi, Y.; Cappa, F.; Gaffet, S.
2009-12-01
In this paper, we present an easy-to-implement eikonal-based travel time inversion algorithm and apply it to borehole GPR measurement data obtained from a karst aquifer located in the Vaucluse in Provence. The boreholes are situated within a fault zone deep inside the aquifer, in the Laboratoire Souterrain à Bas Bruit (LSBB). The measurements were made using 250 MHz MALA RAMAC borehole GPR antennas. The inversion formulation is unique in its application of a fast-sweeping eikonal solver (Zhao [1]) to the minimization of an objective functional that is composed of a travel time misfit and a model-based regularization [2]. The solver is robust in the presence of large velocity contrasts, efficient, easy to implement, and does not require the use of a sorting algorithm. The computation of sensitivities, which are required for the inversion process, is achieved by tracing rays backward from receiver to source following the gradient of the travel time field [2]. A user wishing to implement this algorithm can opt to avoid the ray tracing step and simply perturb the model to obtain the required sensitivities. Despite the obvious computational inefficiency of such an approach, it is acceptable for 2D problems. The relationship between travel time and the velocity profile is non-linear, requiring an iterative approach to be used. At each iteration, a set of matrix equations is solved to determine the model update. As the inversion continues, the weighting of the regularization parameter is adjusted until an appropriate data misfit is obtained. The inversion results (the recovered permittivity profiles shown in the attached image) are consistent with previously obtained geological structure. Future work will look at improving inversion resolution and incorporating other measurement methodologies, with the goal of providing useful data for groundwater analysis. References: [1] H. Zhao, "A fast sweeping method for Eikonal equations," Mathematics of Computation, vol. 74, no. 250, pp. 603-627, 2004. [2] D. Aldridge and D. Oldenburg, "Two-dimensional tomographic inversion with finite-difference traveltimes," Journal of Seismic Exploration, vol. 2, pp. 257-274, 1993.
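The fast-sweeping eikonal solver cited above (Zhao [1]) can be sketched for a 2D grid with first-order upwind updates roughly as follows. This is a minimal illustration of the sweeping idea, not the authors' inversion code; the grid, source handling, and number of sweep passes are assumptions for the example.

```python
import numpy as np

def fast_sweep_eikonal(slowness, src, h=1.0, n_passes=4):
    """Solve |grad T| = slowness on a 2-D grid by Gauss-Seidel sweeps in the
    four alternating orderings (the fast sweeping idea of Zhao).

    slowness : 2-D array of slowness values
    src      : (i, j) index of the point source
    h        : grid spacing
    """
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_passes):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue                     # no upwind information yet
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:
                        t_new = min(a, b) + f        # one-sided update
                    else:
                        t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

# First-arrival times from a central source in a homogeneous medium
T = fast_sweep_eikonal(np.ones((50, 50)), src=(25, 25))
```

No heap or sorting structure is needed, which is the property the abstract highlights; a few passes of the four orderings typically suffice for smooth slowness models.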
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Anez, Francisco
This paper presents two development projects (STARMATE and VIRMAN) focused on supporting training on maintenance. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. The objective is to create a computer tool for elaborating maintenance training courses and delivering training based on 3D virtual reality models of complex components. The training delivery includes 3D displays of recorded maintenance procedures with all the complementary information needed to understand the intervention. Users are asked to perform the maintenance intervention, trying to follow the procedure. Users can be evaluated on the level of knowledge achieved, and instructors can check the evaluation records left during the training sessions. VIRMAN is simple software supported by a regular computer and can be used in an Internet framework. STARMATE is a step forward in the area of virtual reality. STARMATE is a European Commission project in the frame of 'Information Societies Technologies'. A consortium of five companies and one research institute shares their expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for achieving assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area of Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene generated by a computer, augmenting reality with additional information. The user interface consists of see-through goggles, headphones, a microphone, and an optical tracking system. All these devices are integrated in a helmet connected to two regular computers. The user has his hands free for performing the maintenance intervention, and he can navigate in the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, and 2D and 3D information is displayed directly in the user's goggles. A position-tracking system allows 3D virtual models to be displayed in the positions of their real counterparts, independently of the user's location. The user can create his own virtual environment, placing the required information wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)
ERIC Educational Resources Information Center
Mac Iver, Douglas J.; Balfanz, Robert; Plank, Stephen B.
In Talent Development Middle Schools, students needing extra help in mathematics participate in the Computer- and Team-Assisted Mathematics Acceleration (CATAMA) course. CATAMA is an innovative combination of computer-assisted instruction and structured cooperative learning that students receive in addition to their regular math course for about…
ERIC Educational Resources Information Center
Henney, Maribeth
Two related studies were conducted to determine whether students read all-capital text and mixed text displayed on a computer screen with the same speed and accuracy. Seventy-seven college students read M. A. Tinker's "Basic Reading Rate Test" displayed on a PLATO computer screen. One treatment consisted of paragraphs in all-capital type…
Consistent Partial Least Squares Path Modeling via Regularization
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet offers no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491
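The ridge idea underlying regularized PLSc can be illustrated on its own: adding a multiple of the identity to a nearly singular cross-product matrix stabilizes the estimated coefficients. The sketch below is plain ridge regression on artificially collinear predictors, not the regularized PLSc estimator itself; all names and values are illustrative.

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y.
    With highly collinear columns (here standing in for correlated latent
    variable scores), X'X is near-singular and lam > 0 tames the estimate."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
# Three nearly identical predictors -> severe multicollinearity
X = np.hstack([z + 0.05 * rng.normal(size=(200, 1)) for _ in range(3)])
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=200)

print(ridge_coefficients(X, y, lam=0.0))    # OLS-like: erratic, high-variance
print(ridge_coefficients(X, y, lam=10.0))   # shrunken, much more stable
```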
The CAN Microcluster: Parallel Processing over the Controller Area Network
ERIC Educational Resources Information Center
Kuban, Paul A.; Ragade, Rammohan K.
2005-01-01
Most electrical engineering and computer science undergraduate programs include at least one course on microcontrollers and assembly language programming. Some departments offer legacy courses in C programming, but few include C programming from an embedded systems perspective, where it is still regularly used. Distributed computing and parallel…
Number Strings: Daily Computational Fluency
ERIC Educational Resources Information Center
Lambert, Rachel; Imm, Kara; Williams, Dina A.
2017-01-01
In this article, the authors illustrate how the practice of number strings--used regularly in a classroom community--can simultaneously support computational fluency and building conceptual understanding. Specifically, the authors will demonstrate how a lesson about multi-digit addition (CCSSM 2NBT.B.5) can simultaneously serve as an invitation to…
Experimental Mathematics and Computational Statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Borwein, Jonathan M.
2009-04-30
The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include applications of experimental mathematics in statistics, as well as statistical methods applied to computational mathematics.
29 CFR 548.502 - Other payments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Computation of Overtime Pay § 548.502 Other... would have been included in the regular rate of pay. 26 26 Unless specifically excluded by agreement or... employee's daily average hourly earnings, and under the employment agreement he is paid one and one-half...
Computer-Assisted Law Instruction: Clinical Education's Bionic Sibling
ERIC Educational Resources Information Center
Henn, Harry G.; Platt, Robert C.
1977-01-01
Computer-assisted instruction (CAI), like clinical education, has considerable potential for legal training. As an initial Cornell Law School experiment, a lesson in applying different corporate statutory dividend formulations, with a cross-section of balance sheets and other financial data, was used to supplement regular class assignments.…
Microfabrication Technology for Photonics
1990-06-01
specifically addressed by a "folded," parallel architecture currently being proposed by A. Huang (35), who calls it "Computational Origami." [A. Huang, "Computational Origami," U.S. Patent Pending; H.M. Lu, "Computational Origami: A Geometric Approach to Regular Multiprocessing," MIT Master's Thesis.]
Exploring the Use of Technology for Active Aging and Thriving.
Øderud, Tone; Østensen, Elisabeth; Gjevjon, Edith Roth; Moen, Anne
2017-01-01
The study explores how older adults with limited digital experience become users of tablet computers (iPad) with Internet access, and how the tablet computers become part of their daily life facilitating active aging and thriving. Volunteer adolescents were mobilised to teach and follow up the participants regularly.
Computational fluid dynamics characterization of a novel mixed cell raceway design
USDA-ARS?s Scientific Manuscript database
Computational fluid dynamics (CFD) analysis was performed on a new type of mixed cell raceway (MCR) that incorporates longitudinal plug flow using inlet and outlet weirs for the primary fraction of the total flow. As opposed to regular MCR wherein vortices are entirely characterized by the boundary ...
Calculation of turbulence-driven secondary motion in ducts with arbitrary cross section
NASA Technical Reports Server (NTRS)
Demuren, A. O.
1989-01-01
Calculation methods for turbulent duct flows are generalized for ducts with arbitrary cross-sections. The irregular physical geometry is transformed into a regular one in computational space, and the flow equations are solved with a finite-volume numerical procedure. The turbulent stresses are calculated with an algebraic stress model derived by simplifying model transport equations for the individual Reynolds stresses. Two variants of such a model are considered. These procedures enable the prediction of both the turbulence-driven secondary flow and the anisotropy of the Reynolds stresses, in contrast to some of the earlier calculation methods. Model predictions are compared to experimental data for developed flow in a triangular duct, a trapezoidal duct, and a rod-bundle geometry. The correct trends are predicted, and the quantitative agreement is mostly fair. The simpler variant of the algebraic stress model produced better agreement with the measured data.
Kadkhodapour, J; Montazerian, H; Darabi, A Ch; Anaraki, A P; Ahmadi, S M; Zadpoor, A A; Schmauder, S
2015-10-01
Since the advent of additive manufacturing techniques, regular porous biomaterials have emerged as promising candidates for tissue engineering scaffolds owing to their controllable pore architecture and the feasibility of producing scaffolds from a variety of biomaterials. The architecture of scaffolds can be designed to achieve mechanical properties similar to those of the host bone tissue, thereby avoiding issues such as stress shielding in bone replacement procedures. In this paper, the deformation and failure mechanisms of porous titanium (Ti6Al4V) biomaterials manufactured by selective laser melting from two different types of repeating unit cells, namely cubic and diamond lattice structures, with four different porosities are studied. The mechanical behavior of the above-mentioned porous biomaterials was studied using finite element models. The computational results were compared with the experimental findings from a previous study of ours. The Johnson-Cook plasticity and damage model was implemented in the finite element models to simulate the failure of the additively manufactured scaffolds under compression. The computationally predicted stress-strain curves were compared with the experimental ones. The computational models incorporating the Johnson-Cook damage model could predict the plateau stress and the maximum stress at the first peak with less than 18% error. Moreover, the computationally predicted deformation modes were in good agreement with the results of scaling law analysis. A layer-by-layer failure mechanism was found for the stretch-dominated structures, i.e. structures made from the cubic unit cell, while the failure of the bending-dominated structures, i.e. structures made from the diamond unit cells, was accompanied by shear bands at 45°. Copyright © 2015 Elsevier Ltd. All rights reserved.
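For reference, the standard Johnson-Cook flow-stress form used in such plasticity models is sigma = (A + B*eps_p^n)(1 + C*ln(eps_rate/eps_rate0))(1 - T*^m), with T* the homologous temperature. The sketch below simply evaluates this expression; the parameter values are illustrative numbers in the range commonly quoted for Ti6Al4V, not the calibration used in the paper, and the accompanying damage criterion is omitted.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, T, A, B, n, C, m,
                        eps_rate0=1.0, T_room=293.0, T_melt=1878.0):
    """Johnson-Cook flow stress (Pa):
    (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T_star**m),
    with homologous temperature T_star = (T - T_room)/(T_melt - T_room).
    T_melt here is an assumed value, not taken from the paper."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    strain_term = A + B * eps_p**n
    rate_term = 1.0 + C * np.log(eps_rate / eps_rate0)
    temp_term = 1.0 - T_star**m
    return strain_term * rate_term * temp_term

# Illustrative (uncalibrated) parameters of the order reported for Ti6Al4V
sigma = johnson_cook_stress(eps_p=0.05, eps_rate=1.0, T=293.0,
                            A=1098e6, B=1092e6, n=0.93, C=0.014, m=1.1)
print(sigma / 1e6, "MPa")
```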
Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Lin, C. T.
1989-01-01
The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms onto a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
Höfler, K; Schwarzer, S
2000-06-01
Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)] we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It also combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular work stations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation we compute the volume fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results both in the viscous and inertial regime and we find very satisfactory agreement.
Jasim, Sarah B; Li, Zhuo; Guest, Ellen E; Hirst, Jonathan D
2017-12-16
A fully quantitative theory connecting protein conformation and optical spectroscopy would facilitate deeper insights into biophysical and simulation studies of protein dynamics and folding. The web server DichroCalc (http://comp.chem.nottingham.ac.uk/dichrocalc) allows one to compute from first principles the electronic circular dichroism spectrum of a (modeled or experimental) protein structure or ensemble of structures. The regular, repeating, chiral nature of secondary structure elements leads to intense bands in the far-ultraviolet (UV). The near-UV bands are much weaker and have been challenging to compute theoretically. We report some advances in the accuracy of calculations in the near-UV, realized through the consideration of the vibrational structure of the electronic transitions of aromatic side chains. The improvements have been assessed over a set of diverse proteins. We illustrate them using bovine pancreatic trypsin inhibitor and present a new, detailed analysis of the interactions which are most important in determining the near-UV circular dichroism spectrum. Copyright © 2018. Published by Elsevier Ltd.
3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.
Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B
2011-05-01
To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm and does not seem to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphics processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with average computation times of 0.13 and 0.16 s on the same GPU card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D tumor localization to be on the order of 1 mm on average and 2 mm at the 95th percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at the 95th percentile for lung cancer patients. The results also indicate that the accuracy is not affected by the breathing pattern, be it regular or irregular. High computational efficiency can be achieved on the GPU, requiring 0.1-0.3 s for each x-ray projection.
Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model
NASA Astrophysics Data System (ADS)
Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.
2018-04-01
The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
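Given eigenenergies E_k and the components c_k of an initial state in the Hamiltonian eigenbasis, the survival probability discussed above is P(t) = |sum_k |c_k|^2 exp(-i E_k t)|^2. The sketch below evaluates this generic expression for an assumed Gaussian set of components, mimicking the structure reported for regular coherent states; it does not reproduce the Dicke-model-specific analytical results of the paper.

```python
import numpy as np

def survival_probability(energies, overlaps, times, hbar=1.0):
    """P(t) = |sum_k |c_k|^2 exp(-i E_k t / hbar)|^2 for an initial state
    with components c_k = <E_k|psi(0)> in the Hamiltonian eigenbasis."""
    weights = np.abs(overlaps) ** 2
    weights = weights / weights.sum()                 # normalize the state
    phases = np.exp(-1j * np.outer(times, energies) / hbar)
    return np.abs(phases @ weights) ** 2

# Toy spectrum: equally spaced levels with Gaussian-distributed components
E = 0.1 * np.arange(200)
c = np.exp(-0.5 * ((E - 10.0) / 1.0) ** 2)
t = np.linspace(0.0, 50.0, 500)
P = survival_probability(E, c, t)
print(P[0], P.min(), P[-1])
```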
Vinckier, F; Gaillard, R; Palminteri, S; Rigoux, L; Salvador, A; Fornito, A; Adapa, R; Krebs, M O; Pessiglione, M; Fletcher, P C
2016-01-01
A state of pathological uncertainty about environmental regularities might represent a key step in the pathway to psychotic illness. Early psychosis can be investigated in healthy volunteers under ketamine, an NMDA receptor antagonist. Here, we explored the effects of ketamine on contingency learning using a placebo-controlled, double-blind, crossover design. During functional magnetic resonance imaging, participants performed an instrumental learning task, in which cue-outcome contingencies were probabilistic and reversed between blocks. Bayesian model comparison indicated that in such an unstable environment, reinforcement learning parameters are downregulated depending on confidence level, an adaptive mechanism that was specifically disrupted by ketamine administration. Drug effects were underpinned by altered neural activity in a fronto-parietal network, which reflected the confidence-based shift to exploitation of learned contingencies. Our findings suggest that an early characteristic of psychosis lies in a persistent doubt that undermines the stabilization of behavioral policy resulting in a failure to exploit regularities in the environment. PMID:26055423
Measuring and Modeling the Growth Dynamics of Self-Catalyzed GaP Nanowire Arrays.
Oehler, Fabrice; Cattoni, Andrea; Scaccabarozzi, Andrea; Patriarche, Gilles; Glas, Frank; Harmand, Jean-Christophe
2018-02-14
The bottom-up fabrication of regular nanowire (NW) arrays on a masked substrate is technologically relevant, but the growth dynamic is rather complex due to the superposition of severe shadowing effects that vary with array pitch, NW diameter, NW height, and growth duration. By inserting GaAsP marker layers at a regular time interval during the growth of a self-catalyzed GaP NW array, we are able to retrieve precisely the time evolution of the diameter and height of a single NW. We then propose a simple numerical scheme which fully computes shadowing effects at play in infinite arrays of NWs. By confronting the simulated and experimental results, we infer that re-emission of Ga from the mask is necessary to sustain the NW growth while Ga migration on the mask must be negligible. When compared to random cosine or random uniform re-emission from the mask, the simple case of specular reflection on the mask gives the most accurate account of the Ga balance during the growth.
Eglen, Stephen J; Wong, James C T
2008-01-01
Most types of retinal neurons are spatially positioned in non-random patterns, termed retinal mosaics. Several developmental mechanisms are thought to be important in the formation of these mosaics. Most evidence to date suggests that homotypic constraints within a type of neuron are dominant, and that heterotypic interactions between different types of neuron are rare. In an analysis of macaque H1 and H2 horizontal cell mosaics, Wässle et al. (2000) suggested that the high regularity index of the combined H1 and H2 mosaic might be caused by heterotypic interactions during development. Here we use computer modeling to suggest that the high regularity index of the combined H1 and H2 mosaic is a by-product of the basic constraint that two neurons cannot occupy the same space. The spatial arrangement of type A and type B horizontal cells in cat retina also follow this same principle.
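The kind of simulation described here, in which the only heterotypic constraint is that two cells cannot occupy the same space, is commonly implemented as a "dmin" exclusion-zone model. The sketch below places two populations with a large homotypic spacing and only a small cross-type exclusion, then computes the regularity index (mean nearest-neighbour distance divided by its standard deviation). The spacings, counts, and field size are made-up illustration values, not the parameters used by the authors.

```python
import numpy as np

def dmin_mosaic(n_points, dmin_homotypic, other=None, dmin_heterotypic=0.0,
                size=1000.0, rng=None, max_tries=200000):
    """Place points uniformly at random, rejecting candidates closer than
    dmin_homotypic to same-type points or dmin_heterotypic to points of an
    already placed type (e.g. a soma diameter: cells cannot overlap)."""
    rng = np.random.default_rng() if rng is None else rng
    other = np.empty((0, 2)) if other is None else np.asarray(other)
    pts, tries = [], 0
    while len(pts) < n_points and tries < max_tries:
        tries += 1
        cand = rng.uniform(0.0, size, 2)
        ok_same = all(np.hypot(*(cand - p)) >= dmin_homotypic for p in pts)
        ok_other = (np.all(np.hypot(*(other - cand).T) >= dmin_heterotypic)
                    if len(other) else True)
        if ok_same and ok_other:
            pts.append(cand)
    return np.array(pts)

def regularity_index(points):
    """Mean nearest-neighbour distance divided by its standard deviation."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    return nn.mean() / nn.std()

rng = np.random.default_rng(1)
h1 = dmin_mosaic(120, dmin_homotypic=35.0, rng=rng)
h2 = dmin_mosaic(120, dmin_homotypic=35.0, other=h1, dmin_heterotypic=12.0, rng=rng)
combined = np.vstack([h1, h2])
print(regularity_index(h1), regularity_index(h2), regularity_index(combined))
```

Even though the two populations interact only through the small exclusion zone, the combined mosaic typically shows a regularity index above that of a purely random pattern, which is the by-product effect the abstract argues for.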
Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei
2014-12-01
Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.
GMove: Group-Level Mobility Modeling Using Geo-Tagged Social Media
Zhang, Chao; Zhang, Keyang; Yuan, Quan; Zhang, Luming; Hanratty, Tim; Han, Jiawei
2017-01-01
Understanding human mobility is of great importance to various applications, such as urban planning, traffic scheduling, and location prediction. While there has been fruitful research on modeling human mobility using tracking data (e.g., GPS traces), the recent growth of geo-tagged social media (GeoSM) brings new opportunities to this task because of its sheer size and multi-dimensional nature. Nevertheless, how to obtain quality mobility models from the highly sparse and complex GeoSM data remains a challenge that cannot be readily addressed by existing techniques. We propose GMove, a group-level mobility modeling method using GeoSM data. Our insight is that the GeoSM data usually contains multiple user groups, where the users within the same group share significant movement regularity. Meanwhile, user grouping and mobility modeling are two intertwined tasks: (1) better user grouping offers better within-group data consistency and thus leads to more reliable mobility models; and (2) better mobility models serve as useful guidance that helps infer the group a user belongs to. GMove thus alternates between user grouping and mobility modeling, and generates an ensemble of Hidden Markov Models (HMMs) to characterize group-level movement regularity. Furthermore, to reduce text sparsity of GeoSM data, GMove also features a text augmenter. The augmenter computes keyword correlations by examining their spatiotemporal distributions. With such correlations as auxiliary knowledge, it performs sampling-based augmentation to alleviate text sparsity and produce high-quality HMMs. Our extensive experiments on two real-life data sets demonstrate that GMove can effectively generate meaningful group-level mobility models. Moreover, with context-aware location prediction as an example application, we find that GMove significantly outperforms baseline mobility models in terms of prediction accuracy. PMID:28163978
Gait symmetry and regularity in transfemoral amputees assessed by trunk accelerations
2010-01-01
Background The aim of this study was to evaluate a method based on a single accelerometer for the assessment of gait symmetry and regularity in subjects wearing lower limb prostheses. Methods Ten transfemoral amputees and ten healthy control subjects were studied. For the purpose of this study, subjects wore a triaxial accelerometer on their thorax, and foot insoles. Subjects were asked to walk straight ahead for 70 m at their natural speed, and at a lower and faster speed. Indices of step and stride regularity (Ad1 and Ad2, respectively) were obtained from the autocorrelation coefficients computed from the three acceleration components. Step and stride durations were calculated from the plantar pressure data and were used to compute two reference indices (SI1 and SI2) for step and stride regularity. Results Regression analysis showed that Ad1 correlates well with SI1 (R2 up to 0.74) and Ad2 correlates well with SI2 (R2 up to 0.52). A ROC analysis showed that Ad1 and Ad2 generally have good sensitivity and specificity in classifying an amputee's walking trials as having normal or pathologic step or stride regularity, as defined by means of the reference indices SI1 and SI2. In particular, the antero-posterior component of Ad1 and the vertical component of Ad2 had a sensitivity of 90.6% and 87.2%, and a specificity of 92.3% and 81.8%, respectively. Conclusions The use of a simple accelerometer, whose components can be analyzed by the autocorrelation function method, is adequate for the assessment of gait symmetry and regularity in transfemoral amputees. PMID:20085653
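The step and stride regularity indices described above are autocorrelation coefficients of the trunk acceleration at lags of roughly one step and one stride. A minimal sketch of that computation on a synthetic signal is given below; the unbiased-autocorrelation normalization and the lag-search window are assumptions for illustration and do not reproduce the paper's exact preprocessing.

```python
import numpy as np

def unbiased_autocorrelation(a, max_lag):
    """Unbiased autocorrelation of a 1-D acceleration component,
    normalized so that lag 0 equals 1."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    n = len(a)
    ac = np.array([np.dot(a[:n - k], a[k:]) / (n - k) for k in range(max_lag + 1)])
    return ac / ac[0]

def step_stride_regularity(a, fs, step_lo=0.4, step_hi=1.0):
    """Ad1 = autocorrelation at the dominant lag in an assumed step-duration
    range (step_lo..step_hi seconds); Ad2 = autocorrelation at roughly twice
    that lag (one stride)."""
    max_lag = int(2.5 * step_hi * fs)
    ac = unbiased_autocorrelation(a, max_lag)
    lo, hi = int(step_lo * fs), int(step_hi * fs)
    step_lag = lo + int(np.argmax(ac[lo:hi]))
    stride_lag = min(2 * step_lag, max_lag)
    return ac[step_lag], ac[stride_lag]

# Synthetic vertical trunk acceleration: two slightly different steps per stride
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
signal = (np.sin(2 * np.pi * 1.8 * t)            # step-frequency component
          + 0.4 * np.sin(2 * np.pi * 0.9 * t)    # stride-frequency asymmetry
          + 0.1 * np.random.default_rng(0).normal(size=t.size))
print(step_stride_regularity(signal, fs))        # Ad2 > Ad1 for asymmetric gait
```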
Computed myography: three-dimensional reconstruction of motor functions from surface EMG data
NASA Astrophysics Data System (ADS)
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2008-12-01
We describe a methodology called computed myography to qualitatively and quantitatively determine the activation level of individual muscles by voltage measurements from an array of voltage sensors on the skin surface. A finite element model for electrostatics simulation is constructed from morphometric data. For the inverse problem, we utilize a generalized Tikhonov regularization. This imposes smoothness on the reconstructed sources inside the muscles and suppresses sources outside the muscles using a penalty term. Results from experiments with simulated and human data are presented for activation reconstructions of three muscles in the upper arm (biceps brachii, bracialis and triceps). This approach potentially offers a new clinical tool to sensitively assess muscle function in patients suffering from neurological disorders (e.g., spinal cord injury), and could more accurately guide advances in the evaluation of specific rehabilitation training regimens.
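The generalized Tikhonov formulation described above, smoothness inside the muscles plus a penalty on sources outside them, amounts to a quadratic problem of the form min_x ||Ax - b||^2 + alpha*||Lx||^2 + beta*||Wx||^2. The sketch below solves the corresponding normal equations with a random stand-in for the finite element lead field, a first-difference smoothing operator, and a diagonal mask penalty; these operators, sizes, and weights are hypothetical and are not the paper's finite element model.

```python
import numpy as np

def generalized_tikhonov(A, b, L, W, alpha, beta):
    """Solve min_x ||A x - b||^2 + alpha*||L x||^2 + beta*||W x||^2
    via the normal equations. L enforces smoothness of the sources,
    W penalizes sources outside the allowed (muscle) region."""
    lhs = A.T @ A + alpha * (L.T @ L) + beta * (W.T @ W)
    return np.linalg.solve(lhs, A.T @ b)

rng = np.random.default_rng(0)
n_sources, n_sensors = 60, 20
A = rng.normal(size=(n_sensors, n_sources))     # stand-in lead-field matrix
x_true = np.zeros(n_sources); x_true[20:30] = 1.0
b = A @ x_true + 0.05 * rng.normal(size=n_sensors)

L = np.eye(n_sources) - np.eye(n_sources, k=1)  # first-difference (smoothness)
inside = np.zeros(n_sources); inside[15:35] = 1.0   # hypothetical muscle support
W = np.diag(1.0 - inside)                       # penalize sources outside it

x_hat = generalized_tikhonov(A, b, L, W, alpha=1e-2, beta=10.0)
```

Even with far fewer sensors than sources, the smoothness and support penalties make the system well posed, which is the role the penalty terms play in the abstract's reconstruction.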
NASA Astrophysics Data System (ADS)
Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young
2017-05-01
This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.
Renormalization Group Theory of Bolgiano Scaling in Boussinesq Turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1994-01-01
Bolgiano scaling in Boussinesq turbulence is analyzed using the Yakhot-Orszag renormalization group. For this purpose, an isotropic model is introduced. Scaling exponents are calculated by forcing the temperature equation so that the temperature variance flux is constant in the inertial range. Universal amplitudes associated with the scaling laws are computed by expanding about a logarithmic theory. Connections between this formalism and the direct interaction approximation are discussed. It is suggested that the Yakhot-Orszag theory yields a lowest order approximate solution of a regularized direct interaction approximation which can be corrected by a simple iterative procedure.